Monday, September 30, 2019

Isolation in Frankenstein

The Isolation of Victor Frankenstein

Isolation and loneliness can do great injustices to the human brain. People are programmed to function in cohabitation with others of their kind, to form relationships with them. So, when these relationships fail or seem absent from one's life, the aloneness can ache. In Frankenstein by Mary Shelley, the reader sees the developing isolation of Victor Frankenstein, which can be attributed to his personality and upbringing, as well as his unwavering obsession with his scientific success.

Certain people seem to have something in their genetic makeup that makes them more social than others. These people interact with crowds at ease and, as the social butterflies among their peers, tend to avoid isolation. Victor Frankenstein is not one of these people. This is not necessarily a fault of Victor's, but merely a reality. As he explains, it simply "was my temper to avoid a crowd, and to attach myself fervently to a few" (19). This personality trait contributed to the increasing isolation Victor became subject to. The few he so fervently attached himself to included only his own family and Clerval, all of whom stayed behind upon his departure to Ingolstadt. Victor explained, "I was indifferent… to my schoolfellows in general" (19). So, once he was away at school, feeling the absence of his "familiar faces" for the first time, he felt alone and "totally unfitted for the company of strangers" (25). Victor's struggle with his natural "repugnance to new countenances" (25) left him feeling truly alone for the first time in his life. Ultimately, Victor's natural ways, combined with his comfortable and domestic upbringing, had left him sheltered and timid. This reality made the culture shock of leaving home a lonely one.

Another factor that contributed to Frankenstein's isolation was his fixation on his learning and scientific endeavors.
Victor agreed with the theory that "if the study to which you apply yourself has a tendency to weaken your affections, and to destroy your taste for those simple pleasures… that study is certainly unlawful… not benefitting the human mind" (34). However, this is precisely what his experiments do to him. Victor loses track of time, forgets all his simple pleasures, and neglects all of his other responsibilities. He no longer takes time to appreciate nature or to keep in touch with his family. He was so engrossed in his work that he said, "I grew alarmed at the wreck I perceived I had become," bothered by "slow fevers and nerves to a most painful degree" (34). Frankenstein allowed his ever-increasing desire for knowledge and progress to control all aspects of his life and to isolate him from the outer workings of his world. Even upon the success of all he had been working towards, his isolation grew more extreme. At that point, he had not only become completely secluded among the instruments of his laboratory, but had also created a terrifying creature he feared he would never escape.

Victor had become blinded by his scientific curiosity and cut himself off from the world for the sake of accomplishing his goals. He found himself neck-deep in worries, feeling utterly alone. Victor Frankenstein subjects himself to isolation throughout the novel. Naturally susceptible to self-isolation, he gives himself something to fixate on, and it is this combination that leaves him missing his family and eventually void of any connection with the world beyond his laboratory. And, as previously stated, the ache of this isolation can do great injustices to the human brain, shoving him toward his dismal destiny.

Sunday, September 29, 2019

Being Fat a Big Issue

Being Fat a Big Issue
Daniel Gutierrez
English 1430, Fall 2010, Section 02
Professor A. Hepner
October 14th, 2010

Being fat is one of the biggest issues today. Our society has created a stereotype of how people should look, and it is not exactly a fat boy or girl. People who are fat suffer every day for how they look, and our society often ignores fat people's feelings. Obese people have been suffering depression and discrimination for being fat. Even though sometimes it is a disease or compulsive eating (an eating disorder) that makes them fat, there are also some irresponsible cases of eating unhealthily and not working out.

Obese people today tend to be discriminated against for being fat in our society. Overweight people are not different from us; they are people like you and me, and for that reason we should not judge them. They are part of our society, and they deserve respect and understanding. Ms. Claudia Gomez said, "It is hard for us when I take the bus and everybody looks at us as if we are different or funny; they don't know how painful it is." In addition, there are studies showing that depression can be responsible for being overweight, especially in women (Overweight and Depression). Moreover, some obese people have an eating disorder such as compulsive overeating. Compulsive overeating is an addiction to food in big quantities. People suffering from compulsive overeating tend to eat to hide their emotions, to avoid what they feel inside or their life problems. As Susie Orbach said in her essay "Fat as a Feminist Issue," "Women suffering from the problem of compulsive eating endure double anguish: feeling out of step with the rest of society, and believing that it is all their own fault…" (201). Being overweight is a condition in which people carry extra body weight from muscle, bone, and fat (What are Overweight and Obesity).
There are some options to lose weight, such as surgery, diets prescribed by a nutritionist, exercise, and some medicines. According to MedlinePlus, one of the common options for very obese people is gastric bypass surgery. After this surgery, patients are no longer capable of eating as they ate before. This is an alternative for losing weight faster, but patients also have to follow a diet and exercise (Gastric Bypass Surgery). Furthermore, many obese people have not been practicing healthy habits. The most common unhealthy habits among overweight people are: they do not exercise, they eat too frequently, they usually eat more than once at the same meal, they stay away from even the lightest activities (like using the stairs or walking a little), and they eat when they are not truly hungry (Frisch). Likewise, overweight people should be more responsible. Everybody knows what we can and cannot do. If I know that I am gaining weight, I also know that I have to take care of what I eat and exercise. But many obese people know that they are fat, and they still eat unhealthy food and do not exercise. I think this happens because we like to blame the circumstances (depression, divorce, childhood, etc.). At the same time, they also know that becoming overweight may not have been their fault, but they are responsible for remaining so, because what we do is our choice. Even though there are many advertisements that encourage us to eat unhealthily, nobody is forcing us to eat that food. Also, you are the only one who can make the simple decision of taking the elevator or going up the stairs. Being obese or overweight is an irresponsible act that makes people sick, and this affects everybody because it is a public health problem that should not exist. In his essay "What You Eat Is Your Business," Radley Balko maintains that "the best way to alleviate the obesity 'public health' crisis is to remove obesity from the realm of public health" (157).
As he said, some people would probably say that people should be responsible for their own health, because we are the only ones who make the choice of living healthily or not. On the other hand, some fat people are proud of how they are. Mr. Alvarado, who weighs around 310 pounds, describes himself as a big man, and he said, "I don't feel bad because I'm fat. I am happy how I am, a big man. I'm comfortable with my weight, I am healthy, and I don't want to change because society says that people should be thin." In my opinion, over the years and with the bad habits that he has, I am not sure that he is going to stay healthy. He may be proud of how he is, but if he does not start to eat healthily and exercise, he is going to see the consequences in a few years. In conclusion, being obese or overweight is an issue, but it is also a disease. Although it is unhealthy and unsightly, the hardest part is that some of these people feel they cannot fit into our society. In my opinion, we should not judge them; we should help them make the correct decisions to have a healthy life, and we also have to change the stereotype that society has shown us. I used to think all obese people were unhappy to be fat, but my recent research shows me that some obese people simply do not care what people say about them and are happy how they are.

Works Cited

• "Compulsive Overeating." Something Fishy, Website on Eating Disorders. Web. October 07, 2010. http://www.something-fishy.org/whatarethey/coe.php
• "Gastric Bypass Surgery." Shabir Bhimji, MD, PhD, specializing in cardiothoracic and vascular surgery, Midland, TX. Review provided by VeriMed Healthcare Network; also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M. MedlinePlus, Trusted Health Information for You. Update date: May 17, 2010. Web. October 07, 2010. http://www.nlm.nih.gov/medlineplus/ency/article/007199.htm
• "Overweight and Depression." Women's Health Resource, Taking Care of Your Body. Web. October 07, 2010. http://www.wdxcyber.com/overweight-depression.html
• "What Are Overweight and Obesity." Overweight and Obesity. Revised March 2010. Web. October 07, 2010. http://www.nhlbi.nih.gov/health/dci/Diseases/obe/obe_whatare.html
• Balko, Radley. "What You Eat Is Your Business." They Say / I Say with Readings. Ed. Graff, Birkenstein, Durst. New York-London. 2009. 157-161. Print.
• Frisch, Louann. "7 Bad Habits of Overweight People." Community and Resources. 24 Hour Fitness. Web. October 07, 2010. http://www.24hourfitness.com/resources/weight_loss/articles/bad_habits.html
• Orbach, Susie. "Fat as a Feminist Issue." They Say / I Say with Readings. Ed. Graff, Birkenstein, Durst. New York-London. 2009. 200-205. Print.

Saturday, September 28, 2019

To compare leadership styles and management styles of three universities in the USA Essay

The theoretical framework of this study will be anchored in various theories of organizational management. The theories will be considered in terms of how they affect matters of policy and practice of management in the identified institutions. Case reviews on matters of educational management have revealed changing trends in leadership and management across the globe (Bush, 2010, p. 45). There is evidence of a determined shift from the traditional systems of leadership and management, which were more rigid, toward current systems that are more flexible. The traditional systems mainly involved a vertical leadership structure, wherein the leadership sat at the top, making the important policy and administrative decisions that affected operations at all levels of a university's administration (Bush, 2010). Educational institutions that adopted exclusivist policies of leadership favored this system. The current systems have evolved to embrace horizontal structures of leadership, where policy matters and decision-making are handled at multiple points of the organization's structure (McCaffery, 2010). These more developed kinds of leadership and management entail some aspect of devolution, where power is distributed evenly across various academic institutions. Both systems have important strengths and weaknesses. The increasing clamor for liberties and the advocacy for the rights of minorities have had a significant impact on the levels and nature of leadership in American universities (Bush & Coleman, 2000). Gender and ethnicity are some of the factors that have been brought within the umbrella of the management and leadership structures of American universities (McCaffery, 2010).
Such leadership styles have impacted positively on the nature of leadership by embracing certain qualities that are essentially aimed towards

Friday, September 27, 2019

Globalization and Ethnicity in the United Kingdom Essay

The effect of globalization-led migration is the growing diversity of any given state and the heightened pressure of supporting and fairly representing the interests of a state's citizens living abroad. On the other side, globalization has led some societies that exist as enclaves within the boundaries of a given state to campaign for their independence, secession, and right to govern themselves. Prior to the World Wars, the United Kingdom was the most dominant nation on earth and the superpower of the nineteenth century. The UK had engaged in colonial conquest in various parts of the globe. Having been the major power in the nineteenth century, Britain had occupied many parts of the world, spreading its language and culture. In the contemporary world, immigration to the United Kingdom has been tremendous, with people from third world countries and former colonies making up the highest numbers of immigrants. Many third world countries have been affected by political turmoil at one time in their history, resulting in political asylum and an influx of refugees into the UK (Beck 2000). Former British colonies like India, Pakistan, Nigeria, and South Africa, among many others, account for the largest numbers of refugees and political asylum seekers living in the UK. Between 1991 and 2001, about half of the population increase in the UK was a result of immigrants born outside the UK. The UK thus has a very high number of immigrants within its general population (Delanty 2008). Most of these people are not UK-born, and some have spent most of their childhood, and even adulthood, in other countries. Culture is dynamic, and culture in most cases dictates individuals' behavior and ways of life.

Thursday, September 26, 2019

Manipulating Data Essay

Spaghetti code is the result of old code that has been modified many times over the years. Another aspect is that changing one part of the code can have unpredictable effects on all the other parts of the program, just as in a bowl of spaghetti, where pulling one strand can affect all the others. Thus the complex structure is named after spaghetti. Spaghetti code is caused mainly by inexperienced programmers following their mandates and adding to a complex program that has already been modified by several other people. Structured programming, however, decreases the chance of spaghetti code (Dixit, 2007, p. 92). Structured programming was formed in 1966 as a logical programming method and is a precursor to object-oriented programming. This method aims to improve the quality, clarity, and development time of computer programs through the extensive use of block structures and subroutines instead of simple jumps such as GOTO statements, which lead to spaghetti code that is difficult to maintain and follow (Agarwal, 2009, p. 253). Modular programming has been in use since the 1970s as a technique that subdivides a computer program into various sub-programs. It separates a computer program into individual, independent modules. A module is a separate software component that can be used with many other applications and functions in the system. Functions that are similar are grouped together, while separate functions are grouped as separate units. Object-oriented programming can be used with modular programming, as it allows multiple programmers to work on divided programs independently (Mitchell, 2003, p. 239). Object-oriented programming is the method most commonly used today. It provides a programming model based on objects, as it integrates code and data by using objects. An object can be an abstract data type which has both a state and a behavior. These objects can also be like real
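The contrast the essay draws between tangled code and the modular and object-oriented styles can be sketched in a short example. This is an illustrative sketch only; the task and all names here (`total_unstructured`, `parse`, `Totaler`, and so on) are assumptions for demonstration, not taken from the essay's sources.

```python
# Hypothetical task: total the non-negative numbers in a comma-separated string.

# Unstructured style: parsing, validation, and totaling are tangled in one
# block, so a change to any concern risks breaking the others.
def total_unstructured(raw):
    total = 0
    for item in raw.split(","):
        item = item.strip()
        if item:                  # parsing concern
            n = int(item)
            if n >= 0:            # validation concern
                total += n        # totaling concern
    return total

# Modular style: each concern lives in its own small, reusable function,
# so changing validation cannot ripple into parsing.
def parse(raw):
    """Split a comma-separated string into stripped, non-empty tokens."""
    return [tok.strip() for tok in raw.split(",") if tok.strip()]

def validate(numbers):
    """Keep only the non-negative values."""
    return [n for n in numbers if n >= 0]

def total(raw):
    return sum(validate(int(tok) for tok in parse(raw)))

# Object-oriented style: state (the raw text) and behavior (totaling) are
# integrated in one object, as the essay describes.
class Totaler:
    def __init__(self, raw):
        self.raw = raw

    def total(self):
        return total(self.raw)
```

Both styles compute the same result (`total_unstructured("1, 2, -3, 4")` and `Totaler("1, 2, -3, 4").total()` both return 7), but in the modular version each piece can be tested, changed, or reused independently.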

Instructional Strategies Essay

There are a variety of categories and disciplines we could address, but here our main concentration is on the possible usefulness of the DIRECT INSTRUCTION (DI) method of teaching in today's classrooms. According to the Baltimore Curriculum Project fact sheet (1997), "DI is an embracing model in instructional strategies which is filled with carefully structured and edited lessons that are backed by texts and worksheets." In DI the educator works with a group of students who are performing at roughly the same level. Through exceedingly careful organization, direction, and pacing, a rapport is formed with the students that facilitates the creation of a healthy, interactive learning environment. This type of interaction is finely crafted to focus on the subject and the pace of the learning activities. The students in these activities respond to questions both as individuals and as a group, further ensuring that the method of instruction "leaves no individual unengaged" (Baltimore Curriculum Project, 1997). Direct instruction is primarily based on previous theories of instruction which strive to eliminate students' misinterpretation of goals, necessary skills, and instructions. The theory of DI is purported to greatly accelerate and improve academic performance, as well as specific learning, when applied in the prescribed manner. Direct instruction has also shown promise in correcting certain affective behaviors that can lead to academic problems. The resulting DI theory emphasizes the use of a small group in which teachers and aides conduct face-to-face (or one-on-one) instruction. This allows educators to carefully articulate lessons so that specific cognitive skills are broken down into small units and/or action sequences. The research of Mr. Siegfried Engelmann and Dr. Wesley Becker is not only a focal point in DI discussions; it also prescribes the correct method for using DI.
Their work provides educators with five areas by which all class activities can be organized:

GOAL SETTING: Educators emphasize the importance of setting goals for school work. Students are required to write and explain the goals which will lead them to complete the task set before them. Educators and other students provide regular reassurance on the progress toward meeting these goals, as well as hints for improvement.

ASSIGNMENTS: Educators should endeavor to break the ultimate task into small, manageable parts. Students should be encouraged to further devise personally manageable parts that will lead to successful completion of the task. The true key here is to set a pace that is comfortable to the individual and the class as a whole while ensuring timely completion of the task. Such structuring should lead to a better understanding of the ultimate goal, as well as provide more immediate success and feedback.

EXPLANATION: The variation in explanation lies at the heart of what makes DI unique. Examples that relate more closely to real life and/or appeal to the student(s) make the subject clearer and more personal. Students more readily engage in learning activities that they find personally linked. If an activity seems fun or useful to the student, it is now personal and worth doing.

OUTSOURCING: Frequently asking

Tuesday, September 24, 2019

The broad thematic perspective Movie Review

The overall cost of the movie was around £900,000, which is equivalent to £11.4 million today. It was first broadcast on ITV in 1973. The documentary interviewed significant members of the Axis and Allied campaigns, comprising eyewitness accounts by enlisted men, civilians, politicians, and officers, among others. Major historians were Stephen Ambrose and Adolf Galland (Ambruster 17). The series "The World at War" is also available as a DVD set, in which Jeremy Isaacs explains the priority given to examinations and interviews with surviving assistants and aides over recognized figures. Karl Wolff, who was Heinrich Himmler's adjutant, was the most difficult interviewee to locate and persuade. During his examination, he admitted to having been among the significant witnesses to mass genocide, in Himmler's presence. In the later part of the series, Isaacs expressed satisfaction with the content of the series. He also added that the content included unclassified information in reference to British code-breaking. The documentary is listed among the top programmes on British television in the compilation by the British Film Institute in 2000 (Ambruster 5).

This is a situation whereby nations look for alternative means of solving conflicts. It is a crucial theme that prevails in all four clips. Violence was a result of a conflict of ideas and ways of doing things, but in this case violence evolved out of a conflict of interest. The subjects were made by their masters to do things they were not willing to do. Peace prevailed after the submission of the Nazi forces, who were Germans. Massive killings such as those illustrated in "Whirlwind: Bombing Germany," the twelfth episode, are among the activities that came to a stop, leading to the prevalence of peace. The episode focuses on the massive bombings of Germany by the British and American armies.
Interviews from witnesses such as Albert Speer, William Reid and James Stewart explain how innocent lives were taken

Monday, September 23, 2019

Art history Essay

He is perhaps best known as a portrait painter, but critics have found it difficult to categorize him much more accurately beyond that. The Encyclopaedia Britannica cites his "large-scale Photo-Realist" portraits as his most famous achievement, while the Oxford Dictionary of Art labels his early work "Abstract Expressionist" and his later work "Superrealism". Many commentators (Sultan, 2003) have regarded his work in the medium of print as even more significant than his paintings, and it is certainly true that photography and print media have influenced his painting, as well as being major works of art in their own right. Throughout his life Chuck Close has given interviews and collaborated on many books and television programmes (see, for example, Finch, 2007), which gives critics a good insight into his life and thought. His childhood was in some ways difficult, because of illness in the family and his own learning difficulties. Nowadays he would no doubt have been diagnosed with dyslexia and coached out of his rather individual way of seeing things, but as it was, he used his disability with words to focus on what he was good at, namely art. He has an exceptional awareness of his own artistic development and an uncanny talent for finding new techniques. While still a student, Close was fascinated by prints and photography, citing Jasper Johns as an early influence (Sultan 10). He was a student in the 1960s and experienced the blossoming of Pop Art first hand. The work of Johns and Warhol opened up a whole new field of exploration, where the boundaries between collage and paint, and between commercial silk-screen printing and traditional fine art painting, seemed to merge. Multiple repetitions of the same subject were made in different colors and on a huge scale, highlighting these artists' ability to frame even very ordinary items in unusual ways and change our perception of them.
Images such as the cans of soup and the Marilyn

Sunday, September 22, 2019

Technology review Essay

Turnitin.com: Turnitin.com is an effective plagiarism checker, which means its job is to verify and check for any plagiarism present in a paper. It is a very reliable website for instructors, as it aims to compare the work submitted to teachers with works on the web in order to check the originality and uniqueness of the work. It is easily available, and online communities can be created through which the instructor can evaluate and access the work.

K5learning.com: K5 Learning is an interactive platform for young learners who seek to develop their skills through fun. K5 is a website that engages young learners and enhances their abilities. Online tests are taken, assessments are done, and help is provided to those who need reinforcement, so that an effective learning mechanism is generated.

Freevideolectures.com: This is an online resource for many learners, who can easily access online lectures by reputable teachers on various topics of study. The site enables learners to gain knowledge of various subjects through lectures from almost 30 respected universities. It offers a vast range of courses and lectures which students can use to develop better understanding in a convenient and cost-effective manner.

Tutorphil.com: This is an online site for overcoming workload and easing the frustration level among students. It provides learning methodologies and gives advice and tips for overcoming problems while structuring an essay. It is an interactive site which helps students devise proper essays with ease and convenience.

At this point I would like to choose the three sites which, in my view, are the most reliable for generating effective learning. The three sites I will focus on in my essay are lore.com, turnitin.com, and freevideolectures.com. If these three sites are integrated, a very convenient mechanism can be generated which will aid learning. The reason why I think

Saturday, September 21, 2019

Life Is Easier Essay

Living today is more comfortable and easier than when your grandparents were children. Use specific reasons and examples to support your answer.

Recently, my grandparents have often recalled how difficult their lives were when they were young, claiming that my generation has much easier lives than they had. I agree with them. In fact, life today is much more comfortable and easier than it was in my grandparents' youth, for several reasons.

First, technology has made modern-day life much more comfortable than in the past. During my grandparents' time, life was rough and hard because all work was done without any modern tools, so they had to do their laundry by hand and walk from one place to another on foot. Furthermore, entertainment choices were limited in the past. They could at best listen to the radio or perhaps watch a black-and-white movie for pleasure. Today, however, living has become a lot easier thanks to technological developments. We launder our clothes with washing machines and use buses, subways, or cars to move around. We also enjoy home theater systems, DVDs, and video games. Technology has definitely improved our lives.

In addition, people today have more leisure time than they did before. People no longer have to work very long hours like my grandparents did. Since my grandparents were farmers, they had to work in the rice field all day long, without even resting on weekends. In contrast, many people today, including my parents, simply work from nine to five on weekdays and take weekends off. They therefore have much more free time than my grandparents did, so they can spend more time on leisure activities. They go to the movies, go to the gym, or take trips. All these activities have a positive effect on their quality of living.

In conclusion, people today have more comfortable and easier lives than in the past. This is the result of technological developments and the extra leisure time available.
These factors will make our lives even more comfortable in the future.

Friday, September 20, 2019

Factors for Consideration when Starting a Business

Below are some reasons why a person might want to set up his or her own business.

Control
If a person has his own business, it gives him more control over how much money he makes and how much he will work to earn more. When a person starts his business, how to start it and maintain it in the future depends solely on him. In short, control comes into the person's hands, and there is no one else's control over him.

Choice
When a person starts his own business, it gives him the freedom to do whatever he wants. For instance, it gives him the choice of whether to run a business and hold a job at the same time, or to deal in various products of his own choosing. Moreover, being a freelancer or an independent contractor gives a person freedom regarding which jobs he chooses to take.

Business Decisions
If a person has his own business, its success or failure depends solely on his business decisions. Having your own business means you make the choices instead of having them made for you by an employer, and the choices you make decide whether you fail or succeed, which in turn shows whether your decisions were worthwhile for the business. This gives you motivation to improve yourself in the future and makes you a better businessman.

Satisfaction
Some people start a business for themselves because they have a skill or a product to offer. They enjoy being passionate about what they do. Moreover, starting your own business gives a person satisfaction, as whatever he does, he does for himself; he can work as hard as he wants to earn as much money as he wants, and there is no fixed salary.

Job loss
When a person loses his job, he may take his work experience and professional contacts and start his own business, making himself less vulnerable to such risks in the future.

Get creative
Starting up a new business provides an entrepreneur the chance to bring his creativity out.
If a person has considered going it alone, he will have thought out how he would do things his own way. Being an entrepreneur gives him the freedom to express himself and develop his concept in any way he chooses. Of course, there are always financial constraints, but the ability to be as creative as you like is far more appealing than a one-dimensional job.

Very profitable
If you think that it's just large corporations that make big profits, you would be wrong. There are countless stories of entrepreneurs hitting on a great idea, exploiting it well, and being well on their way to their first million by the end of the year. Although the start-up process can be tough, with long hours and little money not uncommon, if you run your business well, the rewards can be huge. And, from a purely selfish point of view, you will get most of the profits yourself.

Unfortunately, many businesses are likely to fail in their initial years because of the difficulties that commonly plague struggling companies. These difficulties occur in every business in its early years, as the business is new to everyone, even to the management itself. Some of these reasons are elucidated below.

Insufficient Capital
A common fatal mistake for many failed businesses is having insufficient operating funds. Business owners underestimate how much money is needed, and they are forced to close before they have even had a fair chance to succeed. They also may have unrealistic expectations of incoming revenues from sales. It is imperative to ascertain how much money your business will require: not only the costs of starting, but the costs of staying in business. It is important to take into consideration that many businesses take a year or two to get going. This means you will need enough funds to cover all costs until sales can eventually pay for them.

Lack of planning
Successful businesses do not just happen; they need planning to make them happen.
They are the outcome of deliberate and well-executed business plans. Many businessmen are so eager to get started that they neglect planning and launch with only a dream and an idea. That might motivate you to get started, but not necessarily to succeed. Anyone wanting to set up a business should draw up a business plan first.

Overspending
Many start-ups spend their seed money before cash has begun to flow in at a positive rate. This often happens because of misconceptions about how business operates. If you are just starting out, seek out seasoned veterans you can bounce your ideas off before making big financial commitments.

Inadequate funding
Another common reason for small business failure is the absence of adequate funding, especially during the critical start-up period. Inadequate funding severely limits capacity and threatens the ability to grow beyond the initial stage. Resist the urge to start until you have secured all of the funding you know you need to do it right.

Bad marketing
Quite often a person creates a business that sells the best product at the best price, yet it still fails because no one knows it exists. Marketing the product is critical if the business is to have any chance of becoming a flourishing venture.

Unreliable suppliers
The ability to maintain proper levels of inventory is directly related to the quality of a person's relationships with reliable suppliers. Developing effective supply channels can take time, but it is essential, since whatever you sell must be good enough to attract customers.

Staffing imbalances
Labour is the biggest expense for most businesses, so it is worth your time to make sure your company employs the right number of people.
For example, employing too many people drains the business's capital, while employing too few hurts performance. Striking the perfect balance is not easy, but the rewards are well worth the effort.

Ineffective sales performance
Sales are a key element in the success of any business; poor sales are an indication that your business might be in jeopardy. Keep a close eye on sales patterns and trends, and hire the best sales staff you can afford, to keep the money regularly coming in.

1.3 The important factors which are necessary to start up a business are:

Knowledge/Expertise
Any business needs some amount of basic knowledge and experience. It is essential that the owner is well informed about the business he or she plans to start: knowledge and expertise about the product or service are keys to a successful business. An owner with limited knowledge may be unable to sustain the business and can be fooled by vendors, suppliers and competitors. Expert knowledge is especially required if the field of business is a niche field. For instance, the construction or software industry requires more specialist knowledge than a retail business selling a particular brand of clothes or shoes.

Location
Deciding on an ideal location for the business is a strategic and important decision. A good location goes a long way in making the business successful, so it needs to be carefully chosen: some places have advantages over others. A location should be assessed on whether raw materials can be easily sourced, manpower is readily available, and transportation costs can be kept down. Moreover, the choice of location depends on the nature of the business. For instance, a retail business should be located in a well-populated and easily accessible area.
Competition
Before entering a new business, information about market competition needs to be gathered. If the product is a monopoly, competition will not matter; otherwise the success of the business will depend on the gap between demand and supply. If there is huge demand, you can enter the business in spite of the market competition; otherwise you will need to be stronger than the competitors to gain entry. Existing firms will normally have an advantage due to their experience and because they may be better equipped. The question which needs to be answered is: what is unique about the product or service being offered that will let it survive the market competition? Information such as who the competitors are, what their market strategy is, and what is required to compete with them is important.

Financing/Capital
After identifying the initial costs required for starting the business, it is necessary to look for sources of funding, such as a bank loan or sponsors. It is essential to have adequate funds to start a business, as a lack of funds will harm the business and may lead to its failure.

Laws, Rules, Regulation
Setting up a new business requires compliance with various laws and regulations. Each country is governed by its own laws and regulations, which typically require that any new business be registered with certain authorities and meet certain compliance requirements. For instance, the company name may need to be registered with a Ministry of Commerce. Further details may need to be provided regarding the workforce, and certain deductions (such as tax) may be required from staff pay, to be deposited with the respective government bodies. Awareness of such rules and regulations is required, and it is always better to consult a lawyer before setting up a new business in an unfamiliar environment.
There are certain accounting and consultancy firms with divisions that give advice on legal and statutory compliance. Where in-house expertise is lacking, it is better to approach a lawyer or an accounting or consultancy firm.

1.4 The heart of the issue with human resources is the skills base of the business. What skills does the business already possess? Are they sufficient to meet the needs of the chosen strategy? Could the skills base be flexed or stretched to meet the new requirements? An audit of human resources would include assessment of the following factors.

Existing staffing resources: numbers of staff by function, location, grade, experience, qualification and remuneration; the existing rate of staff loss (natural wastage); the overall standard of training, and specific training standards in key roles; and an assessment of key intangibles, e.g. morale and business culture.

Changes required to resources: what changes to the organisation of the business are included in the strategy (e.g. change of location, new locations, new products)? What incremental human resources are required, and how should they be sourced? (Alternatives include employment, outsourcing, joint ventures, etc.)

1.5

1.6 A business finance source is a way a business can obtain funding, either for start-up or for operating expenses. There are many different types of sources, such as sales, loans and investors, each with different terms, benefits and disadvantages. Business owners tend to use two or more different sources to fund their business. Business finance sources fall into two main categories:

Internal funding
Internal funding comes from the profits made by the business through the sale of products or assets: funds the company has of its own, such as its income.

External funding
External funding comes from lenders and investors. The most common external finance sources are loans.
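As a rough illustration of how the cost of loan-based finance sources might be compared, here is a minimal sketch. The amounts, rates and terms are invented for illustration, and simple (non-compounding) annual interest is assumed; real loan products compound and amortise differently.

```python
# Hypothetical comparison of two external finance sources.
# Simple (non-compounding) annual interest is assumed for clarity.
def total_repayment(principal, annual_rate, years):
    """Total paid back on a loan at a simple annual interest rate."""
    return principal + principal * annual_rate * years

# Illustrative figures only: a 1-year loan at 8% vs a 5-year loan at 6%.
short_term = total_repayment(10000, 0.08, 1)
long_term = total_repayment(10000, 0.06, 5)
print(short_term)  # 10800.0
print(long_term)   # 13000.0
```

Even with a lower rate, the longer term costs more in total here, which is why owners weigh interest rates and payment plans together when choosing a funding source.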
Short- and long-term loans require borrowers to repay funds at an interest rate over a set period of time. Overdrafts allow a borrower to spend up to a certain amount of money, with the lender charging interest on the overdrawn amount. Before deciding which method is best for a company, business owners should consider a variety of factors. The cost of the finance source is usually the most important: owners look at interest rates and payment plans to determine how profitable a given funding source would be. Businesses that have a history of financial stability may want to consider an internal source of revenue before opting for an external one. It is also important to determine how long the business will need the additional funding; a short-term loan is best for projects that take only a short time to complete.

Business finance start-up generally refers to the cost of starting a new business. It includes determining, calculating and obtaining start-up costs, as well as managing those finances effectively to ensure the profitability of the new business. The first steps are to determine and estimate the amount of funds needed to open the business. These start-up expenses may include one-time fees, such as the permits and licences needed to operate, as well as ongoing costs, such as rent and utility payments. Business owners usually include only the necessary expenses when determining the total cost of starting up. To estimate the amount of funds needed, owners should set up worksheets that list each expense and how much it costs.

1.7 The advantages of setting up a business as a private limited company, as opposed to a sole tradership, are:

Liability
The principal benefit of trading via a limited company has always been the limited liability given to the company's officers and shareholders.
As a sole trader or other non-limited business, personal assets can be at risk if the business fails, but this is not the case for a limited company. As long as the business is operated legally and within the terms of the Companies Act, the directors' and shareholders' personal assets are not at risk in the event of a winding up or receivership, and such events are not always under our own control. There is also no obligation for a limited company to commence trading within any set period after its incorporation.

Gives confidence
Operating as a limited company often gives suppliers and customers a sense of confidence in a business; larger organisations in particular may prefer not to deal with non-limited businesses. Moreover, many of the costs of managing and operating a limited company are not much higher than for a non-limited business.

Tax advantages
If you trade as a sole trader, partner or partnership, your income is taxed as the proprietor's income, regardless of how much profit is retained as working capital, and interest on loans to the business is also taxed as income. Furthermore, partners are personally and jointly liable for partnership tax, and if a partner dies, the surviving partners are responsible for it. Creditors can claim all your property to satisfy debts, and if this is insufficient you may be declared bankrupt; an undischarged bankrupt is forbidden to start another business or to become a director of a limited company.

Separate Entity
By its very nature, a limited company is deemed to be a separate legal entity from its owners.
This has several advantages, including the fact that the company will exist beyond the lives of its members: if they retire or die, the company continues to exist and operate. This gives security to employees and other members, an advantage that other legal forms of business do not enjoy.

Ownership and Control
In a private limited company, the directors are usually also the main shareholders, so both the ownership and the control of the business remain in their hands. Decisions can be made quickly and easily, with little fuss, allowing for more effective business management.

Company Name
Registering a limited company includes registering a company name. This name helps identify the business in the marketplace, separating it from other companies and protecting it.

1.8 A sole trader is a business form in which one person is solely responsible for the financial transactions of the business. The benefits and disadvantages of this responsibility are numerous and should be weighed carefully. Another term for a sole trader is a sole proprietor: precisely, it refers to the person responsible for the daily organisation of the firm and for its profits and losses. Being a sole proprietor, as it is legally known in the U.S., is a benefit to many people looking to start their own business. It is one of the most common types of business that can be formed and involves only one person as the responsible entity of the business, holding that person completely accountable for any debt or liability the business might incur. Although sole traders have many advantages, they also have significant disadvantages.

Liability
The main disadvantage of being a sole trader is the liability that the business owner bears. Being held responsible for any lawsuits or potential damages is not only dangerous to the sole trader's business; it can be detrimental to his or her personal life as well.
Unlike modern business structures such as LLCs, which make the business a separate entity and prevent anyone from holding your personal assets responsible for your business, sole traders are personally responsible for their business.

Responsibility
For many people, one of the main disadvantages of running a sole proprietorship is the complete responsibility that the sole trader carries. Although it varies from business to business, since each has its own type of operations, sharing responsibility takes a huge burden off most business owners. That is one reason for the popularity of limited liability companies, limited liability partnerships and ordinary partnerships: these forms allow some owners to share or take less responsibility, leaving them freer to grow and improve their businesses than if they bore complete responsibility alone.

Lack of Investors
A sole trader can find it difficult to grow the business, not only because of the limited time left by his or her tremendous responsibilities, but also because of investors' lack of interest in sole proprietorships. Investors are more apt to put money into corporations that have the potential to expand, and they also prefer the other benefits of corporations, such as their legal structure and the absence of personal accountability. A lack of investors can mean a lack of growth, which can leave many sole traders running a stagnant business.

Thursday, September 19, 2019

The Impact of Different Life Crises

The Impact of Different Life Crises

Stress and everyday annoyances are not crises. Situations that interfere with normal activity, inspire feelings of panic or defeat, and bring about deep emotional reactions are crises. A crisis is a 'turning point', a crucial time that will make a difference for better or worse. The Chinese word for 'crisis' is made up of two characters: one means despair and the other means opportunity. When a person experiences a crisis, there will be either a negative outcome or a positive one. The direction of the outcome depends on a number of factors, such as the physical and emotional health of the individual, support from others, childhood upbringing, past experience with similar situations, and the duration of the crisis. I propose to focus specifically on the life crises the elderly population faces, notably the loss of a spouse or companion, retirement, and contending with a terminal illness. By examining these crises and their potential to influence the health of an elderly individual, I expect to learn of means by which the elderly may cope so as not to become overwhelmed by the changes. Different life crises have different impacts. In many cases, however, it may be possible to anticipate crises and prepare for them. It may also be useful to recognize the impact of crises that have already occurred so that one can take account of them appropriately. Holmes and Rahe have done some very interesting work in this area with the Social Readjustment Scale, which allocates a number of 'Life Crisis Units' to different events so that one can evaluate them and take action accordingly (Niven 99). While this approach is obviously a simplification of complex situations, using LCUs can give one a useful start in adjusting to life crises. With regard to the elderly population, the events 'death of a spouse', 'personal illness or injury', and 'retirement' rate 100, 53, and 45 LCUs respectively.
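The LCU values cited above lend themselves to a simple tally. The sketch below is illustrative only: it uses just the three events and ratings given in the text, and the function name is invented for this example.

```python
# Life Crisis Units for the three events cited from the
# Holmes-Rahe Social Readjustment Scale (Niven 99).
LCU = {
    "death of a spouse": 100,
    "personal illness or injury": 53,
    "retirement": 45,
}

def total_lcu(events):
    """Sum the Life Crisis Units for the events a person has experienced."""
    return sum(LCU[event] for event in events)

# An elderly person who retires and then loses a spouse in the same period:
print(total_lcu(["retirement", "death of a spouse"]))  # 145
```

A higher total suggests a heavier readjustment burden, which is the sense in which the scale gives "a useful start" in evaluating life crises.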
One of the most powerful stressors in one's life, particularly for the elderly population, is the loss of a loved one or a close relationship through the death of a spouse or companion. In the two years following bereavement, widowed people are more susceptible to illness and physical ailments, and their mortality rate is higher than expected. Bereaved people may be vulnerable to illness in part because, feeling unhappy, they do not sleep well, they stop eating properly, and they consume more drugs and cigarettes.

Wednesday, September 18, 2019

Understanding Haemophilia

Understanding Haemophilia

In the human body, each cell contains 23 pairs of chromosomes, one of each pair inherited through the egg from the mother, and the other inherited through the sperm of the father. Of these chromosomes, those that determine sex are X and Y: females have XX and males have XY. In addition to the information on sex, 'the X chromosomes carry determinants for a number of other features of the body including the levels of factor VIII and factor IX.'1 If the genetic information determining the factor VIII and IX levels is defective, haemophilia results. When this happens, the protein factors needed for normal blood clotting are affected. In males, the single affected X chromosome cannot be compensated for, and so the defect will show. In females, however, only one of the two X chromosomes is likely to be abnormal (unless she is unlucky enough to inherit haemophilia from both sides of the family, which is rare);2 the other chromosome is likely to be normal and can therefore compensate for the defect. There are two types of haemophilia, haemophilia A and B. Haemophilia A is a hereditary disorder in which bleeding is due to deficiency of the coagulation factor VIII (VIII:C).3 In most cases this coagulant protein is reduced, but in a small number of cases the protein is present by immunoassay but defective.4 Haemophilia A is the most common severe bleeding disorder, and approximately 1 in 10,000 males is affected. The most common types of bleeding are into the joints and muscles. Haemophilia is severe if the factor VIII:C level is less than 1%, moderate if the level is 1-5%, and mild if the level is above 5%.5 Those with mild haemophilia bleed only in response to major trauma or surgery, whereas patients with severe haemophilia can bleed in response to relatively mild trauma and will bleed spontaneously. In haemophiliacs, the levels of factor VIII:C are reduced.
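The severity bands quoted above (below 1% severe, 1-5% moderate, above 5% mild) can be expressed as a small classification function. This is only a sketch of the stated thresholds, not a clinical tool; the function name is invented, and the boundary handling at exactly 1% and 5% follows the text's ranges as literally as possible.

```python
def haemophilia_a_severity(factor_viii_c_percent):
    """Classify haemophilia A severity from the factor VIII:C level,
    using the thresholds quoted in the text:
    <1% severe, 1-5% moderate, >5% mild."""
    if factor_viii_c_percent < 1:
        return "severe"
    elif factor_viii_c_percent <= 5:
        return "moderate"
    else:
        return "mild"

print(haemophilia_a_severity(0.5))   # severe
print(haemophilia_a_severity(3))     # moderate
print(haemophilia_a_severity(12))    # mild
```

This mirrors the clinical point that follows: a "mild" classification corresponds to bleeding only after major trauma or surgery, while "severe" corresponds to spontaneous bleeding.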
If the plasma from a haemophiliac person is mixed with that of a normal person, the partial thromboplastin time (PTT) should become normal. Failure of the PTT to become normal is automatically diagnostic of the presence of a factor VIII inhibitor. The standard treatment for haemophiliacs is primarily the infusion of factor VIII concentrates, now heat-treated to reduce the chance of transmission of AIDS.6 In the case of minor bleeding, the factor VIII:C level should be raised to only 25% with one infusion.

Tuesday, September 17, 2019

Review of Nathaniel Hawthorne's The Scarlet Letter

Review of The Scarlet Letter

The novel opens with an explanation of how the romance of The Scarlet Letter came to be presented as a story in its existing form. Having always wanted to be a "literary man", Nathaniel Hawthorne talks about his three-year stint as a Surveyor in the Salem Custom House. Staffed mostly by older gentlemen, the workplace was a very political, Whig-influenced environment charged with Puritan history. After brief character sketches of the personalities in the Custom House, Hawthorne explains how he came upon a special package among the piles of papers. It contained a red cloth with the letter "A" embroidered in gold thread, and a manuscript by Jonathan Pue, the man who once held Hawthorne's job. Finding the story extremely interesting, the author thus retells the story of Hester Prynne from Massachusetts's Puritan history. The first chapter begins with Hester being led to the scaffold where she is to be publicly shamed for having committed adultery. Hester is forced to wear the letter "A" on her gown at all times as punishment for her crime. She has stitched a large scarlet "A" onto her dress with gold thread, giving the letter an air of elegance. Hester carries Pearl, her daughter, with her. On the scaffold she is asked to reveal the name of Pearl's father, but she refuses. In the crowd, Hester recognizes her husband from Amsterdam, Roger Chillingworth. Chillingworth visits Hester after she is returned to the prison. He tells her that he will find out who the man was, and that he will read the truth on the man's heart. He then forces her to promise never to reveal his own identity to anyone else. Hester moves into a cottage bordering the woods, where she and Pearl live in relative solitude. Hester earns her money by doing stitchwork for local dignitaries, but often spends her time helping the poor and sick.
Pearl grows up to be wild, in the sense that she refuses to obey her mother. Roger Chillingworth earns a reputation as a good physician and uses it to get transferred into the same home as Arthur Dimmesdale, an ailing minister. Chillingworth eventually discovers that Dimmesdale is the true father of Pearl, at which point he spends every moment trying to torment the minister. One night Dimmesdale is so overcome with shame about hiding his secret that he walks to the scaffold where Hester was publicly humiliated.

Change over Time Renaissance Essay

The Middle Ages with the crusades, the Renaissance with its humanistic art and literature, and the Protestant Reformation with the splitting of the Catholic Church: these three ages brought about important historical events which we all know and study. The Middle Ages and the crusades came first, then the Renaissance with its humanistic art, and then the Protestant Reformation and the splitting of the Catholic Church.

The Middle Ages were one of the most destructive and important times in our history, marked by the bubonic plague and the crusades. The bubonic plague spread through invasion and unhealthy living, when rats and fleas from infected towns were carried to Europe by boat along with the survivors. It entered Europe in the wake of the Mongol invasions, and it killed over 100 million people. The crusades began their "heavenly war" when Pope Urban urged Christendom to go out to war and recapture the Holy Land.

Around the 1400s, after the crusades and the plague had passed, the kings' work had been done, their subjects were at peace, and feudalism was finished; the cultural preoccupation of the Middle Ages (salvation) gave way to a new artistic movement called humanism. Leonardo da Vinci was one of the most fascinating people of this period, as he was not only an artist but an inventor, a scientist, an engineer and a botanist, among other things; he would often praise the power of the human intellect to achieve things, as was the theme of the time. Galileo Galilei is also fascinating, as he argued that the earth revolves around the sun, thus "defying" the word of God, something no one in the Middle Ages would have done, since people then believed they had to work to get to heaven and must never blaspheme the word of God.
The Protestant Reformation was a reform movement led by Martin Luther and many other early Protestants, who believed that salvation comes through faith rather than through works, unlike in the past two ages. The Ninety-five Theses, written by Martin Luther, were ninety-five ideas on how to change the Christian church to make it sacred and unsecular again. Since the church made only slight changes to its rules, Martin Luther and his followers created Protestantism. Had this happened in the Middle Ages, Martin Luther would have been slain; in the Renaissance, he would have been shunned.

As these passages show, major changes in Europe have shaped the way the world is today. The Middle Ages, the Renaissance and the Protestant Reformation all made the world different, and the crusades, humanistic art and the Ninety-five Theses are very important.

Monday, September 16, 2019

Evolution of the Microprocessor

American University
CSIS 550 History of Computing
Professor Tim Bergin
Technology Research Paper: Microprocessors
Beatrice A. Muganda
AU ID: 0719604
May 3, 2001

EVOLUTION OF THE MICROPROCESSOR

INTRODUCTION

The Collegiate Webster dictionary describes a microprocessor as a computer processor contained on an integrated-circuit chip. In the mid-seventies, a microprocessor was defined as a central processing unit (CPU) realized on an LSI (large-scale integration) chip, operating at a clock frequency of 1 to 5 MHz and constituting an 8-bit system (Heffer, 1986). It was a single component having the ability to perform a wide variety of different functions. Because of their relatively low cost and small size, microprocessors permitted the use of digital computers in many areas where the use of the preceding mainframes, and even minicomputers, would not be practical or affordable (Computer, 1996). Many non-technical people associate microprocessors only with PCs, yet there are thousands of appliances with a microprocessor embedded in them: telephones, dishwashers, microwaves, clock radios, and so on. In these items, the microprocessor acts primarily as a controller and may not be apparent to the user.

The Breakthrough in Microprocessors

The switching units in the computers of the early 1940s were mechanical relays, devices that opened and closed as they did the calculations. Such mechanical relays were used in Zuse's machines of the 1930s. Come the 1950s, the vacuum tubes took over: the Atanasoff-Berry Computer (ABC) used vacuum tubes as its switching units rather than relays.
The switch from mechanical relays to vacuum tubes was an important technological advance, as vacuum tubes could perform calculations considerably faster and more efficiently than relay machines. However, this advance was short-lived, because the tubes could not be made any smaller, had to be placed close to each other, and generated heat (Freiberger and Swaine, 1984). Then came the transistor, which was acknowledged as a revolutionary development. In "Fire in the Valley", the authors describe the transistor as a device that was the result of a series of developments in the applications of physics. The transistor changed the computer from a giant electronic brain into a commodity like a TV set. This innovation is credited to three scientists: John Bardeen, Walter Brattain, and William Shockley. The technological breakthrough of the transistor made possible the minicomputers of the 1960s and the personal computer revolution of the 1970s. However, researchers did not stop at transistors. They wanted a device that could perform more complex tasks, one that could integrate a number of transistors into a more complex circuit; hence the terminology "integrated circuits", or ICs. Because physically they were tiny chips of silicon, they also came to be referred to as chips. Initially, the demand for ICs came mainly from the military and aerospace industries, which were great users of computers and the only industries that could afford them (Freiberger and Swaine, 1984). Later, Marcian "Ted" Hoff, an engineer at Intel, developed a sophisticated chip that could extract data from its memory and interpret the data as an instruction. The term that evolved to describe such a device was "microprocessor". The term "microprocessor" therefore first came into use at Intel in 1972 (Noyce, 1981).
A microprocessor was essentially an extension of the arithmetic and logic IC chips, incorporating more functions into one chip (Freiberger and Swaine, 1984). Today, the term still refers to an LSI single-chip processor capable of carrying out many of the basic operations of a digital computer. In fact, the microprocessors of the late eighties and early nineties were full-scale 32-bit data and 32-bit address systems, operating at clock cycles of 25 to 50 MHz (Heffer, 1986).

What led to the development of microprocessors?

As stated above, microprocessors essentially evolved from mechanical relays to integrated circuits. It is important to illustrate here which aspects of the computing industry led to their development.

(1) Digital computer technology. In the History of Computing class, we studied, throughout the semester, how the computer industry learned to make large, complex digital computers capable of processing more data, and also how to build and use smaller, less expensive computers. Digital computer technology had been growing steadily since the late 1940s.

(2) Semiconductors. Like digital computer technology, semiconductors had also been advancing steadily since the invention of the transistor in the late 1940s. The 1960s saw the integrated circuit develop from just a few transistors to many complicated functions on the same chip.

(3) The calculator industry. This industry seemed to grow overnight during the 1970s, from the simplest four-function calculators to very complex programmable scientific and financial machines.

From all this, one idea became obvious: if there were an inexpensive digital computer, there would be no need to keep designing different, specialized integrated circuits. The inexpensive digital computer could simply be reprogrammed to perform whatever was the latest brainstorm, and there would be the new product (Freiberger and Swaine, 1984).
The development of microprocessors can be attributed to the moment, in the early 1970s, when digital computers and integrated circuits reached the required levels of capability. However, the early microprocessor did not meet all the goals: it was too expensive for many applications, especially those in the consumer market, and it could not hold enough information to perform many of the tasks being handled by the minicomputers of that time.

How a microprocessor works

According to Krutz (1980), a microprocessor executes a collection of machine instructions that tell the processor what to do. Based on the instructions, a microprocessor does three basic things:

• Using its ALU (Arithmetic/Logic Unit), a microprocessor can perform mathematical operations like addition, subtraction, multiplication and division. Modern microprocessors contain complete floating-point processors that can perform extremely sophisticated operations on large floating-point numbers.
• A microprocessor can move data from one memory location to another.
• A microprocessor can make decisions and jump to a new set of instructions based on those decisions.

There may be very sophisticated things that a microprocessor does, but those are its three basic activities. Put simply, it fetches instructions from memory, interprets (decodes) them, and then executes whatever functions the instructions direct. For example, if the microprocessor is capable of 256 different operations, there must be 256 different instruction words; when fetched, each instruction word is interpreted differently from any of the other 255. Each type of microprocessor has a unique instruction set (Short, 1987).

Architecture of a microprocessor

This is about as simple as a microprocessor gets.
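The fetch, decode, and execute steps described above can be sketched as a toy simulator. The four-instruction machine below is entirely hypothetical: its opcodes, accumulator design, and instruction layout are invented for illustration and do not model any real processor's instruction set.

```python
# A toy fetch-decode-execute loop for a hypothetical accumulator machine.
# Each instruction word is an (opcode, operand) pair; the opcodes below
# are invented for this sketch.
LOAD, ADD, STORE, HALT = 0, 1, 2, 3

def run(program, memory):
    """Repeatedly fetch an instruction word, decode its opcode, and
    execute it -- the three basic steps of a microprocessor."""
    acc = 0  # accumulator register
    pc = 0   # program counter
    while True:
        opcode, operand = program[pc]   # fetch
        pc += 1
        if opcode == LOAD:              # decode + execute
            acc = memory[operand]
        elif opcode == ADD:
            acc += memory[operand]
        elif opcode == STORE:
            memory[operand] = acc
        elif opcode == HALT:
            return memory

# Add the values in memory cells 0 and 1 and store the sum in cell 2.
memory = [7, 5, 0]
program = [(LOAD, 0), (ADD, 1), (STORE, 2), (HALT, 0)]
print(run(program, memory))  # [7, 5, 12]
```

Note how each distinct opcode is decoded to a different action, which is the sense in which a processor capable of 256 operations needs 256 distinct instruction words.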
It has the following characteristics:

• an address bus (that may be 8, 16 or 32 bits wide) that sends an address to memory;
• a data bus (that may be 8, 16 or 32 bits wide) that can send data to memory or receive data from memory;
• RD (Read) and WR (Write) lines to tell the memory whether it wants to set or get the addressed location;
• a clock line that lets a clock pulse sequence the processor; and
• a reset line that resets the program counter to zero (or whatever) and restarts execution.

A typical microprocessor, therefore, consists of: logical components, which enable it to function as a programmable logic processor; the program counter, stack, and instruction register, which provide for the management of a program; the ALU, which provides for the manipulation of data; and a decoder and timing and control unit, which specify and coordinate the operation of the other components. The connection of the microprocessor to other units—memory and I/O devices—is done with the address, data, and control buses.

Generations of microprocessors

Microprocessors can be categorized into five generations: first, second, third, fourth, and fifth. Their characteristics are described below.

First generation

The microprocessors introduced in 1971 to 1972 were referred to as first-generation systems. First-generation microprocessors processed their instructions serially—they fetched the instruction, decoded it, then executed it. When an instruction was completed, the microprocessor updated the instruction pointer and fetched the next instruction, performing this sequential drill for each instruction in turn.

Second generation

By the mid-1970s (specifically 1973), enough transistors were available on the IC to usher in the second generation of microprocessor sophistication: 16-bit arithmetic and pipelined instruction processing. Motorola's MC68000 microprocessor, introduced in 1979, is an example. Another example is Intel's 8080.
This generation is defined by overlapped fetch, decode, and execute steps (Computer, 1996). As the first instruction is processed in the execution unit, the second instruction is decoded and the third instruction is fetched. The distinction between the first- and second-generation devices was primarily the use of newer semiconductor technology to fabricate the chips. This new technology resulted in a five-fold increase in instruction execution speed and higher chip densities.

Third generation

The third generation, introduced in 1978, was represented by Intel's 8086 and the Zilog Z8000, which were 16-bit processors with minicomputer-like performance. The third generation came about as IC transistor counts approached 250,000. Motorola's MC68020, for example, incorporated an on-chip cache for the first time, and the depth of the pipeline increased to five or more stages. This generation of microprocessors was different from the previous ones in that all major workstation manufacturers began developing their own RISC-based microprocessor architectures (Computer, 1996).

Fourth generation

As the workstation companies converted from commercial microprocessors to in-house designs, microprocessors entered their fourth generation with designs surpassing a million transistors. Leading-edge microprocessors such as Intel's 80960CA and Motorola's 88100 could issue and retire more than one instruction per clock cycle (Computer, 1996).

Fifth generation

Microprocessors in their fifth generation employed decoupled superscalar processing, and their designs soon surpassed 10 million transistors. In this generation, PCs are a low-margin, high-volume business dominated by a single microprocessor (Computer, 1996).

Companies associated with microprocessors

Overall, Intel Corporation dominated the microprocessor arena, even though other companies like Texas Instruments and Motorola also introduced some microprocessors.
Listed below are the microprocessors that each company created.

(A) Intel

As indicated previously, Intel Corporation dominated microprocessor technology and is generally acknowledged as the company that introduced the microprocessor successfully into the market. Its first microprocessor was the 4004, in 1971. The 4004 took the integrated circuit one step further by locating all the components of a computer (CPU, memory, and input and output controls) on a minuscule chip. It evolved from a development effort for a calculator chip set. Previously, an IC had to be manufactured to fit a special purpose; now one microprocessor could be manufactured and then programmed to meet any number of demands. The 4004 microprocessor was the central component in a four-chip set, called the 4004 Family: the 4001, a 2,048-bit ROM; the 4002, a 320-bit RAM; and the 4003, a 10-bit I/O shift register. The 4004 had 46 instructions, using only 2,300 transistors in a 16-pin DIP. It ran at a clock rate of 740 kHz (eight clock cycles per CPU cycle of 10.8 microseconds)—the original goal was 1 MHz, to allow it to compute BCD arithmetic as fast (per digit) as a 1960s-era IBM 1620 (Computer, 1996).

Following in 1972 was the 4040, an enhanced version of the 4004 with an additional 14 instructions, 8K program space, and interrupt abilities (including shadows of the first 8 registers). In the same year, the 8008 was introduced. It had a 14-bit PC. The 8008 was intended as a terminal controller and was quite similar to the 4040. The 8008 increased the 4004's word length from four to eight bits and doubled the volume of information that could be processed (Heath, 1991).

In April 1974, the 8080, the successor to the 8008, was introduced. It was the first device with the speed and power to make the microprocessor an important tool for the designer. It quickly became accepted as the standard 8-bit machine. It was the first Intel microprocessor announced before it was actually available.
It represented such an improvement over existing designs that the company wanted to give customers adequate lead time to design the part into new products. The use of the 8080 in personal computers and small business computers was initiated in 1975 by MITS's Altair microcomputer. A kit selling for $395 enabled many individuals to have computers in their own homes (Computer, 1996).

Following closely, in 1976, was the 8048, the first 8-bit single-chip microcomputer. It was designed as a microcontroller rather than a microprocessor—low cost and small size were the main goals. For this reason, data was stored on-chip, while program code was external. The 8048 was eventually replaced by the very popular but bizarre 8051 and 8052 (available with on-chip program ROMs). While the 8048 used 1-byte instructions, the 8051 had a more flexible 2-byte instruction set, eight 8-bit registers, plus an accumulator A. Data space was 128 bytes and could be accessed directly or indirectly by a register, plus another 128 bytes above that in the 8052, which could only be accessed indirectly (usually for a stack) (Computer, 1996).

In 1978, Intel introduced its high-performance, 16-bit MOS processor—the 8086. This microprocessor offered power, speed, and features far beyond the second-generation machines of the mid-70s. It is said that the personal computer revolution did not really start until the 8088 processor was created. This chip became the most ubiquitous in the computer industry when IBM chose it for its first PC (Freiberger and Swaine, 1984).

In 1982, the 80286 (also known as the 286) came next; it was the first Intel processor that could run all the software written for its predecessor, the 8088. Many novices were introduced to desktop computing with a "286 machine," and it became the dominant chip of its time. It contained 130,000 transistors. In 1985, the first multitasking chip, the 386 (80386), was created.
This multitasking ability allowed Windows to do more than one function at a time. This 32-bit microprocessor was designed for applications requiring high CPU performance. In addition to providing access to the 32-bit world, the 80386 addressed two other important issues: it provided system-level support to systems designers, and it was object-code compatible with the entire family of 8086 microprocessors (Computer, 1996). The 80386 was made up of six functional units: (i) execution unit, (ii) segment unit, (iii) page unit, (iv) decode unit, (v) bus unit, and (vi) prefetch unit. The 80386 had 34 registers, divided into such categories as general-purpose registers, debug registers, and test registers. It had 275,000 transistors (Noyce, 1981).

The 486 (80486) generation of chips really advanced the point-and-click revolution. It was also the first chip to offer a built-in math coprocessor, which gave the central processor the ability to do complex math calculations. The 486 had more than a million transistors. In 1993, when Intel lost a bid to trademark the name 586, to protect its brand from being copied by other companies it coined the name Pentium for its next generation of chips, and there began the Pentium series—Pentium Classic, Pentium II, III and, currently, 4.

(B) Motorola

The MC68000 was the first 32-bit microprocessor introduced by Motorola, in the early 1980s. This was followed by higher levels of functionality on the microprocessor chip in the MC68000 series. For example, the MC68020, introduced later, had three times as many transistors, was about three times as big, and was significantly faster. The Motorola 68000 was one of the second-generation systems, developed in 1973, and was known for its graphics capabilities. The Motorola 88000 (originally named the 78000) is a 32-bit processor, one of the first load-store CPUs based on a Harvard architecture (Noyce, 1981).
(C) Digital Equipment Corporation (DEC)

In March 1974, Digital Equipment Corporation (DEC) announced it would offer a series of microprocessor modules built around the Intel 8008.

(D) Texas Instruments (TI)

A precursor to these microprocessors was the 16-bit Texas Instruments 1900 microprocessor, which was introduced in 1976. The Texas Instruments TMS370 is similar to the 8051, another of TI's creations. The only difference between the two was the addition of a B accumulator and some 16-bit support.

Microprocessors Today

Technology has been changing at a rapid pace. Every day a new product is made to make life a little easier. The computer plays a major role in the lives of most people. It allows a person to do practically anything. The Internet enables the user to gain knowledge at a much faster pace than researching through books. The portion of the computer that allows it to do more work than a simple computer is the microprocessor. The microprocessor has brought electronics into a new era and caused component manufacturers and end users to rethink the role of the computer. What was once a giant machine attended by specialists in a room of its own is now a tiny device conveniently transparent to users of automobiles, games, instruments, office equipment, and a large array of other products.

From their humble beginnings 25 years ago, microprocessors have proliferated into an astounding range of chips, powering devices ranging from telephones to supercomputers (PC Magazine, 1996). Today, microprocessors for personal computers get widespread attention—and have enabled Intel to become the world's largest semiconductor maker. In addition, embedded microprocessors are at the heart of a diverse range of devices that have become staples of affluent consumers worldwide. The impact of the microprocessor, however, goes far deeper than new and improved products.
It is altering the structure of our society by changing how we gather and use information, how we communicate with one another, and how and where we work. Computer users want fast memory in their PCs, but most do not want to pay a premium for it.

Manufacturing of microprocessors

Economical manufacturing of microprocessors requires mass production. Microprocessors are constructed by depositing and removing thin layers of conducting, insulating, and semiconducting materials in hundreds of separate steps. Nearly every layer must be patterned accurately into the shape of transistors and other electronic elements. Usually this is done by photolithography, which projects the pattern of the electronic circuit onto a coating that changes when exposed to light. Because these patterns are smaller than the shortest wavelength of visible light, short-wavelength ultraviolet radiation must be used. Microprocessor features are so small and precise that a single speck of dust can destroy the microprocessor. Microprocessors are made in filtered clean rooms where the air may be a million times cleaner than in a typical home (PC World, 2000).

Performance of microprocessors

The number of transistors available has a huge effect on the performance of a processor. As seen earlier, a typical instruction in a processor like the 8088 took 15 clock cycles to execute. Because of the design of the multiplier, it took approximately 80 cycles just to do one 16-bit multiplication on the 8088. With more transistors, much more powerful multipliers capable of single-cycle speeds become possible.

More transistors also allow a technology called pipelining. In a pipelined architecture, instruction execution overlaps. So even though it might take 5 clock cycles to execute each instruction, there can be 5 instructions in various stages of execution simultaneously. That way it looks like one instruction completes every clock cycle (PC World, 2001).
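The arithmetic behind that pipelining claim can be made concrete. The following rough sketch uses the 5-stage figure from the text and an arbitrary 1,000-instruction program, ignoring stalls, branches, and memory waits:

```python
# Cycle counts for serial vs. pipelined execution, per the text:
# each instruction takes 5 cycles, but a 5-stage pipeline overlaps
# them so one instruction completes per cycle once the pipe is full.

def total_cycles(n_instructions, stages=5, pipelined=True):
    if pipelined:
        return stages + (n_instructions - 1)  # fill the pipe, then 1 per cycle
    return stages * n_instructions            # strictly one after another

serial = total_cycles(1000, pipelined=False)
overlapped = total_cycles(1000)
print(serial, overlapped)  # 5000 1004
```

For long instruction streams the ratio approaches the stage count, which is why pipelining "looks like" one instruction per clock even though each instruction still takes five.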
Many modern processors have multiple instruction decoders, each with its own pipeline. This allows multiple instruction streams, which means more than one instruction can complete during each clock cycle. This technique can be quite complex to implement, so it takes lots of transistors.

The trend in processor design has been toward full 32-bit ALUs with fast floating point processors built in and pipelined execution with multiple instruction streams. There has also been a tendency toward special instructions (like the MMX instructions) that make certain operations particularly efficient, as well as the addition of hardware virtual memory support and L1 caching on the processor chip. All of these trends push up the transistor count, leading to the multi-million-transistor powerhouses available today. These processors can execute about one billion instructions per second! (PC World, 2000)

With all the different types of Pentium microprocessors, what is the difference? Three basic characteristics stand out:

• Instruction set: the set of instructions that the microprocessor can execute.
• Bandwidth: the number of bits processed in a single instruction.
• Clock speed: given in megahertz (MHz), the clock speed determines how many instructions per second the processor can execute.

In addition to bandwidth and clock speed, microprocessors are classified as being either RISC (reduced instruction set computer) or CISC (complex instruction set computer).

Other uses of microprocessors

There are many uses for microprocessors in the world today. Most appliances found around the house are operated by microprocessors. Most modern factories are fully automated—that means that most jobs are done by a computer. Automobiles, trains, subways, planes, and even taxi services require the use of many microprocessors. In short, there are microprocessors everywhere you go.
Another common place to find microprocessors is a car, especially a sports car. There are numerous uses for a microprocessor in cars. First of all, it controls the warning LED signs. Whenever there is a problem (low oil, for example), detectors tell it that the oil is below a certain amount, and it starts blinking the LED until the problem is fixed. Another use is in the suspension system, where a processor controls the amount of pressure applied to keep the car level. During turns, a processor slows down the wheels on the inner side of the curve and speeds them up on the outside to keep the speed constant and make a smooth turn.

An interesting story appeared in the New York Times dated April 16 that goes to show there is no limit to what microprocessors can do, and that researchers and scientists are not stopping at the current uses of microprocessors. The next time the milk is low in the refrigerator, the grocery store may deliver a new gallon before it is entirely gone. Masahiro Sone, who lives in Raleigh, N.C., has won a patent for a refrigerator with an inventory processing system that keeps track of what is inside and what is about to run out, and can ring up the grocery store to order more (NY Times, 2001).

Where is the industry of microprocessors going?

Almost immediately after their introduction, microprocessors became the heart of the personal computer. Since then, the improvements have come at an amazing pace. The 4004 ran at 108 kHz—that's kilohertz, not megahertz—and processed only 4 bits of data at a time. Today's microprocessors and the computers that run on them are thousands of times faster. Effectively, they've come pretty close to fulfilling Moore's Law (named after Intel cofounder Gordon Moore), which states that the number of transistors on a chip will double every 18 months or so. Performance has increased at nearly the same rate (PC Magazine, 1998). Can the pace continue?
Well, nothing can increase forever. But according to Gerry Parker, Intel's executive vice president in charge of manufacturing, "we are far from the end of the line in terms of microprocessor performance. In fact, we're constantly seeing new advances in technology, one example being new forms of lithography that let designers position electronic components closer and closer together on their chips. Processors are created now using a 0.35-micron process. But next year we'll see processors created at 0.25 microns, with 0.18 and 0.13 microns to be introduced in the years to come" (PC Magazine, 1998).

However, it's not just improvements in lithography and density that can boost performance. Designers can create microprocessors with more layers of metal tying together the transistors and other circuit elements. The more layers, the more compact the design. But these ultracompact microprocessors are also harder to manufacture and validate. New chip designs take up less space, resulting in more chips per wafer. The original Pentium (60/66 MHz) was 294 square millimeters, then it was 164 square millimeters (75/90/100 MHz), and now it's 91 square millimeters (133- to 200-MHz versions) (PC Magazine, 1998).

When will all this end? Interestingly, it may not be the natural limits of technology that eventually overturn Moore's Law. Instead, it's more likely to be the cost of each successive generation. Every new level of advancement costs more, as microprocessor development is a hugely capital-intensive business. Currently, a fabrication plant with the capacity to create about 40,000 wafers a month costs some $2 billion. And the rapid pace of innovation means equipment can become obsolete in just a few years. Still, there are ways of cutting some costs, such as converting from today's 8-inch silicon wafers to larger, 300-mm (roughly 12-inch) wafers, which can produce 2.3 times as many chips per wafer as those now in use.
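That 2.3x figure can be sanity-checked with simple geometry, since chips per wafer scales roughly with wafer area (an 8-inch wafer is 203.2 mm across; the function name here is just illustrative):

```python
import math

def wafer_area(diameter_mm):
    # area of a circular wafer, ignoring edge exclusion zones
    return math.pi * (diameter_mm / 2) ** 2

ratio = wafer_area(300) / wafer_area(8 * 25.4)  # 300 mm vs. 8-inch wafer
print(round(ratio, 2))  # 2.18
```

Area alone gives about 2.18x; the slightly higher quoted figure is plausible because larger wafers also lose proportionally less area to partial dies at the edge.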
Moving to 300-mm wafers will cost Intel about $500 million in initial capital. Still, nothing lasts forever. As Parker notes, "the PC industry is built on the assumption that we can get more and more out of the PC with each generation, keep costs in check, and continue adding more value. We will run out of money before we run out of technology. When we can't hold costs down anymore, then it will be a different business" (PC Magazine, 1998).

At the beginning of last year, the buzz was about the PlayStation 2 and the Emotion Engine processor that would run it. Developed by Sony and Toshiba, the high-tech processor, experts predicted, would offer unprecedented gaming power and, more importantly, could provide the processing power for the PlayStation 2 to challenge cheap PCs as the entry-level device of choice for home access to the Web. The PlayStation 2 is equipped with the 295-MHz MIPS-based Emotion Engine, Sony's own CPU designed with Toshiba Corp., a 147-MHz graphics processor that renders 75 million pixels per second, a DVD player, an IEEE 1394 serial connection, and two USB ports. Sony will use DVD discs for game titles and gives consumers the option of using the product for gaming, DVD movie playing, and eventually Web surfing (PC World, 2000).

Instead of catching up on the news via radio or a newspaper on the way to work, commuters may soon be watching it on a handheld computer or cell phone. Early in January this year, Toshiba America Electronic Components announced its TC35273XB chip. The chip has 12 Mb of integrated memory and an encoder and decoder for MPEG-4, an audio-video compression standard. According to Toshiba, the integrated memory is what sets this chip apart from others. With integrated memory, the chip consumes less power, making it a good fit for portable gadgets.
This chip is designed specifically to address the issue of battery life, which can be very short with portable devices. The chip will have a RISC processor at its core, running at a clock speed of 70 MHz (PC World, 2000). Toshiba anticipates that samples of this chip will be released to manufacturers in the second quarter, and mass production will follow in the third quarter. Shortly after this release, new handheld computers and cell phones using the chip and offering streaming media are expected (CNET News).

It was reported in CNET News that in February this year, IBM started a program to use the Internet to speed custom-chip design, bolstering its unit that makes semiconductors for other companies. IBM, one of the biggest makers of application-specific chips, would set up a system so that chip designs are placed in a secure environment on the Web, where a customer's design team and IBM engineers can collaborate on the blueprints and make changes in real time. Designing custom chips, which are used to provide unique features that standard processors don't offer, requires time-consuming exchanges of details between the clients, who provide a basic framework, and the IBM employees, who do the back-end work. Using the Internet will speed the process and make plans more accurate. IBM figures that since its customers ask for better turnaround time and better customer satisfaction, this would be one way to meet those demands. As a pilot program, this service was to be offered to a set of selected customers initially, and then would include customers who design the so-called system-on-a-chip devices that combine several functions on one chip (CNET News).

A new microprocessor unveiled in February 2000 by Japan's NEC offers high-capacity performance while consuming only small amounts of power, making it ideal for use in mobile devices. This prototype could serve as the model for future mobile processors.
The MP98 processor contains four microprocessors on the same chip that work together in such a way that they can be switched on and off depending on the job at hand. For example, a single processor can be used to handle easy jobs, such as data entry through a keypad, while more can be brought online as the task demands, with all four working on tasks such as processing video. This gives designers of portable devices the best of both worlds—low power consumption and high capacity (PC World, 2000). However, it should be noted that the idea of putting several processors together on a single chip is not new, as both IBM and Sun Microsystems have developed similar devices. The difference is that the MP98 is the first working example of a "fine grain" device that offers better performance. Commercial products based on this technology are likely to be seen around 2003 (PC World, 2000).

In PC World, it was reported that last September a Japanese dentist received U.S. and Japanese patents for a method of planting a microchip into a false tooth. The one-chip microprocessor embedded in a plate denture can be detected using a radio transmitter-receiver, allowing its owner to be identified. This is useful in senior citizens' homes, where all dentures are usually collected from their owners after meals, washed together, and returned. In such a case, it is important to identify all the dentures so they can be given back to their correct owners without any mistake (PC World, 2000).

In March this year, Advanced Micro Devices (AMD) launched its 1.3-GHz Athlon processor. Tests on this processor indicated that its speed surpassed Intel's 1. GHz Pentium 4. The Athlon processor has a 266-MHz front-side bus that works with systems that use 266-MHz memory. The price starts from $2,988 (PC World, 2001). Intel's Pentium 4, which was launched in late 2000, is designed to provide blazing speed—especially in handling multimedia content.
Dubbed the Intel NetBurst microarchitecture, it is designed to speed up applications that send data in bursts, such as streaming media, MP3 playback, and video compression. Even before the dust had settled on NetBurst, Intel released its much-awaited 1. GHz Pentium 4 processor on Monday, April 23. It is said to be the company's highest-performance microprocessor for desktops, currently priced at $325 in 1,000-unit quantities. The vice president and general manager of Intel was quoted as saying, "the Pentium 4 processor is destined to become the center of the digital world. Whether encoding video and MP3 files, doing financial analysis, or experiencing the latest internet technologies—the Pentium 4 processor is designed to meet the needs of all users" (PC World, 2001).

Gordon Moore, co-founder of Intel, announced over thirty years ago that the number of transistors that can be placed on a silicon chip would double every two years. Intel maintains that this has remained true since the release of its first processor, the 4004, in 1971. The competition between Intel and AMD to determine who can produce the fastest and smallest processor continues. In fact, Intel Corp. predicts that PC chips will climb to more than 10 GHz from today's 1-GHz standard by the year 2011. However, researchers are paying increasing attention to software. That's because new generations of software, especially computing-intensive user interfaces, will call for processors with expanded capabilities and performance.
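The doubling rule cited here is easy to turn into numbers. Below is a sketch of the idealized curve, seeded with the 4004's 2,300 transistors mentioned earlier in the text; real product counts (such as the 386's 275,000 transistors) drift from this ideal line, so this is a projection, not history:

```python
# Idealized Moore's Law projection: one doubling per fixed interval,
# starting from the Intel 4004 (2,300 transistors, 1971). The 18-month
# interval is the figure quoted earlier in this article.

def projected_transistors(year, base=2300, base_year=1971, months_per_doubling=18):
    doublings = (year - base_year) * 12 / months_per_doubling
    return base * 2 ** doublings

# 1974 is 36 months on: exactly two doublings, so 2,300 * 4 = 9,200.
print(round(projected_transistors(1974)))  # 9200
```

Changing `months_per_doubling` to 24 gives the two-year phrasing attributed to Moore in the paragraph above; the exponential shape is the same either way.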

Sunday, September 15, 2019

Bloodless Surgery

Bloodless Surgery
Michael Jones

Abstract

There have been many court cases that have forced parents who deny their child a blood transfusion to accept one. Most times the courts will side with the parents, but if their decision is life-threatening, the courts side with the hospital. Most times it is for religious reasons that parents don't want their child to have a blood transfusion. There are many risks associated with blood transfusion, and some parents don't want to take that risk. Some of the diseases you can get are hepatitis B and hepatitis C. HIV and AIDS can also be contracted through blood transfusion. It can even lead to the death of a recipient. Is it ethical for parents to choose for their kids not to have a blood transfusion? There is an alternative to blood transfusion. There are many tools and techniques to prevent the need for blood transfusion, and many doctors today are moving more toward bloodless surgery. The growth of bloodless surgery can be largely attributed to the number of Jehovah's Witness patients. It is beneficial for both the patient and the hospital: more cost-effective, with faster recovery. I will talk about how preoperative planning is important for a successful bloodless surgery. I will touch on techniques like cell salvaging and normothermia, and introduce you to a remarkable tool called the Cyber-Knife. I will show how Jehovah's Witnesses and their Hospital Liaison Committee helped my family when it came to bloodless surgery.

Blood transfusions have been known to have many dangers. In most cases the cons outweigh the pros, causing many people to consider alternative measures. Today one of the most innovative and effective alternatives is bloodless surgery. In the event that you are faced with such a challenging yet important decision as surgery, allow me to enlighten you on some of the statistics, procedures, and benefits of bloodless surgery to assist you in making an informed decision.
We will look at some of the dangers that are associated with blood transfusion alongside the modern methods, equipment, and benefits of bloodless surgery. We will see how these procedures have progressed over the years, and how the increase in the use of bloodless surgery can be attributed to a small group of people known as Jehovah's Witnesses. Witnesses, as patients, will not accept blood transfusions under any circumstances. This has caused doctors to look for other solutions.

Among the reasons to consider bloodless surgery are the risks associated with blood transfusion. Transfusions have been used for over fifty years in clinical medicine. Within those fifty years it has become apparent that risks such as infectious viruses, bacterial infections, and even death are linked to blood transfusion. Infectious viruses include, but are not limited to, blood-borne pathogens like hepatitis B and C. The blood bank reports that "for screened units of blood in 2007, 1 in 137,000 had hepatitis B, fewer than 1 in 1,000,000 for hepatitis C" (Nagarsheth and Sasan, 2009). Blood transfusions have also been associated with a higher incidence of bacterial infections: "Bacterial infection was 2 percent for non-transfusion patients, 15 percent for those with up to 2 units of red blood cells transfused, 22 percent with three to five units of blood, and 29 percent for patients transfused with 6 or more units of blood" (Nagarsheth and Sasan, 2009). The more blood received in a transfusion, the more likely you are to get a postoperative infection. Many people today receive multiple transfusions, and transfusion, in time, develops allergenic immunization, which limits the supply of compatible blood. These numbers may seem like lottery chances, but why take the chance?

Ultimately there is death. Death is not a foreign outcome of blood transfusion. Transfusion-related acute lung injury, or TRALI, was first reported in the early 1990s. It is a life-threatening reaction following a blood transfusion.
TRALI is now known to cause many deaths each year. However, experts believe that the number of deaths is much higher than what is reported in relation to TRALI, because many doctors are unaware of the symptoms. The cause of such a reaction is not conclusive. New Scientist states, "The blood that causes TRALI appears to come primarily from people who have multiple transfusions." TRALI is the top cause of blood transfusion death in the world.

Jehovah's Witnesses have benefited greatly from their faithful course. Their reason for not having blood transfusions is not the negative outcomes that derive from them, but their devout belief in God and the Bible. They obey scriptures such as Acts 15:20, which says to "abstain from blood," and Leviticus 7:26, "you must not eat any blood." Jehovah's Witnesses respect God's authority and have taken their stand against blood transfusions, regardless of the outcome. If you do not agree with such a point of view, let's examine the benefits of bloodless surgery and its advancing technology.

Over the years the tools and techniques of surgery without blood transfusion have improved greatly. One tool, or technique, used for surgeries with a lot of blood loss is called cell salvage. This involves recovering the blood lost by a patient, cleaning it, and putting it back into the patient, continuously throughout the surgery. "Technological advances have increased system automation... offering higher processing speeds and better end product" (Goodnough, 2003). Cell salvaging is also cost-effective for the hospital and the patient. If a surgery involves a lot of blood loss, it is cheaper to use cell salvage than the units of blood used in a transfusion. The recovery time is also faster, reducing the time and money a patient spends at a hospital. How can blood loss during surgery be lowered in order to lessen the chance of needing a blood transfusion?
The key is preoperative planning. The first thing to consider is the amount of red blood cells (RBCs) that can be lost before a transfusion is needed; this is called the transfusion threshold. Another step that can be taken before surgery is to "increase the patient's RBC mass" (Watch Tower Bible and Tract Society, 2004). RBC mass can be increased by giving the patient iron, or by erythropoietin (EPO), a protein hormone produced by the kidneys. "This synthetic hormone acts like the natural erythropoietin found in our kidneys and stimulates the bone marrow to send new, fresh red cells into the bloodstream" (Watchtower.org). EPO is normally given 10 to 20 days before surgery. Increasing the RBC mass and lowering the transfusion threshold together allow for an even greater acceptable amount of blood loss.

Normothermia is a technique used to maintain the patient's normal body temperature during surgery, which helps keep the blood flowing and clotting properly. Managing the patient's body temperature throughout the entire procedure reduces the shock to the body, which in turn reduces the chance of infection. The patient can be warmed by a thermal suit or by a machine that infuses warm fluid into the body. The patient's position can also help reduce blood loss during surgery: local venous pressure changes depending on where the surgical field sits relative to the heart, and lower pressure goes hand in hand with blood saved.

Stanford University Medical Center is a pioneer in the use of bloodless techniques in neurosurgery. "Without sawing into the skull or so much as cutting the scalp, they are curing patients whose brain and spine tumors were not long ago considered a death sentence" (Fillon, 1997). These surgeries are possible with Stanford University's computer-mediated stereotactic radiosurgery system, known as the CyberKnife.
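To make the transfusion-threshold idea concrete, here is a small sketch using a common textbook approximation for maximum allowable blood loss. This formula and the example numbers are illustrations of the principle, not part of the source and not clinical guidance:

```python
def max_allowable_blood_loss(weight_kg, hct_initial, hct_threshold, ebv_per_kg=70):
    """Estimate the blood loss (mL) a patient can tolerate before
    reaching the transfusion threshold.

    Common textbook approximation:
        MABL = EBV * (Hct_initial - Hct_threshold) / Hct_initial
    where EBV (estimated blood volume) is roughly 70 mL per kg
    of body weight for an adult. Illustrative only.
    """
    ebv = weight_kg * ebv_per_kg  # estimated total blood volume, mL
    return ebv * (hct_initial - hct_threshold) / hct_initial

# A 70 kg patient starting at 40% hematocrit with a 30% threshold:
baseline = max_allowable_blood_loss(70, 0.40, 0.30)   # 1225 mL

# Raising the starting hematocrit (e.g., with iron or EPO) and
# accepting a lower threshold greatly increases the margin:
boosted = max_allowable_blood_loss(70, 0.45, 0.25)    # about 2178 mL

print(round(baseline), round(boosted))
```

The second call shows in numbers what the paragraph above says in words: boosting RBC mass before surgery and lowering the threshold together widen the acceptable blood loss.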
The CyberKnife is essentially a robotic x-ray gun that shoots small doses of radiation into the tumor from many different directions. This kills the diseased tissue without overexposing other parts of the body to radiation. Its robotic arm locks the radiation beam onto the tumor and constantly readjusts its aim in response to the patient's natural small movements.

To help doctors provide treatment without blood transfusions, Jehovah's Witnesses have developed a helpful liaison service. Presently, more than 1,400 Hospital Liaison Committees worldwide are equipped to provide doctors and researchers with medical literature from a database of over 3,000 articles related to bloodless medicine and surgery. Not only Jehovah's Witnesses, but all patients today, are less likely to be given unnecessary transfusions because of the work of these committees. In many surgeries where doctors felt a transfusion was needed, the liaison committee has provided them with medical literature showing how effective EPO can be. Some did not think it would work fast enough to replace the amount of blood needed, but a number of cases have shown how quickly EPO gets results: "In one instance, on the very same day after EPO was administered, the count of new red cells was already four times normal!" (Watchtower.org).

My mother and father saw firsthand how effective the liaison committee and bloodless surgery can be. When my brother was 16 years old, we found out that he had cancer in his knee. At that time there were no hospitals on Staten Island with a committee or a doctor who would perform bloodless surgery, so the hospital liaison committee located Mount Sinai Hospital, which had one doctor who did. My brother was put on EPO, and he was the only patient there who was. For all of the doctors, this was their first time using EPO, or even doing bloodless surgery.
They were extremely surprised at how much better he was doing than the other kids who were having blood transfusions. "It was really sad to see all those little kids and babies having blood pumped into them," my mother said when I asked her about my brother's surgery. She said, "Junior was the only kid that was up walking around; all the other kids was in their beds looking like they was about to die." Two things happened to my brother: he lost all his hair because of the chemotherapy, and he lost his leg, because that was the only way they could remove all the cancer.

It is reasonable to conclude that although blood transfusion has been around for many years, with all its side effects, such as infectious viruses, bacterial infections, and even death, it is quickly becoming a thing of the past. With a strong scriptural basis and its practical benefits, Jehovah's Witnesses have been the main reason for the growth of bloodless surgery. Today hospitals across the world have implemented bloodless programs to meet the growing demand, and doctors have developed many techniques and tools to succeed at bloodless surgery: techniques such as cell salvage and blood recovery, and tools like the CyberKnife. These have allowed for more cost-effective surgeries, faster recovery, and a lower chance of infections and viruses. If surgery is ever something you have to undergo, I hope that I have persuaded you to make the right decision.

References

Cantrell, S. (2010). New normothermia measure heats up patient-temperature management. Healthcare Purchasing News, 34(3), 22-29. Retrieved from EBSCOhost.

Fillon, M. (1997). Bloodless surgery. Popular Mechanics, 174(1), 48. Retrieved from EBSCOhost.

Goodnough, L., & Shander, A. (2003). Evolution in alternatives to blood transfusion. Hematology Journal, 4(2), 87. Retrieved from EBSCOhost.

Nagarsheth, N. P., & Sasan, F. (2009). Bloodless surgery in gynecologic oncology.
Mount Sinai Journal of Medicine, 76(6), 589-597. doi:10.1002/msj.20146

Watch Tower Bible and Tract Society of Pennsylvania. (2004). Transfusion Alternatives, Document Series. Watchtower.org