(’12) How far is it acceptable for technology to be used only for financial benefit?


(Introduction) As the world undergoes rapid urbanization and industrialization, competition between firms is fierce. Most focus on outdoing rivals and maximizing profits through a heavy emphasis on productivity and the development of new technology to gain a competitive edge. Such an approach works on the basis of short-term, immediate *financial* benefits, causing firms to neglect the importance of producing technology that actually contributes to society, and leading many to say that technology should not be used for financial benefit. However, I feel that it is acceptable for technology to be used for financial benefit, as long as it is not used solely for profit and remains beneficial to mankind. It is only unacceptable when it results in detrimental consequences, such as damage to the environment or society.

(Refute 1) Companies obtain patents as a reward for their efforts in researching and developing new technology. However, some unscrupulous businessmen may see this as an opportunity to develop or exploit technology in ways that harm the environment or society. For instance, Monsanto, an American firm specialising in agrochemicals and agricultural biotechnology, developed the herbicide glyphosate, obtained a patent for it, and saw sales soar. Seeing the profits made, it went on to develop “Roundup Ready” crops and sold them to farmers to earn more money. These crops encouraged the growth of herbicide-resistant “superweeds” that threaten to harm the environment. In another example, in the 1984 Bhopal disaster, Union Carbide India Limited produced pesticides from methyl isocyanate in order to cut manufacturing costs and gain more profit. The resulting disaster not only polluted the air, water and soil, but also killed thousands, while many others suffered blindness, organ failure and other bodily harm. These examples show how firms have harmed society and the environment in their pursuit of financial profit. Technology, though beneficial in that it can reduce costs and increase the output of goods, brings about detrimental impacts when it is exploited by those who use it, and very often such instances occur when monetary benefits enter the picture. Hence, it is unacceptable for technology to be used for financial benefit.

(Supporting 1) However, some technology used for profit actually helps to sustain the environment, and is hence acceptable. Toyota’s hybrid models are a good instance of technology that, while used for profit, helps to conserve the environment. Hybrid cars are a substantial source of profit for Toyota Europe: it made losses between 2008 and 2011, but turned things around by cutting labor costs while consolidating production of hybrid models such as the Auris and Yaris. In its most recently completed financial year, 2014, Toyota Europe reported earnings up 75% from the year before. At the same time, hybrid cars are environmentally friendly. Toyota calculates that as of 2015, its hybrid vehicles have resulted in approximately 58 million fewer tons of carbon dioxide emissions than would have been emitted by gasoline-powered vehicles of similar size and driving performance. Hybrids have also improved air quality, especially in urban areas, as they generally produce far fewer smog-forming emissions than non-hybrids. In another example, Hanergy Thin Film Power Group Ltd., the world’s biggest solar company by market value, climbed to $36 billion in value and saw its profits surge 64% in 2014 following the sale of five power projects in China. Hanergy sold the photovoltaic power plants for 1.42 billion yuan ($229 million), at a net gain of about 778 million yuan, according to its statement. Solar energy offers power without pollution: in its basic form it needs no distribution grid, because it comes down from the sky, and it is sustainable, as the sun’s light will not run out. In such cases, technology can acceptably be used for financial benefit.

(Refute 2) However, when technology used for profit worsens income inequality, as companies continue to charge higher prices, it is no longer acceptable. Communication technologies such as computers and smartphones, used widely by people all over the world today, are usually produced by a few rich companies which develop and sell them. In pursuit of financial benefit, these patent-holders charge users sky-high prices. As people become ever more reliant on such technology, they have no choice but to accept the prices. Also, with consumerism on the rise, consumers themselves desire to own these new technologies and are willing to buy them at higher prices, resulting in a widening income gap as the rich companies grow richer. For instance, Apple reported a net profit of US$18 billion in its first fiscal quarter, a major part of which came from iPhone sales; it earned a margin of 39.9% per product.

(Supporting 2) However, I beg to differ. Communication technology developed for profit has also increased consumers’ choice, convenience and pleasure, and is hence acceptable. In fact, precisely because these technologies generate profits, companies are willing to keep investing in and developing them to produce better products with more functions, which in turn further increases the pleasure they bring us. Using technology for financial benefit at least makes more new technology available for people to enjoy. If Apple did not make profits, we might not continue to get new versions of iPhones and iPads. In another example, before China’s economic reform in 1978, all companies were state-owned. There was absolutely no private ownership, meaning that any money earned by a firm had to be given to the government. Managers and workers received their salaries from the government regardless of how much profit they had contributed. The result of this “zero-profit margin” was that firms were reluctant to innovate or improve their products, leaving China’s high-tech sector essentially stagnant in those years.

(Supporting 3) Furthermore, some technologies used for financial benefit yield secondary benefits in other areas. Some technology used for profit has actually reduced poverty, and is hence acceptable. Poverty usually exists as a vicious cycle that traps the poor within it generation after generation. Many other problems are closely related to poverty, one of them being malnutrition: today, an estimated 800 million people around the world suffer from malnutrition. Technology provides a possible solution to the problem of poverty, even though the primary drive for companies to develop these technologies is financial benefit. For example, more than 20 “Taobao Villages” have emerged in China. The online business platform developed by Alibaba provides poor villagers with an exit from poverty. By selling handicrafts online and tapping into a lucrative consumer market, the villagers no longer rely for their livelihood on physically demanding agricultural work that produces a meager income. Each “Taobao Village” generates 10 million yuan per year, and people now enjoy a remarkably different standard of living from the past. Granted, the reason Ma Yun, the founder of Alibaba, developed this online selling platform was financial benefit; the “rents” that every shop owner pays have made him the second richest man in China. Yet, besides capturing profit for the technology’s owner, Taobao helps to improve rural economies across China. In another example, the Village Phone Programme in rural Bangladesh also helps alleviate poverty. Phones are given to selected women in a village, and these “village phone ladies” become entrepreneurs, earning money by providing phone services to other villagers. This shows how infocomm technology can be designed and developed into an income-generating activity.
At the same time, Grameen Telecom, as the phone service operator, also benefits from increased profits through villagers’ calling activity. From this, we can see that while actions taken by private firms are usually driven by profit motives, if those actions in turn help mitigate societal issues like poverty, the result is a win-win situation that is desirable to the community.




‘Science is based as much on theory as on fact.’ Is this a fair comment?


(Refute 1) Some argue that science is based only on facts. In science, ‘facts’ are observable phenomena in particular situations in the natural world. The word ‘science’ derives from the Latin scientia, knowledge based on demonstrable (observation) and reproducible (experiment) data. Scientists craft their experiments, record their results and draw conclusions based on what they observe, or, in essence, facts. Gregor Mendel’s pea experiment is a prime example. Mendel set out to study 34 subspecies of the common garden pea, a vegetable noted for its many variations in color, length, flower and leaves, and for the way each variation appears clearly defined. Over eight years, he isolated each pea trait one at a time and crossbred species to observe which traits were passed on from one generation to the next and which were not. From his observations (facts), he induced two generalizations which later became known as Mendel’s Principles of Heredity, or Mendelian inheritance. Hence, science is purely based on facts, not theories.

(Supporting 1) However, I beg to differ. The study of science is also based on theory. In many cases of scientific study, especially in the modern world, where the things science wants to discover are ever more complex and impossible, or too tedious, to ‘observe’ in the natural world, scientists can no longer depend on ‘facts’ alone and have to rely on theory. For instance, to better understand the fundamental structure of the universe, CERN, the European Organization for Nuclear Research, built the Large Hadron Collider (LHC), the world’s largest and most powerful particle accelerator. It embarked on this project because of the Standard Model, a theory that explains the interactions between particles. The model proposed that the Higgs boson, an elusive, energetic particle, gives all matter mass, and based on this theory, the LHC set out to find the particle. Undoubtedly, observation also played a large part in the finding of the Higgs boson, but before the observation was made possible, during the building of the LHC, scientists had to rely on theory to determine exactly how it should be built so that it would actually work. In another example, Albert Einstein, one of history’s greatest thinkers, arrived at special relativity, the most accurate model of motion at any speed, from his thought experiment of racing a light beam. Maxwell’s equations, the classical theory of electricity and magnetism, predict that the speed of light is a universal constant. When Einstein studied Maxwell’s equations, trusting in “the truth of the equations in electrodynamics” and that they “should hold also in the moving frame of reference”, he came up with his famous thought experiment: that if he travelled at almost the speed of light (relative, say, to Earth), a beam of light would still move away from him at velocity c.
Although this was initially thought a paradox, being, in Einstein’s own words, “in conflict with the rule of addition of speeds we knew of well in Newton mechanics” (which allowed many possible speeds), another experiment of that time involving light showed that those mechanics were wrong; light always travelled at the same speed. This led Einstein to ask: what if the speed of light really is the same in all reference frames, whether measured on a train or off it? The rest of special relativity followed from that insight, as Einstein realized that the solution lay in the very concept of time: time is not absolutely defined, and there is an inseparable connection between time and the signal speed. With this connection, the extraordinary difficulty could be thoroughly resolved, and the theory of special relativity was completed. As shown, Einstein had to rely on his own theories and thought to develop special relativity, an all-encompassing model (explaining motion at any speed) that still holds true today, which would have been impossible had he based his research on observation and facts alone, as that would have meant experimenting on and observing the motion of every possible substance in existence. Hence, science is also based on theory.
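As an illustrative aside (standard textbook formulas, not drawn from the essay itself), the conflict Einstein resolved can be stated in a single line. Under the Newtonian rule of addition of speeds, an observer chasing a light beam at speed $v$ should measure it receding at only $c - v$; special relativity replaces this rule with one under which every observer measures the same speed $c$:

```latex
% Galilean (Newtonian) addition of speeds: for an observer moving at v,
% a beam travelling at u appears to move at
u' = u - v \quad\Rightarrow\quad u' = c - v \text{ when } u = c.

% Relativistic velocity addition replaces this with
u' = \frac{u - v}{1 - \dfrac{uv}{c^{2}}},

% and setting u = c (chasing a light beam) gives
u' = \frac{c - v}{1 - \dfrac{v}{c}} = \frac{c - v}{\dfrac{c - v}{c}} = c,

% so light recedes at c from every observer, however fast they move.
```

The second formula is exactly the "inseparable connection between time and the signal speed" the paragraph describes: the denominator encodes how moving clocks and rulers adjust so that $c$ stays constant.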

(Refute 2) Some argue that science is based only on facts because, in every study, beyond their observations, scientists also need to be equipped with prior knowledge, the facts from past studies, to craft and support their conclusions. Using Einstein’s example again, had it not been for his knowledge of classical physics (Newtonian mechanics), Maxwell’s equations and mathematics, he would not have been able to craft his theory.

(Supporting 2) However, I beg to differ. These people assume that knowledge equals fact, when that is untrue. In science, the only facts are observations. Knowledge is tentative: although supported by a wealth of data from repeated trials, and reliable in most situations, it is not considered the final word. This is because science is a human endeavor, and human perspective is limited and fallible; in every era, knowledge is continually changed and updated. In this sense, knowledge is just theory, and therefore science, which is substantiated by scientific knowledge, is based on theories, as are all scientific experiments. For instance, in Galileo’s time, many people, even scientists, when posed a question, asked in return, “What did Aristotle say?” They took everything Aristotle said as knowledge, as facts that were irrefutable truth. One such piece of knowledge gained from Aristotle was that heavier objects fall faster, and therefore reach the ground earlier than lighter objects released from the same height. Starting from this prior knowledge, Galileo carried out experiments and discovered that all objects actually fall with the same increase in speed (acceleration) due to gravity. In this case, knowledge did not equal fact; Aristotle’s claim was merely his own theory. So was Galileo’s conclusion: future studies further refined and developed his theory of gravity and acceleration. These pieces of knowledge are also theories, to be continually improved upon as science moves into the future. For example, Newton’s mechanics expanded upon these ideas of motion, acceleration and force, and his Three Laws of Motion, among the key ideas of classical physics, are essential to science.
However, these laws, although tested and verified countless times, still cannot be considered irrefutable facts, as they describe only about 90% of the way things move in the universe. This knowledge starts to break down at extremes: objects moving near the speed of light, the inside of atoms, extreme temperatures, and objects that are huge (like galaxies interacting with each other). As shown, no knowledge in science holds true in every situation, owing to the limitations of human beings and the great diversity of the universe; such knowledge is therefore merely theory. Hence, science is actually based on theory.

(Supporting 3) Lastly, science needs theory. Besides established knowledge, theories also include ideas that have not been validated. They are fundamentally uncertain, and this uncertainty is vital to scientific study and the basis of great scientific discoveries, because it prompts scientists to create new experiments and studies in an attempt to affirm or refute these ideas and to challenge the uncertainties. This is so central to the scientific method and the advancement of science that science is often defined as the process of continually testing and challenging previous theories while producing newer, more reliable ones in their place. It is evident in the scientific method itself, where, after conducting background research, a hypothesis is constructed before any experimentation takes place. Without this hypothesis, without this theory (backed by knowledge, which is itself more theory), the experiment is aimless, and observations (facts) remain discrete data that mean nothing in themselves. Only by serving as evidence to support or refute a hypothesis or theory do observations (facts) acquire meaning. Schrödinger’s Cat is a prime example. In the hypothetical experiment, a cat is placed in a sealed box along with a radioactive sample, a Geiger counter and a bottle of poison. If the Geiger counter detects that the radioactive material has decayed, it triggers the smashing of the bottle of poison and the cat is killed. This seems a meaningless experiment, yielding an observation (fact), that the cat either lives or dies, which makes no sense by itself. However, once we understand the theory involved, the Copenhagen interpretation of quantum mechanics, we find that Schrödinger’s experiment was actually designed to illustrate the flaws of this theory, which states that a particle exists in all states at once until observed.
If the Copenhagen interpretation suggests the radioactive material has simultaneously decayed and not decayed in the sealed environment, then it follows that the cat too is both alive and dead until the box is opened, an illogical idea. Schrödinger thus used it to highlight the limits of the Copenhagen interpretation when applied to practical situations: the cat is actually either dead or alive, whether or not it has been observed. As shown, theories serve as the aims of experiments, and experiments are carried out specifically to affirm, refute or improve the theories in mind. Only this gives meaning to observations and facts; without theories, without their flaws and limitations, perhaps even science, the collection of all observations of the natural world, would have appeared random and stayed stagnant eternally. Therefore, the fundamentals of science are based on theory.

(’11) How far should medical resources be used to extend life expectancy?

Related: (’98) ‘The first duty of a doctor has always been to preserve life.’ How far can this principle still be maintained?

(’03) Should medical science always seek to prolong life?


(Introduction) Shakespeare had King Lear lament the tortures of aging, while the myth of the Spanish conquistador Ponce de León’s Fountain of Youth, the Sumerian tale of Gilgamesh and the eternal life of the Struldbrugs in “Gulliver’s Travels” all fed the notion of overcoming aging and achieving immortality. These may now be more than fiction. Research from the World Health Organization shows that people are living longer, healthier lives than in the past: global life expectancy increased from around 64 years in 1990 to about 70 years in 2011. The possibility of continuing this trend indefinitely is an increasingly relevant question that we, as a society, should deliberate.

(Refute 1) Some say that medical science should always seek to prolong life, as that is what medicine and medical resources are for; it is their ultimate purpose. Whether in the maintenance of health or the prevention and treatment of disease, research produces medicines, vaccines and techniques that eventually enable an individual to live longer. Furthermore, practitioners of medicine have a duty to preserve the lives of their patients. In many countries, doctors recite the Hippocratic Oath upon entering medical practice. One of the oldest binding documents in history, the Oath is of great historic and traditional value. Most of it has been revised over the centuries to encompass current values, and today it is most often summed up in the single phrase, “First, do no harm”. The significance of the Hippocratic Oath hence lies not in its specific guidelines but in its symbolism of an ideal: selfless dedication to the preservation of human life. Therefore, in line with the purpose of medical science and the responsibilities of a medical professional, medical science should always seek to prolong life.

(Supporting 1) I feel that, while the above is indeed true, in some situations aggressive medical treatment to prolong life becomes sanctioned torture. This is especially true for patients who have little hope of surviving or recovering from their diseases. When technology and medicine are used simply to keep them alive on life support, their suffering increases. Life support machines such as pacemakers, defibrillators and ventilators may inflict unnecessary pain. Defibrillators reboot the heart with a powerful shock when it races uncontrollably or falls into quivering arrhythmias, sometimes delivering excess shocks; patients describe the sensation as being kicked in the chest by a horse. In intensive care units (ICUs), patients who require a combination of such devices to keep them alive are often consigned for weeks or even months to a medical purgatory, attached by tubes in their tracheas to ventilators, with catheters protruding from their necks, chests, abdomens or bladders. When awake, they are in constant discomfort and chronically deprived of sleep, so they are often sedated to the point that they are no longer in communication with their environment. Under such circumstances, it may be considered immoral to keep using medical resources to extend their lives. Hence, medical science should not always seek to prolong life.

(Supporting 2) Furthermore, medical science should not always seek to prolong life, because when more people live longer, additional burdens fall on society. Already, many developed countries face the problem of an ageing population; if lifespans were extended even further, social, economic and political problems would only mount. A good case in point is Japan, the first major nation to turn gray. Japan has one of the world’s highest life expectancies, with women on average living to 87 and men to 80, yet its public debt hit $10 trillion in 2013, twice the nation’s GDP. As life expectancy rises, a Japanese person entering early adulthood may find that parents and grandparents both expect to be looked after. Because only children are common in Japan’s newest generation, a large cast of aging relatives may turn to one young person for financial support, caregiving, or both. Politically, the young in Japan have some of the world’s worst voter-participation rates, as they feel the old have the system so rigged in their favor that there is no point in political activity. Hence, mindful of these negative consequences, medical science should not always seek to prolong life.

(Alternative view, a concession of sorts) However, since the above argument assumes that medical science only prolongs life without prolonging health, some would counter that in the process of increasing life expectancy, solutions and cures to many other problems and diseases that plague human health will also be found; hence medical science should always seek to prolong life. Instead of concentrating on the prevention or cure of individual diseases such as cancer (a roundabout way of prolonging life, which may lead to other repercussions such as Alzheimer’s disease, according to Olshansky, a sociologist at the University of Illinois at Chicago School of Public Health), scientists who aim directly to prolong life target the ageing process itself. As they investigate the details of this process with a view to slowing or preventing it at its root, they fend off the whole slew of diseases that come with ageing, including chronic diseases such as heart disease, stroke and Alzheimer’s, which are now more prevalent. In another example, a recent clinical trial by Novartis that tested an anti-ageing drug in healthy elderly volunteers from Australia and New Zealand also enhanced their immunity to flu by 20%. Hence, given the evidence that medical science which seeks to prolong life can improve health alongside longevity, medical science should always seek to prolong life.

(Alternative view, cont.) As such, if science really manages to achieve the above, life might be extended in a sanguine manner, with most men and women living longer in good vigor and also working longer. This would partially offset the additional burdens on society, as pension and health-care subsidies are kept under control. In fact, much of the work being done in longevity science concerns making the later years vibrant, as opposed to simply adding time at the end. More than two decades ago, scientists discovered the daf-2 and daf-16 genes in worms, which can be altered in ways that cause the invertebrates to live twice as long as is natural, and in good vigor. Now the Buck Institute, the first independent research facility dedicated to extending the human life span with better health, has quintupled the life span of laboratory worms. Its focus is on extending the ‘health span’ of an individual, so that people can maintain a healthy, quality way of life even in their last few years. Furthermore, one of the main findings of a Harvard University study published in 2013 is that quality of life has already been improving as people live longer: most people are no longer very sick for six or seven years before they die; that stretch of poor health has shrunk to about a year or so. With the exception of the year or two just before death, people are healthier than they used to be. Such is the win-win situation that longevity science can create. Hence, medical science should always seek to prolong life.

(Conclusion) Although it must be acknowledged that the above views are valid, there will always be situations where treatment to prolong life becomes torture instead, as previously discussed. Therefore, we cannot say that medical science should always seek to prolong life; even if it ensures that a person, living longer, stays healthy and maintains a quality standard of living, there will come a time when life itself fades away. Our bodies were not made to be immortal in the first place; longevity science ultimately just adds time before our end, regardless of whether that time is quality time or not.

(’14) To what extent can the regulation of scientific or technological developments be justified?

Related: (’96) Should any limits be placed on scientific developments?

(’09) Should every country have the right to carry out unlimited scientific research?


(Introduction) Albert Einstein once remarked, “If we knew what it was we were doing, it would not be called research, would it?” His quote nicely sums up the nature of scientific research: it is unpredictable, uncertain and risky. Given this nature, I feel there need to be limits placed on it. Limits help to lower the risks of research, ensure that it remains humane, and moderate its progress so as to match society’s. However, these restrictions should not be overly controlling; they should not dictate what can and cannot be done at every step, for that would prevent any research from happening at all.

(Refute 1; Regulation not justified) Some may say that there should be no limits to scientific research: science works best when left alone, and only without limits can research reach its maximum potential. Regulation prevents developments from reaching their full potential because research consists of many rounds of trial and error, experiment and repetition. Thomas Edison and his team tested at least 6,000 materials before they found a suitable filament for the light bulb. In Gregor Mendel’s pea experiment, he set out to study 34 subspecies of the common garden pea, a vegetable noted for its many variations in color, length, flower and leaves; over eight years, he isolated each pea trait one at a time and crossbred species to observe which traits were passed on from one generation to the next and which were not. If regulations were present at every step to control what scientists can and cannot do, real research might never proceed. Hence, such limits are not justified.

(Refute 2; Regulation not justified) Some may also say that there should be no limits to scientific research because any attempt to regulate the forward momentum of science will lead to the failure of science. For new scientific discoveries and developments to come about, one needs to step outside the norms of what is currently scientifically appropriate, taking an existing discovery and making it into something new, something different, something better. If regulations or limits prevented scientists from embarking on such new and seemingly ‘abnormal’ research, there might be no new scientific discoveries at all. This is especially true today, when the things science wants to discover are ever more complex; if regulation and control impeded science, those things might never be found. For instance, to better understand the fundamental structure of the universe, CERN, the European Organization for Nuclear Research, built the Large Hadron Collider (LHC), the world’s largest and most powerful particle accelerator. It consists of a 27-kilometre ring of superconducting magnets with a number of accelerating structures to boost the energy of the particles along the way. Inside the accelerator, two high-energy particle beams travel at close to the speed of light before they are made to collide, leading to fears that a black hole might be created. Indeed, a few people were so concerned that they filed a lawsuit against CERN in an attempt to delay the LHC’s activation. Their attempt failed and the LHC was activated; but had legal limits and restrictions been placed on its construction, we might never have been able to prove the existence of the Higgs boson, a particle believed to be essential to the formation of the universe, which the LHC eventually found. Hence, regulations are not justified.

(Supporting 1) However, if there are no limits and science discovers new things at a phenomenal rate, other problems may arise. If science progresses too quickly, society may not be able to catch up. There will come a time when science goes completely beyond the understanding of the average human being, defeating the purpose of science, which is to help mankind, in the first place. As Isaac Asimov put it, “The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.” Hence, for both science’s and society’s own good, for science to be applicable to society and for society to appreciate science, there needs to be regulation to match their speeds of progress; more specifically, there should be limits on scientific research to slow its pace. For instance, surveys by the Pew Research Center found wide gaps between what the public believes and what scientists believe: 88% of scientists say genetically modified foods are “generally safe” to eat, while only 37% of the public agrees; 86% of scientists believe vaccines should be required in childhood, compared to 68% of the public; and 98% of scientists say humans evolved over time, compared to 65% of the public. As shown, science is constantly advancing, but human cultures and mindsets, which take longer to change, are not catching up. Hence, there need to be limits to slow scientific development and wait for humans to catch up, so that the research is truly worthwhile.

(Supporting 2) There should also be limits to scientific research so that the process of research remains humane. Research requires experimentation, and sometimes experimentation on living subjects, usually animals. In the past, however, there were no clear rules or enforcement preventing the use of human beings in experiments. The way experiments were carried out depended entirely on scientists’ own morality, leading to cases of human experimentation by immoral scientists. For instance, during World War II, Nazi human experimentation consisted of medical experiments on large numbers of people in concentration camps. Experiments were conducted to learn how to treat hypothermia: one study forced subjects to endure a tank of ice water for up to three hours; another placed prisoners naked in the open for several hours in temperatures below freezing, while the experimenters assessed different ways of rewarming survivors. In another example, the American Tuskegee syphilis experiment, which ran from 1932 to 1972, aimed to see whether syphilis affected black men differently from white men. It involved nearly 400 impoverished and poorly educated African-American men diagnosed with latent syphilis, who were never told they had the disease and were never treated for it, even when penicillin became a standard cure in 1947. Any treatments they received were actually placebos, aspirin or mineral supplements. When the study ended in 1972 following a public outcry, only 74 of the original participants were still alive: 28 men had died of the disease and around a hundred more of related complications, 40 wives had been infected, and 19 children had been born with congenital syphilis. In these experiments, scientists clearly treated the men as less than human.
Hence, there need to be limits that clearly restrict the use of human beings as experimental subjects, so as to prevent such inhumane experiments from ever happening again.

(Supporting 3) There should be limits to scientific research for safety purposes. The nature of scientific research is neutral, but research can produce by-products or end-products that are harmful or hazardous to society. These products could, in theory, escape the laboratory, go out of control or spread elsewhere. Hence, regulations are required to control the extent of research carried out and the quantity of products produced, so as to lower the risk of the products’ escape and their harm to society. For example, Ron Fouchier, a researcher in the Netherlands, developed an easily transmissible version of avian influenza H5N1 by passing the virus through lab ferrets. His work helps scientists to study the virus in advance, before it actually evolves in the wild, allowing new vaccines to be developed beforehand. However, the virus could potentially escape from the laboratory and spread in mammals like us, which is exactly what we do not want it to do. Nuclear research, which involves the reactions of atomic nuclei, poses the same problem. In nuclear research, when unstable nuclei decay, radiation is produced. This radiation can pass through ordinary matter and is harmful in large amounts; some forms pose a severe, long-term hazard to human health even in very small quantities. As shown, not all scientific research is completely safe, and, to minimize the safety risk, limits are required.

(’04) ‘How inventions and discoveries are used is not the concern of the scientist.’ Do you agree?

Related:  (‘91)How far should scientists be held responsible for the effects of their discoveries?

(‘90)How far should scientists be held responsible for the uses made of their discoveries?


(Introduction, possible stand, may change to fit both sides) Science is a creative enterprise. It combines the exploration of the natural world with the generation of knowledge and its use in human endeavors. This combination of creativity with purpose is often exemplified in scientific discovery, but the effects of new discoveries and inventions on society always raise concerns among people. This is especially true when a discovery has a negative or harmful impact on society, be it on human health, biodiversity or the environment. I believe that, in most circumstances, scientists should be held responsible for these effects.

(Refute 1) Some argue that scientists should not be held responsible for the uses or effects of their discoveries, as scientists merely invent; they are not the ones who actually use the discoveries to harm society. Scientific discoveries are neutral in nature: any of them can be good or bad, depending on the way they are used. For instance, Thomas Midgley Jr. was a renowned chemist and inventor who held over 100 patents in his lifetime. However, his legacy is now tarnished, as the full effects of his inventions, most notably leaded gasoline and the greenhouse gas Freon, the first CFC, have come to be understood. As the environmental historian J.R. McNeill put it, Midgley has had “more impact on the atmosphere than any other single organism in Earth’s history.” This would seem to imply that he alone was responsible for the harmful effects his inventions have had on society. But if we look beneath the surface, we find that this is not the case at all. Midgley originally discovered tetraethyl lead, or TEL, for a good purpose: to eliminate the problem of “knocking” in automobile engines. General Motors (GM) was the company that used it, even setting up the General Motors Chemical Company to produce TEL, because the previous ethanol-gasoline blend fuel could not be patented and offered no viable profit for GM. At the time, many medical experts, including the US Surgeon General, expressed grave concerns over the potential health problems that would arise from the use of TEL, but their views were swept under the rug by GM, even after workers at its plant began to succumb to lead poisoning. There was also a lack of action from the federal government, which never informed the public of the dangers of TEL or commissioned an independent study of its effects.
“Freon”, dichlorodifluoromethane, the first of the chlorofluorocarbons (CFCs), was also invented for a good purpose: as an alternative to ammonia and propane, which were commonly used as refrigerants but were flammable and highly toxic. Just as with TEL, its health and environmental effects were not made public until years later. As seen, it was the use of Midgley’s discoveries by the profit-driven GM, coupled with the company’s deliberate efforts to hide the effects of TEL and the government’s failure to discern them, that caused harm to society. These parties should be the ones responsible for the effects of TEL and Freon today. Hence, it is those who use discoveries wrongly, whether governments, companies or individuals, who should be responsible for whatever damage the discoveries do to society.

(Support 1) However, I beg to differ. More often than not, scientists do not only play the passive role of inventing or discovering and then leave it entirely up to society to decide how their discoveries should be used. Since the discoveries are their work, scientists have the liberty to decide how their discoveries are used, and most use that power to influence their use. At the very least, parties who want to use an invention are able to do so only because the scientist approved of it first. Whenever scientists come up with a new device, their invention will usually be patented. Patents are the government’s way of giving scientists ownership of their creations. For a certain period of time, usually 10 to 20 years, patent-holders, in this case the scientists, are allowed to control how their inventions are used, allowing them to reap the financial rewards of their work. Patents are a palpable, legally binding manifestation of a person’s genius and innovation; by allowing scientists to own their discoveries, they imply that the scientist must have played some part in how the invention is used, however it is used. Hence, when a discovery harms society, scientists should also be held responsible. Sometimes, scientists are even the ones who approached the government, a company or an individual to ask them to use or consider their discoveries. For instance, in 1938, chemists in Berlin managed to split the uranium atom. The energy released when this splitting, or fission, occurs is tremendous enough to power a bomb. However, before such a weapon could be built, numerous technical problems had to be overcome. When Albert Einstein learned that the Germans might succeed in solving these problems, he wrote to U.S. President Franklin Roosevelt with his concerns. Einstein’s letter helped initiate the U.S. effort to build an atomic bomb.
Subsequently, other findings demonstrated conclusively that a bomb was feasible, and building one became a top priority for the U.S., with the government launching the Manhattan Project in 1941. This project eventually led to the dropping of two atomic bombs on the Japanese cities of Hiroshima and Nagasaki. ‘Little Boy’ killed 90 percent of the population of Hiroshima, and tens of thousands more would later die of radiation exposure, while ‘Fat Man’ killed an estimated 40,000 people in Nagasaki. Hence, in this situation, scientists are responsible for the uses and effects of their discoveries.

(Support 2) Furthermore, since the discoveries are their work, scientists ultimately dictate the purpose of their inventions, and hence their uses and effects. They first set an objective for their research, then carry out the research according to that objective; the invention is the result of that objective. If others use it and it turns out to be harmful to certain parties, that is often simply the scientist’s intent realised. Some may argue that it is the fault of the user, who used the invention wrongly or changed its original purpose for their own interests, but scientific research is usually highly specific, and one invention cannot easily be turned to a completely different or opposing purpose. For example, during the 1880s, there were two prominent players in the electrical industry: the Edison General Electric Company, founded by Thomas Edison, which established itself with direct current (DC, electric current that flows in one direction) service, and the Westinghouse Corporation, founded by George Westinghouse, which developed the alternating current (AC, electric current that reverses direction in a circuit at regular intervals) service. To win against his competitor, Edison started a smear campaign against Westinghouse, claiming that AC technology was unsafe to use. He held a public demonstration in New Jersey, supporting his accusations by setting up a 1,000-volt Westinghouse AC generator, attaching it to a metal plate and executing a dozen animals by placing them on the electrified plate. This was the first prototype of the electric chair, and it prompted the New York Legislature to pass a law establishing electrocution as the state’s new official method of execution.
To further crush his opponent, Edison campaigned intensively for the use of AC current in the electric chair and hired the inventor Harold Brown, who, with his team, began designing an electric chair, publicly experimenting with DC voltage to show that it left lab animals tortured but not dead, then testing AC voltage to demonstrate how swiftly AC killed. This eventually led to the adoption of the electric chair (with AC voltage) in the statewide prison system. As seen, Edison brought about the invention of the electric chair with a malignant objective: he meant for it to bring down his competitor, and intentionally created it to do just that, fully aware of its harmful effects. In such cases, where scientists foresaw or intended the uses and effects, they are responsible for the uses and effects of their discoveries.

(Refute 2) To counter that point, some may further argue that while the original purpose of a discovery cannot be completely changed, scientific discoveries often lead to unintended effects that diverge from their original purposes. In such cases, scientists should not be held responsible for the uses or effects of their discoveries, as they did not intend these unexpected harmful side effects of their inventions. Using the example of Thomas Midgley Jr. again: he created TEL and Freon with good intentions, and did not expect them to harm the environment and human health in the way they did. Another example is Agent Orange, invented with good intentions by Arthur Galston to speed the growth of soybeans and allow them to be grown in areas with a short season. It was used by the US military as a defoliant in the Vietnam War to protect and save the lives of U.S. and allied soldiers. However, Agent Orange turned out to have severe negative impacts on the environment and on health in Vietnam. It caused 400,000 deaths and disabilities, with another 500,000 birth defects. About 17.8 percent of the total forested area of Vietnam was sprayed, which disrupted the ecological equilibrium. The persistent nature of dioxins, erosion caused by loss of tree cover and loss of seedling forest stock meant that reforestation was difficult or impossible in many areas, and animal-species diversity was also affected: in one study, a Harvard biologist found 24 species of birds and 5 species of mammals in a sprayed forest, while two adjacent sections of unsprayed forest held 145 and 170 species of birds and 30 and 55 species of mammals respectively. As shown, in these cases the scientists did not intend the effects of their discoveries. We have to understand that scientists are also human, and they also make mistakes; hence we should not hold them responsible for impacts even they did not expect.