Tag: Philosophy
-
Neuroethics: the delicate balance of neuroscience and morality
How can we create frameworks of practical moral reasoning in the absence of free will? Can neuroscience research shed light on how we make moral judgements? What are the general implications of neuroscience research itself? How can we even differentiate between the study of the mind and the study of the brain? With the current development of neuroscience research, the landscape has changed. Researchers are beginning to uncover new knowledge about personal identity, emotions, awareness, and free will, all key pieces in the puzzle of the human mind. Issues that once seemed alien to science are now exposed in the arena of neuroethics: the ethical issues raised by neuroscience as well as the neuroscience of ethics itself. In her book “Neuroetica,” Kathinka Evers, principal investigator at the Centre for Research Ethics and Bioethics at Uppsala University, investigates a slew of questions born at this interface between the sciences of the human spirit and the natural sciences. It should be remembered, in the face of this reconciliation between science and ethics, that writing on neuroethics has come with its own challenges and struggles. If we understand ethics merely as “the analysis of the concepts involved in practical moral reasoning” (p. 21) and science, in Robert Hooke’s words, as “knowledge of natural things, and of all useful arts, manufactures, and mechanical practices, artifacts and experimental inventions” (p. 22), it is easy to come to incorrect conclusions about these ethical issues.
Fortunately, not all modern thinkers throughout history have seen science this way. As Evers points out, in line with philosopher Francis Bacon’s view of science as the well-organized and detailed study of nature, science should be much more than the mere scholastic search for knowledge. The sciences have to fulfill a fundamental function, namely to allow human beings to improve their life on earth (p. 21), an objective that would be difficult to achieve if we insisted on excluding the philosophical, political, moral, and metaphysical questions that are born within the sciences themselves, in this particular case within the neurosciences.
Now, although the ethical problems initially raised in neuroscience referred to the practice and use of brain imaging technologies, to neuropharmacology, or to the interests of researchers and their sponsors, neuroscientific research itself is currently also concentrated on the construction of the “adequate theoretical foundations that are required to be able to deal appropriately with the problems of application” (p. 28). This establishes a clear distinction between an applied neuroethics and a theoretical neuroethics, the latter concerned with the capacity that natural science might have to improve our understanding of moral thinking. We can determine whether the former really matters for the latter by considering both concerns as part of a greater question: whether human consciousness can be addressed in biological terms at all.
It should be mentioned that any attempt to lay out the complete set of ideas that run through neuroethics, and their development, would be foolish. We can still refer to a small but representative set that begins with the idea of unifying different levels and types of knowledge, taking both the techniques and the methodologies of each discipline in order to build bridges. Fragile as they may be, these bridges would allow the knowledge of the neurosciences to flow into other sciences and disciplines, integrating it in turn into the conception that human beings have of themselves, of the world, and of morality, within a shared theoretical framework (p. 30 and p. 57). This materialist position, proposed for chemistry by French philosopher Gaston Bachelard in 1953 and extended by neuroscientist Jean-Pierre Changeux, may answer to the neuroscience of the present. Far from any naive reductionism or ontological dualism, we can understand the brain as “a plastic, projective and narrative organ, which results from a sociocultural, biological symbiosis that appeared in the course of evolution …” (p. 69), with emotion as the characteristic mark of consciousness from an evolutionary perspective.
Next comes a rather striking idea: a neurophilosophical model of free agency that tries to explain why, even though free will is or may be “1) a construction of the brain, 2) causally determined, or 3) initiated unconsciously” (p. 80), it is not something “illusory.” As Evers argues, first, the fact that free will is a construction of the brain does not necessarily mean that it is an illusion; if it is an illusion, it will be for other reasons (p. 86). Second, “causality is a prerequisite for free agency” (p. 88); otherwise behavior would be totally random. Moreover, causal determinism does not imply an invariable and necessary relationship between cause and effect, to the extent that this relationship can be variable and contingent. Third, although non-conscious processes appear to be far from conscious control, the relationship and influence between the two are “to a certain extent mutual, and not unilateral” (p. 104). Of course, to understand the development and integration of each argument, and to think of free will as “the ability to acquire a causal power, combined with the ability to influence the use of said power” (p. 107), you need to read chapter II of the book, where Evers draws on different authors (Changeux, LeDoux, Libet, Freeman, Churchland, Pinker, Blakemore, Pylyshyn, among others) to recreate the scenario in which all this discussion, and each of her ideas, is situated.
Finally, we note the normative relevance of the neurosciences, grounded in an understanding of the neural bases of the development of moral thinking and behavior. We can mention four closely related innate tendencies that appeared in evolution: 1) self-interest, 2) the desire for control and security, 3) the dissociation of whatever is considered unpleasant or threatening, and 4) selective sympathy. Regarding the latter, the author risks saying that the human being is a naturally empathic xenophobe, insofar as it is “empathic by virtue of [its] understanding of a relatively large set of creatures; but […] sympathetic in a much more narrow and selective way towards the restricted group [into which it is born or has chosen to join]” (p. 132). Although understanding (empathy) can be extended to broad groups (e.g., foreigners), the affective bond that unites human beings is restricted to their closest group; there is an indifference to the foreigner, or to whatever is considered different.
Keeping these innate preferences in mind, there is no doubt about the difficult situation of current moral discussions. It becomes a priority, then, to establish a diagnosis in neurobiological terms in order to intervene in human behavior, recognizing that the structure of the brain determines to some degree social behavior, moral dispositions, and the type of society that is created, even as the latter in turn influences brain development (p. 149). At the same time, we can pose the question of the scientific responsibility of neuroscience at the socio-political level in terms of its adequacy (the formulation of real problems), its conceptual clarity, and its application of methods and techniques, without forgetting its origins and interests. It must be clear that a finding or fact (if it is one) of neuroscience cannot by itself give off categorical imperatives; no duty can be made universal from it, because from knowing that we have an innate preference it does not follow that the preference is acceptable, nor that we must conceive of this fact as good or bad.
In short, “Neuroethics” is an excellent introduction both for the uninitiated reader and for professionals from different areas of health (psychology, psychiatry, neuropsychology, medicine) and beyond, such as philosophers, lawyers, and politicians, concerned about the participation of the neurosciences in the understanding of the mind, behavior, socio-cultural organizations, mental health, and education, but above all in the perception of human existence and its future. It may serve as “a critique of neuroscientific reason”: a clear demarcation of the limits of this knowledge and its uses in society, a judgment rendered by the other disciplines, to the extent that knowledge about the brain seems to give neuroscientists a certain power to expand their ideas beyond the laboratory, widening their horizons and their explanatory power into the domains already mentioned. Sometimes this expansion is quite assertive in plotting new research paths; at other times it amounts to encroaching on different fields of knowledge, ignoring the limits of its own frame of reference and unable to purge its investigations of their own cognitive biases, responding more to the interests of certain ideologies than to the objective of improving human life on earth.
-
"The pursuit of truth," a villanelle

Truth is elusive, nowhere to be found.
Footprint and forecast, through reason and verse,
through scars and marks that style the ground.

Memory and reason, fade to the bland.
Glimpse of light, the sight of truth. We converse
scratched in concrete or scribbled in sand.

From birthmark or gravestone, the discourse abound,
of dialogue, debating, counted controverse,
through scars and marks that style the ground.

Through mystery, the truth we don’t understand.
We pursue a cure, if truth were a curse,
scratched in concrete or scribbled in sand.

It evades, it leaves our own selves earthbound,
Like supernova, particles spread out dispersed,
through scars and marks that style the ground.

The highest of truths, we seek heights grand.
washed like waves, without sleight of hand,
through scars and marks that style the ground,
scratched in concrete or scribbled in sand.
-
On becoming a better researcher
“Only passions, great passions can elevate the soul to great things.” – Denis Diderot, Pensées Philosophiques

I believe the ways we become better researchers come only through self-reflection and meditating upon the arguments and principles behind what we do – not the simple acts of doing those things themselves. Work that we find satisfying, engaging, morally clear, and even effective for whatever purpose or value we put forward can only come as we contemplate and fully realize the effects of what we’re doing.
As French philosopher Denis Diderot sought to learn about a variety of fields, from philosophy to art to religion, he advocated strongly for the emancipatory power of philosophy. Overturning the previously held convictions of the 1700s, Diderot’s Encyclopédie showed that philosophy should trample underfoot prejudice, tradition, antiquity, shared covenants, authority, and everything that controls the mind of the common herd. Much the same way I fell in love with philosophy as an undergraduate student, I took these challenges upon myself. I wanted to figure out what it meant to be a good researcher, no matter the field.
Being a good researcher, whether in science, philosophy, mathematics, or anything else, requires taking apart our notions of skills, talents, abilities, and all the other arguments and claims we put forward about what we do, and re-framing them in ways that address the problems we choose to solve. I’ve always believed that success requires nurturing these values and virtues in such a way that I can not only prepare for the next step in my life, but also address the issues I want to address. This is how I search for a purpose. As I look for these purposes, I attach motives, intentions, and other moral characteristics to them. I don’t rely only on simple purposes like getting into a good graduate school, because I know that’s not the most effective way to work. I need to understand what it means to be good at my craft in general and apply that to what I do.
When Diderot condemned asceticism, he argued for lifestyles in search of pleasure through cultivating the passions. In response to the abstinence and celibacy of the priesthood, Diderot argued that the passions our bodies experience drive us to achieve great things. I believe this way of understanding the passions ties directly into becoming a good researcher, but as Diderot sought to restructure knowledge itself and attack fundamental beliefs of his society, he was thrown in prison.
Because this task of taking apart what it means to be a good researcher is so arduous and complex, even the simple things I do on a day-to-day basis can be incredibly difficult. My methods of thinking through these problems and becoming the best researcher I can possibly be don’t align so neatly with the tasks I’m assigned day to day. It simply doesn’t make sense to me that, if I want to become the best researcher I can possibly be, I need to follow the simple directions put in front of me every day. It also doesn’t make sense that factors such as how many hours I work should be relevant to success when there are far more meaningful, nuanced factors, such as what effect my work has had on the world. Instead, I need to take apart the arguments and claims about these notions so that I can figure out what it means to be a good researcher.
I notice minute differences in the way we reason to become better researchers. These little things can be as small as the difference between asking the question “What would the best researcher possible do in this situation?” vs. “What can I do in this situation to become the best researcher possible?” We can see this difference in running a protocol that hasn’t been used before on the grounds that the best researcher possible would do that or running a new protocol because it will make me a better researcher. The former shows courage and audacity in trying new things because the best researcher already has those traits established and would do that. The latter implies we’re not the best researcher, but, if we value the willpower in carrying out the task, performing it would make us the best researcher possible. Each method of reasoning is suited for different purposes and goals in what we do. That’s why it’s essential we understand these methods of reasoning for the purpose of becoming a better researcher.
If my boss tells me, “Do this because you need to do it to get a good recommendation for graduate school,” it’s very difficult for me to convince myself to do that thing. I see that sort of motive as empty, selfish, and even contrary to how researchers should perform. Besides, it becomes trivial and almost nonsensical to reason that “If I do X, Y, and Z, then I’ll get a good recommendation.” A good recommendation cannot be earned by performing actions for the sake of getting a good recommendation; there needs to be authenticity and genuine moral agreement in it. Even if it were true that my boss would then have those actions to write about in my recommendation, this still doesn’t show much, as my actions are things I myself can write about in my graduate school applications. There’s no deeper meaning or theoretical idea my boss puts forward. As a result of the way I reason through these issues, it’s often incredibly difficult for me to follow simple, straightforward directions, because I’m so busy taking apart the justification, validity, and other characteristics of anything we do so that I can figure out what they should mean.
The way I discern these differences in attempting to address these questions has left me confused about what I should do in the present moment. It shows that, even though I’m always trying to be the best researcher I can possibly be, what I do in the present moment is not a direct statement of how good a researcher I am. What I do in the present moment is a mixture of all of these thoughts about what it means to be a good researcher burning within me.
Not having these methods of discerning these issues took its toll on me. When I was an undergraduate student at Indiana University-Bloomington, I could barely see the purpose in much of my work, to the point where I nearly dropped out. I had faced so many obstacles from other individuals in my attempts to address these issues, and I was discouraged that almost no one else was posing these questions to begin with. My justifications and motivations for doing things in the present moment are complicated, as I’ve explained, because of my interest in these issues.
Challenging the very notion of knowledge itself, Diderot worked with mathematician-philosopher Jean le Rond d’Alembert to create the Encyclopédie, which they described as a theater of war in which Enlightenment intellectuals desiring social change rallied against the French Church and state. Allowing free thought, including atheism, the scholars laid down the fundamentals of fields such as mathematics, physics, and philosophy themselves. In reasoning through the inquiry and scope of these fields, d’Alembert wrote that memory gives rise to history, imagination to poetry, and reason to philosophy. I continue to turn to philosophy for finding truth in science as I work.

The truth is I’ve been struggling with these issues for maybe four years now, and I still struggle with them. They affect me in ways I detect through everything I do. When I wake up, go to work, contemplate my actions, and even dream while I sleep, I find these questions about my purpose shaking me in ways I can barely articulate.
-
History transcending science’s boundaries

Voltaire

When I attended the 2019 meeting of the American Association for the Advancement of Science, I couldn’t help but feel déjà vu. At my second AAAS conference, I found familiar faces among scientists and journalists. I also felt the conference’s theme, “Science Transcending Boundaries,” resonating with centuries-old writing that has remained relevant to this day.

At the AAAS meeting, Erika Hayden, director of the Science Communication program at the University of California Santa Cruz, and I discussed how science writers should tell stories with history in mind. This would not only let writers put current findings in context, but also transcend the boundaries of research. Looking at the work of philosophers and mathematicians in the 1950s, we can address ethical issues of automation and predict how artificial intelligence will change the workforce. Referencing 19th-century novelist Mary Shelley’s Frankenstein can warn of the dangers of genetic engineering. I also discussed how engaging the public with history and literature can instill more faith in them as readers.
I spoke with researchers and journalists about my website A History of Artificial Intelligence as well as my other writing on scientific history. I mentioned how my work has opened the eyes of my audience to the nuanced, complicated history of science. This can sometimes stand in stark contrast to journalism’s principles of concise, straightforward writing, but, by writing with a historical perspective in mind, scientists and science writers can at least find well-reasoned, humanistic answers to age-old questions. These answers speak to the lives, virtues, and values human beings seek to bring to research. A historical account of science lets scientists and writers draw from fields such as ethics, art, and philosophy – a true transcendence of boundaries. Much as Bill Nye and Carl Sagan capture the current public’s imagination, popular science emerged from the tens of thousands of popular science books published in France throughout the 1700s. Today’s scientists and writers can study this history of science writing to put their roles and purposes in context and transcend boundaries.
Throughout the conference I spoke with journalists, researchers, and other professionals about the best ways to engage the public as a science communicator. Reflecting on historical works, I talked with others about how the French author Bernard le Bovier de Fontenelle wrote about science so that a wide audience could understand it in his Conversations on the Plurality of Worlds. Exemplifying the theme “Science Transcending Boundaries,” he introduced readers to Cartesian philosophy long before the word “scientist” was even coined. I spoke with journalists about the principles of journalism and how they came about through historical events such as the French Revolution and the Dreyfus Affair. Through these events, journalists developed principles of writing in an investigative manner, independent of external forces, that can, in some ways, revolutionize society’s ways of thinking. Around the same time as Fontenelle, French philosopher Voltaire’s poems, short stories, critical essays, plays, letters, and histories covering physics, chemistry, and botany would also redirect future scientific research. Imagining our work in these greater contexts of history gave others a deeper appreciation of their writing and research. With the past in mind, we would speculate on the future of issues such as artificial intelligence and genetic engineering.
With Fontenelle’s and Voltaire’s writing, scientific books went from being read by hundreds to hundreds of thousands. As intellectualism flourished in 18th-century France, science itself became more professionalized. Scientific institutions received more support, and individuals took more distinct professional research paths, re-defining the scientist. In 1795 French philosopher Nicolas de Condorcet advocated scientific reasoning in democratic governance. From the lab bench to the living room, science entered the hearts of the masses. It laid the foundation for the intellectual revolution of the Enlightenment to change reason and inquiry itself. Science writers can learn about the purpose and value of scientific research through these historical trends. In learning from Fontenelle, Voltaire, and other historical writers, scientists can put their findings in greater context, writers can share more accurate stories of science, and the world can become better for the sake of humanity.
-
Guest post – "Aristotle and Fake News: Why understanding rhetoric illuminates credible arguments"
By Carolyn Haythorn

It’s hard for me to remember the time before the internet became such a pervasive part of daily life. I work online to earn money, watch Netflix to relax, scroll YouTube for advice on anything from personal finance to cooking, and read push notifications from my favorite news outlets to keep up-to-date. I’m part of the generation in which proper computer use was taught in school. Our digital literacy began with typing classes in grade school, then turned to learning about the dangers of Wikipedia in high school, and, by the time I was in college, people used the internet to write class papers more often than physical books in the library.
But one area where I think our digital education was lacking is in determining how to spot a ‘credible’ source.
Sure, people have always known that anyone can say whatever they want on the internet, and we’ve all heard that it’s important to question what you read before accepting it as fact. But very little was actually said about how to determine if something is credible, or what to do if you come across websites with suspect information. If anything, this was further confused in college, where only peer-reviewed academic articles were considered credible—a wealth of information that, by and large, you lose access to after graduation.

One solution to this problem is to review how audiences are persuaded in the first place. By understanding how arguments are created, it can be easier to recognize flaws in logic, or failures in the speaker’s character. I’m talking about Aristotle, and his three modes of persuasion: pathos, logos, and ethos. You likely touched on these in school when studying persuasive writing and political speeches, but I don’t think nearly enough emphasis is placed on how methods of persuasion can influence perceptions of credibility, the spread of viral stories, and belief in factually unsound statements. Although all three modes are equally vital for a strong, sound argument, we as human beings are predisposed to focus on some factors more than others. I believe that with the rise of misinformation, it’s more important now than ever before to understand exactly how we are influenced by persuasion, and what our weaknesses are in recognizing good arguments.

Let’s start with the easiest, pathos. Pathos is an appeal to an audience’s emotion, whether negative or positive. This encompasses both evoking a particular emotion from an audience and invoking that emotion as justification for a certain behavior or action. There’s a strong connection between emotion and persuasion: in fact, there’s evidence that people naturally include more emotionality in their language when they are trying to be persuasive, even if they are specifically advised against doing so. Perhaps appeals to emotion are frowned upon as unscientific or misleading, but they’re pervasive for a simple reason: appealing to emotion works. In 2012, researchers at the University of Pennsylvania found that the most emailed New York Times articles were ones which prompted emotional responses, especially if the emotion was positive or associated with high energy, for example anger or anxiety, as opposed to sadness. A study in 2016 found that when people are forced to make quick decisions about an object, they are more likely to rely on their emotional response to the item rather than objective information. A person is more likely to quickly classify a cookie as something positive because it makes them happy, while with more time for consideration, they may classify it as negative because it’s unhealthy. Finally, a meta-analysis of 127 previous studies concluded that appeals to fear were nearly always effective at influencing an audience’s attitude and behavior, especially when the proposed solution is seen as achievable and only requires one-time action.
We know that people use emotion to make quick judgements, can be strategically influenced by arguments which appeal to emotion, in particular fear, and are more likely to share articles which elicit emotion. This is all strong evidence that emotion is an integral part of how humans perceive and interact with the world. The problem with pathos is that if it is used without logos and ethos, the proposed solution to a problem may not be very effective, and there’s no guarantee that the problem being addressed is even real. For example, in 1998, Dr. Andrew Wakefield purported to have found a link between autism and vaccines. There is no such link, but the report garnered enough fear to spark the anti-vax movement which is now responsible for the reemergence of preventable diseases like measles and whooping cough. Fearmongering about marijuana in the 1930s led to the drug being outlawed in 1937 and classified under the strictest designation by the Controlled Substances Act in 1971, despite contemporaneous recommendations from within the U.S. government to decriminalize its use. What’s more, there’s evidence that decisions made during stressful situations are less logically sound than decisions made in calm situations. The lesson? People suffer when pathos alone prevails.

Logos is usually framed as the antidote to pathos. An appeal to logos is an appeal to logic: cold numbers, rational solutions, statistical significance. In theory, this sounds great. The problem is, the human brain isn’t wired to think purely logically: we have trouble conceptualizing large numbers, we seek patterns in random smatterings of data points, we’re quick to claim causation where chance or other variables are involved, and we’re easy victims of logical fallacies. Even among the scientific community, there are plenty of examples of seemingly logical, scientific arguments that turned out to be bad science. A paper published in 1971 asserted that women’s periods will “sync up” if they spend enough time together. Although this is still widely believed today, it has been thoroughly debunked by the scientific community. More pressing, the idea that low-fat diets are an effective way to lose weight without any regard to sugar consumption was introduced to the American consciousness in 1967, sponsored by representatives of the sugar industry. This no doubt altered the standard American diet and likely contributed to the rise in obesity across the U.S. (although the culpability of the sugar industry is up for debate). Even with good intentions, scientists can make mistakes: in 2018, scientists tried to replicate 21 previously published social science experiments, but got the same results for only 13, all with a weaker correlation than in the original studies. While it’s important that scientists review and revise their original conclusions, correcting common misconceptions is difficult once a myth has entered the popular consciousness. Think you’re immune? Check out this infographic.

What’s more, without pathos and a focus on morality, seemingly “logical” solutions can be plain cruel. This is demonstrated beautifully in Jonathan Swift’s satirical A Modest Proposal, in which eating children is proposed as a rational solution to poverty in Ireland.
Decisions made with a total disregard for emotions shouldn’t be our gold standard for sound reasoning, and an argument lacking in pathos can be just as bad as one lacking in logos.

If appeals to both pathos and logos can lead to mistakes in reasoning, where does that leave modern thinkers in their quest for credible arguments? The answer lies in Aristotle’s third mode of persuasion, ethos. Today, ethos is a somewhat fuzzy idea; it means both having knowledge about a topic and establishing yourself to your audience as a credible speaker, two things that don’t necessarily go hand in hand. Aristotle himself split the idea into three parts: good sense, good moral character, and goodwill. Good sense, or phronesis, comes from having experience in one’s field, especially with a track record of rational, moral decision making. Good moral character, arete, is gained by practicing virtuous behaviors until they become habits. Finally, goodwill, eunoia, is earned by convincing the audience of one’s knowledge and intentions.
Having ethos is vital to a sound argument. In fact, Aristotle grants only three reasons for unsound arguments to exist: either the speaker is wrong due to lack of good sense, the speaker is lying due to lack of moral character, or the speaker is silent, because they don’t care if the audience hears good advice. The problem is, it’s difficult for readers to judge the ethos of a speaker, particularly over the internet. Unlike pathos and logos, the root of ethos comes from outside the argument itself: the audience must know the speaker’s experience (good sense) and moral character to avoid falling for unsound advice. To make matters worse, if a speaker wants to persuade an audience, they will go through the trouble of appearing credible whether they are offering sound advice or not. Today, that can mean anything from verbally assuring the audience of their credibility and good intentions, to selecting appropriate clothing for a given situation, or even hiring a web designer to make sure content looks clean and professional: all things which index competence in the modern world. But the appearance of credibility alone isn’t enough to judge a speaker as credible.

Where does that leave us? Finding reliable, credible information can seem daunting at first: emotions cloud our judgement, but logic is uncertain. We often must know if a speaker has reliable experience in a field without ever having met them. However, with a little practice, I think we can all improve our sense of what’s credible and what’s not. To that end, I offer the following advice. First, after hearing an argument or statement – be it online or in person – consider your own position toward the piece. How did it make you feel? Does it confirm what you want to be true? Do you have any stake in the events at play? All of these factors cloud judgement, so take extra care when evaluating an argument. Second, consider the logic of the piece. Does everything make sense? Do the numbers add up? Does it align with a wider context, or does it seem out of place? If the argument does not make sense logically—and you have enough knowledge in the field to judge it appropriately—then it probably isn’t sound. Finally, consider the position of the speaker. Do they have experience with the topic being discussed? Do they have a history of honesty? Do they benefit from your support of their argument? If more information is needed to answer these questions, then do some digging. Look to others with more experience in the field to help determine if the speaker can be trusted. If you can’t find enough information to make a solid judgement, consider the source not credible.
And when in doubt, err on the side of caution: given that Rhetoric was written in the 4th century B.C., speakers have had a loooooong time to develop ways to manipulate audiences, whether intentions are pure or not!
Sources
Alvergne (2016). “Do women’s periods really synch when they spend time together?” The Conversation: Academic rigour, journalistic flair.
Aristotle, Rhetoric: Book II. Translated by W. Rhys Roberts.
Berger and Milkman (2012). “What Makes Online Content Viral?” Journal of Marketing Research, 49(3): 192-205. For a summary, see: Tierney (2010). “Will You be E-Mailing This Column? It’s Awesome.” The New York Times.
Brinton, Alan (1988). “Pathos and the ‘Appeal to Emotion’: An Aristotelian Analysis.” History of Philosophy Quarterly, 5(3): 207-219, p. 211.
Burnett and Reiman (2014). “How did Marijuana Become Illegal in the First Place?” Drug Policy Alliance.
Calhoun (2013). “Human Minds Vs. Large Numbers.” The Sieve: finding science stories in the vast expanse.
Hale (2018). “Patterns: The Need for Order.” PsychCentral.
Madhavan (2017). “Correlation vs Causation: Understand the Difference for your Business.” Amplitude.com
Moritz et al. (2015). “Stress is a bad advisor. Stress primes poor decision making in deluded psychotic patients.” European Archives of Psychiatry and Clinical Neuroscience, 265(6): 461-469; Simonovic et al. (2016). “Stress and Risky Decision Making: Cognitive Reflection, Emotional Learning, or Both.” Journal of Behavioral Decision Making, 30(2): 658-665.
O’Connor (2016). “How the Sugar Industry Shifted the Blame to Fat.” The New York Times.
Rocklage et al. (2018). “Persuasion, Emotion, and Language: The Intent to Persuade Transforms Language via Emotionality.” Psychological Science, 29(5): 749-760.
Rocklage and Fazio (2016). “On the Dominance of Attitude Emotionality.” Personality and Social Psychology Bulletin, 42(2): 259-279. For a summary, see: Markman (2016). “Emotion Dominates Fast Choices.” Psychology Today.
Tannenbaum et al. (2015). “Appealing to fear: A Meta-Analysis of Fear Appeal Effectiveness and Theories.” Psychological Bulletin, 141(6): 1178-1204.
“The Science Facts about Autism and Vaccines” Healthcare Management Degree.net.
-
Artificial intelligence re-defines reality and the self

Is this strong AI? Is this just fantasy? Caught in a human mind. No escape from reality.

When the Cold War brought the world’s attention to revolutions in scientific research, artificial intelligence would shake our understanding of what separates a human from the rest of the world. Scientists and philosophers would draw from theories of mind and question the epistemic limits of what we can know about ourselves. Neurophysiologist and neural network pioneer Warren McCulloch described his cybernetic idealism in his 1965 book Embodiments of Mind. This postwar scientific movement, which he founded with mathematician Norbert Wiener and anthropologist Gregory Bateson, was a mix of the science and culture of the time. Cybernetics, named for the Greek word kybernētikḗ, meaning “governance,” was a collaboration among machine design, physiology, and philosophical ambition. Since its 1948 inception, it has created much of the language of science and technology we take for granted today. The advances in artificial intelligence brought about by cybernetic idealism continue to define how scientists and philosophers understand the world.
In 1943, Warren McCulloch and Walter Pitts developed the first artificial neuron, a mathematical model of a biological neuron. It would later become fundamental to neural networks in the field of machine learning. Advances in computer science and related disciplines have since flooded writers with new terms. From the Internet of Things to Big Data to deep learning, the world is quantified. Mathematician Stephen Wolfram and other researchers have even argued that matter itself is digital. By latching on to any method of turning the world into data from which we can draw conclusions, we have created a new sort of metaphysics in which beings themselves can be quantified. Cybernetics sought to create a language in which these scientific phenomena could be described. The artificial intelligence now exploding in the news, especially neural networks and deep learning technologies, would be impossible to describe without McCulloch’s work establishing what algorithms and information truly meant.
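The original McCulloch-Pitts unit is simple enough to sketch in a few lines of code. The snippet below is an illustrative reconstruction rather than anything from the 1943 paper itself: the function names, weights, and thresholds are my own choices. It shows the core idea that a unit “fires” when the weighted sum of its binary inputs reaches a threshold, which is already enough to realize logical operations such as AND, OR, and NOT.

```python
# A minimal sketch of a McCulloch-Pitts threshold neuron (illustrative only;
# the names, weights, and thresholds are assumptions, not the 1943 notation).

def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs meets the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b):
    return mcculloch_pitts_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return mcculloch_pitts_neuron([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    return mcculloch_pitts_neuron([a], weights=[-1], threshold=0)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"AND({a},{b})={AND(a, b)}  OR({a},{b})={OR(a, b)}")
    print(f"NOT(0)={NOT(0)}  NOT(1)={NOT(1)}")
```

Chaining units like these into networks is what lets logical expressions be computed by something brain-like, the move that made the digital seem real to McCulloch and Pitts.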
We should scrutinize this premise of digital metaphysics. We must understand how our metaphors of processing information the way a computer would are limited in describing how humans reason. We must come to terms with how much knowledge the digital age can truly extract from large amounts of information. The algorithm-based decisions machines make in areas like business, medicine, and architecture differ from the decisions humans make. Data itself is part of our world, but, if it takes over responsibilities and tasks normally carried out by humans, then this new form of intellectual labor is changing the self and reality.
After World War I, the self was shaped by the technology of the era. Telegrams and newspapers informed psychologist Sigmund Freud’s notion of psychic censorship of painful or forbidden thoughts, which Freud modeled after the methods Czarist guards used to censor information in Russia. During World War II, the cyberneticians created a notion of the self through feedback systems, drawing on Wiener’s wartime engineering work on weapons systems. In determining how tracking machines could shoot down enemy airplanes, the engineers had to aim far ahead of the plane and predict how it would move. Wiener designed a machine that could learn how the pilot moves based on the pilot’s past behavior. These goal-directed, feedback-based systems would define artificial intelligence, and artificial intelligence itself can be described in terms of these concepts of the self.
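As a rough illustration of this kind of goal-directed, feedback-based prediction, here is a toy sketch of my own devising, not Wiener’s actual anti-aircraft predictor: it guesses a target’s next position from an estimated velocity, then uses the error between prediction and observation as a feedback signal to correct both the estimate and the velocity.

```python
# A toy predictor-with-feedback, loosely in the spirit of Wiener's tracking
# problem. The constant-velocity model and the gain value are illustrative
# assumptions, not parameters from any historical device.

def track(observations, gain=0.5):
    """Predict each next position, then correct using the observed error."""
    estimate, velocity = observations[0], 0.0
    predictions = []
    for observed in observations[1:]:
        prediction = estimate + velocity      # lead the target
        error = observed - prediction         # feedback signal
        estimate = prediction + gain * error  # correct the position estimate
        velocity += gain * error              # adapt the velocity model
        predictions.append(prediction)
    return predictions

if __name__ == "__main__":
    path = [0.0, 1.0, 2.1, 3.3, 4.6, 6.0]     # a target gradually speeding up
    print(track(path))
```

The point of the sketch is the loop, not the numbers: behavior is steered by the difference between what the system expected and what it observed, which is the sense of “feedback” the cyberneticians had in mind.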
Advances in information theory emerged from newer forms of communication in the 1950s. Machines would rely on feedback loops, and human-machine interfaces would create new forms of communication and interaction. Scholars would write about “information,” measuring communicated intelligence in bits, binary digits. In How Reason Almost Lost Its Mind: The Strange Career of Cold War Rationality, historian of science Paul Erickson and his colleagues described this period as a passage from Enlightenment reason to “quantifying rationality,” a shift from a qualitative capacity to judge to an extensive but narrow push to measure. But some Enlightenment notions survived the transition. Cybernetics would describe how these systems were structured and what possibilities they might have. Cognitive scientist Marvin Minsky would speculate about when scientists could create a robot with human-like intelligence.
What is real? What isn’t? McCulloch sought to understand this and separated the world into what a metaphysician studies, the “mind,” and what a physicist studies, the “body.” McCulloch developed an “experimental epistemology” in which the physicist had to move into the “den of the metaphysician” in pursuit of the synthetic a priori, a term philosopher Immanuel Kant used to describe truths that are factual yet universally necessary. Drawing upon philosophers like Kant, Gottfried Leibniz, and Georg Hegel, McCulloch and his colleagues hypothesized how to describe information as something different from matter and energy. Wiener drew on Leibniz’s “universal characteristic,” a logical language that all machines and computers could share. Leibniz also argued that such mechanical calculations can amount to reasoning. Leibniz allowed cybernetics to move beyond the binary alternative between the material and the ideal in a philosophical sense. Leibniz and Kant were also sources for McCulloch’s search for the conditions of cognition—the synthetic a priori—in the digital structure of the brain.
These notions of cybernetics in today’s reality meant artificial intelligence could break down barriers researchers previously hadn’t even known about. In The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, computer scientist Pedro Domingos said machine learning is “the scientific method on steroids.” The way machine learning has already shaped society is absolutely unprecedented. Politicians take action and start dialogues through predictions of voting behavior of different demographics. Deep learning technologies can diagnose diseases like cancer better than doctors can in some respects. Domingos argued that a “master algorithm” will create a “perfect” understanding of society, connecting sciences to other sciences and social processes themselves. Automation will become automated this way. “The Industrial Revolution automated manual work and the Information Revolution did the same for mental work, but machine learning automated automation itself.” Domingos imagined society running on the steam of some other intelligence. Automation has been a major force in global history for at least two centuries. To automate automation is to imagine historical causality itself as controlled by artificial intelligence. Domingos is sanguine about automated automation making things better, but it isn’t clear why. This left us without a conceptual foundation on which to build an understanding of the ubiquitous digital processes in our society, deferring even historical causality to machine learning. McCulloch’s group of scientists had a glimmer of such an approach, a way to understand and govern the very machine learning they had set in motion.

Cybernetics keepin’ it real.

By setting the foundations of neuroscience, scientists could describe these examples of neural activity. The same way synapses of the brain fire and transmit information, computers could use neural networks, and telephone operators could use switchboards, to create calculating machines like the ones mathematician-physicist John von Neumann constructed. They then used the digital machine infrastructure to work with logical propositions such as the ones philosopher-mathematician John Alan Robinson studied. McCulloch and Pitts believed they could extend these ideas to the ways computers could form predictions and, in turn, to how artificial intelligence would operate. Embracing neural networks as ways the brain processes information, the two created a shared semantics and syntax for neurophysiology and machine learning, with principles of digital operation and organization, in their 1943 paper “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Similar networks would also be used to study the symbolic nature of human understanding through semiotics, as well as the structure of human knowledge itself with semantic networks. The interdisciplinary interactions among neuroscience, computer science, mathematics, and other disciplines, alongside these newfound accounts of meaning and material, meant these forms of neural networks were closely related to nature itself.
The two scientists wrote that these neural networks needed the kind of cause-and-effect relationship Kant described, in which one event necessarily follows another in time. Neuronal activity, though, can’t capture this necessity, because disjunctive relationships prevent the determination of previous states. While we may observe neuronal states causing one another, the causes aren’t apparent until after the effect is observed; the states following one another can’t be determined from their preceding states. This means the knowledge available to such digital systems will always be incomplete, leaving them only a certain amount of autonomy. The brain establishes its own ways to receive and structure impulses, the states of matter of its logical structure and receptive neurons. The brain is the intersection and source of the mind and the world. Putting Kant’s principle of causality into these neurophysiological terms, these neural networks have a partial autonomy.
The scientists created the original McCulloch-Pitts neuron to show that the digital is real. Logical expressions are central to cognition itself, and the digital is a combination of idea and matter. Whether realized in our brains or in computers, these frameworks are still restricted in that they can’t “determine” their networks themselves. McCulloch’s Kantian approach uses symbols to represent these digital constructs. This abstraction of the digital is the metaphysics of reality.
Sociologist William Davies argued that the data-driven approaches of the digital age don’t focus on causality but, rather, on the ability to control behavior. Davies, unlike Domingos, believes that letting data make decisions means allowing digital information to act as a cause without participating in the cause-and-effect relationships themselves. Understanding Big Data means seeking correlations in digital networks. The data abstractions are real and help create the world, but data still can’t constitute decision-making. Humans act on these cause-and-effect relationships; the artificial intelligence of machine learning, by contrast, performs by using pragmatically successful algorithms. The digital still can’t determine the correlations that take place, and automation itself can’t be automated. Information is information. It takes wisdom to put it in context.
We can come close to living in this digital reality by understanding how our world includes the data from these processes. As machine learning algorithms make decisions with profound impacts on social behavior and ethical norms, we need notions of responsibility, duty, and obligation that account for this digital reality. The partial autonomy given to computers means they don’t make judgements the same way humans do, but the language of philosophy will allow us to answer the difficult questions they pose for the future.
-
The beauty of logic throughout history

Kurt Gödel

As I peruse biographies of philosophers, scientists, mathematicians, and other researchers, I find myself fascinated. I wonder how their hometowns, educational backgrounds, and the people they met throughout their lives influenced the success of their work. In investigating what it means to be a genius and what it takes to produce amazing work, I still wonder how people interacted with scientist Albert Einstein, mathematician Bertrand Russell, logician Kurt Gödel, and philosopher Ludwig Wittgenstein, and what they thought of their greatness. With the distance and objectivity I have as I read about these famous minds of history, I appreciate the way history becomes more of a gradient of many events that give rise to settings, culture, and ideas themselves.
Einstein’s job as a patent clerk set the stage for his ruminations on space and time. As he struggled to find a place in the world, he thought about the universe itself and what sort of theories would describe it. Russell wore the fine apparel of his aristocratic upbringing, which lent an authoritative feel to his advice and commentary on society. As radios and newspapers covered his work, people revered him as a celebrity, especially as he urged scientists to engage with the public. Principia Mathematica, by mathematician Alfred North Whitehead and Russell, laid the foundation for the mathematics and computer science of the 1930s and beyond. Even Wittgenstein’s friendly demeanor earned him friends among other philosophers. Gödel made similar contributions to mathematics and philosophy. His story, though, was not marked by such popularity.
While Gödel worked alongside Einstein at the Institute for Advanced Study in Princeton, he contrasted the physicist’s casual attire with his own formal white jacket. He appeared distanced and detached, as though he were observing everything in the universe around him. Gödel’s contributions include his incompleteness theorems and an argument for God’s existence. The first incompleteness theorem showed that rule-based systems like that of Principia Mathematica will always remain incomplete: there are arithmetic truths they cannot prove. The second incompleteness theorem prevents arithmetic theories from being proved consistent within those theories themselves. At the time of their publication in the 1930s, scholars found it appealing that there were truths they couldn’t prove. Gödel’s formalisms of this idea even appealed to scholars who searched for romantic beauty in their work. The incompleteness theorems have been referenced throughout mathematics, computer science, and logic, and even in more distant disciplines such as aesthetics and neuroscience. This beauty of logic carried through Gödel’s discussions, lectures, and publications as other intellectuals admired his work. Professor of cognitive science Douglas Hofstadter would write on the themes of mathematics and symmetry encompassing the theorems in Gödel, Escher, Bach. In the book, Hofstadter described the connections between music, art, and logic through a “strange loop.” Hofstadter argued this “strange loop” gives rise to consciousness: as the neurons of the brain respond to what the body perceives, they give rise to the very entity that creates those perceptions of the world. As Gödel inspired discussions in philosophy, literature, science, and theology, the poet Hans Magnus Enzensberger wrote “Homage to Gödel,” and composer Hans Werner Henze based his Second Violin Concerto on a setting of the poem. Gödel might not have been as popular a celebrity among the general public as Einstein and Russell were, but he surely had his fans among the intellectuals.
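For readers who want the theorems in a more precise form, a standard modern paraphrase (not Gödel’s original 1931 wording, and folding in Rosser’s later strengthening of the first theorem) runs roughly as follows:

```latex
% Standard modern paraphrase of Goedel's incompleteness theorems.
\textbf{First incompleteness theorem.} If $T$ is a consistent, effectively
axiomatized formal theory containing elementary arithmetic, then there is a
sentence $G_T$ such that $T$ proves neither $G_T$ nor $\neg G_T$.

\textbf{Second incompleteness theorem.} For any such theory $T$, the sentence
$\mathrm{Con}(T)$ expressing the consistency of $T$ is not provable in $T$.
```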
In this excerpt from “Homage to Gödel,” Enzensberger describes the incompleteness conclusions and how fascinating Gödel’s thought process must have been.
In any sufficiently rich system
including the present mire
statements are possible
which can neither be proved
nor refuted within the system.

Those are the statements
to grasp, and pull!

Gödel himself was nowhere near as outspoken as his fame might suggest. Born to a Lutheran family in Moravia, he was shy and often nervous around others. Identifying as Austrian, he spoke German rather than Czech. He excelled in school and studied mathematics at the University of Vienna. There, he met with philosophers of the Vienna Circle, a group that supported logical positivism, a form of philosophy that used logic to address philosophical issues. They used this “scientific outlook” as a way to explain truths in science without appealing to other forces such as mysticism. As they discussed what in mathematics truly exists in the universe, they formed their arguments and engaged in debates with one another. Gödel agreed with many of their views, but his belief in God and his Platonism set him apart. The Platonist view holds that mathematical truths are discovered, as though the mathematician were an explorer in a cave, rather than the constructivist view of an inventor building a machine. While a Platonist might believe mathematical objects, such as numbers and symbols, are real, constructivists would be inclined to argue those objects are only works of fiction humans have created.
When Gödel received his doctorate in 1930, his dissertation showed that first-order logic is complete: every logical truth of first-order predicate logic can be proved within a rule-based system. It was after this that he would do his work on the incompleteness theorems. Logician Jaakko Hintikka wrote that Gödel’s announcement of the incompleteness result at the Königsberg Conference on Epistemology of the Exact Sciences was the most important moment in 20th-century logic, and possibly in the entire history of logic. When Gödel announced it, however, his audience remained silent. Only Hungarian mathematician John von Neumann realized its implications at the time. After Gödel published the proof, he lectured in Princeton and rose to prominence. Later, theoretical physicist Roger Penrose argued that Gödel’s theorem shows that “Strong AI” doesn’t hold: human minds are not computers, and artificial intelligence will never completely replicate them. Gödel himself didn’t believe his incompleteness result proved Platonism true, but it did refute the anti-Platonist view that, in mathematics, truth and provability are the same thing.
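To make the contrast explicit (again in standard textbook notation rather than Gödel’s own), the completeness theorem from his dissertation can be read as the claim that semantic consequence and formal provability coincide for first-order logic, which is exactly what incompleteness denies for arithmetic truth within any single formal theory:

```latex
% Soundness and completeness of first-order logic, stated together:
% a sentence follows semantically from a set of premises exactly when it is
% formally derivable from them. Completeness is the left-to-right direction.
\Gamma \models \varphi \;\Longleftrightarrow\; \Gamma \vdash \varphi
```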
Gödel would suffer from poor physical and mental health through his remaining years. After he married the dancer Adele Nimbursky, he moved to the Institute for Advanced Study in Princeton, where he remained until his death. He applied for U.S. citizenship and, while examining the propositions of the Constitution, discovered how the U.S. could legally turn into a dictatorship. Similarly, as Gödel studied the field equations of Einstein’s general theory of relativity, he imagined a world in which time could move backwards. His perfectionism led him to publish only a few more papers describing his Platonist views, including “Russell’s Mathematical Logic” and “What is Cantor’s Continuum Problem?” But the logician’s health would get the best of him. As he grew weak and psychologically disturbed, he came to distrust everyone except his wife. He made appointments with doctors but wouldn’t attend them, refused medicine, and, in the 1970s, was hospitalized after refusing to eat. When he died in 1978, the death certificate read that the cause of death was “malnutrition and inanition caused by personality disturbance.”
Other logicians, such as Austrian philosopher Ludwig Wittgenstein, engaged in these debates as well. At a meeting of the Moral Sciences Club at Cambridge University in the 1940s, with Russell present, some say Wittgenstein threatened philosopher Karl Popper with a fireplace poker during a debate. Regardless, Wittgenstein’s peers described him with emotions ranging from hatred to adoration. Indeed, Wittgenstein sought to use philosophy to grant solutions to the “puzzles” of contemporary life, and his seminal work Tractatus Logico-Philosophicus (Latin for “Logico-Philosophical Treatise”) sought to answer the issues philosophy faced. Such an ambitious goal showed how Wittgenstein believed philosophy should embody the integrity and truth of life that everyone seeks to achieve. The work’s central thesis was that propositions, or statements about the world, are pictures of reality. According to philosopher Norman Malcolm, Wittgenstein hit upon this idea when he came across a newspaper using a map to describe the location of an automobile crash: the map pictured reality the same way propositions do. Philosopher Georg H. von Wright located the beauty of the Tractatus in its simple, static sentences, similar to one of Wittgenstein’s architectural projects. Wittgenstein built a mansion in Vienna for one of his sisters with utmost precision down to the smallest details, and later made a sculpture in the studio of his friend, the sculptor Drobil. Von Wright described how the perfection and elegance of the finished work contrasted with Wittgenstein’s dynamic, searching personality.
Wittgenstein sought this intellectual comfort in all of his work. Drawing inspiration from Russian writer Leo Tolstoy and his book The Gospel in Brief, he found solace after witnessing a tremendous amount of suffering while fighting against the Russians during World War I. Wittgenstein’s cousin, economist Friedrich Hayek, once described him as so engrossed in a detective novel that he wouldn’t speak to him; when he did finish, though, he engaged in lively discussion of the Russians in Vienna. The experience had shaken Wittgenstein fundamentally as a person and left him disillusioned with some of his views of society. Still, Wittgenstein’s enthusiasm for intellectual discussion showed in how he lectured for long stretches as professor of philosophy at Cambridge University. He dressed informally in a way students found approachable, even in his long-winded explanations, and he did this without getting tired, even cutting into his students’ lunch breaks.
The challenges logicians and other philosophers face today may still leave us speechless, as they did at the Königsberg Conference. Still, the beauty of logic permeates whatever theorem, equation, or idea it proves. The lives of these minds reveal the nuances and arguments that carried through the decades, and they will remain relevant to the future of mathematics, physics, and philosophy.
-
Creating a greater story of artificial intelligence
With my new site A History of Artificial Intelligence, I share a story of over sixty events, from ancient civilizations to the present day. The ways humans have created artificial intelligence, from self-driving cars to algorithms that recommend books to read, have a long history behind them. Spanning literature, art, poetry, philosophy, computer science, logic, mathematics, ethics, mythology, and other fields, I build this grand narrative of AI. I hope that, as news unfolds about the concerns and social issues raised by AI technology, we can form informed, educated opinions on them by keeping the past alive. Studying artificial intelligence, robots, automata, androids, and other parts of our culture as they relate to stories and inventions from hundreds of years ago, we can ask the same questions that occupied the ancient Greeks and Romans: What makes us human? How can we ascribe human qualities to nature? In what way can a computer “think”? These inquiries should take center stage in debates about the future of artificial intelligence and in the policy and ethical recommendations that guide our decisions.
Through a reddit Secret Santa gift exchange, I received the book Superintelligence: Paths, Dangers, Strategies by the philosopher Nick Bostrom. Though I’ve been incredibly busy with graduate school applications, I’ve read a little of it, and I hope everyone can come to understand the nuanced complexities of artificial intelligence the way Bostrom does. I draw on many of Bostrom’s methods, speculating about issues and then addressing them from all angles with clarity and precision, as ways for researchers in any area to approach AI. Regardless of how closely I agree with Bostrom’s solutions, he presents a strong argument by extrapolating the future of AI from features that can be generalized into trends. By this, I mean he chooses characteristics of AI that can plausibly be expected to carry into the future, rather than cherry-picking examples that only support his conclusions.
With my timeline, I have been hoping to capture the essence of what AI is by creating a diverse, humanistic narrative that describes periods and eras punctuated by specific, important events. I’ve found it difficult to decide which events to use for each time period and how they relate to one another, though. I especially struggle as the timeline becomes more recent: it’s hard to know which current events are worthy of being called “history,” and to single out the essential events among all the breakthroughs in AI over the past few decades. I also started the project to show how a story can draw from every academic area and present any idea in a casual, generalizable manner, going beyond simply sharing facts or lecturing with as much information as possible. Throughout my writing, I imagine all of my friends sitting next to me and pretend I’m speaking to them over a cup of tea, communicating in a way they would understand.
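Behind that narrative voice sits a small organizational problem: every event needs a date, a title, and the fields it touches, while the eras are really just editorial groupings laid over those events. Here is a minimal, hypothetical sketch in Python of how such a timeline could be modeled and bucketed into eras; the names, era labels, and cutoff years are my own illustration, not the site’s actual code.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical sketch of a timeline data model; names and era boundaries
# are illustrative and not taken from the actual site.

@dataclass
class Event:
    year: int          # negative values stand for BCE
    title: str
    fields: tuple      # disciplines the event touches

def era_of(year: int) -> str:
    """Assign an event to a coarse era by year; the cutoffs are an editorial choice."""
    if year < 500:
        return "Ancient and classical"
    if year < 1800:
        return "Early modern"
    if year < 1950:
        return "Industrial and modern"
    return "The computing age"

def group_by_era(events):
    """Bucket events by era and sort each bucket chronologically."""
    buckets = defaultdict(list)
    for event in events:
        buckets[era_of(event.year)].append(event)
    return {era: sorted(items, key=lambda e: e.year) for era, items in buckets.items()}

timeline = [
    Event(-300, "Talos, the bronze automaton of Greek myth", ("mythology",)),
    Event(1818, "Mary Shelley publishes Frankenstein", ("literature", "ethics")),
    Event(1956, "The Dartmouth workshop names the field 'artificial intelligence'", ("computer science",)),
]

for era, events in group_by_era(timeline).items():
    print(era, "->", [e.title for e in events])
```

One appeal of this kind of structure is that the era boundaries are computed at display time rather than attached to each event, so the groupings can be redrawn as the most recent decades slowly sort themselves into history.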
-
"Frankenstein" and tampering with nature

From the 1831 revised edition of Mary Shelley’s Frankenstein, published by Colburn and Bentley, London.
In Frankenstein, the English novelist Mary Shelley drew on philosophy, literature, science, and history to speculate, as far back as 1818, about how humans would attempt to use scientific progress to tamper with nature. Frankenstein and his rejected monster remain central to debates about fetal tissue research, life extension, human cloning, and artificial intelligence. In the story, Victor Frankenstein builds an artificial, intelligent being from slaughterhouse and medical dissection materials. Like other Romantic works of English literature, Shelley’s novel confronted man’s relationship with nature, addressing the problems of science and the Enlightenment question of how to use power responsibly. But how did a novel from over two centuries ago become a central piece of contemporary bioethics discussions? Through a historical overview, we come to understand the real monster – ourselves.
Shelley subtitled the novel “The Modern Prometheus,” referencing how the heat and electricity used to animate Frankenstein’s monster resembled the fire the Titan Prometheus gave to his own creations. The subtitle also recalls the 18th-century German philosopher Immanuel Kant’s warnings against “unbridled curiosity” after the inventor Benjamin Franklin’s experiments with electricity in the 1750s. Shelley’s monster is sentient yet hideous, and it faces an existential crisis over why it was created in the first place, echoing the philosophical crises of Prometheus. Around the same period, the Italian physicist Luigi Galvani discovered that the muscles of dead frogs’ legs could twitch when struck by electricity. Shelley specifically noted Galvani’s investigations but never stated that electricity was used to create the monster. Knowing the limits of what we can create continues to matter in debates about genetic engineering and artificial intelligence.
The relationship between Frankenstein and his monster grows tenuous through the novel. The monster realizes his own grotesque nature and begins to wonder how he can achieve happiness like any other human being. During one confrontation, Victor Frankenstein directly cursed his creation as he spoke:
Why do you call to my remembrance, circumstances, of which I shudder to reflect, that I have been the miserable origin and author? Cursed be the day, abhorred devil, in which you first saw light! Cursed (although I curse myself) be the hands that formed you! You have made me wretched beyond expression. You have left me no power to consider whether I am just to you or not. Begone! relieve me from the sight of your detested form.
The monster responded with:
Thus I relieve thee, my creator. Thus I take from thee a sight which you abhor. Still thou canst listen to me and grant me thy compassion. By the virtues that I once possessed, I demand this from you. Hear my tale; it is long and strange, and the temperature of this place is not fitting to your fine sensations; come to the hut upon the mountain. The sun is yet high in the heavens; before it descends to hide itself behind your snowy precipices and illuminate another world, you will have heard my story and can decide. On you it rests, whether I quit forever the neighborhood of man and lead a harmless life, or become the scourge of your fellow creatures and the author of your own speedy ruin.
The monster’s eloquent reply shows his forceful yet gentle nature. His domineering side is only part of who he is, and his efforts to appear sincere and calm make him trustworthy enough that Frankenstein agrees to create a female partner for him.
It was not until the latter half of the 20th century that scholars in medicine and science began picking up on the novel’s ethical themes. The explosive growth of genetic engineering had caused citizens from all fields to raise concerns. “The Frankenstein myth is real,” said the Columbia University psychiatrist Willard Gaylin in a 1972 issue of The New York Times Magazine. At that time, U.K. scientists had recently cloned a frog, and scientists began speculating how close we were to human cloning. Gaylin, who was also co-founder of the world’s first bioethics think tank, the Hastings Center, speculated that researchers could soon perform in vitro fertilization in which scientists select the genetic traits of the offspring. Echoing the dark themes of Frankenstein, artificial placentas and surrogates could replace pregnancy and childbirth. Though Victor Frankenstein resorted to slaughterhouses and medical dissections, we have many more resources, and the comparison suggests that, as we rely on technological advancements, we may be able to address the issues Shelley predicted. These biological replacements raise the question of what right humans have to make such adjustments to giving birth. Gaylin went on to write in The New York Times Magazine that, “When Mary Shelley conceived of Dr. Frankenstein, science was all promise…Man was ascending and the only terror was that in his rise he would offend God by assuming too much and reaching too high, by coming too close.”
Scientists may have begun contemplating ethics, but that didn’t stop researchers from making progress. By 1973, the biologists Herbert Boyer of the University of California at San Francisco and Stanley Cohen of Stanford University had developed recombinant DNA techniques for genetic engineering, allowing scientists to move genes across species. In 1975, 150 scholars and bioethicists gathered at the Asilomar conference center in Pacific Grove, California, to devise an elaborate set of safety protocols under which gene-splicing experimentation would be allowed to proceed. The mayor of Cambridge, Massachusetts, declared in 1976 that the City Council would hold hearings on whether to ban Harvard scientists from starting genetic engineering experiments.
“They may come up with a disease that can’t be cured—even a monster,” Mayor Alfred Vellucci warned. “Is this the answer to Dr. Frankenstein’s dream?” In 1977, after six months of discussing these issues, a body of scholars voted to proceed with the research, despite Vellucci’s opposition. Cambridge’s passion for genetic engineering continued for decades. Today there are over 450 biomedical businesses in the Cambridge area. Alongside the passion, though, the ethical issues lingered. The insidious themes of Shelley’s novel, direct or indirect, persisted as well. With each discovery came an alarm to contemplate its effects.
Dolly’s taxidermied remains at the National Museum of Scotland.
In 1997, the embryologist Ian Wilmut, working at Scotland’s Roslin Institute, made history with Dolly the sheep, the first mammal cloned from an adult cell. That same year, U.S. President Bill Clinton warned of the dangers of human cloning, emphasized the humanistic and spiritual values at stake in these controversial techniques, and banned federal funding for human cloning research. Experiencing a disgust and fear similar to Victor Frankenstein’s, the bioethicist Leon Kass argued in a New Republic essay that same year that this repugnance represented a “deep wisdom.” Repeating the themes of manufactured humans and the Frankenstein-like abominations that might result, mankind’s fears took hold; Kass even cited the “Frankensteinian hubris” of these techniques. The science continued, though, as did the fears.
In 1978, U.K. scientists created the first “test tube baby” using in vitro fertilization. By 2017, the Society for Assisted Reproductive Technology reported that almost 7 million children worldwide had been conceived through this method, which has come to include selecting traits and using surrogate egg donors. Unwarranted backlash against genetically modified food, though, dominated public debate. Despite scientifically inaccurate campaigns against “Frankenfoods,” researchers have created hundreds of safe biotech crops; foods such as golden rice yield more nutrients and resist disease, and topsoil erosion has decreased by forty percent since the 1980s due to bioengineered herbicide-resistant crops, according to the U.S. Department of Agriculture. In this case, the Frankenstein story yields unnecessary, unfounded fears. Writers would even describe genetically engineered “Frankenbabies” and “designer babies” using similar terminology.
The motives and purposes behind this technology become even muddier. While it’s not immoral to believe in the goal of fighting disease, it’s also important to remember that modified humans are not monsters in the literal sense of Frankenstein; Victor Frankenstein’s cursing of his monster isn’t quite the way we perceive these modified offspring. Whatever comparison or argument we draw from these stories, we are still creating humans, capable of speech, thought, and judgment like anyone else. Proponents such as transhumanists criticize the bioethical concerns as keeping us from achieving these biotechnological gifts.
Understanding the ways mankind has tampered with nature since the early days of science can give us a deeper, more nuanced view of these fears. If we don’t heed them, we may find ourselves becoming more like the scientist Victor Frankenstein, the true monster.

