Guest post – "Aristotle and Fake News: Why understanding rhetoric illuminates credible arguments"

By Carolyn Haythorn

It’s hard for me to remember the time before the internet became such a pervasive part of daily life. I work online to earn money, watch Netflix to relax, scroll YouTube for advice on anything from personal finance to cooking, and read push notifications from my favorite news outlets to stay up to date. I’m part of the generation in which proper computer use was taught in school. Our digital literacy began with typing classes in grade school, then turned to learning about the dangers of Wikipedia in high school, and, by the time I was in college, people wrote their class papers from the internet more often than from physical books in the library. 

But one area where I think our digital education was lacking is how to spot a ‘credible’ source. 

Sure, people have always known that anyone can say whatever they want on the internet, and we’ve all heard that it’s important to question what you read before accepting it as fact. But very little was actually said about how to determine if something is credible, or what to do if you come across websites with suspect information. If anything, this was further confused in college, where only peer-reviewed academic articles were considered credible—a wealth of information that, by and large, you lose access to after graduation. 


One solution to this problem is to review how audiences are persuaded in the first place. By understanding how arguments are constructed, we can more easily recognize flaws in logic or failures in the speaker’s character. I’m talking about Aristotle and his three modes of persuasion: pathos, logos, and ethos. You likely touched on these in school when studying persuasive writing and political speeches, but I don’t think nearly enough emphasis is placed on how methods of persuasion can influence perceptions of credibility, the spread of viral stories, and belief in factually unsound statements. Although all three modes are equally vital for a strong, sound argument, we as human beings are predisposed to focus on some factors more than others. I believe that with the rise of misinformation, it’s more important now than ever to understand exactly how we are influenced by persuasion, and where our weaknesses lie in recognizing good arguments. 

Let’s start with the easiest, pathos. Pathos is an appeal to an audience’s emotion, whether negative or positive. This encompasses both evoking a particular emotion from an audience and invoking that emotion as justification for a certain behavior or action. There’s a strong connection between emotion and persuasion: in fact, there’s evidence that people naturally include more emotionality in their language when they are trying to be persuasive, even if they are specifically advised against doing so. Appeals to emotion may be frowned upon as unscientific or misleading, but they’re pervasive for a simple reason: appealing to emotion works. In 2012, researchers at the University of Pennsylvania found that the most emailed New York Times articles were ones that prompted emotional responses, especially if the emotion was positive or associated with high energy (for example, anger or anxiety), as opposed to a low-energy emotion like sadness. A study in 2016 found that when people are forced to make quick decisions about an object, they are more likely to rely on their emotional response to the item than on objective information. A person is more likely to quickly classify a cookie as something positive because it makes them happy, while with more time for consideration, they may classify it as negative because it’s unhealthy. Finally, a meta-analysis of 127 previous studies concluded that appeals to fear were nearly always effective at influencing an audience’s attitude and behavior, especially when the proposed solution is seen as achievable and only requires one-time action. 

We know that people use emotion to make quick judgements, can be strategically influenced by arguments that appeal to emotion (in particular, fear), and are more likely to share articles that elicit emotion. All of this is strong evidence that emotion is an integral part of how humans perceive and interact with the world. The problem with pathos is that, used without logos and ethos, the proposed solution to a problem may not be very effective, and there’s no guarantee that the problem being addressed is even real. For example, in 1998, Dr. Andrew Wakefield purported to have found a link between autism and vaccines. There is no such link, but the report stoked enough fear to spark the anti-vax movement, which is now responsible for the reemergence of preventable diseases like measles and whooping cough. Fearmongering about marijuana in the 1930s led to the drug being outlawed in 1937 and classified under the strictest schedule of the Controlled Substances Act of 1970, despite contemporaneous recommendations from within the U.S. government to decriminalize its use. What’s more, there’s evidence that decisions made during stressful situations are less logically sound than decisions made in calm situations. The lesson? People suffer when pathos alone prevails.

Logos is usually framed as the antidote to pathos. An appeal to logos is an appeal to logic: cold numbers, rational solutions, statistical significance. In theory, this sounds great. The problem is, the human brain isn’t wired to think purely logically: we have trouble conceptualizing large numbers, we seek patterns in random smatterings of data points, we’re quick to claim causation where chance or other variables are involved, and we’re easy victims of logical fallacies. Even within the scientific community, there are plenty of examples of seemingly logical, scientific arguments that turned out to be bad science. A paper published in 1971 asserted that women’s periods will “sync up” if they spend enough time together. Although this is still widely believed today, it has been thoroughly debunked by the scientific community. More troubling, the idea that low-fat diets are an effective way to lose weight, with no regard to sugar consumption, was introduced to the American consciousness in 1967 in research sponsored by representatives of the sugar industry. This no doubt altered the standard American diet and likely contributed to the rise in obesity across the U.S. (although the culpability of the sugar industry is up for debate). Even with good intentions, scientists can make mistakes: in 2018, scientists tried to replicate 21 previously published social science experiments but reproduced the original results for only 13 of them, and even those showed weaker effects than the original studies. While it’s important that scientists review and revise their original conclusions, correcting common misconceptions is difficult once a myth has entered the popular consciousness: Think you’re immune? Check out this infographic. 

What’s more, without pathos and a focus on morality, seemingly “logical” solutions can be plain cruel. This is demonstrated beautifully in Jonathan Swift’s satirical A Modest Proposal, in which eating children is proposed as a rational solution to poverty in Ireland. Decisions made with a total disregard for emotions shouldn’t be our gold standard for sound reasoning, and an argument lacking in pathos can be just as bad as one lacking in logos.

If appeals to both pathos and logos can lead to mistakes in reasoning, where does that leave modern thinkers in their quest for credible arguments? The answer lies in Aristotle’s third mode of persuasion, ethos. Today, ethos is sort of a fuzzy idea; it means both having knowledge about a topic and establishing yourself to your audience as a credible speaker, two things that don’t necessarily go hand in hand. Aristotle himself split the idea into three parts: good sense, good moral character, and goodwill. Good sense, or phronesis, comes from having experience in one’s field, especially with a track record of rational, moral decision making. Good moral character, arete, is gained by practicing virtuous behaviors until they become habits. Finally, goodwill, eunoia, is earned by convincing the audience of one’s knowledge and intentions. 

Having ethos is vital to a sound argument. In fact, Aristotle grants only three reasons for unsound arguments to exist: the speaker is wrong due to a lack of good sense, the speaker is lying due to a lack of moral character, or the speaker withholds their best advice because they lack goodwill toward the audience. The problem is, it’s difficult for readers to judge the ethos of a speaker, particularly over the internet. Unlike pathos and logos, the root of ethos comes from outside the argument itself: the audience must know the speaker’s experience (good sense) and moral character to avoid falling for unsound advice. To make matters worse, a speaker who wants to persuade an audience will go through the trouble of appearing credible whether they are offering sound advice or not. Today, that can mean anything from verbally assuring the audience of their credibility and good intentions, to selecting appropriate clothing for a given situation, or even hiring a web designer to make sure content looks clean and professional: all things which index competence in the modern world. But the appearance of credibility alone isn’t enough to judge a speaker as credible. 

Where does that leave us? Finding reliable, credible information can seem daunting at first: emotions cloud our judgement, but logic is uncertain. We often must judge whether a speaker has reliable experience in a field without ever having met them. However, with a little practice, I think we can all improve our sense of what’s credible and what’s not. To that end, I offer the following advice: First, after hearing an argument or statement – be it online or in person – consider your own relationship to the piece. How did it make you feel? Does it confirm what you want to be true? Do you have any stake in the events at play? All of these factors cloud judgement, so take extra care when evaluating an argument. Second, consider the logic of the piece. Does everything make sense? Do the numbers add up? Does it align with a wider context, or does it seem out of place? If the argument does not make sense logically—and you have enough knowledge in the field to judge it appropriately—then it probably isn’t sound. Finally, consider the position of the speaker. Do they have experience with the topic being discussed? Do they have a history of honesty? Do they benefit from your support of their argument? If more information is needed to answer these questions, then do some digging. Look to others with more experience in the field to help determine if the speaker can be trusted. If you can’t find enough information to make a solid judgement, consider the source not credible. 

And when in doubt, err on the side of caution: given that Rhetoric was written in the 4th century B.C., speakers have had a loooooong time to develop ways to manipulate audiences, whether intentions are pure or not! 

Sources

Alvergne (2016). “Do women’s periods really synch when they spend time together?” The Conversation.


Aristotle, Rhetoric: Book II. Translated by W. Rhys Roberts.  

Berger and Milkman (2012). “What Makes Online Content Viral?” Journal of Marketing Research, 49(3): 192-205. For a summary, see: Tierney (2010). “Will You Be E-Mailing This Column? It’s Awesome.” The New York Times.

Brinton, Alan (1988). “Pathos and the ‘Appeal to Emotion’: An Aristotelian Analysis.” History of Philosophy Quarterly, 5(3): 207-219, p. 211.

Burnett and Reiman (2014). “How did Marijuana Become Illegal in the First Place?” Drug Policy Alliance.

Calhoun (2013). “Human Minds Vs. Large Numbers.” The Sieve: finding science stories in the vast expanse.

Hale (2018). “Patterns: The Need for Order.” PsychCentral.


Moritz et al. (2015). “Stress is a bad advisor. Stress primes poor decision making in deluded psychotic patients.” European Archives of Psychiatry and Clinical Neuroscience, 265(6): 461-469; Simonovic et al. (2016). “Stress and Risky Decision Making: Cognitive Reflection, Emotional Learning, or Both.” Journal of Behavioral Decision Making, 30(2): 658-665.

O’Connor (2016). “How the Sugar Industry Shifted the Blame to Fat.” The New York Times.


Rocklage et al. (2018). “Persuasion, Emotion, and Language: The Intent to Persuade Transforms Language via Emotionality.” Psychological Science, 29(5): 749-760.

Rocklage and Fazio (2016). “On the Dominance of Attitude Emotionality.” Personality and Social Psychology Bulletin, 42(2): 259-279. For a summary, see: Markman (2016). “Emotion Dominates Fast Choices.” Psychology Today.

Tannenbaum et al. (2015). “Appealing to fear: A Meta-Analysis of Fear Appeal Effectiveness and Theories.” Psychological Bulletin, 141(6): 1178-1204.

“The Science Facts about Autism and Vaccines.” HealthcareManagementDegree.net.

Artificial intelligence redefines reality and the self

Isaiah, you so silly.
Is this strong AI? Is this just fantasy? Caught in a human mind. No escape from reality.

When the Cold War turned the world’s attention to revolutions in scientific research, artificial intelligence shook our understanding of what separates a human from the rest of the world. Scientists and philosophers drew from theories of mind and questioned the epistemic limits of what we can know about ourselves. Neurophysiologist Warren McCulloch, a founding figure of cybernetics and a pioneer of neural network theory, described his cybernetic idealism in his 1965 book Embodiments of Mind. The postwar scientific movement he built with mathematician Norbert Wiener and anthropologist Gregory Bateson mixed science and culture. Cybernetics, named for the Greek word kybernētikḗ, meaning “governance” or “steersmanship,” brought together ideas from machine design, physiology, and philosophical ambition. Since its inception in 1948, it has created much of the language of science and technology we take for granted today. The advances in artificial intelligence brought about by cybernetic idealism continue to define how scientists and philosophers understand the world.

In 1943, Warren McCulloch and Walter Pitts developed the first artificial neuron, a mathematical model of a biological neuron. It would later become fundamental to the neural networks of machine learning. Advances in computer science and related disciplines have since flooded writers with new terms. From the Internet of Things to Big Data to deep learning, the world is being quantified. Mathematician Stephen Wolfram and other researchers have even argued that matter itself is digital. This latching on to any method of turning the world into data we can draw conclusions from created a new sort of metaphysics, one in which beings themselves can be quantified. Cybernetics sought to create a language in which these scientific phenomena could be described. The artificial intelligence now exploding in the news, especially neural networks and deep learning, would be hard to describe without McCulloch’s work in establishing what algorithms and information truly mean.
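
To make the model concrete, here is a minimal sketch of a McCulloch-Pitts threshold neuron in Python. The function and parameter names are my own, and the sketch only illustrates the basic 1943 idea: binary inputs and outputs, absolute inhibition, and firing when enough excitatory inputs are active.

```python
def mcculloch_pitts_neuron(excitatory, inhibitory, threshold):
    """Binary threshold unit in the spirit of McCulloch and Pitts (1943).

    excitatory, inhibitory: lists of 0/1 inputs; threshold: integer.
    Any active inhibitory input vetoes firing; otherwise the neuron fires
    when the number of active excitatory inputs reaches the threshold.
    """
    if any(inhibitory):          # absolute inhibition in the original model
        return 0
    return 1 if sum(excitatory) >= threshold else 0


# Simple logic gates fall out of the same unit with different thresholds:
print(mcculloch_pitts_neuron([1, 1], [], threshold=2))   # AND(1, 1) -> 1
print(mcculloch_pitts_neuron([1, 0], [], threshold=1))   # OR(1, 0)  -> 1
print(mcculloch_pitts_neuron([1, 1], [1], threshold=1))  # inhibited -> 0
```

Wiring units like this together is what let McCulloch and Pitts argue that networks of neurons can compute logical propositions.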

We should scrutinize this premise of digital metaphysics. We must understand how our metaphors of processing information the way a computer does are limited in describing how humans reason, and come to terms with how much knowledge the digital age can truly extract from large amounts of information. The algorithm-based decisions machines make in areas like business, medicine, and architecture differ from the decisions humans make. Data itself is part of our world, but if it takes over the responsibilities and tasks normally carried out by humans, then this new form of intellectual labor is changing the self and reality.

After World War I, the self was shaped by the technology of the era. Telegrams and newspapers informed psychologist Sigmund Freud’s notion of a psychic censorship of painful or forbidden thoughts, which Freud modeled after the methods Czarist guards used to censor information in Russia. During World War II, the cyberneticians created a notion of the self through feedback systems, drawing on Wiener’s engineering work on weapons systems. In determining how tracking machines could shoot down enemy airplanes, the engineers had to aim far ahead of the plane and predict how it would move. Wiener designed a machine that could learn how the pilot moved based on the pilot’s past behavior. These goal-directed, feedback-based systems would come to define artificial intelligence, and artificial intelligence can in turn be described in terms of these concepts of the self. In today’s age, we model artificial intelligence on reality and the self.

Information theory emerged from newer forms of communication in the late 1940s and 1950s. Machines came to rely on feedback loops, and human-machine interfaces created new forms of communication and interaction. Scholars wrote about “information,” measuring communicated intelligence in bits, or binary digits. In How Reason Almost Lost Its Mind: The Strange Career of Cold War Rationality, historian of science Paul Erickson and his colleagues described this period as a passage from Enlightenment reason to “quantifying rationality,” a shift from a qualitative capacity to judge to an extensive but narrow push to measure. But some Enlightenment notions survived the transition. Cybernetics described how these systems were structured and what possibilities they might hold. Cognitive scientist Marvin Minsky speculated about when scientists could create a robot with human-like intelligence.
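
For context (this formula is my addition, not something quoted from Erickson or the cyberneticians), the standard Shannon measure behind those “bits” assigns a source whose symbols occur with probabilities $p_i$ an average information content of

$$H = -\sum_i p_i \log_2 p_i \quad \text{bits per symbol},$$

so a fair coin toss carries exactly one bit, and anything more predictable carries less.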

What is real? What isn’t? McCulloch divided the world into what a metaphysician studies, the “mind,” and what a physicist studies, the “body.” He developed an “experimental epistemology,” on which the physicist had to move into the “den of the metaphysician” to pursue the synthetic a priori, the term philosopher Immanuel Kant used for truths that are informative about the world yet universally and necessarily true. Drawing upon philosophers like Kant, Gottfried Leibniz, and Georg Hegel, McCulloch and his colleagues hypothesized how to describe information as something different from other materials such as matter and energy. Wiener took from Leibniz’s “universal characteristic” the idea of a logical language that all machines and computers could use, and Leibniz had also argued that such mechanical calculations can amount to reasoning. Leibniz allowed cybernetics to move beyond the binary alternative between the material and the ideal in a philosophical sense, and Leibniz and Kant were both sources for McCulloch’s search for the conditions of cognition—the synthetic a priori—in the digital structure of the brain.

In today’s reality, these notions of cybernetics meant artificial intelligence could break down barriers researchers previously hadn’t even known about. In The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, computer scientist Pedro Domingos calls machine learning “the scientific method on steroids.” The way machine learning has already shaped society is unprecedented. Politicians take action and start dialogues based on predictions of the voting behavior of different demographics. Deep learning technologies can diagnose diseases like cancer better than doctors can in some respects. Domingos argues that a “master algorithm” will create a “perfect” understanding of society, connecting sciences to other sciences and to social processes themselves. Automation will become automated this way: “The Industrial Revolution automated manual work and the Information Revolution did the same for mental work, but machine learning automated automation itself.” Domingos imagines society running on the steam of some other intelligence. Automation has been a major force in global history for at least two centuries; to automate automation is to imagine historical causality itself as controlled by artificial intelligence. Domingos is sanguine about automated automation making things better, but it isn’t clear why. This leaves us without a conceptual foundation on which to build an understanding of the ubiquitous digital processes in our society, deferring even historical causality to machine learning. McCulloch’s group of scientists had a glimmer of such an approach, a way to understand and govern the very machine learning they had set in motion.

Cybernetics keepin’ it real.

By setting the foundations of neuroscience, scientists could describe these examples of neural activity. In the same way that synapses of the brain fire and transmit information, computers could use neural networks, and telephone switchboards could route signals, to build calculating machines like the ones mathematician-physicist John von Neumann constructed. Researchers then used this digital machine infrastructure to manipulate logical propositions, as in the resolution method philosopher-mathematician John Alan Robinson created. McCulloch and Pitts believed they could extend these ideas to the ways computers form predictions and, in turn, to how artificial intelligence would operate. Treating neural networks as the way the brain processes information, the two created a shared semantics and syntax for neurophysiology and machine learning, built on principles of digital operation and organization, in their 1943 paper “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Similar networks would later be used to study the symbolic nature of human understanding through semiotics, as well as the structure of human knowledge itself with semantic networks. The interdisciplinary interactions between neuroscience, computer science, mathematics, and other disciplines, alongside these newfound treatments of meaning and material, meant these forms of neural networks were seen as closely related to nature itself.

The two scientists wrote that these neural networks needed the kind of cause-and-effect relationship Kant described, in which one event necessarily follows another in time. Neuronal activity, though, can’t capture this necessity, because disjunctive relationships prevent the determination of previous states. While we may observe one neuronal state causing another, the causes aren’t apparent until after the effect is observed; the states following one another couldn’t be determined from their preceding states. This meant the knowledge of these digital systems would always be incomplete, with only a certain amount of autonomy. The brain establishes ways to receive and structure impulses within itself, states of matter of its logical structure and receptive neurons. The brain is the intersection and source of the mind and the world. Putting Kant’s principle of causality into these neurophysiological terms, these neural networks have a partial autonomy.

The scientists created the original McCulloch-Pitts neuron to show that the digital is real. Logical expressions are central to cognition itself, and the digital is a combination of idea and matter. Whether realized in our brains or in computers, these frameworks are still restricted in that they can’t “determine” their own networks. McCulloch’s Kantian approach uses symbols to represent these digital constructs, and this abstraction of the digital is the metaphysics of reality.

Sociologist William Davies argued that the data-driven approaches of the digital age don’t focus on causality but, rather, on the ability to control behavior. Davies, unlike Domingos, believes that letting data make decisions means allowing digital information to act as a cause without participating in the cause-and-effect relationships themselves. Understanding Big Data means seeking correlations in digital networks. The data abstractions are real and help create the world, but data still can’t constitute decision-making. Humans act on these cause-and-effect relationships; the artificial intelligence of machine learning can only perform using pragmatically successful algorithms. The digital still can’t determine the correlations that take place, and automation itself can’t be automated. Information is information. It takes wisdom to put it in context.

We can come closer to living in the digital reality by understanding how our world includes data from these processes. Because machine learning algorithms make decisions that have profound impacts on social behavior and ethical norms, we need notions of responsibility, duty, and obligation that revolve around this digital reality. The partial autonomy given to computers means they don’t make judgements the same way humans do, but the language of philosophy will allow us to answer the difficult questions they pose for the future.

The beauty of logic throughout history

Kurt Gödel

As I peruse biographies of philosophers, scientists, mathematicians, and other researchers, I find myself fascinated. I wonder how their hometowns, educational backgrounds, and the people they met throughout their lives influenced the success of their work. In investigating what it means to be a genius and what it takes to produce amazing work, I still wonder how people interacted with scientist Albert Einstein, mathematician Bertrand Russell, logician Kurt Gödel, and philosopher Ludwig Wittgenstein, and what they thought of their greatness. With the distance and objectivity I have as I read about these famous minds of history, I appreciate the way history becomes a gradient of many events that give rise to settings, culture, and ideas themselves.

Einstein’s job as a patent clerk set the stage for his ruminations on space and time. As he struggled to find a place in the world, he thought about the universe itself and what sort of theories would describe it. Russell wore fine clothes that reflected his aristocratic lifestyle, which lent an air of authority to his advice and commentary on society. As radios and newspapers covered his work, people revered him as a celebrity, especially as he urged scientists to engage with the public. Mathematician Alfred North Whitehead and Russell’s Principia Mathematica laid the foundation for the mathematics and computer science of the 1930s and beyond. Even Wittgenstein’s friendly demeanor won him friends among other philosophers. Gödel made similar contributions to mathematics and philosophy. His story, though, was not marked with such popularity.

While Gödel worked at the Institute for Advanced Study in Princeton alongside Einstein, his formal white jacket contrasted with the physicist’s casual attire. He appeared distanced and detached, as though he were observing everything in the universe around him. Gödel’s contributions include his incompleteness theorems and an argument for God’s existence. The first incompleteness theorem showed that rule-based systems like that of Principia Mathematica, if consistent, contain arithmetic truths they can never prove: they will always remain incomplete. The second incompleteness theorem showed that such systems cannot prove their own consistency from within. At the time of their publication in the 1930s, scholars found it appealing that there were truths they couldn’t prove, and Gödel’s formalization of the idea appealed even to scholars who searched for romantic beauty in their work. The incompleteness theorems have been referenced throughout mathematics, computer science, and logic, and even in more distant disciplines such as aesthetics and neuroscience. This beauty of logic carried through Gödel’s discussions, lectures, and publications as other intellectuals admired his work. Professor of cognitive science Douglas Hofstadter would later write on the themes of mathematics and symmetry encompassing the theorems in Gödel, Escher, Bach. In the book, Hofstadter described the connections between music, art, and logic through a “strange loop.” Hofstadter argued this “strange loop” gives rise to consciousness: as the neurons of the brain respond to what the body perceives, they give rise to the very entity that creates those perceptions about the world. As Gödel inspired discussions in philosophy, literature, science, and theology, the poet Hans Magnus Enzensberger wrote “Homage to Gödel,” and composer Hans Werner Henze based his Second Violin Concerto on a setting of the poem. Gödel might not have been the popular celebrity among the general public that Einstein and Russell were, but he surely had his fans among the intellectuals.

In this excerpt from “Homage to Gödel,” Enzensberger describes the incompleteness conclusions and how fascinating Gödel’s thought process must have been.

     In any sufficiently rich system
     including the present mire
     statements are possible
     which can neither be proved
     nor refuted within the system. 

     Those are the statements
     to grasp, and pull!

Gödel himself was nowhere near as outspoken as his fame might suggest. Born to a Lutheran family in Moravia, he was shy and often nervous around others. Identifying as Austrian, he spoke German rather than Czech. He excelled in school and studied mathematics at the University of Vienna. There, he met with philosophers of the Vienna Circle, a group that championed logical positivism, a form of philosophy that used logic to address philosophical issues. They used this “scientific outlook” as a way to explain truths in science without appealing to other forces such as mysticism. As they discussed what in mathematics truly exists in the universe, they formed their arguments and engaged in debates with one another. Gödel agreed with many of their views, but his belief in God and his Platonism set him apart. On this Platonist view, mathematical truths are discovered, as though the mathematician were an explorer in a cave, rather than invented, as on the constructivist view of an inventor building a machine. While a Platonist might believe mathematical objects, such as numbers and symbols, are real, constructivists would be inclined to argue those objects are only works of fiction humans have created.

When Gödel received his doctorate in 1930, his dissertation showed that first-order logic is complete: every logical truth expressible in first-order predicate logic can be proved within a rule-based system. It was after this that he did his work on the incompleteness theorems. Logician Jaakko Hintikka wrote that Gödel’s announcement of the incompleteness result at the Königsberg Conference on the Epistemology of the Exact Sciences was the most important moment in 20th-century logic, and possibly in all of logic. When Gödel announced it, however, his audience remained silent. Only Hungarian mathematician John von Neumann realized its implications at the time. After Gödel published the proof, he lectured in Princeton and rose to prominence. Later, theoretical physicist Roger Penrose argued that Gödel’s theorem shows that “Strong AI” doesn’t hold: human minds are not computers, and artificial intelligence will never completely replicate them. Gödel himself didn’t believe his incompleteness conclusion proved Platonism true, but it did refute the anti-Platonist view that, in mathematics, truth and provability are the same thing.
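
Stated a little more formally (my paraphrase, not Gödel’s original wording): for any consistent, effectively axiomatized theory $F$ strong enough to express basic arithmetic, there is a sentence $G_F$ such that

$$F \nvdash G_F \quad\text{and}\quad F \nvdash \neg G_F,$$

and, by the second theorem, $F \nvdash \mathrm{Con}(F)$: the theory cannot prove its own consistency.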

Gödel suffered from poor health, both physical and mental, through his remaining years. After he married the dancer Adele Nimbursky, he moved to Princeton, where he remained at the Institute for Advanced Study until his death. He applied for U.S. citizenship and, while examining the propositions of the Constitution, believed he had discovered how the U.S. could legally turn into a dictatorship. Similarly, as Gödel studied the field equations of Einstein’s general theory of relativity, he imagined a universe in which time could move backwards. His perfectionism led him to publish only a few more papers describing his Platonist views, including “Russell’s Mathematical Logic” and “What is Cantor’s Continuum Problem?” But the logician’s health would get the best of him. As he grew weak and psychologically disturbed, he developed a distrust of everyone except his wife. He made appointments with doctors but wouldn’t attend them, refused medicine, and was finally hospitalized after refusing to eat in the 1970s. When he died in 1978, the certificate read that the cause of death was “malnutrition and inanition caused by personality disturbance.”

Other logicians, such as Austrian philosopher Ludwig Wittgenstein, engaged in these debates as well. At a meeting of the Moral Sciences Club at Cambridge University in 1946, with Russell present, some say Wittgenstein threatened philosopher Karl Popper with a fireplace poker during a debate. Regardless, Wittgenstein’s peers described him with emotions ranging from hatred to adoration. Indeed, Wittgenstein sought to use philosophy to grant solutions to the “puzzles” of contemporary life, and his seminal work Tractatus Logico-Philosophicus (Latin for “Logico-Philosophical Treatise”) sought to answer the issues philosophy faced. Such an ambitious goal showed how Wittgenstein believed philosophy to represent the integrity and truth of life that everyone seeks to achieve. The work’s central thesis was that propositions, or statements about the world, are pictures of reality. According to philosopher Norman Malcolm, Wittgenstein arrived at this idea when he came across a newspaper using a map to depict an automobile crash: the map pictured reality the same way propositions do. Philosopher Georg Henrik von Wright found the beauty of the Tractatus in its simple, static sentences, similar to one of Wittgenstein’s architectural projects. Wittgenstein himself built a mansion in Vienna for one of his sisters with utmost precision down to the smallest details, and later made a sculpture in the studio of his friend, the sculptor Drobil. Von Wright remarked that the perfection and elegance of the finished work contrasted with Wittgenstein’s dynamic, searching personality.

Wittgenstein sought this intellectual comfort in all of his work. He drew inspiration from Russian writer Leo Tolstoy and his book The Gospel in Brief, finding solace in it after witnessing a tremendous amount of suffering while fighting against the Russians during World War I. Wittgenstein’s cousin, the economist Friedrich Hayek, once described finding him so engrossed in a detective novel that he wouldn’t speak to him; when he did finish, though, he engaged in lively discussion of the Russians in Vienna. The experience of war had shaken Wittgenstein fundamentally as a person and left him disillusioned with some of his views of society. Still, Wittgenstein’s enthusiasm for intellectual discussion showed in the way he lectured for long stretches as a professor of philosophy at Cambridge. He dressed informally in a way students found approachable, even during his long-winded explanations, and he did this without getting tired, even cutting into his students’ lunch breaks.

The challenges logicians and other philosophers face today may still leave us speechless, as Gödel’s announcement did at the Königsberg Conference. Still, the beauty of logic permeates whatever theorem, equation, or idea it touches. The lives of these minds reveal the nuances and arguments that carried through the decades, and they’ll remain relevant through the future of mathematics, physics, and philosophy.

Creating a greater story of artificial intelligence

With my new site A History of Artificial Intelligence, I share a story with over sixty events reaching from the present day back to ancient civilizations. The way humans have created artificial intelligence, from self-driving cars to algorithms that recommend books to read, has a lot of history behind it. Spanning literature, art, poetry, philosophy, computer science, logic, mathematics, ethics, mythology, and other fields, I am creating a grand narrative of AI. I hope that, as news unfolds about the concerns and social issues raised by AI technology, we can form informed, educated opinions on them by keeping the past alive. Studying artificial intelligence, robots, automata, androids, and other parts of our culture as they relate to stories and inventions from hundreds of years ago, we can ask the same questions that plagued the ancient Greeks and Romans: What makes us human? How can we ascribe human qualities to nature? In what way can a computer “think”? These inquiries should take center stage in debates about the future of artificial intelligence, as well as in the policy and ethical recommendations that guide our decisions.

Through a Reddit Secret Santa gift exchange, I received the book Superintelligence: Paths, Dangers, Strategies by philosopher Nick Bostrom. Though I’ve been incredibly busy with graduate school applications, I’ve read a little bit, and I hope everyone can come to understand the nuanced complexities of artificial intelligence the same way Bostrom does. I draw upon many of Bostrom’s methods (speculating about issues and then addressing them from all angles with clarity and precision) as ways for researchers in any area to approach AI. Regardless of how closely I agree with Bostrom’s solutions, he presents a very strong argument by extrapolating and predicting the future of AI from features that can be generalized into trends. By this, I mean he chooses the characteristics of AI that can plausibly be expected to carry on into the future, rather than cherry-picking examples that only support his conclusions.

With my timeline, I have been hoping to capture the essence of what AI is by creating a diverse, humanistic narrative that describes periods and eras punctuated by specific, important events. I’ve found it difficult to decide which events to use for each time period and how they relate to one another, though. I especially struggle with which events to include as the timeline becomes more recent. It’s difficult to know which current events in the news are worthy of being called “history,” and to pick out the truly essential events among all the breakthroughs in AI over the past few decades. I also started the project to show how to share a story that draws from all academic areas and presents any idea in a casual, generalizable manner, and I wanted to go beyond simply sharing facts or lecturing with as much information as possible. Throughout my writing, I imagine all of my friends sitting next to me and pretend I’m speaking to them over a cup of tea. I try to communicate in a way they would understand.

"Frankenstein" and tampering with nature

From the 1831 revised edition of Mary Shelley’s Frankenstein, published by Colburn and Bentley, London.

Frankenstein, by English novelist Mary Shelley, wove together philosophy, literature, science, and history: as far back as 1818, Shelley was speculating about how humans would attempt to use scientific progress to tamper with nature. Frankenstein and his rejected monster remain central to debates about fetal tissue research, life extension, human cloning, and artificial intelligence. In the story, Victor Frankenstein builds an artificial, intelligent being from slaughterhouse and medical dissection materials. Like other Romantic works of English literature, Shelley confronted nature as man, addressing the issues of science and the Enlightenment question of how to use power responsibly. But how did a novel from over two centuries ago become a central piece in contemporary bioethics discussions? Through a historical overview, we come to understand the real monster – ourselves.


Shelley subtitled the novel “The Modern Prometheus,” suggesting that the heat and electricity used to animate Frankenstein’s monster were akin to the fire the Titan Prometheus gave to his own creations. The subtitle also echoes 18th-century German philosopher Immanuel Kant’s warnings against “unbridled curiosity” after inventor Benjamin Franklin’s experiments with electricity in the 1750s. Shelley’s monster is sentient yet hideous, so it faced an existential crisis over why it was created in the first place, echoing the philosophical crises of Prometheus. Alongside this, Italian physicist Luigi Galvani had discovered that the muscles of dead frogs’ legs twitch when struck by an electric current. Shelley specifically noted Galvani’s investigations but never stated that electricity was used to create the monster. Knowing the limits of what we can create continues to matter in debates about genetic engineering and artificial intelligence.

The relationship between Frankenstein and his monster grows more tenuous as the novel progresses. The monster realizes his own grotesque nature and begins to wonder how he can achieve happiness like any other human being. During one confrontation, Victor Frankenstein directly cursed his creation:

Why do you call to my remembrance, circumstances, of which I shudder to reflect, that I have been the miserable origin and author? Cursed be the day, abhorred devil, in which you first saw light! Cursed (although I curse myself) be the hands that formed you! You have made me wretched beyond expression. You have left me no power to consider whether I am just to you or not. Begone! relieve me from the sight of your detested form.

The monster responded with:

Thus I relieve thee, my creator. Thus I take from thee a sight which you abhor. Still thou canst listen to me and grant me thy compassion. By the virtues that I once possessed, I demand this from you. Hear my tale; it is long and strange, and the temperature of this place is not fitting to your fine sensations; come to the hut upon the mountain. The sun is yet high in the heavens; before it descends to hide itself behind your snowy precipices and illuminate another world, you will have heard my story and can decide. On you it rests, whether I quit forever the neighborhood of man and lead a harmless life, or become the scourge of your fellow creatures and the author of your own speedy ruin.

The monster’s eloquent reply shows his forceful yet gentle nature. His domineering side is only part of who he is, and his efforts to appear sincere and calm make him trustworthy enough that Frankenstein agrees to hear his tale and, eventually, to create a female partner for him. 

It was not until the latter half of the 20th century that scholars in medicine and science began picking up on the novel’s ethical themes. The explosive history of genetic engineering had caused citizens from all fields to raise concerns. “The Frankenstein myth is real,” said Columbia University psychiatrist Willard Gaylin in a 1972 issue of The New York Times Magazine. At that time, U.K. scientists had recently cloned a frog, and scientists began speculating about how close we were to human cloning. Gaylin, also co-founder of the world’s first bioethics think tank, the Hastings Center, speculated that researchers could soon perform in vitro fertilization in which scientists select the genetic traits of the offspring. Echoing the dark themes of Frankenstein, artificial placentas and surrogates could replace pregnancy and childbirth. Though Victor Frankenstein resorted to slaughterhouses and medical dissections, we have many more resources, and the comparison suggests that, as we rely on technological advancements, we may be able to realize the scenarios Shelley predicted. These biological replacements raise questions about what right humans have to make such adjustments to giving birth. Gaylin continued in The New York Times Magazine: “When Mary Shelley conceived of Dr. Frankenstein, science was all promise…Man was ascending and the only terror was that in his rise he would offend God by assuming too much and reaching too high, by coming too close.”

Scientists may have begun contemplating ethics, but that didn’t stop researchers from making progress. By 1973, biologists Herbert Boyer of the University of California at San Francisco and Stanley Cohen of Stanford University had developed recombinant DNA techniques for genetic engineering, which allowed scientists to splice genes across species. In 1975, 150 scholars and bioethicists gathered at the Asilomar conference center in Pacific Grove, California, to devise an elaborate set of safety protocols under which gene-splicing experimentation would be allowed to proceed. The mayor of Cambridge, Massachusetts, declared in 1976 that the City Council would hold hearings on whether to ban Harvard scientists from starting genetic engineering experiments.

“They may come up with a disease that can’t be cured—even a monster,” Mayor Alfred Vellucci warned. “Is this the answer to Dr. Frankenstein’s dream?” In 1977, after six months of discussing these issues, a body of scholars voted to proceed with the research, despite Vellucci’s opposition. Cambridge’s passion for genetic engineering continued for decades. Today there are over 450 biomedical businesses in the Cambridge area. Alongside the passion, though, the ethical issues lingered. The insidious themes of Shelley’s novel, direct or indirect, persisted as well. With each discovery came an alarm to contemplate its effects.

Dolly’s taxidermied remains at the National Museum of Scotland

In 1997, Scottish embryologist Ian Wilmut made history when he announced the first mammal cloned from an adult cell, Dolly the sheep. That same year, U.S. President Bill Clinton warned of human cloning. Clinton emphasized the humanistic and spiritual values at stake in these controversial techniques, and banned federal funding for human cloning research. Experiencing a disgust and fear much like Victor Frankenstein’s, bioethicist Leon Kass warned in a New Republic essay that same year that this sort of repugnance represented a “deep wisdom.” Repeating the themes of manufactured humans and the Frankenstein-like abomination that might result, mankind’s fears took control; Kass even cited a “Frankensteinian hubris” in these techniques. The science continued, though, as did the fears. 

In 1978, U.K. scientists created the first “test tube baby” using in vitro fertilization. By 2017, the Society for Assisted Reproductive Technology reported almost 7 million children conceived through this method across the world, with methods that included selecting traits and using surrogate egg donors. Unwarranted backlash against genetically modified food, though, came to dominate. Despite scientifically inaccurate campaigns about “Frankenfoods,” researchers have created hundreds of safe biotech crops. Foods such as golden rice yield more nutrients and resist disease, and topsoil erosion has decreased by forty percent since the 1980s thanks to bioengineered herbicide-resistant crops, according to the U.S. Department of Agriculture. In this case, the Frankenstein story yields unnecessary, unfounded fears, and writers describe human genetic engineering with similar terminology, warning of “Frankenbabies” and “designer babies.” 

The motives and purposes for this technology become even muddier. While it’s not immoral to believe in the goal of fighting disease, it’s also important to remember that modified humans are not monsters in the literal sense of Frankenstein. Victor Frankenstein’s cursing of his monster isn’t quite the way we perceive these modified offspring. Whatever comparison or argument we draw from these stories, we are still creating humans, capable of speech, thought, and other forms of judgement, just like anyone else. Proponents such as transhumanists criticize the bioethical concerns as keeping us from achieving these biotechnological gifts. 

Understanding the ways mankind has tampered with nature since the early days of science can provide us with a deeper, more nuanced portrayal of these fears. If we don’t heed these concerns, we may find ourselves becoming more like the scientist Victor Frankenstein, the true monster. 

"Light in the Jail Cell" memoir sneak peek

prayer on the cold concrete floor
Illustration by Matt Starr.

I’m currently editing a personal memoir “Light in the Jail Cell” so I can publish it one day. Here’s an excerpt from the prologue: 

As a scientist would proclaim, “In the absence of light, every object shows its true colors.” With this statement of blackbody radiation, we find parallels within ourselves. In discovering who we are, we find our true nature in times of struggle. We address our issues as they are, and we can come closer to who we are as humans. As a result of this, in a nation of second chances, the punishment should fit the crime. But punishment can be insidious, never-ending. It can transcend the torment of everyday experiences into something greater. It shakes the individual soul and leaves a mark on the American soul. 

Freedom tries to break the shackles. But despair lingers.

I want to share an episode of despair when I was arrested on account of mistaken identity. Locked inside a correctional facility for six days in Westside Detroit, I felt an anguish stir up inside of me. As an Indian American Muslim man mistaken for someone else, I felt the burden of my entire existence. Upon arrival, nothing but a few concrete benches, a toilet, and some dirty sandwich wrappers comforted me. Yet my experience in detainment gave me a personal freedom that I hadn’t found before. Listening to the stories of others and struggling the same way they did, I found an awareness to raise and virtues to instill in others. My experience gave me a voice to make sense of the world around me – even in an isolated jail cell of Detroit.

The light in the jail cell, if there was one, emerged like stars in the night. In the jail cell, men shared stories through wit and banal vulgarity. In the jail cell, I listened to the conversations that carried the messages from the men behind bars. Stories about getting caught at the wrong time and suffering at the hands of drug abuse and sexual exploitation. Socioeconomic issues, from crime, poverty, and sexism to a sinister prison-industrial complex, set the stage while the prisoners read their dialogue, line by line. 

Their identities were gone, though they would struggle to make a new name for themselves. Men showed their vulnerability, and, while some raised the flag of defeat, others overcame their internal struggles. Behind their personas lay anxiety, fear, and fury. Some showed their true colors while others stayed solid as a stone. Different walks of life met in the underworld of Mound Correctional Facility in Detroit, Michigan.

Through the experience, I sat and watched. At times I joined in the conversation. I listened, ignored, cried, and slept in front of the horror show. I formed close bonds with some of the other inmates and stayed away from many others. When I would think about what those men had to share with the world, I wondered what their words meant. I had to find a strength within myself to make it through.

The punishment of detainment was the experience that my nineteen-year-old self faced all on his own, and it would permeate through the conversations I carry and the perceptions of the places I would go. Between classes at my university and in the back of my mind during my scientific research work, I always remembered this experience. I saw the nation in which I lived as a sinister, unforgiving world. From meditation and reflection, I would grapple with the experience but also pretend it never happened.
Throughout my life experiences, I kept the story concealed at the darkest depths of my psyche. In the end, the story was under control, locked away to never be found.
After forgetting about everything, I went about the rest of my life as normally as I could have. I went to my classes, but the despair hadn’t been defeated. It was only avoided and ignored. And the more I pretended, the more the desire to revisit the despair arose. The story’s weight was heavy and sinister, and the strength I needed to overcome it lay on the path before me. The experience was more than I could have imagined. It tried to eat away at me, but I fought the demons as best as I could. Through this struggle, a light emerged. I found a story to share with others. It would carry memories, emotions, and wisdom to the end.

I took to my laptop and typed the experience. Word after word, line after line, and page after page, I regurgitated everything. The story took control of me like a snake charmer as I narrated it with precision and clarity. Every detail fell together to create my experience, from the music stuck in my head to the superficial, macho talk of the men in the jail cell. My memories stored the senses, from sight to sound, and everything meshed together with the theoretical escape I prized. And the trauma forced me to re-evaluate my beliefs and thoughts surrounding the entire experience. I recollected every detail from my thoughts, from flashbacks to my childhood to starry-eyed visions of the future, in sculpting this experience. My sense of reality would fade away at times, but I was usually able to catch it and pull myself together to release the negative energy flowing within me.

When I sat back and looked at what I had written, things seemed to make sense. Elements of the narrative, like rising action, exposition, and forms of foreshadowing, emerged naturally from the experience like fire rising from a wooden stove. The stories of the men in Detroit went from foolish jibber-jabber to meaningful prose that carried a deeper meaning. With these stories, I could understand how those men were feeling and find the sympathy and wonder to share with others. While writing the story, I hacked away on my keyboard and poured out idea after idea.

Reading it, I hope you can see what it’s like to be innocent inside of a jail cell. I hope you can understand the rage and redemption found there, yet, more importantly, I want to reveal this sacred healing light.

New website: "A history of artificial intelligence"

From ancient civilizations to the present day


I’d like to proudly announce the creation of my new website, “A history of artificial intelligence” (http://ahistoryofai.com). Through it, I show the various ways artificial intelligence has changed since its dawn thousands of years ago. I hope to use this website to craft a story of understanding between different civilizations and eras, citing writers like Mary Shelley and scientists like Claude Shannon. Stay tuned as I add more and educate the world with it. 

How to improve your moral reasoning in the digital age

hmmmmm

Chinese scientists recently created gene-edited babies using the controversial CRISPR-Cas9 technique, and scholars have alerted the world to the ethical questions raised by genetic engineering. Writers have also grappled with the recent explosion of machine learning and its effects, including the science behind how computers make decisions, so they can determine its impact on society. These issues of artificial intelligence arise in self-driving cars and image recognition software. Both fields raise questions of how much power humans should exert in controlling genes or computers.

I believe we need to examine our heuristics and methods of moral reasoning in the digital age. Faced with these issues of the information age, I’d like to create a generalized method of moral reasoning with which anyone can address them while remaining faithful to the work of philosophers and historians.
In 1945, mathematician George Pólya introduced “heuristic” to describe rough methods of reasoning. It should come as no surprise that I fell in love with the term immediately. I was fascinated by the way we can make estimations and speculate on issues. In science and philosophy, we can discuss them in such a way as to find solutions and benefit the world.

I’ve written on the digital-biological analogy in these issues. Scientists and engineers harness the power of machine learning to form decisions from large amounts of data. Though this is data that we humans feed to computers, the decisions come from algorithm design and even aesthetic choices. The algorithm design would be the scientific process a computer performs in making decisions; an aesthetic choice might be the way an engineer designs the appearance of a computer itself. The results of these choices illustrate the tension brought on by robots making decisions, whether in the way self-driving cars choose a course of action or in what sort of rights a robot might have. The questions of our sense of control and autonomy, the way we control our lives, are common to genetic engineering and artificial intelligence. One of the basic principles of health care ethics and a subject of debate, autonomy pervades everything.

As some philosophers argue, we can reason about science through a notion of scientific realism. Scientific realism generally holds that the phenomena our science describes are real: the atoms that make us up and the genes in our DNA exist. One might argue for scientific realism on the grounds that our scientific theories are the closest approximations of those phenomena we have, and that we should therefore put positive faith in the world science describes. Under this interpretation, our arguments about digital-biological autonomy depend on what kinds of decisions empirical research tells us we can create, such as a constructed idea of an autonomous driver whose algorithms dictate how it makes autonomous decisions. In her paper “Autonomous Patterns and Scientific Realism,” the philosopher Katherine Brading argues that scientific theories, under a notion of scientific realism, should allow for phenomena partially autonomous from the data itself; such patterns emerge from the context of the data. It comes down to creating an empirical process, from lines of code on a screen to the swerving motion of a self-driving car, and we can then judge which choices are moral or immoral because those processes hold the truth of autonomy.

One might opt for a more pessimistic view: perhaps our scientific explanations of phenomena only amount to persuasion. A proponent of this view may argue that we aren’t reasoning, we’re rationalizing. Atoms and genes may not exist, or we may not be able to determine whether they do. Our methods of observing them, such as theories and equations, need not establish that atoms and genes exist; those theories and equations only need to hold true given the circumstances of the phenomena, whether that is a theory dictating the formation of atoms or an equation determining when a gene activates. An anti-realist might argue that, since the universe imposes no such notion of autonomy on humans, we can exercise an unrestricted autonomy, and our notions of autonomy would then give humans power over machines and scientific theories. This battle between realists and anti-realists has run through much of the history of philosophy. The philosopher Thomas Kuhn wrote that discoveries cause paradigm shifts in our knowledge: we experience changes in perception and in language itself that allow us to create new scientific theories. This idea that science depends on the history and language of its time contrasts with scientific realism. Ludwig Wittgenstein, by contrast, argued for remaining silent on such questions of what science tells us, which lets us avoid some of the conflict between realists and anti-realists. On gene editing, an anti-realist may argue that autonomy depends on unknowable features of genes.

Ethicists have to contend with moral realism as well. Ontologically, moral realism is the idea that the moral claims we make depend on real moral features of the world such as obligations, virtues, and autonomy; a claim like “Murder is wrong” may depend on a responsibility to do no harm. Semantically, it is the idea that moral claims can be true or false and that some of them are true. Moral anti-realists, by contrast, may argue that moral claims carry no such truth value, or that they carry it but no moral claim is actually true. To a non-philosopher this might seem trivial; it’s easy to say, “Of course moral claims depend on things like obligations and autonomy!” But reasoning through the arguments shows the difference. A moral realist might argue that objective theoretical values such as autonomy are self-imposed, though not by our everyday self; rather, they are imposed by an idealized version of ourselves reflecting on those values. From this idealized notion, the moral realist would construct a biological-digital autonomy, even if it proves ambiguous or impossible to define with complete clarity.

We can improve these methods of reasoning and thinking in areas such as logic and statistical reasoning, and we can sharpen our knowledge of what our scientific theories tell us. From there, we can build notions of autonomy to address these issues. But to do so, we must first identify what philosophical struggles the digital age imposes on us. Silicon Valley ethicist Shannon Vallor teaches and conducts research on the ethics of artificial intelligence. In a recent interview with MIND & MACHINE, she spoke about how her students experience cycles of behavior with technology. They feel anxiety when jumping into new technologies such as smartphones and social networks, then become reflective and critical about those technologies, and more selective as time goes on. She described a “metamorphosis” as students try new technologies and then reflect on how they themselves changed as a result, whether in attention span or in notions of control akin to autonomy. Throughout the process, the students ask what role the technology should have in their lives.

Through human and machine autonomy, Vallor explained how the power to govern our lives relates to these ethical theories: it means using our intellectual control to make our own choices. AI challenges this with the promise of off-loading many of those choices to machines. But though we give machines values, a machine doesn’t appreciate those values the way humans do; it is programmed to match patterns in a way that’s completely different from our methods of moral reasoning. The question becomes how much autonomy we must keep for ourselves so that we maintain the skill of governing ourselves. To be clear, Vallor said, machines don’t make judgements. Judgement requires perceiving the world, and while machines process data in code, they don’t understand the patterns the way we as humans perceive them. Instead, we have to understand what’s gained and what’s lost in giving a choice to machines. I believe these judgements are exactly where our heuristics about moral reasoning and theories come into play. It’s what separates man from machine.
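
As a rough illustration of what I mean by pattern matching, here is a small Python sketch. The weights, features, and the loan-decision framing are hypothetical, chosen only to show that a machine’s output is an arithmetic comparison with no appreciation of the values at stake.

```python
# A small, hypothetical illustration: a machine's output is pattern matching
# over numbers, with no appreciation of the values behind its labels.
# The weights and features below are invented for this sketch.

def classify(features, weights):
    """Score each label with a weighted sum and return the best one.
    Nothing here perceives a situation or weighs what matters morally;
    the result is whichever label the arithmetic happens to favor."""
    scores = {
        label: sum(w * x for w, x in zip(ws, features))
        for label, ws in weights.items()
    }
    return max(scores, key=scores.get), scores

# Hypothetical weights an engineer might have trained for a loan decision.
weights = {"approve": [0.8, -0.2], "deny": [-0.5, 0.6]}
features = [0.9, 0.4]  # e.g., normalized income and debt ratio (made up)

label, scores = classify(features, weights)
print(label, scores)  # the "judgement" is just an argmax over these scores
```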

Artificial intelligence programs, computers, and robots also learn the biases we instill in them through how we train them. Vallor expressed concern about authoritarian influences taking control of artificial intelligence, though there are still ways to use AI for democratic ends. One example is China’s social credit system, built on an idea of society grounded solely in social control and social harmony. It uses an all-encompassing AI system to track citizens’ behavior, enforcing centralized standards with systematic rewards and punishments to keep people on a narrow path. Even so, Vallor said, “we are not helpless unless we decide that we’re helpless.”

One might raise objections to these methods of moral reasoning. One objection, drawn from behavioral psychology, is that humans are irrational: the biases, false judgements, and poor reasoning that field documents show how heavily we rely on heuristics. Falling victim to fallacies such as ad hominem or sunk cost, our methods of reasoning may seem flawed, though we can at least build arguments that carry some degree of certainty, including the predictions economists and psychologists make about our behavior. I answer this by arguing that, though our methods of reasoning have flaws, they can improve. We learn life lessons and proper etiquette about treating people as we gain experience, which suggests that moral reasoning about issues like our digital-biological autonomy can improve too.

Still, there are reasons to remain pessimistic about the moral reasoning we derive this way. One might argue that our brains developed not to find truth but to outdo those around us; an evolutionary psychologist might frame this as the social side of “survival of the fittest.” On this view, our minds haven’t led us toward objective truth, only toward more effective persuasion. I answer that, while human beings may have these naturalistic tendencies, it doesn’t follow that the brain developed only in response to social forces. If an individual has to convince others to avoid a dangerous species of animal, that persuasion depends on problem solving, on reasoning carried out for the tribe. The cognitive work of deliberating over facts and reflecting on them is just as much a part of our nature, and it can apply to moral reasoning.

Another argument is that our methods of reasoning are only rationalizations, not reasons: conclusions we want to believe. Politicians, for example, may decide to go to war or to limit the rights of certain groups, then reason backwards from those decisions to contrive justifications for them. Political ideologies in general seek out such conclusions about rights, liberties, and other values; we find ourselves holding the answers before we have the questions, and that amounts to ideology. I address this by arguing that we can test rationalizations against observation and intuition to come closer to genuine reasoning. We may not be able to articulate an intuitive reason for every action, but we still have the ability to form moral judgements about those actions, and it’s this intuition about our moral reasoning that we trust to lead us in the right direction.

I have attempted to outline methods of moral reasoning given the constraints of technology, while reckoning with arguments philosophers have put forward for decades. I hope these notions can prove beneficial to conversations about the autonomy and rights of individuals and machines in the digital age. We must re-evaluate these ideas to address today’s issues. Artificial intelligence and genetic engineering turn out to be more similar than they first appear, and through a shared notion of moral reasoning, we can determine what this digital-biological autonomy should be.