
Chinese scientists recently created gene-edited babies using the controversial CRISPR-Cas9 technique, and scholars have sounded the alarm over the ethical questions genetic engineering raises. Writers have also grappled with the recent explosion of machine learning and its effects: the science behind how computers make decisions, and what those decisions mean for society. These questions of artificial intelligence arise in self-driving cars and image recognition software. Both fields raise the same underlying question: how much power should humans exert in controlling genes or computers?
I believe we need to examine our heuristics and methods of moral reasoning in the digital age. In response to these issues of the information age, I’d like to sketch a generalized method of moral reasoning, one that any of us can use to address these issues while remaining faithful to the work of philosophers and historians.
In 1945 the mathematician George Pólya popularized the term “heuristic” to describe rough methods of reasoning. It should come as no surprise that I fell in love with the term immediately. I was fascinated by the way we can make estimates and speculate on open questions. In science and in philosophy, we can discuss them in ways that lead to solutions and benefit the world.
I’ve written before on the digital-biological analogy running through these issues. Scientists and engineers harness the power of machine learning to form decisions from large amounts of data. Though we humans feed that data to computers, the decisions come from algorithm design and even aesthetic choices. Algorithm design is the process a computer follows in making decisions; an aesthetic choice might be the way an engineer designs the appearance of the computer itself. The results of these choices illustrate the tension brought on by robots making decisions, whether in the way self-driving cars choose or in what sort of rights a robot might have. What genetic engineering and artificial intelligence have in common is the way our sense of control and autonomy, the way we govern our own lives, comes into play. One of the basic principles of health care ethics and a perennial subject of debate, autonomy pervades everything.
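To make the point about algorithm design concrete, here is a minimal sketch in Python of how a self-driving system might turn sensor data into a decision. Everything in it is hypothetical: the function, the assumed deceleration, and the safety margin are invented for illustration, and that is precisely the point. The numbers are design choices, not facts delivered by the data.

```python
# A minimal, hypothetical sketch: the "decision" a self-driving car makes
# is fixed by values the engineers chose, not by the sensor data alone.

def should_brake(obstacle_distance_m: float, speed_mps: float) -> bool:
    """Decide whether to brake, given two sensor readings."""
    deceleration = 5.0      # assumed braking power (m/s^2) -- a design choice
    safety_margin_m = 10.0  # extra buffer -- another design choice
    stopping_distance = speed_mps ** 2 / (2 * deceleration)
    return obstacle_distance_m < stopping_distance + safety_margin_m

print(should_brake(obstacle_distance_m=40.0, speed_mps=20.0))   # True
print(should_brake(obstacle_distance_m=120.0, speed_mps=20.0))  # False
```

The distance and speed come from the world, but whether the car brakes hinges entirely on the margin and deceleration someone chose to encode.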
Philosophers argue that we can reason about science through notions of scientific realism. Scientific realism generally holds that the phenomena our sciences describe are real: the atoms that make up who we are and the genes of our DNA exist. One might argue for scientific realism because our scientific theories are our closest approximation of those phenomena, and through this line of thought we should place positive faith in the world described by science. Under this interpretation, our arguments about digital-biological autonomy depend upon what sort of decisions empirical research says we can create; a constructed idea of an autonomous driver, dictated by algorithms, could make genuinely autonomous decisions. In her paper “Autonomous Patterns and Scientific Realism”, professor of philosophy Katherine Brading argues that scientific theories, under a notion of scientific realism, should allow for phenomena partially autonomous from the data itself; such phenomena emerge from the context of the data. It comes down to tracing an empirical process, from lines of code on a computer screen to the swerving motion of a self-driving car. We can dictate which choices are moral and immoral because those processes hold the truth of autonomy.
One might opt for a more pessimistic view: perhaps our scientific reasons for phenomena amount only to persuasion. A proponent of this view may argue that we aren’t reasoning; we’re rationalizing. Atoms and genes may not exist, or we may not be able to determine their existence. Our methods of observing them, such as theories and equations, need not claim that atoms and genes exist; they only need those theories and equations to hold true given the circumstances of the phenomena, whether a theory dictating the formation of atoms or an equation determining when a gene activates. An anti-realist might argue that, since the universe imposes no such notion of autonomy on humans, we can exercise an unrestricted autonomy; our notions of autonomy would then give humans power over machines and scientific theories. This battle between realists and anti-realists has run through much of the history of philosophy. The philosopher Thomas Kuhn wrote that discoveries cause paradigm shifts in our knowledge: we experience changes in perception and in language itself that allow us to create new scientific theories. This idea that science depends on the history and language of its time contrasts with scientific realism. In contrast, the philosopher Ludwig Wittgenstein argued for remaining silent on such questions of what science tells us; in that silence we can avoid some of the conflicts between realists and anti-realists. On gene editing, an anti-realist may argue that autonomy depends upon unknowable factors of genes.
Ethicists have to contend with moral realism as well. Moral realism can be the idea that the moral claims we make depend on other moral components such as obligations, virtues, and autonomy; a claim like “Murder is wrong” may depend on a responsibility to do no harm. It can also be the idea that moral claims can be true or false, and that some are true. The first is the ontological definition; the second is the semantic definition. Au contraire, moral anti-realists may argue these claims hold no such value, or that they may hold it but that no moral claim is actually true. To a non-philosopher, this might seem trivial; it’s easy to say “Of course moral claims depend upon things like obligations and autonomy!” But reasoning through the arguments shows the difference. A moral realist might argue that objective theoretical values such as autonomy are self-imposed, yet not by our own self; instead, they come from an idealized version of ourselves reflecting upon those values. The moral realist would build a biological-digital autonomy from this idealized notion, though it might be ambiguous or impossible to define with complete clarity.
We can improve these methods of reasoning and thinking in areas such as logic and statistical reasoning, and we can work out what our scientific theories actually tell us. Through this, we can create notions of autonomy to address these issues. In addressing them, we must identify what philosophical struggles the digital age imposes upon us. Silicon Valley ethicist Shannon Vallor teaches and conducts research on the ethics of artificial intelligence. In a recent interview with MIND & MACHINE, she spoke about how her students experience cycles of behavior with technology. Students felt anxieties when jumping into the new technologies of smartphones and social networks; they would then become reflective and critical about technology, and more selective as time went on. She described a “metamorphosis” as students tried new technologies and reflected to understand how they themselves had changed as a result, whether in attention span or in notions of control akin to autonomy. Throughout the process, the students ask what role the technology has in their lives.
Speaking of both human and machine autonomy, Vallor explained how the power to govern our own lives relates to these ethical theories. It means using our intellectual control to make our own choices. AI presents the challenge, and the promise, of off-loading many of those choices to machines. But though we give machines values, a machine doesn’t appreciate those values the way humans do; it is programmed to match patterns in a way that’s completely different from our methods of moral reasoning. The question becomes how much autonomy we must keep for ourselves in order to maintain the skill of governing ourselves. To be clear, though, Vallor said machines don’t make judgements. Judgement requires perceiving the world, and while machines process data in code, they don’t understand the patterns the way we perceive them as humans. Instead, we have to understand what’s gained and what’s lost in handing a choice to machines. I believe these judgements are exactly where our heuristics about moral reasoning and theories come into play. It’s what separates man from machine.
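What “matching patterns” means can be shown with a toy sketch, again in Python with entirely invented data: the program below labels a new situation by its numerical distance to past examples. Nothing in it perceives or judges anything; it only compares numbers.

```python
# A toy nearest-neighbor "decision maker": it matches patterns in past
# data but perceives nothing, so calling its output a judgement is a stretch.

examples = [
    # (features: [speed, distance-to-pedestrian], past label) -- invented
    ([10.0, 2.0], "brake"),
    ([10.0, 50.0], "continue"),
    ([30.0, 20.0], "brake"),
    ([5.0, 40.0], "continue"),
]

def classify(features):
    """Return the label of the closest past example (squared distance)."""
    def dist(example):
        feats, _ = example
        return sum((a - b) ** 2 for a, b in zip(feats, features))
    _, label = min(examples, key=dist)
    return label

print(classify([28.0, 15.0]))  # "brake" -- a pattern match, not a moral judgement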
Artificial intelligence programs, computers, and robots also learn the biases we instill in them through how we train them. Vallor expressed concern about authoritarian influences taking control of artificial intelligence, though there remain ways to use AI for democratic ends. One example is China’s social credit system, built on an idea of society grounded only in social control and social harmony. It uses an all-encompassing AI system to track citizens’ behavior, enforcing centralized standards with systematic rewards and punishments to keep people on a narrow path. As Vallor put it, “we are not helpless unless we decide that we’re helpless.”
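A hedged sketch of how training instills bias, using entirely invented loan data: a model that simply predicts the most common past outcome for each group automates whatever skew the history contains. The groups, counts, and outcomes below are hypothetical.

```python
# A hedged sketch with invented data: a model trained on skewed examples
# reproduces the skew. The "bias" lives in the data we chose to train on.

from collections import Counter

# Hypothetical loan decisions used as training data: group B was
# historically denied more often, for reasons unrelated to merit.
training = [("A", "approve")] * 80 + [("A", "deny")] * 20 \
         + [("B", "approve")] * 20 + [("B", "deny")] * 80

def majority_rule(group):
    """Predict the most common past outcome for this group."""
    outcomes = Counter(label for g, label in training if g == group)
    return outcomes.most_common(1)[0][0]

print(majority_rule("A"))  # "approve" -- learned from the skewed history
print(majority_rule("B"))  # "deny"    -- the past bias, now automated
```

Nothing in the code is malicious; the decision rule is neutral, and the harm enters entirely through the history we feed it.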
One might present objections to these methods of moral reasoning. One might argue, on the basis of behavioral psychology, that humans are irrational: the biases, false judgements, and poor reasoning that field finds in our nature show that we rely on heuristics. Falling victim to fallacies such as ad hominem or sunk cost, our methods of reasoning may seem flawed. Still, we can at least construct arguments that carry some degree of certainty, as the predictions economists and psychologists make of our behavior show. I address this argument by arguing that, though our methods of reasoning may have flaws, they can improve. As we gain experience, we learn life lessons and proper etiquette about how to treat people; this suggests that moral reasoning on issues like our digital-biological autonomy may improve too.
Still, there are reasons to remain pessimistic about the moral reasoning we derive in this way. One might argue that our brains developed not to find truth, but to outdo those around us. An evolutionary psychologist may theorize that these are the social pressures of “survival of the fittest,” and that they have led us not toward more objective truth but toward more effective persuasion. I address this by noting that human beings may well have these naturalistic tendencies, but it doesn’t follow that the brain developed only in response to such social forces. If an individual has to convince others to avoid a dangerous species of animal, the persuasion still depends on problem solving: the tribe’s method of reasoning. The cognitive practice of deliberating on facts and reflecting upon them shows this side of our nature, and it can apply to moral reasoning.
Another argument could be that our methods of reasoning are only rationalizations, not reasons: conclusions we already want to believe. Politicians illustrate this when they settle on a solution, going to war or limiting the rights of certain groups, and then reason backwards to contrive a justification for it. Political ideologies in general seek these sorts of conclusions about rights, liberties, and other values; we may find ourselves holding the answers before we have the questions, and that amounts to ideology. I address this by arguing that we can test rationalizations against observation and intuition to come closer to genuine reasoning. We may have intuitive reasons we cannot articulate for every action, yet still have the ability to form moral judgements about those actions. It’s this intuition about our moral reasoning that we trust to lead us in the right direction.
I have attempted to outline methods of moral reasoning under the constraints of technology, while reckoning with arguments philosophers have put forward for decades. I hope these notions prove beneficial to conversations about the autonomy and rights of individuals and machines in the digital age. We must re-evaluate these thoughts to address today’s issues. Artificial intelligence and genetic engineering are more similar than they first appear, and through a notion of moral reasoning we can determine what this digital-biological autonomy should be.