Artificial intelligence redefines reality and the self

Is this strong AI? Is this just fantasy? Caught in a human mind. No escape from reality.

As the Cold War turned the world’s attention to revolutions in scientific research, artificial intelligence began to shake our understanding of what separates a human from the rest of the world. Scientists and philosophers drew on theories of mind and questioned the epistemic limits of what we can know about ourselves. Neurophysiologist Warren McCulloch, a founding figure of machine learning, described his cybernetic idealism in his 1965 book Embodiments of Mind. The postwar scientific movement he built with mathematician Norbert Wiener and anthropologist Gregory Bateson mixed science with the wider culture of its time. Cybernetics, named after the Greek word kybernētikḗ, meaning “governance” or “steersmanship,” joined ideas from machine design and physiology to philosophical ambition. Since its inception in 1948, it has supplied much of the language of science and technology we take for granted today. The advances in artificial intelligence brought about by cybernetic idealism continue to define how scientists and philosophers understand the world.

In 1943, Warren McCulloch and Walter Pitts developed the first artificial neuron, a mathematical model of a biological neuron that would later become fundamental to the neural networks of machine learning. Advances in computer science and related disciplines have since flooded writers with new terms. From the Internet of Things to Big Data to deep learning, the world is being quantified. Mathematician Stephen Wolfram and other researchers have even argued that matter itself is digital. Latching on to any method of turning the world into data from which conclusions can be drawn, this movement created a new sort of metaphysics in which beings themselves can be quantified. Cybernetics sought to create a language in which these scientific phenomena could be described. The explosion of artificial intelligence in the news, especially around neural networks and deep learning, would be hard to describe without McCulloch’s work establishing what algorithms and information truly mean.
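To make the model concrete, here is a minimal sketch of a McCulloch-Pitts-style threshold neuron in Python. The weights and threshold below are illustrative choices, not values taken from the 1943 paper: the unit simply fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold.

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: fire if the weighted sum of binary inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A unit that fires only when both of its excitatory inputs fire (logical AND).
print(mp_neuron([1, 1], [1, 1], threshold=2))  # 1
print(mp_neuron([1, 0], [1, 1], threshold=2))  # 0
```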

We should scrutinize this premise of digital metaphysics. We must understand how the metaphor of processing information like a computer falls short of describing how humans reason, and come to terms with how much knowledge the digital age can truly extract from large amounts of information. The algorithmic decisions machines make in areas like business, medicine, and architecture differ from the decisions humans make. Data is part of our world, but if it takes over the responsibilities and tasks normally carried out by humans, then this new form of intellectual labor is changing the self and reality.

After World War I, the self was shaped by the technology of the era. Telegrams and newspapers informed psychologist Sigmund Freud’s notion of psychic censorship of painful or forbidden thoughts, which he modeled on the methods Czarist guards used to censor information in Russia. During World War II, the cyberneticians built a notion of the self out of feedback systems, drawing on Wiener’s engineering work on weapons systems. To hit an enemy airplane, a tracking machine had to aim thousands of feet ahead of it and predict how it would move. Wiener designed a machine that could learn how a pilot moves from the pilot’s past behavior. These goal-directed, feedback-based systems would come to define artificial intelligence, and artificial intelligence in turn can be described through these concepts of the self. Today, we model artificial intelligence on reality and the self alike.
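A toy sketch of the kind of predict-and-lead loop behind the anti-aircraft problem, assuming simple linear extrapolation from the target’s two most recent observed positions; the numbers, the one-second sampling interval, and the lead time are invented for illustration, not drawn from Wiener’s actual predictor.

```python
def predict_position(prev, curr, lead_time):
    """Extrapolate where the target will be after lead_time, assuming constant velocity."""
    velocity = curr - prev  # distance covered per observation interval
    return curr + velocity * lead_time

# Observed positions of a target (say, in feet along one axis) at one-second intervals.
observations = [0.0, 300.0, 610.0, 915.0]
for prev, curr in zip(observations, observations[1:]):
    aim_point = predict_position(prev, curr, lead_time=2.0)
    print(f"observed {curr:7.1f}, aim ahead at {aim_point:7.1f}")
```

Each new observation corrects the estimate from the last one, which is the feedback idea in miniature: the machine adjusts its prediction from the pilot’s past behavior rather than from any fixed model of where the plane ought to be.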

Advances in information theory emerged from the new forms of communication of the postwar decades. Machines came to rely on feedback loops, and human-machine interfaces created new forms of communication and interaction of their own. Scholars began to measure communicated intelligence as “information,” counted in bits, or binary digits. In How Reason Almost Lost Its Mind: The Strange Career of Cold War Rationality, historian of science Paul Erickson and his colleagues described this period as a passage from Enlightenment reason to “quantifying rationality,” a shift from a qualitative capacity to judge to an extensive but narrow push to measure. But some Enlightenment notions survived the transition. Cybernetics set out to describe how these systems were structured and what possibilities they might hold. Cognitive scientist Marvin Minsky would speculate about when scientists could create a robot with human-like intelligence.
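As a small illustration of what it means to measure information in bits, consider Shannon entropy: a fair coin flip carries one bit, a biased one carries less. The snippet below is a generic textbook sketch, not taken from any of the works cited here.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy in bits: the average information per symbol."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy_bits([0.5, 0.5]))   # 1.0 bit: a fair coin flip
print(entropy_bits([0.9, 0.1]))   # ~0.47 bits: a heavily biased coin
print(entropy_bits([0.25] * 4))   # 2.0 bits: one of four equally likely symbols
```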

What is real? What isn’t? McCulloch divided the world into what the metaphysician studies, the “mind,” and what the physicist studies, the “body.” He developed an “experimental epistemology,” in which the physicist had to move into the “den of the metaphysician” to pursue the synthetic a priori, philosopher Immanuel Kant’s term for truths that are factual yet universally necessary. Drawing on philosophers like Kant, Gottfried Leibniz, and Georg Hegel, McCulloch and his colleagues worked out how to describe information as something distinct from matter and energy. Wiener traced the idea to Leibniz’s “universal characteristic,” a logical language that all machines and computers could share, and Leibniz had also argued that such mechanical calculations could amount to reasoning. Leibniz let cybernetics move beyond the binary alternative between the material and the ideal in a philosophical sense. Leibniz and Kant were likewise sources for McCulloch’s search for the conditions of cognition, the synthetic a priori, in the digital structure of the brain.

In today’s world, these cybernetic notions mean artificial intelligence can break down barriers researchers previously didn’t even know existed. In The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, computer scientist Pedro Domingos called machine learning “the scientific method on steroids.” The way machine learning has already shaped society is unprecedented. Politicians take action and start dialogues on the basis of predicted voting behavior of different demographics. Deep learning technologies can diagnose diseases like cancer better than doctors can in some respects. Domingos argued that a “master algorithm” will create a “perfect” understanding of society, connecting the sciences to one another and to social processes themselves. Automation will itself become automated: “The Industrial Revolution automated manual work and the Information Revolution did the same for mental work, but machine learning automated automation itself.” Domingos imagined society running on the steam of some other intelligence. Automation has been a major force in global history for at least two centuries; to automate automation is to imagine historical causality itself as controlled by artificial intelligence. Domingos is sanguine that automated automation will make things better, but it isn’t clear why. This leaves us without a conceptual foundation on which to build an understanding of the ubiquitous digital processes in our society, deferring even historical causality to machine learning. McCulloch’s group of scientists had a glimmer of such an approach, a way to understand and govern the very machine learning they had set in motion.

Cybernetics keepin’ it real.

By laying the foundations of neuroscience, scientists could describe neural activity in these terms. Just as the brain’s synapses fire and transmit information, computers could run neural networks, and the switching circuits of telephone exchanges could be assembled into calculating machines like the ones mathematician-physicist John von Neumann constructed. Digital machinery could then manipulate logical propositions, as in the automated reasoning methods philosopher-mathematician John Alan Robinson created. McCulloch and Pitts believed these ideas could extend to how computers form predictions and, in turn, to how artificial intelligence would operate. Treating neural networks as the way the brain processes information, the two created a shared semantics and syntax for neurophysiology and machine learning, built on principles of digital operation and organization, in their 1943 paper “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Similar networks would later be used to study the symbolic nature of human understanding through semiotics, and the structure of human knowledge itself through semantic networks. The interplay among neuroscience, computer science, mathematics, and other disciplines, along with these newfound accounts of meaning and material, tied these neural networks closely to nature itself.
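A hedged sketch of how such threshold units compose into logical circuits, reusing the unit defined earlier; the gate constructions are standard textbook examples rather than quotations from the 1943 paper.

```python
def neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: fire if the weighted sum of inputs reaches the threshold."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b):
    return neuron([a, b], [1, 1], 2)

def OR(a, b):
    return neuron([a, b], [1, 1], 1)

def NOT(a):
    return neuron([a], [-1], 0)

# Boolean propositions can be built by wiring units together, e.g. exclusive-or:
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))
```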

The two scientists wrote that these neural networks relied on the cause-and-effect relationships Kant described, in which one event necessarily follows another in time. Neuronal activity, though, cannot capture this necessity, because disjunctive relationships prevent the determination of previous states. We may observe one neuronal state causing another, but the cause only becomes apparent after the effect is observed; a network’s preceding states cannot be recovered from the states that follow them. This means the knowledge held by these digital systems is always incomplete, with only a certain amount of autonomy. The brain establishes its own ways of receiving and structuring impulses, the material states of its logical structure and its receptive neurons. The brain is both the intersection and the source of mind and world. Put into these neurophysiological terms, Kant’s principle of causality grants such neural networks a partial autonomy.
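The point about disjunction can be made concrete: an OR-like unit that has just fired does not reveal which of its inputs fired. The sketch below, with inputs invented for illustration, groups the possible preceding states by the output they produce and shows several distinct prior states collapsing into the same present state.

```python
from itertools import product

def or_unit(a, b):
    """An OR-like threshold unit: fires if at least one input fired."""
    return 1 if a + b >= 1 else 0

# Group all possible preceding input states by the output they produce.
preimages = {0: [], 1: []}
for a, b in product((0, 1), repeat=2):
    preimages[or_unit(a, b)].append((a, b))

# Output 1 is compatible with three different preceding states,
# so observing it cannot tell us which state actually caused it.
print(preimages[1])  # [(0, 1), (1, 0), (1, 1)]
```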

The scientists created the original McCulloch-Pitts neuron to show that the digital is real. Logical expressions are central to cognition itself, and the digital is a combination of idea and matter. Whether realized in our brains or in computers, these frameworks remain restricted in that they cannot “determine” their own networks. McCulloch’s Kantian approach uses symbols to represent these digital constructs. This abstraction of the digital is the metaphysics of reality.

Sociologist William Davies argued that the data-driven approaches of the digital age focus not on causality but on the ability to control behavior. Davies, unlike Domingos, believes that letting data make decisions means acting on digital information without participating in the cause-and-effect relationships themselves. Understanding Big Data means seeking correlations in digital networks. The data abstractions are real and help create the world, but data still cannot constitute decision-making. Humans act within these cause-and-effect relationships; the artificial intelligence of machine learning can only perform on top of them, using pragmatically successful algorithms. The digital still cannot determine the correlations that take place, and automation itself cannot be automated. Information is information. It takes wisdom to put it in context.
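A small sketch of the correlation point, using synthetic data invented for illustration: two series driven by a shared hidden factor correlate strongly even though neither causes the other, so the correlation alone cannot settle what decision to make.

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(0)

# A hidden factor drives both observed series; neither series causes the other.
hidden = [random.gauss(0, 1) for _ in range(1000)]
series_a = [h + random.gauss(0, 0.3) for h in hidden]
series_b = [h + random.gauss(0, 0.3) for h in hidden]

# Pearson correlation is close to 1 despite there being no direct causal link.
print(statistics.correlation(series_a, series_b))
```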

We can come closer to living in this digital reality by understanding how our world includes the data these processes produce. Because machine learning algorithms make decisions that profoundly affect social behavior and ethical norms, we need notions of responsibility, duty, and obligation built around this digital reality. The partial autonomy given to computers means they do not make judgments the way humans do, but the language of philosophy will allow us to answer the difficult questions they pose for the future.
