A Computational Theory of Mind

Brains are like computers only in a specific abstract sense. We can take the brain-computer analogy apart to determine what it can teach philosophy, neuroscience, artificial intelligence, and other research areas. Treating the nervous system as hardware and cognition as software is harmful in many ways when we don’t understand the limitations of the metaphor. Any theory of anatomical connection we demonstrate in vertebrate nervous systems may give us a basic description of what happens at each stage, but it doesn’t tell us how a given input relates to a certain output. Instead, such theories obfuscate the description of the brain with unnecessary comparisons, explaining phenomena that are better explained by describing them directly and precisely.

A computer’s output depends on its program, its input, and the functional stages that lead to the output. We can theorize about artificial and biological computers through this analogy, from artificial neural networks in computer science and mathematics to the brains of different organisms. These computers show the connections between the disciplines underlying computation and its theory, from statistical mechanics to thermodynamics. We can apply ideas from information theory, entropy dynamics, and constraint problems to the resulting artificial and biological computers.

Classicism vs. connectionism

The computational theory of mind is the leading contemporary version of the representational theory of mind, in which mental processes are operations over mental representations. The computational theory of mind tries to explain all psychological states in terms of mental representations. Philosopher Stephen Stich argued cognitive psychology doesn’t and shouldn’t taxonomize mental states by their semantic properties, since those semantic properties are determined by the extrinsic properties of a mental state. Stich proposed a Syntactic Theory of Mind, arguing the semantic properties of mental states play no explanatory role. The Syntactic Theory of Mind uses computational theories of psychological states that concern only the formal properties of the objects the states relate to, whereas the computational theory of mind treats mental processes as computations over semantically evaluable objects. Computational theory of mind proponents disagree on how personal-level representations (thoughts) and processes (inferences) are realized in the brain. Classical Architecture proponents (classicists) such as Turing, Fodor, Pylyshyn, Newell, and Simon believe mental representations are symbolic structures with semantically evaluable constituents, and that mental processes are rule-governed manipulations of them, sensitive to their constituent structure. Connectionist Architecture proponents (connectionists) like McCulloch, Pitts, Rumelhart, and McClelland believe mental representations are realized by activation patterns in simple processors (nodes), and that mental processes consist of the spreading activation of these patterns. The nodes typically aren’t semantically evaluable. One may argue that localist theories are neither definitive nor representative of the connectionist program.

Classicists want to find mental properties similar to those of language. Fodor’s Language of Thought Hypothesis (LOTH) holds that thought is realized in a language-like system of mental symbols. In the LOTH, the potential infinity of complex representational mental states comes from primitive representational states combined by recursive formation rules. This combinatorial structure accounts for the productivity and systematicity of the system of mental representations. We explain the properties of thought by the content of representational units and their combinability into contentful complexes. The semantics of language and thought is compositional.

Connectionists want to consider the architecture of the brain: networks of interconnected neurons. This architecture can’t carry out classical serial computations; instead, it performs parallel computations that lack semantic compositionality and aren’t semantically evaluable in the way classicists argue. Representation is distributed, not local (unless it’s computationally basic). Connectionists argue information processing in these networks resembles human cognitive functioning: connectionist networks learn and discriminate through exposure to objects. Some argue connectionism implies there are no propositional attitudes. LOTH-style representation may, on the other hand, be necessary for the general features of connectionist architectures.

Stich believed mental processes are computational, but that these computations aren’t sequences of mental representations. Other philosophers accept mental representation but deny that the computational theory of mind gives the correct account of mental states and processes. Philosopher Tim van Gelder doesn’t believe psychological processes are computational. Instead, cognition arises from dynamical systems whose quantifiable states span the nervous system, the body, and the environment in which they are embedded. Cognitive processes aren’t rule-governed sequences of discrete symbolic states; they’re continuous, evolving total states of dynamical systems, with the states of the system’s components mutually determining one another. Such a dynamical system yields representation that is information-theoretic, carried by state variables or parameters.

Philosopher Steven Horst wrote that computational models are useful in scientific psychology, but they don’t give us a philosophical understanding of the intentionality of commonsense mental states. The computational theory of mind tries to reduce the intentionality of states to the intentionality of mental symbols, but the relevant notion of symbolic content is bound up with the notions of convention and intention. Horst argued the theory is therefore circular: it appeals to the very properties it is supposed to reduce, properties which themselves need to be reduced.

Intentionality

If intentionality of propositional attitudes were a physical property, we could build a computer with states that have genuine intentionality. But no computer model that simulates human propositional attitudes will have genuine intentional states. Intentionality of propositional attitudes, then, isn’t a physical property.

We may consider the network theory of meaning (or holistic theory, or conceptual-role theory), in which the meaning of an expression is the role it plays in a person’s internal representational economy: how it relates to sensory input and behavioral output. Meaning is relational, as an expression’s meaning is a function of its inferential and computational role in a person’s internal system. For a robot that behaves like a human, the question remains whether the thoughts it generates mean what ours do. But denying meaning to the robot’s internal states would be applying a double standard, arbitrarily and with no useful purpose. The robot’s internal machinery doesn’t change the fact that it believes, wants, and understands things. The robot’s intentional states depend on how complex its internal informational network of states is.

We need altogether a better theory of representation in organisms, much the same way we have theoretical definitions and ideas of what molecules, proteins, and neutrons are. We can also study the mind as it relates to the computer by differentiating between understanding its design and understanding its function. Just as we can think, feel, and argue without knowing the neuroscience of our brains exactly, we can use a computer for, more or less, what it was designed to do without knowing exactly how it works. Though we must know some basics, such as turning a computer on by pressing a button, just as we must take care of our brains by taking care of our bodies, we must also account for intentionality by understanding why intention works, rather than simply knowing that we have intentions and following them in blind dogma.

Levels of organization

The brain-computer analogy raises the problem of relating the complexity we know the brain has to the organization of a computer. The semantic, syntactic, and mechanistic levels introduce issues for the level of the algorithm and the structural implementation of those features. Neurobiological theory challenges any single way of specifying the organizational description. Membrane, cell, synapse, cell assembly, circuit, and behavior can be argued to be levels, but even within them we find different partitions into levels of their own. We can also delineate levels by research method: in learning and memory, for example, a cellular approach shows modifications in presynaptic neurotransmitter release during habituation. Which level is functional and which is structural is difficult to determine, too.

Mental state semantics

According to the computational theory of mind, the mind operates on symbols and uses symbolic representations to represent mental states. We call the meaning of these symbols the semantics and the relationships between them the syntax. We may argue that more complicated mental states come from these basic symbolic “words” of the language of thought. The hypothesis that there’s a language of thought encoded within our brains is not obvious, nor is it agreed upon by everyone. There are many competing hypotheses and theories of how the logical form of propositions relates to the structural form of the mental states that correspond to them. If we take an intentional stance toward the mind (treating the object whose behavior we want to predict as a rational agent with beliefs, desires, and similar mental states that exhibit intentionality), we can uncover objective, real patterns of the world, and this is an empirical claim we can test against the skepticism associated with it. Philosopher Daniel Dennett argued that any object or system whose behavior we predict with this strategy is a believer; a true believer, for Dennett, is an intentional system whose behavior we can reliably predict with the intentional stance. Our brains have somehow handled the combinatorial explosion that accompanies their own complexity, putting billions of cells into networks with one another, and the only representational system we have upon which to model this is human language. We haven’t imagined any plausible alternative in as much detail as our own language.

Causality

A calculator’s representations and its rules for manipulating them can explain its behavior much the same way we describe how and why people do what they do. Philosopher Zenon Pylyshyn said we explain why a machine does something by interpreting the symbols in its domain. Psychological theory would cross-classify the categories of neurophysiological theory, making neurophysiological generalizations miss important relations that are only describable at the level at which representations are referred to. The psychological categories would map only onto an indefinite mix of neurobiological categories.

Connectionism (Parallel distributed processing)

As philosopher Paul Churchland has argued, we may use connectionism or parallel distributed processing (PDP) to figure out the computational operations in nervous systems, such that computer models of parallel distributed systems generate the appropriate higher-level phenomena (cognitive science, psychology, etc.) from basic processes (neuroscience, physics, etc.).

Tensor network theory

Neuroscientists began the theory with the cerebellum because it has a limited number of neuron types, each physiologically distinct and connected in a specific way: the cerebellar cortex converges on the Purkinje cell, which receives input from two different cell systems. Wiring diagrams of cerebellar neurons describe connections that accept input and produce output in a parallel manner. We face a trade-off between the detail needed to understand the system and how the array itself processes information. Through tensor network theory we attempt to use principles from mathematics, physics, and computer science to understand how these systems may model the nervous system. We can create a schematic neuron to find out more about the patterns of neurons arranged in mathematical arrays. Though the model may be limited by the assumptions of causal theory and epistemic concerns about the phenomena we attempt to describe, it’s a nice heuristic for seeing something we wouldn’t otherwise see through single-cell data. We may use concepts from linear algebra and statistics to create output vectors in a coordinate system, such that the corresponding tensor matrix governs the transformation of ensembles from input-output relationships in the corresponding reference frame. The spiking frequency defines a point on an axis of the coordinate system, with the output a vector in the space of the output neurons. We may generalize a tensor mathematically as something that transforms vectors into other vectors, so that the basic problem of functionalist sensorimotor control becomes going from one coordinate system to another.
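
To make the coordinate-transformation idea concrete, here is a minimal sketch, assuming a hypothetical three-neuron sensory array and a two-muscle motor frame; the matrix values are illustrative, not empirical.

```python
import numpy as np

# Hypothetical sensory frame: firing rates of three input neurons (spikes/s).
sensory = np.array([40.0, 10.0, 25.0])

# Illustrative transformation matrix ("tensor") mapping the 3-D sensory
# frame onto a 2-D motor frame; in tensor network theory this matrix
# stands in for the synaptic connectivity between the two ensembles.
T = np.array([[0.5, 0.2, 0.1],
              [0.1, 0.4, 0.6]])

# The motor output vector is the tensor acting on the sensory vector.
motor = T @ sensory
print(motor)  # [24.5, 23.0] -> commands expressed in the motor coordinate frame
```

The matrix plays the role the theory assigns to connectivity between ensembles: it carries a vector expressed in one reference frame into another.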

When we first figure out what the mind-brain does, and then how it might implement various functions, working top-down among different levels of science, the theorizing is highly constrained, yet very well informed, by the data of the level at which we implement. But with tensor network theory we wouldn’t label these processes top-down; rather, they run from lower-level fundamental processes to higher-level descriptions.

We still need a tensor transformer to take vectors in sensory space to vectors in motor space. We may deform one phase space to get an object in the other, treating representations as positions in phase space and computations as coordinate transformations between phase spaces. The Pellionisz-Llinás approach uses sensorimotor problems constrained by realistic creatures as a method of reducing, at bottom, the problem to making coordinate transformations between phase spaces. In tensor network theory, we look for functional relationships between connected cell assemblies and investigate them for properties relevant to phase spaces, much the same way a computer or artificially intelligent machine searches for solutions among sentence-related criteria. Such an AI would require this knowledge to determine what to do.

Tensor network theory still needs to unify results across cognitive science, psychology, and neuroscience so that we can construct a universalized, common set of rules with coherent explanations that we can experimentally test and verify. In attempts to describe the vestibulo-ocular reflex, which stabilizes visual images against head movements detected by the semicircular canals of the vestibular system, we imagine each eyeball detecting images and communicating with those receptors. This system needs to determine how muscles contract so the eyes move in a way that compensates for the head movements. The corresponding tensor approach imagines the system converting a head-position vector into a vector that describes muscle positions. The transformation from vestibular to oculomotor, according to the Pellionisz-Llinás hypothesis, takes a premotor vector into a motor vector. The vestibular organ, we can show, has a set of positions it prefers, which we can call eigenpositions.
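
We can illustrate the eigenposition idea with a toy computation: the eigenvectors of a hypothetical premotor-to-motor matrix are the positions the transformation preserves up to a gain factor. The matrix below is made up for illustration, not drawn from vestibular data.

```python
import numpy as np

# Hypothetical premotor-to-motor transformation for the VOR.
M = np.array([[0.9, 0.3],
              [0.3, 0.7]])

# Eigenvectors of M are "eigenpositions": positions the transformation
# maps onto scaled versions of themselves (scaled by the eigenvalue).
eigvals, eigvecs = np.linalg.eigh(M)
for val, vec in zip(eigvals, eigvecs.T):
    print(f"eigenposition {vec} preserved up to gain {val:.2f}")
```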

We may further pose Churchland’s phase-space sandwich hypothesis, which describes the spatial organization of maps in layers so that the corresponding neurons may perform any transformation from two dimensions to two dimensions. The maps representing phase spaces aren’t literally stacked upon one another; they may remain spatially distant from each other. Given the topology of the cortical area, we still have to answer whether tensor network theory can account for neuroplasticity. Covariant proprioception vectors can give feedback about motor performance, which can further inform transformations of the cerebellar matrix. The matrix would then settle into a state whose eigenvectors are identical, making it the “correct” coordinate transformation. Climbing fibers of the cerebellum may provide a pathway for reverberative feedback that modifies the transformational properties of the cerebellar network, much as in AI systems that use relaxation algorithms.

Mental states

If we determine how behavior related to cognition and complexity emerges from the basic neurophysiological theories that govern sensorimotor control, we can determine the nature and dynamics of cognition. We may construct representations at abstract levels of organization that correspond to cognitive activity the way sentential representations act according to logical rules. Phase-space representations may capture features humans recognize, such as the eyes of faces or the shapes of animals; we may describe phase spaces in such a way that these sensory stimuli occupy them. Using the reflectance responses of the cone photoreceptors responsible for color, we can pose the computational problem of how to represent a unique color with a triplet of reflectance values.
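
A toy sketch of that problem, assuming Gaussian stand-ins for the cone sensitivity curves and a made-up surface reflectance spectrum (real cone fundamentals differ), might look like this:

```python
import numpy as np

wavelengths = np.linspace(400, 700, 61)  # visible band, nm

def gaussian(peak, width=40.0):
    # Stand-in cone sensitivity curve; real cone fundamentals are asymmetric.
    return np.exp(-((wavelengths - peak) ** 2) / (2 * width ** 2))

S, M, L = gaussian(440), gaussian(535), gaussian(565)  # short/medium/long cones
reflectance = 0.5 + 0.4 * np.sin(wavelengths / 50.0)   # made-up surface spectrum

# Each cone's activation is the overlap of the surface's reflectance with
# that cone's sensitivity: the unique color as a triplet of values.
triplet = [np.trapz(c * reflectance, wavelengths) for c in (S, M, L)]
print(triplet)
```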

Parallel models

Sequential models can be powerful, but AI researchers have shown their ineffectiveness in simulating fundamental cognitive processes such as pattern recognition and knowledge storage and retrieval. The differences between human brains and the phenomena of computer science only deepen these issues: humans and computers use very different methods of storing memory, and the connectivity among human neurons differs from that among artificial ones.

The Hinton-Sejnowski visual recognition system uses a network of two sets of binary units: one for detecting input from external stimuli and the other for connecting detectors to nondetecting units. These networks assess the truth and plausibility of hypotheses by gauging which units fire and which don’t. The system performs a cooperative search in which assemblies vote for various outcomes and the one with the most votes wins. The relationships between various hypotheses depend on synaptic weights, expressed through probability functions and distributions. The networks also perform relaxations that cool the system, the way a material takes on different molecular organizations in an annealing process. In annealing, slowly cooled crystalline structures settle into a global energy minimum; adding noise to the system parallels raising its temperature, and these fluctuations let the system break out of superficial local minima. The Metropolis-Hastings algorithm lets locally improbable hypotheses win over other hypotheses.
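
A minimal sketch of that annealing search, assuming a small random symmetric weight matrix rather than the Hinton-Sejnowski architecture itself, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 16
W = rng.normal(size=(n, n))
W = (W + W.T) / 2.0                   # symmetric weights, Boltzmann-machine style
np.fill_diagonal(W, 0.0)
s = rng.choice([-1.0, 1.0], size=n)   # binary units voting for hypotheses

def energy(state):
    return -0.5 * state @ W @ state

T = 5.0                               # initial "temperature"
for step in range(5000):
    i = rng.integers(n)
    dE = 2.0 * s[i] * (W[i] @ s)      # energy change from flipping unit i
    # Metropolis rule: always accept downhill moves, uphill with prob exp(-dE/T);
    # the noise lets locally improbable hypotheses win out of shallow minima.
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]
    T = max(0.05, T * 0.999)          # cooling schedule ("annealing")

print(energy(s))  # low-energy state ~ the winning coalition of hypotheses
```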

To make the model reflect empirical data in neuroscience, we must show it accounts for processing in various neurobiological pathways. Computer vision models need to account for perceptual contours as well as emergent phenomena, such as recognizing how a property of an image emerges from various structures working in a dynamic, systemic manner within the visual image itself. Connectionists could update their brain-computer models in light of evolution, the same way sensorimotor mechanisms evolved to support simultaneous solutions in visual recognition.

We distinguish between different levels of description of computational processes. These levels have certain reducible relationships among them, to which we can make varying levels of reductionist commitment. The theory of symbolic computational functionalism within the computational theory of mind (known as computationalism) has minds manipulate discrete, defined symbols to model discrete, defined logical structures and computer languages. A human mind may be a deterministic finite state automaton under this theory, and the theory is independent of implementation: even if different beings have different physical structures, they may have similar or the same mental states. Philosopher Patricia Churchland and neuroscientist Terrence Sejnowski have countered that implementation matters, especially as lower theoretical levels (such as neuroscientific phenomena) are significant to higher ones. Opponents may also argue that the representations of computationalism don’t tell us anything more than non-representational descriptions do: using representation may amount to an unnecessary model or analogy that only steers us away from the precise, defined meaning of the world.

The computationalist may respond that she doesn’t want to make a physiologically accurate model of the human mind, but wants to find intelligent features for any agent; in AI, one might want to solve a problem in a computational space that doesn’t represent human features. She may also respond that representational theories track features of representation, such as the similarity between representations and their objects and how accurate they are, in such a way that representational theory is more effective, valid, and justified than non-representational theories.

We may account for the intentional nature of basic emotions even if they have a physiological component, such as changes in facial expression or bodily mechanisms. Weak content cognitivism, the belief that emotions are or are caused by propositional attitudes, may attack this relationship of emotions to a bodily response, but the relationship of emotions to beliefs doesn’t mean all emotions are caused by propositional attitudes like beliefs. A computational theory of mind should account for emotional effects and similar affects that influence perception and judgement. But changes in emotion don’t seem discrete in the way differences between logical states are, as we described with the Hinton-Sejnowski theory or with tensor network theory. Emotions form a continuous gradient that doesn’t seem to arise from the sort of combinatorial engine computationalist theory posits. We would need a semantic activation model that still adheres to the principles of symbolic computational functionalism.

The connectionist model describes the effects of some emotions, but doesn’t model emotion itself. To let semantic activation models use emotions in a cognitive position would mean that emotions, in some sense, are the same as cognitive categories such as “visual stimuli” or “beliefs.” For the other features of emotion, though, semantic activation models need to describe implementation-dependent details of the model itself.

The computationalist position also has trouble modeling affects, such as those of basic emotions, that are independent of cognition yet still play a role in rational human behavior. The computationalist may be inclined to treat emotions as external or even unnecessary to her models. Computationalists also can’t account for the effects of basic emotions on perception and categorization using their current models. These emotions may be more fundamental to the perceptions and categories we form, given their unique influence on intellectual perception.

Neural circuitry

We may imagine the brain as a computer by treating the excitation/inhibition ratios of neural circuitry as a property of cognitive function in cortical circuits. Research on the synaptic parameters of circuit function in memory and decision-making gives us parameter spaces in which to vary NMDAR conductance strengths from excitatory pyramidal neurons onto inhibitory interneurons or onto other excitatory pyramidal neurons. We may describe dopamine neuronal activity using a bifurcation diagram; in mathematics, we generally use bifurcation plots to study dynamical system behavior with respect to parameter variations or similar perturbations. We may use Ohm’s law, together with membrane capacitance, to relate current, potential, and resistance in membrane channel dynamics, and model the dopamine neuron’s ionic currents with Hodgkin-Huxley equations. We can use these fundamentals to create circuit models of neuronal activity, using population firing rates to calculate dopamine efflux in the nucleus accumbens.
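
As a hedged illustration of the membrane-equation style such circuit models build on, here is a leaky integrate-and-fire simplification (far simpler than full Hodgkin-Huxley dynamics); all parameter values are illustrative:

```python
import numpy as np

# Leaky integrate-and-fire: C dV/dt = -(V - E_L)/R + I
# An Ohm's-law-style leak plus a threshold; parameters are illustrative.
C, R, E_L = 1.0, 10.0, -70.0      # nF, MOhm, mV
V_th, V_reset = -54.0, -80.0      # spike threshold and reset (mV)
I = 2.0                           # injected current (nA)
dt, steps = 0.1, 5000             # time step (ms), number of steps

V, spikes = E_L, 0
for _ in range(steps):
    dV = (-(V - E_L) / R + I) * dt / C
    V += dV
    if V >= V_th:                 # threshold crossing: emit spike, reset
        V = V_reset
        spikes += 1

print(f"firing rate ~ {spikes / (steps * dt / 1000.0):.1f} Hz")
```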

Functional connectivity

Functional connectivity (FC) is the statistical correlation of neural activity between two different regions. We find evidence for it at the micro-circuit level (the relationship between structure and function probed through anatomical and neurophysiological research techniques). We can integrate information across brain networks using large-scale brain connectivity at finer temporal and spatial resolution. If we introduce spatiotemporal models of resting-state networks, we can analyze their time-frequency structure using wavelet analysis, sliding windows, and similar methods of describing temporal correlations between the networks.
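
A minimal sliding-window FC sketch, using synthetic time series as stand-ins for two regional signals, might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "regional" time series sharing a slow common component.
n = 600
shared = rng.normal(size=n).cumsum() * 0.1
region_a = shared + rng.normal(size=n)
region_b = shared + rng.normal(size=n)

# Sliding-window FC: the Pearson correlation within each window tracks
# how the statistical coupling between the regions fluctuates over time.
window = 60
fc = [np.corrcoef(region_a[t:t + window], region_b[t:t + window])[0, 1]
      for t in range(n - window)]

print(f"mean FC {np.mean(fc):.2f}, range {min(fc):.2f}..{max(fc):.2f}")
```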

FC is similar to functionalism in that we define our representations in terms of their functions. Functionalism holds that qualitative states (e.g., pain) are functional states of a system, interrelated with inputs, outputs, and other internal states. For this reason, cognitive models of the mind have used FC in their explanations. Even if we had a neuroscientific system that realizes the same set of functional states as a person, it would still face the problems of liberalism and chauvinism, philosopher Ned Block argued. Liberalism is the problem a theory of mentality faces when it attributes mentality to systems that don’t have it; behaviorism, Block believed, faces this problem. Functional connectivity in neuroscience must address this objection against functionalism. A behavioral disposition may be necessary for the possession of a certain mental state, but it isn’t sufficient. Chauvinism is the converse problem of a theory withholding mentality from systems that seem to possess it. Block argued type physicalism falls to chauvinism because it’s the view that mental state types are equivalent to physical state types.

We may talk about the mental state of pain caused by sitting on a tack, which causes behaviors such as loud cries and other mental states such as anger. Analytic functionalism defines such functional roles through causal relations that are analytic, a priori truths about the other mental states alongside their propositional attitudes; the identities are necessary and not subject to empirical observation. Psychofunctionalism, on the other hand, uses empirical observation and experimentation (in an a posteriori manner) to determine which mental state terms and concepts apply, contingent on those observations.

Structural connectivity

Structural connectivity (SC) refers to the long-range anatomical connections among brain areas through white-matter fiber projections. We use fiber tracking, based on the bounded diffusion of water molecules, to create non-invasive connectivity maps. In the past, scientists used diffusion tensor imaging (DTI) to track neural fibers, but more recent studies have used advances in graph theory to do much more research on topological features of brain connectivity.

We can characterize the relationship between FC and SC as the former relying on correlations between areas and the latter on the physical characteristics of the fibers. Effective connectivity (EC) characterizes the interactions between visual processing regions (a psychophysiological interaction analysis) using structural equation modeling (SEM) based on minimizing the difference between predicted and observed dependent variables. EC also refers to the broader notion of SC that captures the features shaping connectivity, like synaptic strengths, neurotransmitter concentrations, and neural excitability. Through both model-driven and data-driven approaches (the former generating signals under assumptions, the latter using statistics, information-theoretic measures, or phase relationships to extract EC), we can infer EC and the topology of these networks. Using binary graphs, path-length measures, clustering coefficients, and other ideas from graph theory, alongside results from diffusion-based tractography, we can map the resting-state networks in various regions of the brain. Scientists have introduced Network Based Statistics for comparing whole-brain connectivity between different groups of connections.
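
As a sketch of those graph measures, assuming a random binary graph as a stand-in for a thresholded tractography network, one might compute:

```python
import networkx as nx

# Random binary graph as a stand-in for a thresholded structural network;
# in practice the adjacency matrix would come from diffusion tractography.
G = nx.erdos_renyi_graph(n=90, p=0.1, seed=0)  # ~90 nodes, like an atlas parcellation

if nx.is_connected(G):
    print("clustering coefficient:", nx.average_clustering(G))
    print("characteristic path length:", nx.average_shortest_path_length(G))
else:
    print("graph disconnected; path length undefined on the whole graph")
```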

We relate the covariance between populations of neural activity to the Jacobian of the system of equations describing the neural activity in each node: for a given input covariance matrix, we can derive the covariance between neural populations. The Kuramoto network model has been used with global graph metrics from schizophrenia patients to relate neurophysiological impairment to resting-state network activity and its topological properties in schizophrenia. We may use either noise-driven spontaneous dynamics or complex interactions between phase oscillators (with coupling, delays, and noise) to introduce a dynamic nature to the model, but the two pictures pull against one another: the former implies temporal correlations in spontaneous activity emerge from uncorrelated noise propagating through connections, while the latter uses complex interactions of oscillatory activity in regions of the brain. We may use a supercritical Hopf bifurcation to reconcile the two through synchronized networks and their corresponding temporal variations. With this, the Kolmogorov-Smirnov distance between empirical and simulated FC dynamics distributions is optimal at the critical point and more sensitive to deviations from it.
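
A minimal Kuramoto sketch of such phase-oscillator dynamics, with a random coupling matrix standing in for structural connectivity and illustrative parameters (no delays), might look like this:

```python
import numpy as np

rng = np.random.default_rng(2)

# Kuramoto: d(theta_i)/dt = omega_i + (K/N) * sum_j A_ij sin(theta_j - theta_i) + noise
N, K, dt, steps = 50, 0.5, 0.01, 5000
A = rng.random((N, N)) < 0.2          # random coupling, a stand-in for SC
omega = rng.normal(1.0, 0.1, size=N)  # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)

for _ in range(steps):
    phase_diff = np.sin(theta[None, :] - theta[:, None])  # sin(theta_j - theta_i)
    coupling = K * (A * phase_diff).sum(axis=1) / N
    noise = 0.1 * rng.normal(size=N) * np.sqrt(dt)
    theta += (omega + coupling) * dt + noise

# Order parameter r in [0, 1]: global synchrony of the network.
r = abs(np.exp(1j * theta).mean())
print(f"synchrony r = {r:.2f}")
```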

Reinforcement learning

Reinforcement learning is emerging as a dominant computational paradigm for modeling psychological and neural aspects of affectively charged decision-making tasks. The Markovian assumption lets us use Markov models, in which the next state depends only on the current state and not on the states that came before it, to describe how nervous tissue carries out perceptual inference. Hopfield neural networks, alongside the work of Hinton and Sejnowski, let computational models use rules such as the Bush-Mosteller rule (learning based on trial-by-trial differences between predictions and outcomes) or the Sutton-Barto approach (Monte Carlo methods and temporal-difference learning in artificial neural networks). We can introduce a temporal-difference error such that the agent in the system chooses actions that maximize temporally discounted reward. Diffuse ascending systems of the nervous system could use temporal-difference learning as a general way for biological systems to learn the value of states. We can use a modified form of Hebbian learning that depends on incorrect predictions of the future to reinforce a bidirectional synaptic change. These Hebbian synapses could then store predictions of the future in a way that accounts for the actions of dopamine neurons.
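
A minimal temporal-difference sketch on a toy five-state chain, where the TD error plays the role ascribed to dopamine signals, might look like this:

```python
import numpy as np

# TD(0) on a toy 5-state chain: reward arrives only at the terminal state.
n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states + 1)  # value estimates; index n_states is terminal

for episode in range(500):
    s = 0
    while s < n_states:
        s_next = s + 1
        reward = 1.0 if s_next == n_states else 0.0
        # TD error: mismatch between the current prediction and the
        # reward plus the discounted next prediction.
        delta = reward + gamma * V[s_next] - V[s]
        V[s] += alpha * delta  # nudge the prediction toward the outcome
        s = s_next

print(V[:-1])  # values approach gamma ** (steps remaining to reward)
```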

Optimizing procedures

We may use optimizing methods from mathematics, physics, and computer science in neuroscience. If we assume artificial neural networks are similar to biological ones, we may use error minimization as an optimization procedure. By adjusting parameters and weights, we may analyze the computations of a neural system and how it generates behavior from the organization of a network. We may use backpropagation to create models that have the capacities of a biological neural network, and speculate on how networks function in a computational theory of mind. The nervous system has too many parameters for all of them to be controlled by genetics; neurodevelopment involves massive synaptogenesis that grows through optimization processes; some parameters are used for feedback to adapt behavior to circumstances; and natural selection optimizes nervous systems in such a way that we may regard their selective pressures as error-minimizing.

The neural circuit for visually tracking moving objects involves many unknown parameters and specific weights. We can construct a network by fixing the known parameters and training it on input and output to determine the unknown ones. The probabilistic inference methods depend on the degree of similarity between artificial and biological networks. We may use such models to generate hypotheses, because the nervous system’s evolution may be described with a cost function and artificial models use backpropagation to search through possibilities.
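
As a toy stand-in for that procedure, we can fix “known” weights and fit the “unknown” ones by gradient descent on input-output examples; all values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# "Known" synaptic weights, fixed as if measured experimentally.
W_known = np.array([[0.5, 0.0], [0.0, 0.5]])
# "Unknown" weights the training procedure should recover.
W_true_unknown = np.array([[0.1, 0.3], [0.2, 0.0]])

X = rng.normal(size=(200, 2))          # inputs to the circuit
Y = X @ (W_known + W_true_unknown).T   # observed outputs

W_free = np.zeros((2, 2))              # our estimate of the unknown weights
lr = 0.1
for _ in range(500):
    pred = X @ (W_known + W_free).T
    grad = (pred - Y).T @ X / len(X)   # gradient of the mean squared error
    W_free -= lr * grad                # adjust only the unknown parameters

print(W_free)  # converges toward W_true_unknown
```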

Conclusion

As the 18th-century German philosopher Immanuel Kant might have put it, studying concepts of the mind without empirical science is empty, and studying science without philosophy is blind. Understanding how the brain works means going from simulating it in a computer to making synthetic brains. We see how models interact with the actual world (whether they simulate the world or directly use it), determine which real-world parameters are relevant to our models, and extend models to cover all levels of organization. We wrestle with reduction, causation, and other phenomena through both science and philosophy.
