
Contextual Emergence

How things come about from different layers within systems

What is contextual emergence?

The patterns that emerge from Conway's Game of Life depend on the underlying theory.

Contextual emergence is a specific kind of relationship between different domains of scientific descriptions of particular phenomena. Although these domains are not ordered strictly hierarchically, one often speaks of lower and higher levels of description in which emergence occurs. From the lower levels (L), which are more fundamental in a certain sense, more complex phenomena emerge at the higher levels (H). Strings of DNA in a genome may correspond to different transcripts at the transcriptome level of an individual. Chaotic behavior may emerge from certain differential equations subject to certain constraints. This complexity depends on the conditions of the context. Hence, contextual emergence.

Contextual emergence involves well-defined relationships between different levels of complexity. A two-step procedure gives a systematic, formal way to move from an individual description (Li) to a statistical description (Ls) within the lower level, which in turn leads to an individual description at the higher level (Hi). We iterate this process (Li -> Ls -> Hi) through sets of descriptions connected with one another to reveal what emerges at higher levels.

During this method, we identify equivalence classes of individual states that are indistinguishable with respect to a certain property of the entire system. Different statistical states in Ls can be realized by different individual states in Li. Each individual state carries only limited knowledge, but together they form probability distributions that represent the statistical states in Ls. This could be how spike signals from neural circuits encode higher-level functions in the brain.
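As an illustration, here is a minimal Python sketch of the Li -> Ls step. All names and the choice of observable are hypothetical: the idea is only that microstates agreeing on some coarse property fall into the same equivalence class, i.e. the same statistical state.

```python
import numpy as np

def coarse_observable(microstate):
    """Map an individual lower-level state (Li) to the only property the
    higher level can resolve -- here, a mean activity rounded to one
    decimal place (an illustrative, hypothetical choice)."""
    return round(float(np.mean(microstate)), 1)

def partition_into_classes(microstates):
    """Group Li states into equivalence classes (statistical states Ls):
    two states are indistinguishable if they share the coarse observable."""
    classes = {}
    for s in microstates:
        classes.setdefault(coarse_observable(s), []).append(s)
    return classes

# Three microstates; the first two differ in detail but share mean 0.2,
# so they land in the same statistical state.
states = [np.array([0.1, 0.3]), np.array([0.2, 0.2]), np.array([0.8, 0.8])]
classes = partition_into_classes(states)
```

The dictionary keys play the role of statistical states in Ls; the lists under each key are the individual states in Li that realize them.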

A property dualist position would also recognize three features of this emergence: the emergent property at the higher level Hi must have real instances, it must remain co-occurrent with some property or complex feature recognized at the lower level, and it cannot be reduced to any property postulated by or definable within the lower level.

Then, we can assign individual states at the higher level H to coextensional statistical states at level L. We use a top-down constraint: information about the higher-level description selects a context that sets the framework for the set of observable properties at level H created from L. We can implement stability criteria at level L such that the appropriate context emerges at level H. Stability here refers to the ability of the system's features to remain valid even under small changes. Examples include equilibrium states of gas systems and homeostatic relationships between units of biological mechanisms such as glycolysis. We may also define stability in terms of systems whose boundaries are maintained under the dynamics specified for them. We may choose to confine ourselves to certain electrochemical properties that emerge from membrane dynamics in synaptic networks. This allows the emergent properties to remain well defined with respect to the contextual topology of L. It also tells us which properties of L are relevant to the contextual emergence of H.

This interplay between upward and downward strategies lets the system remain self-consistent. Moving from a higher context to a lower one requires the stability conditions to lead to lower-level partitions of the system while moving to a higher context means the statistics of lower-level states extend to higher-level individual states we can observe.

Philosopher Aristotle argued that emergent structures arise when their constituents interact in an interdependent manner, but others argue that emergence may occur even if the parts act independently of one another or are even autonomous. In either case, to echo Gestalt theory, the whole is greater than the sum of its parts.

Point mechanics to statistical mechanics to thermodynamics

We can even demonstrate the relationship between different fields of science through contextual emergence. Moving from classical point mechanics, involving forces due to gravitation and electromagnetism, to statistical mechanics and then to thermodynamics illustrates this phenomenon. In the step from point mechanics to statistical mechanics, particles or other individual units (Li) form ensemble distributions that can be studied statistically. We can define many-particle systems with statistical ensemble descriptions (Ls) of momenta or energies, such as the Maxwell-Boltzmann distribution for N particles. From there, we can find mean kinetic energy, Gibbs free energy, entropy, and other statistical quantities.

We can observe expectation values of the momentum distributions of particle ensembles to calculate the temperature of the system as a higher-level quantity (Hi), on the assumption that the system is in equilibrium. That notion of equilibrium does not come from statistical mechanics but from thermodynamics, via the zeroth law. Other features, such as irreversibility and adiabatic behavior, emerge as well. We can characterize thermal equilibrium (Hi) using Kubo-Martin-Schwinger (KMS) states, defined by a condition that characterizes the structural stability of a KMS state against local perturbations. This leads to stationarity, ergodicity, and mixing, with the zeroth law of thermodynamics defining the system as stable. We can also use the second law of thermodynamics to express stability as the maximization of entropy for thermal equilibrium states.
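The Li -> Ls -> Hi chain for temperature can be sketched numerically. Assuming an ideal gas in equilibrium, each velocity component is Gaussian with variance k_B T / m (Maxwell-Boltzmann), and the temperature is recovered as an ensemble-level quantity via equipartition. The particle mass and temperature below are illustrative choices.

```python
import numpy as np

k_B = 1.380649e-23      # Boltzmann constant, J/K
m = 6.63e-26            # mass of an argon atom, kg (illustrative choice)
T_true = 300.0          # the "context": an equilibrium temperature, K
N = 200_000

rng = np.random.default_rng(0)
# Li -> Ls: each velocity component of an equilibrium ideal-gas particle
# is Gaussian with variance k_B * T / m (Maxwell-Boltzmann).
v = rng.normal(0.0, np.sqrt(k_B * T_true / m), size=(N, 3))

# Ls -> Hi: temperature is defined from the *ensemble* via equipartition:
# <(1/2) m |v|^2> = (3/2) k_B T, so T = (2/3) <KE> / k_B.
mean_ke = 0.5 * m * np.mean(np.sum(v**2, axis=1))
T_est = (2.0 / 3.0) * mean_ke / k_B
```

No single particle has a temperature; the quantity only becomes well defined at the level of the statistical ensemble, which is the point of the Li -> Ls -> Hi construction.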

The first step of the contextual emergence process (Li -> Ls) derives statistical states from individual states, and the second gives individual thermal states from statistical mechanical states. Other examples may include the emergence of geometric optics from electrodynamics, electrical engineering features from electrodynamics, chirality from quantum mechanics, and diffusion or friction of a quantum particle in a thermal medium. Neuroscientists have even found it useful to treat cognitive states as contextually emergent from neural correlates.

Hodgkin-Huxley equations

The Hodgkin-Huxley equations that describe the generation and propagation of action potentials form a system of four nonlinear ordinary differential equations: an electric conductance equation for transmembrane currents and three master equations for the opening kinetics of sodium and potassium channels. These lower-level stochastic phenomena (using Markov processes as transition probabilities) lead to higher-level descriptions of ion channel function that characterize a deterministic dynamical system. We can treat ion channels as macro-molecular quantum objects governed by the many-particle Schrödinger equation. The Schrödinger equation describes a highly entangled state of electrons and atomic nuclei as a whole; on a molecular level, the structure of a closed or open pore of an ion channel arises through the Born-Oppenheimer approximation, which separates electronic and nucleonic wave functions. Then, we can treat the electronic quantum dynamics in a constrained rigid nucleonic frame that has a classical spatial structure. This stochastic spatial structure yields the equations of the Hodgkin-Huxley system as a contextually emergent phenomenon.
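The deterministic higher-level description can be sketched directly: the four Hodgkin-Huxley equations integrated with forward Euler, using the textbook squid-axon parameters. The model itself is standard, but the step size and stimulus current below are illustrative choices.

```python
import numpy as np

# Classic Hodgkin-Huxley parameters (squid giant axon; mV, ms, uA/cm^2).
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent opening/closing rates of the gating variables.
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    """Forward-Euler integration of the four HH equations."""
    V = -65.0
    m = a_m(V) / (a_m(V) + b_m(V))   # gating variables start at rest
    h = a_h(V) / (a_h(V) + b_h(V))
    n = a_n(V) / (a_n(V) + b_n(V))
    trace = []
    for _ in range(int(T / dt)):
        I_ion = (g_Na * m**3 * h * (V - E_Na)    # sodium current
                 + g_K * n**4 * (V - E_K)        # potassium current
                 + g_L * (V - E_L))              # leak current
        V += dt * (I_ext - I_ion) / C            # conductance equation
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)  # channel kinetics
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        trace.append(V)
    return np.array(trace)

V_trace = simulate()   # with this stimulus, the membrane fires spikes
```

The gating variables m, h, and n are exactly the point where the stochastic single-channel picture has been averaged into deterministic open-channel fractions.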

Mental states emerging from neuroscience

To realize mental states from neural states, we specify the L level as neuron states of neural assemblies in the brain with respect to H, a class of mental states that reflects the situation under study. We may use experimental protocols that include a task for subjects to define mental states while recording brain states. We may use individual neuron properties Li to find Ls such that statistical states form equivalence classes of those individual states, whose differences must be irrelevant with respect to the higher level H. Philosopher David Chalmers, in “What is a neural correlate of consciousness?”, said a neural correlate of a conscious mental state can be multiply realized by “minimally sufficient neural subsystems correlated with states of consciousness.”

We can look at phenomenal families, sets of mutually exclusive phenomenal mental states that jointly partition a space of mental states. Creature consciousness can give us refined levels of phenomenal states of background consciousness (awake, dreaming, etc.), wake consciousness (perceptual, cognitive, affective, etc.), perceptual consciousness (visual, auditory, tactile, etc.), and visual consciousness (color, form, location, etc.). With one of these contexts, we choose a stability criterion at Ls that accommodates complicated neurodynamics to find robust, proper statistical states.

We may describe the L-dynamics and H-dynamics as meshing with one another if coarse graining and time evolution commute. We create meshes, parts of state space differentiated by complexes of cells between the two levels, that follow from the higher-level stability criterion. Coarse graining means the fine details of the system can be smoothed over, as the entropy of the system increases, such that we can make predictions about the system as a whole.
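For Markov dynamics, the requirement that coarse graining and time evolution commute is the condition of (strong) lumpability: every state inside a block must have the same total transition probability into each block. A small sketch with a hypothetical four-state chain, constructed by hand to be lumpable:

```python
import numpy as np

# Hypothetical 4-state lower-level Markov chain, built so the partition
# {0,1} | {2,3} is strongly lumpable: states inside a block agree on
# their total transition probability into each block.
P = np.array([
    [0.50, 0.20, 0.20, 0.10],   # into block A: 0.7, into block B: 0.3
    [0.30, 0.40, 0.10, 0.20],   # into block A: 0.7, into block B: 0.3
    [0.10, 0.10, 0.40, 0.40],   # into block A: 0.2, into block B: 0.8
    [0.05, 0.15, 0.50, 0.30],   # into block A: 0.2, into block B: 0.8
])
blocks = [[0, 1], [2, 3]]

def is_lumpable(P, blocks, tol=1e-12):
    """Check that coarse graining commutes with the dynamics: row sums
    into each block must agree for all states within a block."""
    for A in blocks:
        for B in blocks:
            sums = [P[a, B].sum() for a in A]
            if max(sums) - min(sums) > tol:
                return False
    return True

def lumped(P, blocks):
    """Higher-level transition matrix of block-to-block probabilities,
    well defined only when the chain is lumpable for this partition."""
    H = np.zeros((len(blocks), len(blocks)))
    for i, A in enumerate(blocks):
        for j, B in enumerate(blocks):
            H[i, j] = P[A[0], B].sum()   # any representative of A works
    return H

H = lumped(P, blocks)
```

When the check passes, first coarse-graining and then evolving with H gives the same distribution over blocks as evolving with P and then coarse-graining, which is exactly the meshing condition.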

Contextual emergence could help artificial intelligence approach its potential while accounting for the inherent differences between science and philosophy. We may model the mind as a contextually emergent phenomenon of the neurophysiology of the brain. As we learn about the world, we can account for emergent phenomena when addressing issues in science and philosophy, and AI would benefit from these methods of understanding. AI could avoid the issues of reductionism by drawing on the higher-level emergent behavior of neural networks in the human brain. Backpropagation lets neural networks close the gap between reality and the models that represent it, using feedback loops to find optimal weights for individual neurons when optimized for emergent details. The same way a human can differentiate between a drawing of a lion and a photograph of a lion, using the emergent phenomena of visual features that appear together to create a lion, intelligent machines could embrace contextual emergence to view the world with inquisitive wonder and curiosity. Instead of having to be shown hundreds of thousands of images of a lion to learn to identify one, they could recognize a lion in another context, such as the lines of a piece of artwork, through the emergent properties of the drawing itself.

Emergence in AI can account for emotional reactions and instincts by evolving using stochastic emergent phenomena the same way human intelligence has evolved. We may address the role emotions and biases play in decision-making and intelligence, as described by psychologists Daniel Kahneman, Amos Tversky, and Gerd Gigerenzer.

We can represent proper cells with basins of attraction and chaotic attractors with coarse-grained generating partitions. These partitions of the system lead to Markov chains with a rigorous theoretical constraint for the proper definition of stable mental states. The mathematical techniques come from ergodic theory and symbolic dynamics.

The emergence of mental states from electroencephalogram (EEG) dynamics shows that data from subjects with sporadic epileptic seizures can be correlated with the mental states of the seizures themselves. Using a 20-channel EEG recording, we get a 20-dimensional state space that we reduce to a lower number of dimensions through principal components. We find a homogeneous grid of cells to set up a Markov transition matrix that reflects the EEG dynamics using a fine-grained auxiliary partition. This matrix gives eigenvalues, ordered by size, that characterize the time scales of the dynamics. The eigenvectors span a space in which the principal component states form a simplex. The three leading eigenvalues give a neural state representation as a 2-simplex with three vertices, or a triangle. We can further classify neural states by the distance from the vertices of the simplex to clusters of neural data. In the principal component state space, the clusters appear as non-intersecting convex sets corresponding to distinct mental states. We may also use recurrence structure analysis to partition the state space into recurrent clusters where they overlap in the recurrence plot of the dynamical system. We identify the metastable states and the transitions between them using a Markov chain with one distinguished transient state and further states representing the metastable states in the dynamics.

Intentionality

Philosopher Daniel Dennett describes the intentional stance as a strategy for predicting the behavior of systems too complex to be treated as either physical or designed systems. Intentional systems behave in predictable ways when we ascribe beliefs and desires to their internal states. From thermostats to chess computers, we can make predictions about a system given necessary and sufficient conditions. The system's dynamics have to be non-trivial, so this excludes linear systems with periodic oscillations or damped relaxations. We construct an intentional hierarchy from the general case of nonlinear nonequilibrium dissipative systems to more specific intentional systems. A system's physical nature is necessary for being a nonlinear dissipative nonequilibrium system, while a nonlinear dissipative nonequilibrium nature is necessary for an intentional system. Being an intentional system is, according to Dennett, necessary for being a true believer. Sufficient conditions in the intentional hierarchy implement contextual stability conditions.

The transition from equilibrium thermodynamics to fluid dynamics shows phenomenological laws of fluid dynamics (like the Navier-Stokes equations) emerging from statistical mechanics under the assumption of local equilibrium. Sufficient boundary conditions give rise to self-organization, such as through “magnetic snakes.” Given a rationality constraint of optimal dissipation of pumped energy, true believers emerge contextually as intentional systems under mutual adoption of the intentional stance.

Representational thought may have aboutness, and the intentional approach concerns the contentfulness or meaningfulness of representational states. We may create a network theory of meaning that emerges from the semantics of a system. Philosopher Karl Popper argued against reductionism on the grounds that there is a world of abstract, nonphysical objects that we interact with when we reason, discover proofs, speculate about consequences, use language, and think about mathematics and philosophy. In this autonomous reality (known as World 3, with World 1 being physical laws and World 2 mental events and processes) we find dispositions to verbal behavior and wiring in the brain. Popper implies it is more understandable how nonphysical mental states interact with intelligibilia than how neural states might.

Symbolic grounding

The symbolic grounding problem is the problem of assigning meaning to symbols on purely syntactic grounds. Cognitivists such as philosophers Jerry Fodor and Zenon Pylyshyn have described this problem. It can also describe the question of how conscious mental states can be characterized by neural correlates. The relation between analog and digital systems, in which syntactic digital symbols relate to the analog behavior of the system they describe symbolically, needs to be further examined through dynamical automata. Piecewise linear time-discrete maps over a two-dimensional state space admit an interpretation as symbolic computers through a rectangular partition of the unit square. A single point trajectory is not fully interpretable as symbolic computation. We need higher-level macrostates formed from ensembles of state space points, or probability distributions of points, that evolve under the dynamics.
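A toy example of such a dynamical automaton, assuming a baker's-style piecewise-linear map on the unit square and the rectangular partition into left and right halves: the symbol sequence of a single trajectory reads off the binary expansion of the x-coordinate.

```python
def baker(x, y):
    """A piecewise-linear map on the unit square (a baker's map):
    stretch horizontally, cut, and stack. Each branch is linear."""
    if x < 0.5:
        return 2.0 * x, y / 2.0
    return 2.0 * x - 1.0, y / 2.0 + 0.5

def symbol(x, y):
    """Rectangular partition of the unit square: left half -> '0',
    right half -> '1'."""
    return '0' if x < 0.5 else '1'

def symbolic_orbit(x, y, n):
    """Record one symbol per iterate: a trajectory becomes a word."""
    s = []
    for _ in range(n):
        s.append(symbol(x, y))
        x, y = baker(x, y)
    return ''.join(s)

# Because the map doubles x, the k-th symbol is the k-th binary digit
# of the initial x-coordinate: 0.3125 = 0.01010... in binary.
word = symbolic_orbit(0.3125, 0.5, 5)
```

This is exactly the sense in which a single trajectory underdetermines symbolic computation: the word depends on the chosen partition, and only suitably stable ensembles of points (macrostates) make the symbolic interpretation robust.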

Peter beim Graben showed that only uniform probability distributions with rectangular support exhibit stable dynamics that can be interpreted as computation. The huge space of possible probability distributions can be contextually restricted to this subclass of uniform probability distributions to create meaningfully grounded symbolic processes. Symbolic grounding is thus contextually emergent.

Mental causation

Describing the mind as causally relevant in a physical world introduces the problem of mental causation: the question of how mental phenomena can be causally efficacious, a question highly significant in psychology and cognitive neuroscience. It means creating a notion of agency that includes the causal efficacy of mental states. This causal efficacy of mental phenomena seems inconsistent with the vertical (interlevel, synchronic) determination of a mental state by its neural correlates. Philosopher Jaegwon Kim's supervenience (or exclusion) argument describes the problem: mental states are either causally inefficacious or threaten to overdetermine neural states. Either mental events play no horizontally determining causal role at all, or they are causes of the neural bases of their relevant horizontal mental effects. Contextual emergence through different levels of complexity means the conflict between horizontal and vertical determination of mental events isn't an issue. We can define proper mental states from the dynamics of an underlying neural system through statistical neural states on proper partitions associated with individual mental states.

This construction implies that the mental dynamics and the neural dynamics, related to each other by a so-called intertwiner, are topologically equivalent. Instead of a mutually exclusive duality of the mental and the neural, we have a monistic picture in which they are aspects of one and the same concept, albeit related to one another in a significant way. We can describe it as a dual-aspect monism in which symmetry breakdown is conceptually prior, the opposite of generalization. When symmetries between entities are restored, we observe the similarities brought about by the symmetries and generate equivalence classes of increasing size that can describe contextually emergent phenomena. Given properly defined mental states, the neural dynamics gives rise to a mental dynamics that is independent of those neurodynamical details that are irrelevant for a proper construction of mental states. Mental states can be causally and horizontally related to other mental states, and they neither cause their vertical neural determiners nor the horizontal effects of those neural determiners. This resolves the problem of mental causation in a deflationary manner: vertical and horizontal determination don't compete against one another. They work cooperatively.

Mental causation is a horizontal relation between previous and future mental states, with effectiveness given by the vertical relation (the downward determination of neural states by higher-level mental constraints). Psychophysically neutral elementary entities are composed into sets of such entities in a way that depends on the composition of these sets, whereby they acquire mental or physical properties. The psychophysically neutral domain does not have elementary entities waiting to be composed but, rather, one overarching whole to be decomposed into its parts. The emergence of the mental and the material from a psychophysically neutral whole is a contextual emergence that requires both a technical explanation and a metaphysical one.

The technical framework refers to the contextual emergence of multiplicity from unity. The “primordial” decomposition of an undivided whole generates different domains that give rise to differentiations, such as the mind-matter distinction. The psychophysically neutral reality is the trivial, completely symmetric partition in which nothing is distinguished from anything else. We can decompose this to give rise to more and more refined partitions in which symmetries are broken and equivalence classes become smaller and smaller. Phenomenal families of mental states emerge.

On a metaphysical level, mental and physical epistemic limits describe the undivided whole as an ontic (factually existing) dimension. They are reminiscent of Plato's perfect abstract ideas and Immanuel Kant's things-in-themselves (empirically inaccessible in principle and specifically mute). The mind-matter problem yields mind-matter correlations as a direct and immediate consequence of the ontic, undivided whole that can't be further divided without introducing more distinctions. Many describe determinism as a feature of ontic descriptions of states and observables, while stochasticity belongs to epistemic descriptions.

Mathematical models of classical point mechanics are the most common examples of deterministic descriptions, and three of their properties are important. (1) Differential dynamics: the system's evolution obeys a differential equation in a space of ontic states. (2) Unique evolution: initial and boundary conditions give a unique trajectory. (3) Value determinateness: any state can be described with arbitrarily small error. These three features define a hierarchy for the contextual emergence of deterministic descriptions. Assuming (1) as a necessary condition for determinism, (2) can be proven under the sufficient condition that trajectories created by a vector field obeying (1) pass through points whose distance is stable under small perturbations. We assume (2) for almost every initial condition as a necessary condition of determinism that defines a phase flow with weak causality. To prove (3), we need strong causality as a sufficient condition. The deterministic dynamics of a Kolmogorov flow implements microscopic chaos as a stability condition. It's also possible that a continuous stochastic process fulfilling the Markov criterion leads to a deterministic “mean-field equation.”
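The last point, a stochastic Markov process yielding a deterministic mean-field equation, is easy to sketch: a population of hypothetical independent two-state units whose empirical "on" fraction tracks a deterministic recursion as the population grows. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# N independent two-state Markov units. Per step, an "off" unit switches
# on with probability a; an "on" unit switches off with probability b.
# The population fraction obeys the deterministic mean-field recursion
# p <- p*(1 - b) + (1 - p)*a as N grows (law of large numbers).
N, a, b, steps = 100_000, 0.10, 0.05, 200
state = np.zeros(N, dtype=bool)   # all units start "off"

p_det = 0.0                       # deterministic mean-field trajectory
for _ in range(steps):
    u = rng.random(N)
    state = np.where(state, u >= b, u < a)      # stochastic Markov update
    p_det = p_det * (1 - b) + (1 - p_det) * a   # deterministic update

p_emp = state.mean()              # empirical fraction of "on" units
p_star = a / (a + b)              # fixed point of the mean-field equation
```

The microscopic description stays irreducibly stochastic, yet the macroscopic fraction converges on the deterministic fixed point a/(a+b), illustrating how a deterministic higher-level law can emerge from a stochastic lower level.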

Different descriptive levels can correlate with different degrees of granularity. Lower-level descriptions address systems in terms of micro-properties, while more global macro-properties account for higher-level descriptions. Philosopher Bas van Fraassen noted the relativity of explanation: explanations are not only relationships between theories and facts, but three-place relations between theories, facts, and contexts. Contexts determine the relevance of an explanation, backed by relevance criteria for reproducibility in science, especially in interdisciplinary fields such as bioinformatics or computational neuroscience. This gives a framework for discussing contextual emergence alongside theories and facts as they relate to explanations. We consider the granularity of descriptions when descriptive levels, and their associated granularities, are transformed into one another by the interlevel relation of contextual emergence. This gives a formally sound and empirically applicable procedure to construct level-specific criteria for relevant observables across disciplines.

Reductionism and ontology

It may seem appealing to reduce every system to its fundamental components and conclude that every empirical phenomenon in science or other disciplines is only applied mathematics. But this misses the features of the whole that emerge in the contexts of the higher layers and cannot be reduced. Consciousness among neural and mental correlates of different states provides one example, but we need only look at any example, such as the emergence of transcriptome interactions from the way a genome structures itself, to realize that these properties come about only at the higher levels and therefore involve phenomena that are not completely reducible to mathematics. Biologist Peter Corning argued in “The Re-Emergence of ‘Emergence’: A Venerable Concept in Search of a Theory” that whole systems produce unique combined effects that may involve the context between, and the interactions with, the system and its environment.

Contextual emergence was originally conceived as a relation between levels of description, not levels of nature: it addresses questions of epistemology rather than ontology. In agreement with Esfeld, who advocated that ontology should regain more significance in science, it would be desirable to know how ontological considerations might be added to the picture that contextual emergence provides.

Various degrees of granularity raise the question of whether descriptions with finer grains are more fundamental than those with coarser grains. The majority of scientists and philosophers of science believe this, holding that there is one fundamental ontology, that of elementary particle physics, to which other descriptive levels reduce. This reductive premise has produced critical assessments and alternative proposals. Philosopher Willard Van Orman Quine introduced ontological relativity: if there is one ontology that fulfills a given descriptive theory, there is more than one. Philosopher Hilary Putnam developed a related kind of ontological relativity, first called internal realism and later referred to as pragmatic realism.

We may apply Quine’s ideas to concrete scientific descriptions, their relationships with one another, and their referents. A descriptive framework can be ontic or epistemic depending on which other framework it relates to. An engineer may consider wires of an electrical circuit to be ontic, but a solid-state physicist may consider them epistemic. We can use the relevance criteria to distinguish between context-specific descriptions and avoid pitfalls of reductionism. We create a subtle and more flexible framework while still restricting ourselves to the premises and limits of the contextually emergent model.

Strong and weak emergence

Weak emergence involves emergent properties that computer simulations can derive, with the interacting cells of the system retaining their independence. Other emergent properties, irreducible to the system's constituent parts, are strong. Both are supervenient and involve novel properties as the system grows, but the distinction introduces a scale-dependency to observable phenomena.
