“…And it is of course not true that we have to follow the truth. Human life is guided by many ideas. Truth is one of them. Freedom and mental independence are others. If Truth, as conceived by some ideologists, conflicts with freedom, then we have a choice. We may abandon freedom. But we may also abandon Truth.” – “How to Defend Society Against Science”, Paul Feyerabend
Paul Feyerabend (1924-1994), having studied science at the University of Vienna, moved into philosophy for his doctoral thesis, made a name for himself both as an expositor and (later) as a critic of Karl Popper’s “critical rationalism”, and went on to become one of the twentieth century’s most famous philosophers of science. An imaginative maverick, he became a critic of philosophy of science itself, particularly of “rationalist” attempts to lay down or discover rules of scientific method.
Hilma af Klint, Group X, No. 1, Altarpiece, 1915
The son of a civil servant and a seamstress, Feyerabend took up reading as well as singing during his childhood. Having passed his final high school exams in March 1942, he was drafted into the Arbeitsdienst (the work service introduced by the Nazis), and sent for basic training in Pirmasens, Germany. Feyerabend opted to stay in Germany to keep out of the way of the fighting, but subsequently asked to be sent to where the fighting was, having become bored with cleaning the barracks! He even considered joining the SS, for aesthetic reasons. His unit was then posted at Quelerne en Bas, near Brest, in Brittany. Still, the events of the war did not register. In November 1942, he returned home to Vienna, but left before Christmas to join the Wehrmacht’s Pioneer Corps. Their training took place in Krems, near Vienna. Feyerabend soon volunteered for officers’ school, not because of an urge for leadership, but out of a wish to survive, his intention being to use officers’ school as a way to avoid front-line fighting. The trainees were sent to Yugoslavia. In Vukovar, during July 1943, he learnt of his mother’s suicide, but was absolutely unmoved, and obviously shocked his fellow officers by displaying no feeling.
In 1945, Feyerabend was shot in the hand and in the belly during the retreat from the Russian Army; one bullet damaged his spinal nerves. Two years later he returned to Vienna to study history and sociology at the University, later transferring to physics. His first article, on the concept of illustration in modern physics, was published around this time. Feyerabend could then have been described as “a raving positivist”. Under the influence of the Marxist philosopher Walter Hollitscher, however, the student found himself persuaded of the cogency of realism about the “external world” (Popper’s important arguments for realism came somewhat later). The considerations Hollitscher deployed were, first, that scientific research was conducted on the assumption of realism, and could not be otherwise conducted, and, second, that realism is fruitful and productive of scientific progress, whereas positivism was simply a commentary on scientific results, barren in itself.
Feyerabend received his doctorate in philosophy for his thesis on “basic statements” in 1951. He applied for a British Council scholarship to study under Wittgenstein at Cambridge, but Wittgenstein died before Feyerabend arrived in England, so Feyerabend chose Popper as his supervisor instead.
In 1975, Feyerabend published his first book, Against Method, setting out “epistemological anarchism”, whose main thesis was that there is no such thing as the scientific method. Great scientists are methodological opportunists who use any moves that come to hand, even if they thereby violate canons of empiricist methodology.
In his article, “How to Defend Society Against Science”, the philosopher sought to defend society and its inhabitants from all ideologies, science included. All ideologies must be seen in perspective. One must not take them too seriously. One must read them like fairytales which have lots of interesting things to say but which also contain wicked lies, or like ethical prescriptions which may be useful rules of thumb but which are deadly when followed to the letter.
Now, is this not a strange and ridiculous attitude? Science, surely, was always in the forefront of the fight against authoritarianism and superstition. It is to science that we owe our increased intellectual freedom vis-a-vis religious beliefs; it is to science that we owe the liberation of mankind from ancient and rigid forms of thought. Today these forms of thought are nothing but bad dreams, and this we learned from science. Science and enlightenment are one and the same thing; even the most radical critics of society believe this. Kropotkin wants to overthrow all traditional institutions and forms of belief, with the exception of science. Ibsen criticizes the most intimate ramifications of nineteenth-century bourgeois ideology, but he leaves science untouched. Lévi-Strauss has made us realize that Western Thought is not the lonely peak of human achievement it was once believed to be, but he excludes science from his relativization of ideologies. Marx and Engels were convinced that science would aid the workers in their quest for mental and social liberation. Are all these people deceived? Are they all mistaken about the role of science? Are they all the victims of a chimaera?
To these questions my answer is a firm Yes and No.
Now, let me explain my answer.
The explanation consists of two parts, one more general, one more specific.
The general explanation is simple. Any ideology that breaks the hold a comprehensive system of thought has on the minds of men contributes to the liberation of man. Any ideology that makes man question inherited beliefs is an aid to enlightenment. A truth that reigns without checks and balances is a tyrant who must be overthrown, and any falsehood that can aid us in the overthrow of this tyrant is to be welcomed. It follows that seventeenth- and eighteenth-century science indeed was an instrument of liberation and enlightenment. It does not follow that science is bound to remain such an instrument. There is nothing inherent in science or in any other ideology that makes it essentially liberating. Ideologies can deteriorate and become stupid religions. Look at Marxism. And that the science of today is very different from the science of 1650 is evident at the most superficial glance.
For example, consider the role science now plays in education. Scientific “facts” are taught at a very early age and in the very same manner in which religious “facts” were taught only a century ago. There is no attempt to awaken the critical abilities of the pupil so that he may be able to see things in perspective. At the universities the situation is even worse, for indoctrination is here carried out in a much more systematic manner. Criticism is not entirely absent. Society, for example, and its institutions, are criticized most severely and often most unfairly, and this already at the elementary school level. But science is excepted from the criticism. In society at large the judgement of the scientist is received with the same reverence as the judgement of bishops and cardinals was accepted not too long ago. The move towards “demythologization,” for example, is largely motivated by the wish to avoid any clash between Christianity and scientific ideas. If such a clash occurs, then science is certainly right and Christianity wrong. Pursue this investigation further and you will see that science has now become as oppressive as the ideologies it had once to fight. Do not be misled by the fact that today hardly anyone gets killed for joining a scientific heresy. This has nothing to do with science. It has something to do with the general quality of our civilization. Heretics in science are still made to suffer from the most severe sanctions this relatively tolerant civilization has to offer.
Wolfgang Tillmans, Philharmonie Bloch III, 2017.
Is this unfair? Have I not presented the matter in a very distorted light by using tendentious and distorting terminology? Must we not describe the situation in a very different way? I have said that science has become rigid, that it has ceased to be an instrument of change and liberation, without adding that it has found the truth, or a large part thereof. Considering this additional fact we realize, so the objection goes, that the rigidity of science is not due to human will. It lies in the nature of things. For once we have discovered the truth, what else can we do but follow it?
This trite reply is anything but original. It is used whenever an ideology wants to reinforce the faith of its followers. “Truth” is such a nicely neutral word. Nobody would deny that it is commendable to speak the truth and wicked to tell lies. Nobody would deny that, and yet nobody knows what such an attitude amounts to. So it is easy to twist matters and to change allegiance to truth in one’s everyday affairs into allegiance to the Truth of an ideology which is nothing but the dogmatic defense of that ideology. And it is of course not true that we have to follow the truth. Human life is guided by many ideas. Truth is one of them. Freedom and mental independence are others. If Truth, as conceived by some ideologists, conflicts with freedom, then we have a choice. We may abandon freedom. But we may also abandon Truth. (Alternatively, we may adopt a more sophisticated idea of truth that no longer contradicts freedom; that was Hegel’s solution.) My criticism of modern science is that it inhibits freedom of thought. If the reason is that it has found the truth and now follows it, then I would say that there are better things than first finding, and then following such a monster.
This is the story of how I won. This is the story of how I spoke out against wrongdoing that sought to hurt me fundamentally as a human being. I overcame these struggles with the fearlessness that has been given to me. The world is full of moral ambiguities and existential horrors. Yet I made the right decisions at the right time in such a way that I found success and happiness.
I’m an Indian American Muslim male. During my junior year of college at Indiana University-Bloomington, I was also a physics-philosophy double-major on a pre-med track. I became interested in the purpose of a college education and began doing research on the history and philosophy of education to find answers to questions I pondered, such as: What is the purpose of volunteering, grades, and extracurriculars? Why do we learn the way we do? How do we use these classes to help us realize those things? I spoke with philosophers, scientists, professors, and other professionals to gather information about these issues from them, too. I’ve written on these topics: complacency, academic freedom, advice for incoming freshmen, and rhetoric in our models of learning.
I tried starting a conversation in a premed club I was part of, but they retaliated against me. They isolated me, manipulated me, told lies about me, and reported to the Dean that I was harassing them. They did this mostly out of insecurity about the questions I was raising, but also because I was presenting well-researched, justified beliefs that contradicted theirs. Through months of them ignoring the issues I wanted to raise and the discussions I wanted to have, I felt even more disillusioned. The Dean proceeded to criticize, interrupt, mock, and interrogate me with force. She said I was acting “bizarre” and called my story “twisted.” She didn’t give me a chance to defend myself. She’d laugh at me when I tried explaining how my friends were making up lies about me to silence me. She interrupted me so often that I couldn’t even finish what I was saying. She continued this behavior for months over email and in person. I was traumatized. The university charged me with harassment and stalking. They let me off with a warning, but they required that I start therapy with a social worker who had no graduate training so I could better myself. I had no choice but to blame myself for everything and agree with whatever the Dean told me. Throughout all of this, I had no chance to defend myself against any claim others made about me.
That’s when things got worse. I felt the pain, fear, anxiety, and distrust spreading into other parts of my life. Even when I tried doing positive things (like exercising and meditating) I felt the mocking voice of the Dean resonating in my head. I began sleeping 10-12 hours a day, stopped praying and exercising, ate less healthily, went to class less, and lost sight of the purpose of my classes. It got to the point where I wasn’t doing any studying and felt my blood boiling in my lectures. I had no idea what I was suffering from.
I couldn’t do anything to defend myself because I feared repercussions and abuse from the Dean of Students. My friends didn’t know how to help me, so they isolated themselves from me. My professors watched as my grades dropped, and I could barely will myself out of bed for the last two years of college. Not having answers to my questions about the purpose of a college education started taking its toll on me. And the toxicity of the environment toward me just made me scared of myself. In hindsight, my therapist didn’t help much. He mostly talked about superficial things like social skills, didn’t take notes, gave me a blank stare most of the time, and only tried to keep me out of trouble instead of understanding me. He’d say things like “Oh, people are idiots,” and he even believed in astrology.
It’s now been over a year since I graduated. I’ve been working at the National Institutes of Health while taking weekly therapy sessions, of my own volition, with a therapist who has a PhD and decades of experience. This therapist is amazing, like a modern-day Sigmund Freud, in how he gives detailed answers, speaks truthfully and with justification, and has amazing skills in rhetoric.
After I graduated, I struggled with coming to terms with difficult events in my past, figuring out what my purpose is, and trying my best to prepare for a successful career as a scientist. My doubts lingered. What am I looking for? I was tired of asking that question. I was tired of all the crazy things it led to in my life. How could I trust that anyone truly wanted to support me? During this time, I opened up. I began speaking to officials from Indiana University-Bloomington about my experience. I told them about how I had tried to answer questions related to the purpose of college education and my pre-medical friends retaliated against me. I told them how the questions I wanted to talk about weren’t some kind of side hobby or interest of mine but actual fundamental pieces of any student’s essential education. I needed the opportunity, even the right, to ask them in order to further my education, given how negatively the unanswered questions were affecting my life. Every time I wrote out my story or spoke to someone about it, I felt like I needed a glass of water or needed to take a walk. I also told them I wasn’t trying to accomplish anything in particular. My sole intention was to share the story because it was the right thing to do.
In August I got off the phone with a senior investigator from the university. I had explained to her everything that happened. She said what I went through was egregiously wrong and should never have happened to anyone. She said they were going to require racial and religious bias training of the Dean and the other staff members who were involved. She said this because the Dean and the pre-medical student who bullied me were both white women. They said they were going to keep a close check on all of the Dean’s communication in all its forms. I told the investigator I didn’t want any more input.
It took three years. But I finally got my voice heard and taken seriously by the university. That’s all I needed to know that I won. I began to realize the university would probably handle issues related to the purpose of a college education much differently from now on. They would recognize the struggle of students who don’t see a purpose in anything anymore, and recognize that as a valid, vulnerable position that needs to be defended and protected, so that students can make the world a better place and achieve their goals. That was proof the university could act in the way I wanted it to, and that my goals could be achieved. The university took my side in making the future a brighter place, not just for me but for anyone who wishes to learn. I never knew whether I was truly a victim of racism or Islamophobia, but the university’s taking the issue of racial or religious bias seriously at least satisfied me.
I want to take a sigh of relief and say I’m fine now, but it’s still going to take me a while to figure myself out. It’s gonna take some visits to coffee shops and long walks. It took me a while to get back on my feet, though. I began eating well, exercising, studying, and performing research spanning science and philosophy. I won. Let this be a victory for everything a university should stand for. Let the future be brighter for students who wish to learn and grow. The past is heavy, but the future is greater. And I will no longer be shackled by fear. I want to extend my gratitude to everyone who supported me along the way. I want to thank my current therapist most of all. And thank you for reading this. It really means a lot to me.
In my current research on the zebrafish brain, I’m creating a mapping from parts of the brain to the genes expressed there, using mathematics and statistics. Devising theoretical models this way raises difficulties about the accuracy and precision of those models. This model of zebrafish neuroscience holds insight for our methods of using the organism to study psychiatric disorders. In understanding phenomena of the brain, neuroscientists have various methods of both explaining and describing the brain’s causal mechanisms. Our accounts of how the brain interacts with stimuli (such as visual imagery or sounds) and creates its own effects (such as neuronal responses) need to be precise if we are to determine the nature of the phenomena we empirically observe. The model-to-mechanism-mapping (3M) constraint is one such method.
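To make the statistical shape of such a region-to-gene mapping concrete, here is a minimal least-squares sketch: given per-region activity measurements and per-region expression levels of a few candidate genes, estimate which genes best account for the signal. Everything here (the region count, gene count, and weights) is invented for illustration; this is not my actual zebrafish pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: activity of 50 brain regions, expression of 5 genes.
n_regions, n_genes = 50, 5
expression = rng.normal(size=(n_regions, n_genes))   # expression per region
true_weights = np.array([2.0, 0.0, -1.0, 0.0, 0.5])  # only some genes matter
activity = expression @ true_weights + 0.1 * rng.normal(size=n_regions)

# Least-squares fit: which genes' expression best predicts regional activity?
weights, residuals, rank, _ = np.linalg.lstsq(expression, activity, rcond=None)
print(np.round(weights, 2))  # close to true_weights
```

Even when the recovered weights are accurate, the 3M question remains: do these fitted coefficients correspond to components of a mechanism, or do they merely summarize the data?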
In this post I will argue that satisfying the 3M constraint gives us predictive and explanatory power in neuroscience that can be extended to cognitive science, psychology, and (at least as an open question) consciousness. I’ll use various examples from neuroscience to support its predictive power. I’d also like to relate this predictive power to, at the very least, a basic form of consciousness. I hope to elucidate current findings in both science and philosophy as they relate to consciousness itself. We can begin this inquiry with an overview of these neuroscientific explanations, then proceed to basic questions of how neuroscience relates to consciousness and what empirical evidence bears on the problem. Finally, we conclude with the limits scientists and philosophers currently face, and what anyone can do to meet them.
Alchemical Illustration from the Emerald Tablet of Hermes.
The Tablet had such an impact on the minds of history’s greatest philosophers, esotericists, and mystical thinkers that it became the esoteric industry standard for every medieval and later Renaissance system of alchemy.
3M makes two claims. The first is that the variables in the model correspond to identifiable components, activities, and organizational features that produce, maintain, or underlie the phenomenon. The second is that the mathematical dependencies posited among these variables in the model correspond to causal relations among the components of the mechanism. This model-to-mechanism-mapping (3M) constraint embodies widely held commitments about the requirements on mechanistic explanations and gives those commitments more precision.
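As a toy illustration of the constraint’s logical form (not a serious formalization), 3M can be read as two checkable conditions on a model: every variable must map to some mechanism component, and every posited dependency must map to a causal relation. The variable names below are loosely inspired by Hodgkin-Huxley and are purely illustrative.

```python
# Toy 3M check. Clause 1 maps model variables to mechanism components;
# clause 2 maps mathematical dependencies between variables to causal relations.
variable_to_component = {
    "V": "membrane potential of the cell",
    "g_Na": "sodium channel population",
    "g_K": "potassium channel population",
}
dependency_to_cause = {
    ("g_Na", "V"): "Na+ conductance changes drive depolarization",
    ("g_K", "V"): "K+ conductance changes drive repolarization",
}

def satisfies_3m(variables, dependencies):
    clause1 = all(v in variable_to_component for v in variables)
    clause2 = all(d in dependency_to_cause for d in dependencies)
    return clause1 and clause2

print(satisfies_3m({"V", "g_Na", "g_K"}, {("g_Na", "V"), ("g_K", "V")}))
print(satisfies_3m({"V", "hidden_x"}, set()))  # fails: unmapped variable
```

A model with an unmapped variable ("hidden_x") may still describe or predict, but on the 3M view it does not yet explain.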
3M is much more than an arbitrary rule imposed on scientific theory, as David Kaplan, Lecturer in the Department of Cognitive Science at Washington University in St. Louis, explains. The demand follows from the many limitations on how predictions are formed and from the conspicuous absence of an alternative model of explanation that satisfies scientific-commonsense judgments about the adequacy of explanations without ultimately collapsing into the mechanistic alternative. Compliance with the 3M constraint has considerable utility for understanding the explanatory force of models in computational neuroscience, and for distinguishing models that explain from those playing merely descriptive and/or predictive roles. Conceiving of computational explanation in neuroscience as a species of mechanistic explanation also highlights and clarifies the pattern of model refinement and elaboration undertaken by computational neuroscientists. Under 3M, we can generally expect that the more accurately and in more detail a model represents its target system, the more effectively it explains the phenomenon.
One of the biggest setbacks of machine learning, as I’ve explained, is that models often describe a data set very well, and can even be used for prediction, without explaining anything. Scientists and philosophers debate whether 3M-satisfying models can explain phenomena in addition to describing them. I believe that dynamical and mathematical models in systems and cognitive neuroscience can generally explain a phenomenon only if there is a plausible mapping between elements in the model and elements in the mechanism for the phenomenon. In 1983, Professor of Psychology Philip Johnson-Laird expressed what was then a mainstream perspective on computational explanation in cognitive science: “The mind can be studied independently from the brain.” The extent to which this is true (a view we call computational chauvinism, as Piccinini did in 2006) can be tested against our theoretical models of genetic mapping in the brain. However, we can argue that forms of this computational chauvinism hold true as we bridge the gap between computational explanations and cognitive science. Our human cognitive capacities can be characterized independently of how they are implemented in the brain. By delineating this computational chauvinism from the predictive power of 3M-constrained models, neuroscientists can gain more power in their explanations of the brain.
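The gap between description/prediction and explanation can be seen in miniature with synthetic data: a degree-9 polynomial fitted to a sine wave predicts well inside its fitted range, yet its coefficients correspond to nothing in the generating process, and it collapses outside that range. The data here are invented for illustration.

```python
import numpy as np

# Generating process (the "mechanism" behind the data): a plain sine wave.
x_train = np.linspace(0, 2 * np.pi, 50)
y_train = np.sin(x_train)

# Descriptive model: a degree-9 polynomial fitted to the same data.
coeffs = np.polyfit(x_train, y_train, deg=9)

inside = np.polyval(coeffs, np.pi / 2)    # within the fitted range
outside = np.polyval(coeffs, 4 * np.pi)   # extrapolation beyond it

print(abs(inside - 1.0))                  # small: good in-range prediction
print(abs(outside - np.sin(4 * np.pi)))   # large: the fit explains nothing
```

The polynomial satisfies neither 3M clause: its coefficients map onto no components, and its term-by-term dependencies map onto no causal relations, which is why its predictive success does not travel beyond the data it summarizes.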
Computational chauvinism comprises three claims: (1) computational explanation in psychology is independent of neuroscience, (2) computational notions are uniquely appropriate to psychological theory, and (3) computational explanations of cognitive capacities in psychology embody a distinct form of explanation, separate from neuroscientific, biological, and mechanistic explanations. These neuroscientific forms of explanation should prove insightful for the two questions of consciousness, as explained by philosopher David Chalmers: the generic and the specific. Generic consciousness turns on the question of how neural properties explain being in a conscious state at all; the specific form, on how they explain the content of the conscious state itself. To show that a computational analysis of neuroscience is possible, especially in the realm of consciousness, we need to refute the challenge of computational chauvinism.
Computational chauvinism shares connections with functionalism, once the dominant position among philosophers of mind (Putnam 1960). Functionalism, the view that what a mental state is is determined by the way it functions, can be used to support the conclusion that neuroscientific data may be abandoned. The Canadian philosopher Zenon Pylyshyn also argues for these connections between computational chauvinism and functionalism. They follow from the functionalist belief that psychology can explain phenomena independently of neuroscientific evidence. Drawing the analogy that the brain is like a computer, we imagine the functions of the mind as running software. Computationalist neuroscientists believe the brain can be modeled as a computer. If psychological explanation can proceed without regard to neuroscience, then the brain is only the computer’s hardware and nothing else; cognitive science would be the software that emerges. With this computer analogy, the functionalist would argue that neural and computational findings are mostly irrelevant to psychology and cognitive science. At best, they may play a minor role in extreme cases of brain physiology. I will argue against functionalism to show the potential explanatory power of computational neuroscience.
More difficulties arise for our notions of objectivity where consciousness is concerned. At best, we can only observe behavior that tracks consciousness. We must use introspective forms of reasoning and thinking to relate these subjective experiences to objective ideas and models of consciousness, while appropriately measuring subjective reports of consciousness. If I am to continue to stand by the explanatory power of computational neuroscience, it should hold the potential to bridge this gap between the subjective and the objective. Because neuroscience, in its breadth, covers all ways of studying the brain, the nervous system, and the constituents that make them up, we can look to the physical and mechanistic properties of the cerebral cortex for evidence of perceptual consciousness. My previous work on stochastic models of the brain should serve as a worthy example of this in the case of vision. Looking at the general state of empirical work, especially as it relates to vision, gives us a starting point for describing this consciousness.
I believe I can defend the 3M constraint on explanatory mechanistic models because it marks the difference between phenomenological and mechanistic models, as well as distinguishing between merely possible and actual models. Phenomenological models provide descriptions of phenomena; as philosopher Mario Bunge argues, they describe the behavior of a target system without any unobservable variables (similar to the hidden variables I’ve described with causal models). In computational neuroscience, descriptive models (which summarize data effectively) differ from mechanistic models (which explain how neuroscientific systems work).
I cite the 1999 textbook Spikes: Exploring the Neural Code as a seminal book in the scientific theories of computational neuroscience. The book sought to measure signals and responses from the nervous system and analyze the resulting spike trains. It uses several examples, such as Gaussian waveform patterns and variations of the Hodgkin-Huxley model of neuronal firing. That model uses mathematics and membrane conductances to explain how action potentials are fired by neurons. Hodgkin and Huxley, who won the Nobel Prize in 1963, performed work very closely related to the biophysicist Richard FitzHugh’s work in the 1960s. FitzHugh reduced the Hodgkin-Huxley model so that it could be visualized in phase space, with all its variables viewed at once, and from this be used for more accurate and detailed predictions. I also believe this work grounded the qualitative features of neurons in the topological properties of their corresponding phase space.
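FitzHugh’s two-variable reduction (the FitzHugh-Nagumo model) can be integrated in a few lines; with only two state variables, the whole trajectory lives in a plane, which is exactly the phase-space visualization just described. The parameter values below are conventional textbook choices, not taken from the sources I cite.

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, tau=12.5, dt=0.01, steps=20000):
    """Euler-integrate the FitzHugh-Nagumo reduction of Hodgkin-Huxley:
    v is the fast voltage-like variable, w the slow recovery variable."""
    v, w = -1.0, -0.5
    trajectory = []
    for _ in range(steps):
        dv = v - v**3 / 3 - w + I   # fast cubic (voltage) dynamics
        dw = (v + a - b * w) / tau  # slow linear recovery
        v += dt * dv
        w += dt * dw
        trajectory.append((v, w))
    return np.array(trajectory)

traj = fitzhugh_nagumo()
# With I = 0.5 the model sits past threshold and fires repeatedly,
# so v sweeps through a wide range of the (v, w) plane:
print(traj[:, 0].min(), traj[:, 0].max())
```

Plotting `traj[:, 0]` against `traj[:, 1]` shows the closed loop (limit cycle) whose topology FitzHugh used to classify the qualitative behavior of the neuron.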
Kaplan argues that the model’s predictive accuracy does not settle the matter. While it may generate accurate predictions about the action potential in the axonal membrane of the squid giant axon (their experimental system) to within roughly ten percent of its experimentally measured value, the critical question is whether it explains. Despite these features of the Hodgkin-Huxley model, the equations don’t explain how voltage changes membrane conductance. Scientists and philosophers who wish to use the predictive power of models in neuroscience require models to reveal the causal structures responsible for the phenomena themselves. Still, the Hodgkin-Huxley equations continue to provide inspiration for interesting problems in mathematics and physics.
Electrical activity can be physically measured from neurons, and the scientists needed a way of determining the “spikes” (as the title suggests) in the resulting data. At the time the book was written there were many other features of neurons, neural networks, and brains that one would need to understand as well, no question about that. But the book sought to explain spike (action potential) timing with as much accuracy and precision as possible. As neurons fire and send signals, they produce an action potential created by the difference in charge along the neuron. Using mathematical descriptions such as the Bayesian formalism, the authors show how to make sense of neural data, using probabilistic approaches to explain how stimuli may be predicted. Sensory neurons govern vision, and we can gauge information processing by observing the potentials of their receptive fields. The various electrical properties discussed in the book, such as spike rates, local field potentials, and the blood-oxygen-level-dependent (BOLD) signal, especially from groups of neurons and in how they relate to one another, provide the basis for these explanations of consciousness. Though Spikes was published in 1999, as far back as 1990 the biologist Francis Crick and the neuroscientist Christof Koch were describing how groups of neurons function together. Though these properties can be quantified mathematically, exactly how they jointly relate to consciousness is not completely understood.
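A minimal version of this Bayesian reading of spike data: given a tuning curve (expected spike count per stimulus) and an observed count, Bayes’ rule yields a posterior over stimuli. The stimuli and tuning curve here are invented for illustration; the book’s own analyses are far more sophisticated.

```python
import math

# Hypothetical stimuli and a tuning curve: expected spike count per stimulus.
stimuli = ["left", "center", "right"]
expected_counts = {"left": 2.0, "center": 8.0, "right": 15.0}
prior = {s: 1 / 3 for s in stimuli}  # uniform prior over stimuli

def poisson_pmf(k, lam):
    return lam**k * math.exp(-lam) / math.factorial(k)

def posterior(observed_count):
    """P(stimulus | spike count) via Bayes' rule with a Poisson likelihood."""
    unnorm = {s: poisson_pmf(observed_count, expected_counts[s]) * prior[s]
              for s in stimuli}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

post = posterior(7)  # observe 7 spikes in the counting window
print(max(post, key=post.get))  # "center" is the most probable stimulus
```

Reading the posterior off a spike count is exactly the sense in which the stimulus "may be predicted" from neural data: the decoder quantifies what the spike train says about the world, without yet claiming a mechanism.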
However, these neural sensory systems (the groups of neurons, pathways, and other parts involved in perception) do carry information about the subject’s environment. From this information we can describe neural representations: the ways patterns of neural activity come to correspond to, and represent, the external stimuli we readily observe. The closeness of this relationship, though, is hotly debated. Philosopher Rosa Cao has argued, for example, that neurons have little or no access to semantic information about the world. Cao has also raised questions about what sort of functional units arise in describing neural representation. A very simple example I put forward is that information (in this case, a representation of the relevant aspect of the stimulus that causes a neural response) is carried through series of spike potentials in the brain. Among the models built from such data is the Dehaene-Changeux model, which has been shown to create a global workspace for consciousness. On this explanation, a state must be accessible to be considered a conscious state. A system X accesses content from system Y if (and only if) X uses that content in its computations/processing. The content must be “globally” accessible to multiple systems, including long-term memory, motor, evaluational, attentional, and perceptual systems (Dehaene, Kerszberg, & Changeux 1998; Dehaene & Naccache 2001; Dehaene et al. 2006). This holds regardless of whether the access is phenomenal.
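The access condition just stated has a simple set-theoretic shape, sketched below purely as an illustration of the definition (with hypothetical system names and contents), not as a model of consciousness.

```python
# Toy version of the global-workspace access condition: a content counts as
# "globally accessible" only when every consumer system actually uses it.
REQUIRED_SYSTEMS = {"long_term_memory", "motor", "evaluational",
                    "attentional", "perceptual"}

def globally_accessible(content, usage):
    """usage maps each system name to the set of contents that system
    uses in its processing (the 'X uses content from Y' relation)."""
    return all(content in usage.get(system, set())
               for system in REQUIRED_SYSTEMS)

usage = {s: {"red square"} for s in REQUIRED_SYSTEMS}
usage["motor"] = {"red square", "tone"}  # 'tone' reaches only the motor system

print(globally_accessible("red square", usage))  # broadcast to all systems
print(globally_accessible("tone", usage))        # fails: not global
```

The definition deliberately says nothing about phenomenality; the check above is satisfied or not regardless of whether the access is "like" anything.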
Though I can’t make any claims about 3M directly in its relation to models of consciousness, I believe scientists and philosophers should begin observing the 3M criteria in their studies of consciousness. Researchers of any kind can raise questions about the explanatory power of these methods of describing physiological phenomena. We need deep, precise explanations for our theories as they relate to forming predictions. Then we can venture into the domain of consciousness with much more insight than without. Debates continue among mechanistic, dynamical, and predictivist accounts of explanation, and between functional and structural taxonomies. For all their flaws and limitations, mechanistic models of the brain still provide beneficial results for the core issues in philosophy of neuroscience, including explanation, methodology, computation, and reduction.
Sources
Bunge, M. (1964). Phenomenological Theories. In (Ed.) M. Bunge, The Critical Approach: In Honor of Karl Popper. New York: Free Press
Cao, Rosa. (2012). “A Teleosemantic Approach to Information in the Brain”, Biology and Philosophy, 27(1): 49–71. doi:10.1007/s10539-011-9292-0
––– (2014). “Signaling in the Brain: In Search of Functional Units”, Philosophy of Science, 81(5): 891–901. doi:10.1086/677688
Chalmers, David J. (1995). “Facing up to the Problem of Consciousness”, Journal of Consciousness Studies, 2(3): 200–219.
Crick, Francis and Christof Koch. (1990). “Toward a Neurobiological Theory of Consciousness”, Seminars in the Neurosciences, 2: 263–275.
Dehaene, Stanislas and Jean-Pierre Changeux. (2011). “Experimental and Theoretical Approaches to Conscious Processing”, Neuron, 70(2): 200–227. doi:10.1016/j.neuron.2011.03.018
Dehaene, Stanislas, Jean-Pierre Changeux, Lionel Naccache, Jérôme Sackur, and Claire Sergent. (2006). “Conscious, Preconscious, and Subliminal Processing: A Testable Taxonomy”, Trends in Cognitive Sciences, 10(5): 204–211. doi:10.1016/j.tics.2006.03.007
Dehaene, Stanislas, Michel Kerszberg, and Jean-Pierre Changeux. (1998). “A Neuronal Model of a Global Workspace in Effortful Cognitive Tasks”, Proceedings of the National Academy of Sciences of the United States of America, 95(24): 14529–14534. doi:10.1073/pnas.95.24.14529
Fitzhugh, R. (1960). Thresholds and plateaus in the Hodgkin-Huxley nerve equations. The Journal of General Physiology 43 (5), 867–896.
Fitzhugh, R. (1961). Impulses and physiological states in theoretical models of nerve membrane. Biophysical journal 1 (6), 445–466.
Haken, Hermann, J. A. Scott Kelso, and H. Bunz. (1985). “A Theoretical Model of Phase Transitions in Human Hand Movements.” Biological Cybernetics 51 (5): 347–56.
Johnson-Laird, P.N. (1983). Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. New York: Cambridge University Press.
Kaplan, D. M. (2010). “The Explanatory Force of Dynamical and Mathematical Models in Neuroscience: A Mechanistic Perspective”, Philosophy of Science, 78(4).
Piccinini, G. (2006). Computational explanation in neuroscience. Synthese 153:343–353.
—– (2007). Computing mechanisms. Philosophy of Science 74:501–526.
Piccinini, G., and Craver, C.F. (forthcoming). Integrating psychology and neuroscience: functional analyses as mechanism sketches. Synthese.
Putnam, H. (1960). Minds and machines. Reprinted in Putnam, 1975, Mind, Language, and Reality. Cambridge: Cambridge University Press.
Pylyshyn, Z. W. (1984). Computation and Cognition. Cambridge, MA: MIT Press.
There are generally two types of machine learning. In supervised learning, we have a labeled dataset: we already have data from which to develop models using algorithms such as Linear Regression and Logistic Regression. With such a model, we can make predictions like: given data on housing prices, what will a house with a given set of features cost? Unsupervised learning, on the other hand, doesn’t have a labeled dataset; the model just needs to find patterns in the data. We do this with clustering algorithms such as K-Means to solve problems like grouping the users of an online shopping portal according to their behavior. But what if we don’t have much data? What if we are dealing with a dynamic environment and the model needs to gather data and learn in real time? Enter reinforcement learning. In this post, I’ll take a look at the basics of what reinforcement learning is, how it works, and some of its practical applications.
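As a minimal sketch of the two settings, here is a toy scikit-learn example on synthetic data. The house sizes, prices, and "user behavior" features are all made up for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Supervised: labeled data (house size -> price), fit a regression model.
size = rng.uniform(50, 200, size=(100, 1))              # square meters
price = 3000 * size[:, 0] + rng.normal(0, 10000, 100)   # noisy prices
model = LinearRegression().fit(size, price)
predicted = model.predict([[120.0]])                    # price of a 120 m^2 house

# Unsupervised: no labels; group "users" by two behavioral features.
users = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(users)
```

Note the asymmetry: the regression is scored against known answers, while the clustering can only be judged by how well the discovered groups hang together.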
Reinforcement Learning through Super Mario
Comparison with other Machine Learning Techniques
With reinforcement learning, there is no supervisor to tell you whether you did right or wrong. If you did well, you get a reward; otherwise you don’t, and if you did terribly, you might even get a negative reward. Reinforcement learning also adds another dimension: time. It can be thought of as sitting between supervised and unsupervised learning. Whereas in supervised learning we have labeled data and in unsupervised learning we don’t, in reinforcement learning we have time-delayed labels, which we call rewards. RL has the concept of delayed rewards: the reward we just received may not depend on the last action we took. It is entirely possible that the reward came because of something we did 20 iterations ago. As you move through Super Mario, you’ll find instances where you hit a mystery box, keep moving forward, and the mushroom moves along and finds you. It is the series of actions that started with Mario hitting the mystery box that resulted in him getting stronger after a certain time delay. The choice we make now also affects the set of choices we have in the future: if we choose a different set of actions, we will be in a completely different state, and the inputs to that state and where we can go from there differ. If Mario hits the mystery box but chooses not to move forward when the mushroom begins to move, he’ll miss the mushroom and won’t get stronger. The agent is now in a different state than he would have been had he moved forward.
If you were playing Super Mario Bros. for the first time, you might have started with a clean slate, not knowing what to do. You see an environment in which you, as Mario, the agent, have been placed, consisting of bricks, coins, mystery boxes, pipes, sentient mushrooms called Goombas, and other elements. You begin taking actions in this environment by pressing a few keys, until you realize you can move Mario left and right with the arrow keys. Every action you take changes the state of Mario. You moved to the extreme left at the beginning, but nothing happened, so you started moving right. You tried jumping onto a mystery box, after which you got a reward in the form of coins. Now you’ve learned that every time you see a mystery box, you can jump and earn coins. You continued moving right and then collided with a Goomba, after which you got a negative reward (also called a punishment) in the form of death. You could start all over again, but by now you’ve learned that you must not get too close to the Goomba; you should try something else. In other words, you have been “reinforced”. Next, you try to jump over the Goomba using the bricks, but then you’d miss the reward from the mystery box. So you need to formulate a new policy, one that’ll give you the maximum benefit: one that gets you the reward and doesn’t get you killed. So you wait for the perfect moment to go under the bricks and jump over the Goomba. After many attempts, you take one such action that causes Mario to stomp on the Goomba, killing it. And then you have an ‘Aha’ moment; you’ve learned how to kill the threat, and now you can also get your reward. You jump, and this time it’s not a coin but a mushroom. You go back over the bricks and eat the mushroom. You get an even bigger reward; Mario’s stronger now. This is the whole idea of reinforcement learning: a goal-oriented approach that, over many iterations, learns behavior that maximizes the chances of attaining the goal.
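The Mario story maps directly onto tabular Q-learning. Below is a toy sketch of my own (a drastic simplification, not how one would train on the real game): a one-dimensional "level" where stepping on the Goomba cell costs a penalty and reaching the flag earns a reward. The behavior policy explores uniformly at random; because Q-learning is off-policy, the greedy policy read off the learned Q-table is still the optimal one.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, GOAL, GOOMBA = 6, 5, 3   # cells 0..5; Goomba on cell 3, flag on cell 5
ACTIONS = [-1, +1]                 # move left, move right
Q = np.zeros((N_STATES, 2))        # Q-table: estimated return per (state, action)
alpha, gamma = 0.5, 0.9            # learning rate, discount factor

for episode in range(500):
    s = 0
    while s != GOAL:
        a = int(rng.integers(2))                        # explore at random
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)  # walls at both ends
        r = 10.0 if s2 == GOAL else (-1.0 if s2 == GOOMBA else 0.0)
        # Q-learning update: bootstrap from the best action in the next state.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

# The learned greedy policy moves right (action index 1) in every
# non-terminal state: the delayed flag reward outweighs the Goomba penalty.
policy = [int(Q[s].argmax()) for s in range(N_STATES - 1)]
print(policy)
```

Notice the delayed-reward point from above in miniature: the Goomba step has an immediate cost, but the discounted value of the flag propagates backward through the Q-table until moving right is worth it everywhere.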
Using trial and error, reinforcement learning learns much like how humans do.
AlphaGo
Reinforcement learning hit the big time in March 2016 when DeepMind’s AlphaGo, trained using RL, defeated 18-time world champion Go player Lee Sedol 4–1. The game of Go turned out to be far harder for a machine to master than games like chess, simply because there are too many possible moves and too many possible states the game can be in.
Just like Mario, AlphaGo learned through trial and error, over many iterations. AlphaGo doesn’t know the best strategy, but it knows whether it won or lost. A naive tree search would check every possible move and see which is better. On a 19×19 Go board, there are 361 possible first moves; for each of these, there are 360 possible second moves, and so on. In all, there are about 4.67×10^385 possible games; that’s far too many. Even with its advanced hardware, AlphaGo cannot try every single move there is. So it uses another kind of tree search, Monte Carlo Tree Search, in which only the most promising moves are tried out. Each time AlphaGo finishes a game, it updates its record of how many games each move won. After many iterations, AlphaGo has a rough idea of which moves maximize its chance of winning.
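The "try the most promising moves more often" idea inside Monte Carlo Tree Search is usually driven by a bandit rule such as UCB1. Here is a stripped-down sketch of just that selection step: a single-level bandit over three hypothetical moves with made-up win probabilities, not AlphaGo's actual search.

```python
import math
import random

random.seed(1)

def ucb1(wins, plays, total_plays, c=1.4):
    """Win rate plus an exploration bonus that shrinks as a move is tried more."""
    if plays == 0:
        return float("inf")   # every move gets tried at least once
    return wins / plays + c * math.sqrt(math.log(total_plays) / plays)

true_p = [0.2, 0.7, 0.4]      # hypothetical per-move win probabilities
wins, plays = [0, 0, 0], [0, 0, 0]

for _ in range(4000):
    total = sum(plays) + 1
    move = max(range(3), key=lambda m: ucb1(wins[m], plays[m], total))
    plays[move] += 1
    wins[move] += random.random() < true_p[move]   # simulate one playout

# The move with the highest win rate ends up played the most,
# but the weaker moves are never starved of exploration entirely.
print(plays)
```

This is exactly the record-keeping described above: after each simulated game, the chosen move's win and play counts are updated, and the counts steer the next selection.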
AlphaGo first trained itself by imitating historic games played between real players. After this, it started playing against itself, and after many iterations it learned the best moves to win a Go match. Before playing against Lee Sedol, AlphaGo played against and defeated professional Go player Fan Hui 5–0 in 2015. At the time, people didn’t consider it a big deal, as AlphaGo hadn’t yet reached world-champion level. What they didn’t realize was that AlphaGo was learning from humans even while beating them, so by the time it played Lee Sedol, it had surpassed world-champion level. AlphaGo went on to play 60 online matches against top players and world champions, and it won all 60. AlphaGo retired in 2017, while DeepMind continues AI research in other areas.
It’s all fun and games, but where can RL actually be useful? What are some of its real-world applications? One of the largest fields of research, and one now beginning to show real promise, is robotics. Teaching a robot to act like a human has long been a major research area (and a staple of sci-fi movies). With reinforcement learning, robots can learn much as humans do, and this has simplified industrial automation. One example is Tesla’s factory, where more than 160 robots do a large part of the work on cars, reducing the risk of defects.
In warehouses, RL can be used to reduce transit time for stocking and retrieving products, optimizing space utilization and warehouse operations. RL and optimization techniques can also be used to assess the security of electric power systems and to enhance microgrid performance. Adaptive learning methods are employed to develop control and protection schemes, which can help reduce transmission losses and CO2 emissions. Google has likewise used DeepMind’s RL technology to significantly reduce the energy consumption of its own data centers.
AI researchers at Salesforce used deep RL for abstractive summarization, automatically generating summaries of an original text document; this demonstrated a text-mining approach companies could use to unlock unstructured text. RL is also being used to let dialog systems (chatbots) learn from user interactions and improve over time. Pit.AI used RL for evaluating trading strategies, and RL has immense potential in the stock market, where a Q-learning agent can learn a trading policy that takes market prices and risk into consideration.
A lot of machine learning libraries have been made available in recent times to help data scientists, but choosing a proper model or architecture can still be challenging. Several research groups have proposed using RL to simplify the process of designing neural network architectures. AutoML from Google uses RL to produce state-of-the-art machine-generated neural network architectures for language modeling and computer vision.
Meandering through information from different disciplines is difficult for anyone, be they a scientist, philosopher, or anything else. On his website and in this interview, we’ll take a look at how Adam Kruchten figured out what guides his passions and how he applies both scientific and philosophical thinking to understanding statistics. HA: Adam, as an undergraduate, you studied mathematics and philosophy. Now you’re going to enroll at the University of Pittsburgh to study biostatistics. How did you go from being interested in mathematics and philosophy to biostatistics? Adam: Statistics, and inference more generally, has in some form always interested me; it just took me quite some time to learn that about myself. Early in my undergraduate career I worked in research in statistical mechanics, and I was always fascinated by the probabilistic models. Idealizations could capture tremendous amounts of useful information about extraordinarily complex phenomena. Further, the same underlying notions of probabilistic modeling could be used to understand and cope with both true randomness and epistemological uncertainty without any difference in the mathematics. Originally I thought I was mostly drawn in by the physics. I realized later that the physics, while interesting, was not what drew me in; it was really the methodology. I hopped around different fields, but had the same problem. Eventually I settled on math and philosophy, and there I found fields where I could study and understand the fundamental issues underlying robust scientific inference. In math I was drawn to logic, and in philosophy I was drawn broadly to issues in philosophy of science: philosophy of science proper, but also language, epistemology, and metaphysics.
After graduation I took a job in applied mathematics, but my role was really that of an applied statistician. There I worked closely with a professor of statistics and found the underlying study of inference that had drawn me to so many fields before.
As for biostatistics specifically, rather than statistics more generally? Biostatistics sits inside public health programs, and I think applying statistics to public health issues is a great way to make a meaningful impact through the study and application of my underlying passions.
HA: What role does (or will) philosophy play in your research? How do you hope to study science and philosophy hand in hand? Adam: Beyond philosophy directly informing my statistical work, I would also like to eventually research questions that are fundamental to inference itself. When doing this kind of research you are not just relying on philosophy, you are directly doing philosophy.
HA: On your blog you’ve written about the philosophical thesis of physicalism in a way that people without a strong background in philosophy can understand (https://adamkruchten.wordpress.com/2018/05/07/you-are-not-your-brain/). What sort of understanding do you think this general audience should have of philosophy? Adam: I try to write in an accessible way that doesn’t require much philosophical understanding, but I think I do expect readers to at the very least think “like a philosopher.” By “think like a philosopher,” I really mean several things. You should read with curiosity and openness: reading while prepared to dig deeper into elements you may not understand and with a willingness to change your own views as necessary. At the same time I think you should read with a critical but charitable mind. Critical, meaning you look for implicit assumptions, look for leaps in logic, and rigorously assess the foundations of any premises. Charitably, meaning you only attempt to criticize the best possible version of the argument: don’t set up straw men, see if small errors in argument and prose can be easily corrected, and engage with the mindset that an argument was made in good faith.
HA: To be a bit more specific, what can scientists do to better appreciate philosophy? Adam: There’s an obvious answer here, which is just “read more philosophy.” This is an honest answer, but it only goes so far. I think reading more philosophy is always useful, but there is far more philosophy than even a professional philosopher could read and understand, let alone someone with a career outside the field.
For a more practical answer, I think scientists should engage in science the same way I answered the previous question: think like a philosopher by acknowledging and assessing the underlying premises and methodological assumptions in doing science. HA: Before we finish, what’s one book everyone should read? Adam: This is a tough one. I have a hard time suggesting one book, for a variety of reasons. I think I will answer with the book that most influenced my thought, Immanuel Kant’s Prolegomena to Any Future Metaphysics. This book is Kant’s own summary of the much longer Critique of Pure Reason. Reading it shed a great deal of light on various ways of thinking I had taken for granted, and helped me come to terms with a lot of what I had, at times erroneously, implicitly assumed to be true about the world. Just as Hume awoke Kant from his dogmatic slumber, so did this book for me.
Neuroimaging, the set of techniques for imaging the structure and activity of the brain, produces datasets that are high-dimensional and complicated. Methods of interpreting these data provide a means of understanding how the brain encodes and decodes information. In this context, encoding refers to predicting the imaging data from external variables, such as stimulus descriptors, and decoding refers to learning a model that predicts behavioral or phenotypic variables from fMRI data. Supervised machine learning methods can be used for decoding, relating brain images to behavioral or clinical observations, and scikit-learn can be used for this analysis, making predictions that can be cross-validated.
I’ve explored Nilearn, a Python module that offers simple interfaces for applying machine learning to neuroimaging data. It gives me good visualizations of both raw data and processed results, and it’s built on scikit-learn, the popular Python machine learning module.
In my fMRI project, I re-create the methods of Miyawaki et al. (2008) for inferring visual stimuli from brain activity. In their experiment, several series of 10×10 binary images were presented to two subjects while activity in the visual cortex was recorded. In the original paper, the training set is composed of random images (where black and white pixels are balanced), while the test set is composed of structured images containing geometric shapes (square, cross…) and letters. I will use the training set with cross-validation to get scores on unseen data. I can examine both decoding (reconstructing visual stimuli from fMRI) and encoding (predicting fMRI data from descriptors of visual stimuli), which lets me look at the relation between stimulus pixels and brain voxels from both angles. The approach uses a support vector classifier and logistic ridge regression as the prediction functions in the decoding and the encoding.
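To make the decoding step concrete, here's a schematic sketch with synthetic data standing in for the real recordings (in the actual project, the data would come from Nilearn's fetcher for the Miyawaki et al. dataset). One binary stimulus "pixel" is predicted from simulated voxel responses, with cross-validation providing the score on held-out data; the voxel sensitivities and noise level are invented for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

# One binary stimulus pixel per trial, plus simulated voxel responses:
# each voxel has a hypothetical sensitivity to the pixel, buried in noise.
pixel = rng.integers(0, 2, n_trials)
sensitivity = rng.normal(0, 1, n_voxels)
bold = np.outer(pixel, sensitivity) + rng.normal(0, 1, (n_trials, n_voxels))

# Decoding: predict the stimulus pixel from the voxel pattern,
# with 5-fold cross-validation as a guard against overfitting.
clf = SVC(kernel="linear")
scores = cross_val_score(clf, bold, pixel, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")  # well above 0.5 chance
```

The full analysis repeats this for each of the 100 stimulus pixels, and the encoding direction simply swaps the roles, regressing each voxel's signal on the stimulus descriptors.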
This June, I’ll begin work in a neuroscience lab where I will be using computational methods to study the zebrafish brain. I hope to cultivate more skills as part of this intrinsic, self-driven passion for neuroscience from a computational perspective. The dynamic interplay of experimental and theoretical models in evaluating and re-evaluating hypotheses is fascinating.
References:
Miyawaki, Y., Uchida, H., Yamashita, O., Sato, M.-A., Morito, Y., Tanabe, H. C., et al. (2008). Visual image reconstruction from human brain activity using a combination of multiscale local image decoders. Neuron 60, 915–929. doi: 10.1016/j.neuron.2008.11.004
I’m proud to announce the launch of SeqAcademy.org, a website that teaches RNA-Seq and ChIP-Seq analysis to people with no prior programming experience.
This project was the result of the NIH April Data Science Hackathon, at which researchers from across the globe met here at the NIH to work on projects. Our group created SeqAcademy, an educational pipeline for RNA-Seq and ChIP-Seq analysis designed to teach the basics of bioinformatics. I hope to use this website to teach myself HTML/CSS, reach out to others, and give back to the greater bioinformatics community. I could see this project gaining momentum and becoming something greater, even letting users log in to the website, follow tutorials at their own pace, and earn certificates of completion. I would love to take SeqAcademy a step further and develop it into whatever the bioinformatics community needs as far as tutorials are concerned. Building a website from scratch would take a lot of work, but it has the potential to help a lot of people.
“Everyone should have a deep understanding of science.” It seems like a lofty ideal. While it’s one thing for the general public to respect scientists for their work, it’s another to ask them to understand it on a deep level. As scientists and science writers share knowledge with others, we get a glimpse into their minds. Communicators like Neil deGrasse Tyson popularize astrophysics in such a way that the audience feels at ease with scientific jargon and conversations about the universe. In his new book Astrophysics for People in a Hurry, he promises this level of conversation for a non-scientific audience. Everyone develops a kind of understanding akin to the scientists’ own, one that is more a shared appreciation than a test of intelligence.
With his history of the universe, Tyson is off through space and time. Over about 14 billion years, the expanding universe, which began smaller than a water droplet, grew into today’s observable universe, some 46 billion light-years in radius. Precision and detail are sprinkled throughout Tyson’s story as he explains how the four fundamental forces of physics and the phase changes of matter came about and interacted with one another. The reader feels comfortable with galaxies, planets, and dark energy thanks to Tyson’s habit of noting how much time has passed and how much longer the reader will need to hold on. It feels as though the individual events unfold with respect to a greater purpose or narrative. Though the book is a set of essays, they read like a conversation over tea with Tyson himself. Everything from Tyson’s background as a black astrophysicist to his religious convictions (or lack thereof) comes up in this narrative.
Popular science is popular in part through awe. Stories that capture the public’s imagination, especially Tyson’s astrophysics tales, provide a form of public engagement that has not only instilled empathy in individuals but shaped policy on a larger scale. In astrophysics, the images from major telescopes like Hubble and James Webb wouldn’t have been possible without popular opinion swaying in their favor. Online science projects like Zooniverse and Foldit rely on the crowd-sourced efforts of individuals to, respectively, classify data and find protein structures. Everyone, scientists and non-scientists alike, becomes part of the same unified project this way. Greater purposes, narratives, and everyone’s place in the universe make sense on a different level through these projects. Like gladiators in a coliseum, the stories of science are shown to the spectators, and as scientists and writers share them, everyone watches in wonder. It’s exciting and thrilling to look at scientific phenomena in different ways, each one challenging everyone’s assumptions and ideas. Tyson’s book, and the rest of science communication, educates the public through these dimensions, and while scientists keep speeding ahead through the universe, the rest of society can stand comfortably behind them knowing they’ll catch up.
I sipped dark coffee while I stared out the window of the bedroom in my apartment. I could see the National Library of Medicine, the National Cancer Institute, and other buildings in the background. Like an eagle perched on a branch, I gazed at the landscape before me. Surrounding the buildings were green trees, concrete paths, and faces of people on the edge of scientific research. This would be the National Institutes of Health, the place I would call home for the next two years.
A mere ten-minute walk from my apartment, I was in no hurry. I downed more gulps from my coffee on my way to work.
This morning routine would lead me to work with my eyes focused on a goal: some type of purpose or mission for my research in bioinformatics, a fancy word for using computers to study biology. Every day was about figuring out what that goal was and how to achieve it. It could be reproducing results described in a published paper or presenting data so it carries the meaning behind the research. I’d use my walk to work every morning to figure out how to find this treasure.
I arrived at my work bench at 9:30. Alone in an empty room, I opened my laptop and went straight to business. Throughout my day, I’d communicate with my boss and other professors in person and otherwise on these goals. The culture of the NIH – driven to find answers, solutions, and new ideas – encouraged this drive. I worked with scientists who communicated with authenticity yet productivity. People valued research for its effectiveness and efficiency, but also spoke to each other as human beings. At the NIH, people valued criticism and debate to bring forward new ideas yet still trusted one another to work well together. While I hacked away on my laptop, I’d drink plenty of water to remain hydrated.
The research at the NIH was like nothing I had seen before. While I had treated research as a side hobby during my undergraduate years, I’d have to push myself to new limits to keep up with the flow of new information at the NIH. Needless to say, everyone at the NIH had a purpose, and it was all about finding it. As the afternoon fell into evening, I closed my laptop at my work bench and let out a deep sigh. Searching for treasure at the NIH meant taking a long, winding journey. I sipped the last bit of coffee from my canteen.
Cadence Bambenek is a lover of words and dystopian novels. Her experience at newspapers has led her to her current position at Psychology Today. Her work can be found on her website, and in this interview we’ll chat about what makes her amazing.
Hussain: Cadence, let’s start from the ground up. What made you interested in writing?
CB: What made me interested in writing? Reading. I was the little girl with a book and a flashlight under the covers long after my mother instructed me to turn out the lights – for the second time. I mostly loved historical fiction and fantasy novels. Growing up, I saved every quote that spoke to me, wrote bad poetry and song lyrics and daydreamed novel ideas. Simply put, I have always loved words – so much so that writing feels akin to breathing.
Hussain: What led you to where you are now?
CB: In third grade, a teacher nominated me to attend a young authors conference. Indeed, I was fortunate enough to have teachers encourage my writing for much of my primary education, so that’s where I think I got the notion that pursuing writing as a career was rational. By college, I also knew I enjoyed traveling and photography. That, combined with my propensity for asking a lot of questions, led me to declare a degree in Journalism.
In college, it took some time for me to get my bearings. I sampled a lot of different student organizations before finally joining one dedicated to entrepreneurship. Shortly after, I began writing for one of the student newspapers, The Badger Herald, as a campus news reporter. I ultimately became the vice president of Transcend Engineering, which meant I was really plugged into different projects and the entrepreneurial ecosystem. At The Badger Herald, I really enjoyed interviewing professors about their research as a campus reporter, and happily took on the position of Tech Writer for the paper. That same semester, I picked up an internship at the Wisconsin State Journal, and I think it was the combination of experiences that landed me an internship at Business Insider writing about technology last summer. It was an amazing experience writing at a national digital publication, one that ultimately helped me realize it wasn’t quite what I wanted to do. Last November, a former editor of mine at The Badger Herald who had really enjoyed one of the science-focused stories I put together encouraged me to apply for the NASW travel fellowship to attend the same AAAS conference she had attended a year prior. I applied for the fellowship, was accepted, and discovered the world of science writing, which brought a lot of clarification, yet complication, to my life.
Hussain: What is the biggest challenge you face as a science writer right now?
CB: The greatest challenge for me right now is getting my foot in the door. I am about to start my first internship more in the realm of science writing this fall with Psychology Today, which I am excited for. I’m also hoping to try my hand at freelancing and take advantage of my time in New York.
Hussain: What is the biggest challenge facing science writers as a whole? How would you recommend others approach it?
CB: On the whole, I think the challenge is conveying to readers that science is an ongoing process and that science writers and scientists alike don’t have all of the answers. We are all, general members of the public included, part of a dialogue about the role of science and technology in our lives.
Hussain: When you’re not writing, what are some personal things you do to become a better person?
CB: When I’m not writing, I’m probably grabbing coffee with a friend. Or reading a dystopian novel. I also enjoy when I convince myself to go on a run. It has both a therapeutic and empowering effect on me.
Hussain: We’ve talked (very briefly) about how journalism is about putting others first in a way that is selfless and humanitarian. How do you interpret and implement these kinds of motives in your work?
CB: With sources, I am always conscious to accurately represent their story. This is not Nightcrawler, I am not here to profit off of anyone else’s misfortune. I want whoever I interview to be confident in me and my skills and to convey their truth as best I can. To me, writing is so powerful because it’s the opportunity to give people words and concepts to help them understand and articulate their own life experiences.
Hussain: What newspapers and other publications are your favorite to follow and why?
CB: Because I’d like to write for magazines, I primarily focus on those. For my interests, I subscribe to WIRED and Bloomberg Businessweek and read The New Yorker to expose myself to writing unlike anything else. I also love The Atlantic, Quartz and the Science of Us section in New York Magazine. Right now, I’m also focusing on branching out and reading more niche publications.
Hussain: Name one book everyone should read.
CB: I loved Americanah by Chimamanda Ngozi Adichie and also Wild Swans: Three Daughters of China by Jung Chang. One is fiction while the other is nonfiction, and the stories take place on opposite sides of the world from each other, but both do an amazing job exploring the cultural context of their characters’ lives. It is books like these that make you look at the world in a new way.