
Heuristic


  • The science, mathematics, and philosophy of rhythm

    Zebra finches use a “critic” in the brain to differentiate between the rhythms of other birds’ songs and, through this, learn songs.

    Like the ebb and flow of the ocean, 

    A rhythm emerges from the pen,

    I capture it, imagine it,

    before it disappears. 

    Appropriate rhythm in writing means making sense of the relations between words and phrases. Stress, repetition, fluctuation, rhyme, meter, pattern, juxtaposition, and harmony all come together; these form the aesthetic and intellectual properties of rhythm. For a philosopher studying semantics or a neuroscientist uncovering our nature, rhythm poses challenges. I’ve written on the subject with respect to symmetry. Let’s delve into rhythm’s secrets philosophically, mathematically, and scientifically.

    Through much of my scientific writing, I pay close attention to how lengthy my sentences are. Too many long sentences at once can lose the reader in monotonous, cumbersome passages; I especially fall into this trap with description and exposition. Long sentences bore the reader, while short, astute sentences can feel abrasive and clumsy. We alternate between the brevity of Twitter and the nuance of academic prose, and it’s easy to succumb to habit and forget the appropriate rhythm with which to write. Rhythm is both something we plan in advance and something we re-evaluate through reflection and speculation.

    In the realm of aesthetics, philosophers have debated the role of rhythm since the Classical era. In Book III of Plato’s The Republic, Socrates clarified that rhythm and meter are what separate poetry from pure prose. The pre-Medieval philosopher St. Augustine developed a theory of aesthetics based on ideas of rhythm in De Musica. In congruence with the theologian’s religious beliefs, God is the origin of rhythm: we discover these mathematical truths of rhythm, pre-determined by God, much as Plato believed humans collectively remembered them.

    Emerson’s poem “Merlin” shows how rhyme and meter create rhythm, particularly in the lines that move back and forth between his own sensations and the way to craft poetic meters from them. In this section, Emerson shows a rhyme that fits so naturally it seems like part of human prose. If, as Socrates held, rhyme and meter are what separate poetry from prose, then Emerson here works with that rhythm in distilled form.

    Thy trivial harp will never please

    Or fill my craving ear;

    Its chords should ring as blows the breeze,

    Free, peremptory, clear.

    No jingling serenader’s art,

    Nor tinkle of piano strings,

    Can make the wild blood start

    In its mystic springs.

    We teach ourselves rules and tips for great writing so as to take into account the effect of rhythm on the piece. To imagine and care for these aspects of the reader gives the writing a property observed only at a scale larger than individual words. Rhythm comes from how words interact with one another, yet remains limited by our conventions of writing. It emerges when you take a step back from your computer screen and look at the whole picture. Rhythm is this frequency. Time itself limits these perceptions as we read. As such, rhythm reveals deeper features of our subjective perceptions, such as the stress, intonation, and tempo of speech itself. Yet we speak of it as something deeper than the combined intrinsic content of the words themselves. In this sense, it’s an emergent phenomenon, much like evolution selecting certain genetic traits.

    What makes these six clave patterns fundamental is that they are maximally even rhythms and that they maximize the sum of pairwise distances between the points as vertices on a tetrahedron.

    In a scientific context, we rely on our empirical observations of rhythm to determine higher truths about it. The Canadian computer scientist Godfried Toussaint found this connection between shape and musical rhythm: because of their evenness, six musical clave patterns carry mathematical significance. That a mathematical algorithm could generate music raises questions about what music is, and about what sort of mathematical or empirical technique governs what “good” rhythm sounds like.

    Placing the six intervals in histogram form reveals patterns among them that may dictate the nature of rhythm as a whole.

    We can find fundamental features among strings of numbers such that the patterns of these features give rise to Euclidean rhythms. These are rhythms generated by the Euclidean algorithm, the classical procedure for finding the greatest common divisor of two numbers. The numbers show the spans between the beginnings of successive notes and thus represent the simplest way to notate rhythm. The idea needs much more empirical evidence before it can be shown to hold across all music, but Toussaint’s claim that the Euclidean rhythms which are reverse Euclidean strings appear far more widely has appeal. The French mathematician Jean-Paul Allouche showed that these strings of numbers behave like combinations of words, and other mathematicians and computer scientists have derived Euclidean rhythms from Euclidean strings. Toussaint argues that the Euclidean algorithm, given two numbers as input, can generate rhythm timelines: the two numbers dictate the beginning of each note in the rhythm and the span between notes.
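    The idea above can be sketched in a few lines of code. The snippet below is a minimal illustration, not Toussaint’s own implementation (which builds the pattern by a Bjorklund-style recursive pairing of sequences); the function name and the accumulator-based bookkeeping are my own. It takes the two numbers described above, the count of onsets and the count of pulses, and spreads the onsets as evenly as possible:

    ```python
    def euclidean_rhythm(onsets, pulses):
        """Spread `onsets` notes as evenly as possible over `pulses` slots.

        Assumes 1 <= onsets <= pulses. Returns a list of 1s (a note
        begins) and 0s (a rest), rotated to start on an onset.
        """
        pattern = []
        bucket = 0
        for _ in range(pulses):
            bucket += onsets      # accumulate; each overflow marks an onset
            if bucket >= pulses:
                bucket -= pulses
                pattern.append(1)
            else:
                pattern.append(0)
        first = pattern.index(1)  # rotate so the pattern begins on an onset
        return pattern[first:] + pattern[:first]
    ```

    For example, euclidean_rhythm(5, 8) yields [1, 0, 1, 1, 0, 1, 1, 0], the Cuban cinquillo, while euclidean_rhythm(3, 8) yields a rotation of the tresillo; as in Toussaint’s analysis, such patterns are usually considered equivalent up to rotation.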

    In the field of cognitive neuroscience, we can study the ways humans and other organisms produce and test rhythms. The computational neuroscientists Kenji Doya and Terrence J. Sejnowski described a “critic” within the zebra finch brain that lets the finch differentiate between songs. NMDA receptors, a specific means of chemical signaling in nerve cells, activate to let the bird learn the “correct” song. The scientists Philipp Norton and Constance Scharff found patterns in the nerve and muscle cells that corresponded with elements of notes and with the durations of the notes themselves. This echoes Toussaint’s finding that the beginning of each note and the span between notes are both fundamental components of rhythm. This research holds value for seeking similar discoveries in the human basis of rhythm as well.

    Much the same way a poet translates nature into words, a scientist can find quantifiable metrics of music. These metrics raise questions for musicology, geometry, and, with enough empirical evidence, neuroscience. Now take a deep breath and let the waves beat upon the seashore. Detect the rhythm and move along like before.

    Sources

    Doya, Kenji & Terrence J. Sejnowski (1999). The New Cognitive Neurosciences. II. MIT Press. pp. 469–482.

    Toussaint, Godfried T. (2005). “The Euclidean Algorithm Generates Traditional Musical Rhythms”. Proceedings of BRIDGES: Mathematical Connections in Art, Music, and Science, Banff, Alberta, Canada, July 31–August 3, 2005, pp. 47–56.

    Norton, Philipp & Constance Scharff (2016). “Bird Song Metronomics: Isochronous Organization of Zebra Finch Song Rhythm”. Frontiers in Neuroscience.

    November 6, 2018
    Philosophy, Poetry, Science

  • "Extremely Loud & Incredibly Close": Science, aesthetics, and ethics of trauma fiction

    Picasso’s “Le coq saigné”, painted in France in the mid-20th century, could be seen as a response to the traumatic events of World War II.

    It’s miraculous that after two world wars, the Holocaust, and a cold war, the term “trauma fiction” was only coined in the 1990s. Trauma is fascinating, whether it’s sexual, verbal, physical, or of another form. It can be a force that captures all parts of an individual. Someone undergoing trauma can have their senses and perceptions changed in such a way that they question fundamental tenets of themselves. They may experience distrust, fear, and anxiety throughout other experiences. When one tries to write about trauma, it’s not uncommon that language fails them. How can someone write about such a paralyzed, numb state? What sort of description can do justice to trauma while remaining objectively detached? And trauma itself can force an individual to re-examine moments of their life that they can’t seem to shake off. For fiction writers searching for narratives and themes, there are ways of identifying key concepts of trauma. Instead of focusing on what happened in the past, it’s important to understand why we remember those things.

    Comparing trauma to a means of survival allows one to view reactions to different psychologically difficult experiences as having some sort of reason to them. Jonathan Safran Foer’s 2005 novel Extremely Loud & Incredibly Close takes a creative, near-experimental approach to the American trauma of 9/11. The novel’s protagonist, Oskar, a nine-year-old genius, struggles with the truth of the world after losing his father in the September 11 attack on the World Trade Center. When I read the book in high school, I admired Oskar’s introspective abilities for a child enduring such severe suffering. Yet I felt as though my AP Literature and Composition class didn’t quite capture the true ethical dilemma inherent within Oskar. The magical realism of the novel shows how it is built with techniques that don’t adhere to common realist notions of reality. Instead, the protagonist’s strategies for dealing with trauma bring him a sense of ethical assertion of the obvious immorality of the terrorist attacks. Oskar’s meaning-making becomes a criticism of political terrorism.

    For Foer to write about 9/11, even four years after it happened, was a risk. In the wake of 9/11, many radio stations, television broadcasts, and even movie theaters avoided showing terrorism or terrorism-related media. But Foer’s attempt to capture, in such detail, a mind with post-traumatic stress shows how trauma causes the mind to experience and re-experience events of its past until it fully understands them. In re-examining the past, the mind enters fantasy and, in this case, a form of magical realism, as it relies on memory and repeatedly encounters past events. The reader is left to wonder whether Oskar’s remembrance of the event is real or exists solely in his mind. Foer even uses creative techniques of writing by showing pages with letters and words written over them (and over them and over them) in a way that parallels the memories of the planes crashing into the towers. Or the rumination of trauma. Over and over again. Or the rumination of trauma. Over and over again. Disrupting his sense of time (and possibly space, too), Foer uses a mythic mood reminiscent of writers like T.S. Eliot and Gabriel García Márquez.


    Speaking of time and space, Oskar’s deep interest in theoretical physics lets him draw parallels between the past and the present. In fact, it wasn’t until I entered university and began studying physics and philosophy that I saw the extremely close parallels between physics and trauma. As I read Hawking’s physics “story” A Brief History of Time as well as Alan Lightman’s provocative novel Einstein’s Dreams, I realized what makes Oskar’s interest in physics significant. Foer invokes the laws of special relativity and quantum physics to show that Oskar had a keen interest in the fantasy of what lies beyond our senses. In an effort to unify gravity with quantum mechanics, scientists posited an “imaginary” time inherent to the directions of space-time: a way of proceeding through time without the “classical” conventions of steady, unstoppable linearity. With this “imaginary time”, you can move in circles, backward, and in whatever direction your mind chooses. It leads to philosophical questions: why is the future so difficult to know when we have complete certainty of the past, and what is the true difference between the two? It also creates a form of magical realism in which the features of PTSD become our way of surviving in a chaotic, unforgiving universe. Moreover, Foer shares how Oskar studied the physicist Stephen Hawking and his three arrows of time. The first is the easiest to grasp: our psychological sense of time moving from past to future. The second is the thermodynamic arrow, the direction along which entropy increases as the universe expands; this explains why an ice cube left in the sun melts and why we need Carnot cycles to build engines that use energy. The third is the cosmological arrow, the direction in which the universe is currently expanding.
    Oskar draws upon this research to wonder how he might travel backwards in time, but, because the three arrows must point in the same direction, it isn’t possible. Foer is by no means the next Stephen Hawking, but his knowledge of physics allows him to incorporate a sort of “mythic” science fiction: events governed by fantasy elements while still grounded in a reality of scientific inquiry.

    This brings us to the deeper ethical dilemmas of trauma. In the writer Anne Whitehead’s Trauma Fiction, Professor of English Cathy Caruth explains that the structure of trauma is a disruption of history or temporality (similar to Oskar’s disruption of space and time) and is, as a result, not fully experienced by the victim at the time. For this reason, trauma can cause people to experience persistent and unwelcome thoughts in the future, malignantly affecting recall and recollection. In the neurologist Sigmund Freud’s 1939 work Moses and Monotheism, the relation between “Man” and the problem of becoming human is explored. Freud claims Moses was born to Egyptian nobility and that his few followers decided to kill him in rebellion. These rebels would later feel incredible remorse for their action; after Moses was fused with Yahweh, they created the “Messiah” in the hope that Moses would return as their savior. This controversial story explores how the material reality of the unconscious can be transmitted from one generation to the next through language, in such a way that future generations are forced to deal with an uncomfortable, traumatic knowledge. Yet it created the language and grounds for discussing trauma and memory for decades to come. In her book Trauma: A Genealogy, the writer Ruth Leys explains that this work of Freud’s is a site of investigation for memory, trauma, and history that is central to discussions of postmodernism and the Holocaust. By virtue of its inter-textuality, and despite its recurrent historicist motifs, Freud’s Moses has also enthralled leading figures in French post-modernist philosophy, who have highlighted its importance for the writing of history and for the concepts of suffering and the preservation of experience.
    Caruth comments, in her account of psychoanalysis and literature, Unclaimed Experience, that in this story “a renewal of some of Freud’s earliest thinking on trauma is indicated by his use of the figure of the ‘incubation period’ to describe traumatic latency; Freud had used this figure in his early writing in Studies on Hysteria (1895)”. A sort of incubation period is exactly how Oskar suffers to determine, as a theoretical physicist might, the true nature of our universe. In it, as the deep psychological forces underlying the individual take control, the fear, anxiety, paranoia, obsessiveness, and other dark parts of human nature take root.

    Freud makes the universality of trauma simple: trauma seals the fate of man. The causes of our individual psychological difficulties have three salient features, Freud argues: they take place in early childhood; they’re generally avoided through other memories (which attempt to “screen” out the individual’s feelings); and traumatic impressions are generally sexual and aggressive, as they attack the ego. Whitehead claims that trauma demands a non-linear literary form rather than abrupt or immediately self-evident methods. This trauma usually lies dormant, its symptoms not shown until a later traumatic event in the individual’s life. As Caruth writes in Unclaimed Experience, “The experience of the soldier faced with sudden and massive death around him, for example, who suffers this sight in a numbed state, only to relive it later on in repeated nightmares, is a central and recurring image of trauma in our century”. We’re haunted not only by the events of our past but by our own psychological difficulty in understanding those events.

    Detached and disillusioned with the world around him after his father’s death, Oskar undergoes existential crises in his efforts at school and in other areas of life. Foer disrupts space, time, and spacetime to show the obsessive, haunting feelings that plague the psyche. From these understandings of trauma, Caruth argues, in the case of trauma fiction, the individual undergoes a “crisis of truth” that extends to the individual’s society and peers. Whitehead further illustrates trauma fiction with her point that the ethical questions raised by the individual’s testimony have an inherent literary feature to them as a result of these sufferings.

    I absolutely despised reading Extremely Loud & Incredibly Close in high school. I felt the narrator was too self-absorbed, and I couldn’t relate to him much. But now I look back on it with fondness. My English teacher emphasized the importance, when writing, of taking apart an argument with all its assumptions and methods of reasoning. The value he instilled in me led me to study philosophy in college and to take apart many other truths and ideals of this world. But I still worry that, despite the humanistic visions Foer had for his novel, these methods of literary analysis can leave us staring at mental health patients as though they were fish in an aquarium. We’re distant and detached from their true suffering as we generalize and conceptualize the traumatic framework. For this reason, it’s important to remember that in all our arguments there is always a significant subjective, humane element in trauma. People react to problems differently, and the ways those reactions affect our worldviews can be unpredictable and messy.

    No matter our literary, scientific, and philosophical efforts, we can’t turn back time. Foer tries, though, at the end of the novel, sharing images of a 9/11 jumper in reverse order. The slideshow gives the impression that the person is not only moving backwards in time but flying, as though an angel ascending in fantasy. A stark contrast to the horror of witnessing suicide, it at least gives the reader a sort of escape from the trauma. Like all traumatic events, be they historical, psychological, sexual, or of any other form, the past remains unchanged. Still, we can turn to science, literature, and philosophy in creating these narratives for the betterment of the future. I’m not looking back in anger.

    November 2, 2018
    Philosophy, Science

  • "Overcoming my fear of poetry"


    No! I won’t! I won’t write a poem! 

    You can’t make me! Nor will I succumb to my desires. No, no, no…


    I’m a researcher. That’s right. I seek knowledge and certainty. I seek soundness and completeness. I seek objective truths. 


    For I see the world in black and white. Atop a ship in a sea of gray,


    In absolutes, in truth and beauty I can describe the world.


    Still, the mighty roar of the foggy ocean, surrounds me on all sides, 


    through its cloudy mist light cannot penetrate. I fear what lies beneath the surface.


    Einstein was the wisest man alive, as science gives us answers,


    or Aristotle, a thinker like no other, with philosophy, more questions, 


    I search for land, refuge from an infinite sea. I won’t read Coleridge or Whitman or Thoreau. 


    I’ll remain willfully blind to what can’t be described or learned.


    I choose not to forsake my judgement in rhetoric and logic,


    lest I should become overpowered by my desires within.


    I won’t. I’ll lock the chest and throw away the key.


    You won’t get a poem out of me. 

    November 1, 2018
    Philosophy, Poetry, Science

  • Stoicism to improve your life

    “All cruelty springs from weakness.” – Seneca the Younger

    Philosophy, not solely confined to the writing of academics, can in some ways be seen as a way of living. That people turn to philosophy for answers to their common struggles and for ways of improving their lives isn’t as far-fetched as it first seems, once one studies the role philosophy played for the Ancient Greeks and Romans. For many Americans, Stoicism holds answers, yet it remains an incomplete account of an individual’s relationship with emotions.

    Stoicism (with a capital S) refers to a set of philosophical beliefs developed by scholars of Ancient Greece and Rome. The Stoic philosophers emphasized that “virtue is the only good,” and that humans should treat their health, money, and leisurely activities in virtuous manners. What many people pick up on today, though, is the relationship between Stoicism and emotion: one that emphasizes acting in accordance with “nature” and avoiding destructive emotions caused by faulty judgements. The virtue one seeks is found by placing one’s will in agreement with nature, and the virtuous person finds themselves free from anger, envy, wrath, greed, and other malevolent feelings. In contrast with the more colloquial “stoic” (with a lowercase s), one is not forbidden from expressing emotions at all; one should simply exercise only those in accordance with virtue and in agreement with nature.

    Through its many interpretations and revivals across the Greek and Roman civilizations, people could find a sense of meaning and order in an increasingly confusing and twisted world. Even in today’s era of post-truth discourse and distrust of media and education, one might choose a worldview driven by the Stoic desire to abandon all unnatural forces and chaos and, instead, live humbly with nature. The rising importance of autonomy and agency in progressive movements over the past century may even be what leads modern-day Stoics to believe they have the power to make decisions for themselves despite unstoppable, immovable forces.

    Many contemporary thinkers, even scholars with training in the social sciences, have turned to Stoic philosophy. Today’s thinkers and writers urge citizens to adopt Stoic principles to overcome their difficulties and remain strong through times of darkness. Recent examples include a selected set of discourses, How to Be Free: An Ancient Guide to the Stoic Life, and the philosopher Massimo Pigliucci’s How to Be a Stoic: Using Ancient Philosophy to Live a Modern Life. Pigliucci draws from his own personal experiences and grounds his judgements objectively so as to create a style of living and philosophizing by which readers can lead moral lives. Even writers on creativity pass along the secrets and tips of Stoic scholars for unlocking our true potential, as in Paul Jun’s article “The Stoic: 9 Principles to Help You Keep Calm in Chaos.” Falling closer to self-help, Donald Robertson’s Stoicism and the Art of Happiness has proven popular with the general public. The cognitive behavioral therapists Albert Ellis and Aaron Beck (and individuals seeking self-therapeutic methods) can’t change the world, but they can help people change their outlook. The Pathak-Wieten Stoicism Ideology Scale itself holds potential for gauging an individual’s health. The American writer Larry Wallace calls Stoicism a mind-hack. We can look up to Marcus Aurelius and his fortitude in trying to break through to his own self and understand how the forces of the world operate. All were equal, from ruler to slave, under these Stoic beliefs. Even I, in writing memoirs about my life experiences and struggles, often appeal to my education in Stoic philosophy when overcoming difficulties that are outside of my control.

    It seems enticing. Prima facie, the idea that people should face only the struggles that lie within their own control, and abandon all else, could motivate just about anyone. It seems to toughen people up so that they forget about things they can’t control. In a way, it’s a cool detachment that can help people understand the darkness. It makes anyone feel that their actions and beliefs are grounded in objective views of the world as they live in accordance with Nature. You also get to call yourself a Stoic, which sounds really cool. Who wouldn’t want to look up to figures like Marcus Aurelius or Seneca the Younger?

    Yet Stoicism, like many other ways of thinking, has its shortcomings. For one who chooses to live on the path of nature, showing attachment, love, or affection to anyone can count as a moral failing. The way Stoics forego certain pleasures, lest they risk falling to dangerous emotions, can mean they cannot form the necessary and beneficial attachments of a human being.

    The Greek Stoic philosopher Epictetus uses the simile of a passenger aboard a ship moving at the hands of nature itself to describe the tricky situation Stoicism brings about. The Stoic passenger must move solely in accordance with that ship and follow wherever it sets sail. He or she wouldn’t even be able to bring aboard family members, large stores of food, or anything else they treasured. This simile, though prosaic and simplified, illustrates how Stoic philosophers may slip into a kind of slavery: overpowered by their own feelings yet unable to act in accordance with them. Without even confronting the fear of death, a Stoic might choose to die rather than suffer existence.

    Stoicism is also at odds with the writing of several Existentialist philosophers. Friedrich Nietzsche argued that the Stoics fell to the naturalistic fallacy: assuming that something natural is good on the grounds that it is natural. It’s easy to see how this “natural” justification runs into trouble with the findings of evolutionary biology and their implications for human nature. It’s very troublesome to assume that all of our genetic predispositions related to sex, societal norms, and visceral emotions (such as disgust, fear, and distrust) are natural, normal, and moral. Would we then justify racism on the grounds that it’s natural to fear people who are different from us? Other implications, like our sexual tendencies, could be used to give a moral excuse to acts of sexual assault.

    “But what is philosophy? Does it not mean making preparation to meet the things that come upon us?” – Epictetus (Discourses 3.10.6, trans. Oldfather)

    Finally, I’m very skeptical of cognitive behavioral therapy and mind-hacks in general. I haven’t seen enough evidence to show that cognitive behavioral therapy is an effective form of therapy, and much of the evidence I have seen doesn’t examine enough of its methods to prove it effective. I don’t trust mind-hacks as appropriate methods for making one’s life or experience better; I’m a firm believer that deliberate reflection and introspection is a much more suitable path. Any philosophy that promises to lift your mind and make your life better should be examined with a dubious skepticism, especially one that rests on simplistic truths such as changing your worldview and loosening your grasp on your emotions. Despite these limitations and disadvantages, one can’t help but admire the strength and perseverance of the Stoic philosophers and how they have helped many Americans find peace with themselves. Maybe, even if people don’t capture exactly what the Greek and Roman Stoic philosophers envisioned, they can find a glimpse of it.

    October 31, 2018
    Philosophy

  • Examining the current paradigm of artificial intelligence (with help from philosopher Thomas Kuhn)

    “Normal science, the activity in which most scientists inevitably spend almost all their time, is predicated on the assumption that the scientific community knows what the world is like. Normal science often suppresses fundamental novelties because they are necessarily subversive of its basic commitments.” – Thomas Kuhn, The Structure of Scientific Revolutions

    My doubts have been getting to me. My questions about the nature of existence and of science itself have left me disillusioned and detached from many aspects of my day-to-day routine. I began, slowly and steadily, to believe that my work as a scientist wouldn’t be valued, nor would it be worth doing in any sense. I could barely find answers to the questions and concerns I wrestled with as a scientist. As I walked through one of the buildings of the National Institutes of Health, I listened to music to drown out the sounds of the world.

    Then something caught my eye. A bookshelf. In the middle of the hallway. I stared at it in disbelief and took the headphones out of my ears. Anyone could take a book and leave one for others to read. In the middle of the top shelf sat the philosopher Thomas Kuhn’s The Structure of Scientific Revolutions. Interesting, I thought. Looks like the scientists at the NIH aren’t completely indifferent to the philosophical underpinnings of science. In this post, I hope to discuss the nature of Kuhn’s paradigm shifts and their relevance to the field of Artificial Intelligence (AI).



    Kuhn’s opus The Structure of Scientific Revolutions catapulted him to stardom in the world of philosophy. In the work, he presented arguments from the history and philosophy of science to show how science undergoes paradigm shifts. A paradigm shift occurs when a new set of assumptions and values replaces the previous one within a given scientific community. To compare the Ptolemaic method of understanding planetary orbits with the modern ways astronomers work (borrowing ideas from Copernicus and other scientific revolutionaries), we need to pay close attention to the assumptions and arguments that lead to our conclusions. The very terms of science change in ways that complicate any comparison of theories and evidence across paradigms. Kuhn particularly held that scientific theories from before and after a scientific revolution cannot be compared in a straightforward way; they are “incommensurable,” because the meanings of familiar terms change in unexpected ways as scientists go from one mode of description to another. One drastic consequence of incommensurability is that there isn’t any such thing as absolute progress from one paradigm to the next, say from before the Copernican Revolution to after, or from classical physics to quantum physics. A new paradigm may be more complete, simpler, or more useful for answering certain questions than the one it replaces, but it is not, strictly speaking and on the whole, objectively better.

    There are several examples Kuhn uses to illustrate these paradigm shifts. Kuhn’s analysis of the Copernican Revolution emphasized that, in its beginning, it did not offer more accurate predictions of celestial events, such as planetary positions, than the Ptolemaic system, but instead appealed to some practitioners based on a promise of better, simpler, solutions that might be developed at some point in the future. Kuhn called the core concepts of an ascendant revolution its paradigms and thereby launched this word into widespread analogical use in the second half of the 20th century. Kuhn’s insistence that a paradigm shift was a mélange of sociology, enthusiasm and scientific promise, but not a logically determinate procedure, caused an uproar in reaction to his work. Kuhn addressed concerns in the 1969 postscript to the second edition. For some commentators The Structure of Scientific Revolutions introduced a realistic humanism into the core of science, while for others the nobility of science was tarnished by Kuhn’s introduction of an irrational element into the heart of its greatest achievements.

    Another striking example of a paradigm shift is the move from Aristotelian to Newtonian physics (classical mechanics). Despite the cleverness and elegance of Aristotle’s antiquated observations (that objects have mass and can be governed by forces), scientists on either side of the shift differed fundamentally on what counts as knowledge and how it can be found. With the groundbreaking discoveries of Newtonian physics, physicists could write down rules describing the intrinsic behavior of objects. Through observation and the refinement of theory against evidence, Newtonian physics laid the foundation for much of the rest of physics for centuries to come.

    Vertical section drawing of 18th-century British scientist Henry Cavendish’s torsion balance instrument, including the building in which it was housed. The apparatus was used to measure the force of gravity between two masses, from which the gravitational constant can be inferred. The large balls were hung from a frame so they could be rotated into position next to the small balls by a pulley operated from outside. Despite the experiment’s sensitivity, it did not shift the understanding of the constant of gravity the way other experiments shifted their fields.
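    To get a feel for why the torsion balance had to be so sensitive, here is a minimal sketch of the force it had to detect, using the modern value of G. The masses and separation below are rough assumptions at the scale of Cavendish’s apparatus, not his actual recorded figures.

```python
# Illustrative only: the gravitational attraction between one large and one
# small ball, via Newton's law F = G * m1 * m2 / r^2. All figures except G
# are assumed, order-of-magnitude stand-ins for Cavendish's setup.
G = 6.674e-11   # gravitational constant, N m^2 / kg^2 (modern value)
m_small = 0.73  # small lead ball, kg (assumption)
m_large = 158.0 # large lead ball, kg (assumption)
r = 0.22        # centre-to-centre separation, m (assumption)

force = G * m_small * m_large / r**2
print(f"Attractive force ~ {force:.2e} N")  # on the order of 1e-7 N
```

    A force of roughly a ten-millionth of a newton is why the apparatus had to be shielded inside its own building and operated from outside.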

    As a student with deep interests in philosophy and in discerning truth in all its forms, I began to ponder how Kuhn’s writing would affect my work. I sought to bring the value and meaning of this work into a context from which I could benefit as immediately as possible in my life. I spent several weeks digging through philosophical writings by Kuhn, Feyerabend, and Popper, even searching out original papers in physics and chemistry by scientists like Newton, Cavendish, and Archimedes. I began interpreting their arguments and pitting some against others. To put myself in the shoes of these thinkers, I sought to defend and attack their views. Kuhn’s response to philosopher Karl Popper’s falsificationism (the doctrine that falsifiability is the most important feature determining what is scientific) is particularly essential to understanding his paradigm shifts. Kuhn also engages with verificationism, a philosophical movement that emerged in the 1920s among the logical positivists; its verifiability principle claims that meaningful statements must be supported by empirical evidence or logical requirements.


    One may argue AI started at Dartmouth in the 1950s. Students and professors (among them Allen Newell, Herbert Simon, and John McCarthy) worked on both the hardware and software sides to build computers that played games such as checkers, solved algebra problems, and proved logical theorems. The moment was charged with optimism and a bright-eyed look at the future of computer science in general.

    AI has undergone similar transformations since its creation in the 1950s. The emergence of data-driven approaches over the past few decades has allowed for more detailed, refined ways for computers to complete machine learning tasks. Computers can now behave in ways similar to humans in solving complex problems and making decisions about the world, in ways unparalleled since the field’s inception. A paradigm shift, in the conceptual vocabulary of Thomas Kuhn, is implicit in the field, and one needs to discern the structure of the revolutions in the field (as Kuhn did for physics) at the fundamental level on which knowledge is acquired. Looking at the successes of the field over the past 70 years, the replacement of its exemplar stories corresponds to a shift in goals, methods, and expectations.

    Understanding the methods used by those scientists, I began to form a greater, more expressive appreciation of scientific research in general. There’s an objective truth out there, and I don’t interpret Kuhn’s writing as a challenge to that, nor do I interpret his work as evidence of relativism in the philosophy of science. Given the growing initiative and significance of AI research, I hope I can elucidate what sort of paradigm shifts have occurred in the field over the past few decades.

    There is a beautiful account of what it means to be doing research in this new paradigm. It can be found in the article “The unreasonable effectiveness of data” published in 2009 by three leading scientists working within Google Research. It reads like the perfect manifesto of a scientific paradigm in Kuhn’s definition, containing success stories, recommendations and directions of expansion.

    Things have changed since then. Emphasis moved from knowledge, logic, and reasoning to data, learning, and statistics. Information seemed to triumph over wisdom, and the gratification of immediate results over epistemic truths. Success stories like the ones mentioned in the “Unreasonable Effectiveness” paper started being passed on, from teachers to students, more often than stories involving rules, theorems, and logical deductions. Data became central in the new narrative: intelligent behavior was acquired by the system through automatic analysis of vast amounts of data. The results of analyses on these enormous amounts of data have allowed AI to conquer a multitude of problems in computer vision, text processing, speech recognition, and so forth. And because of this centrality of data, machine learning became the lingua franca of AI researchers, and increasingly this was true for the specific version of learning that had enabled all those success stories: statistical learning. The languages of statistics and of optimization took hold of discourse in the AI literature, replacing that of logic. After all, the stories seemed to be based on modeling the problem of intelligent inference as an inverse problem, solved by maximization of some probabilistic quantity.
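    The phrase “maximization of some probabilistic quantity” can be made concrete with a toy example: maximum likelihood estimation. Given made-up coin-flip data, we search for the bias p that maximizes the log-likelihood of what we observed; no rule about coins is hard-coded, the estimate falls out of the data. This is a minimal sketch, not the method of any particular system discussed above.

```python
import math

# Made-up observations: 1 = heads, 0 = tails.
data = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]

def log_likelihood(p, flips):
    """Log-probability of the observed flips under a Bernoulli(p) model."""
    return sum(math.log(p) if x else math.log(1 - p) for x in flips)

# Grid search over candidate biases. The maximum lands at the sample mean,
# exactly what the analytic maximum likelihood estimate predicts.
candidates = [i / 100 for i in range(1, 100)]
p_hat = max(candidates, key=lambda p: log_likelihood(p, data))
print(p_hat)  # 0.7, matching sum(data) / len(data)
```

    Statistical AI scales this same move, maximizing a likelihood or posterior, to models with millions of parameters instead of one.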

    Throughout the decades since then, AI research has extended to amazing results in spelling correction, facial recognition, machine translation, information retrieval, and theoretical problems we believed unsolvable decades ago. Machines can now correct writers’ spelling and grammar without knowing the rules beforehand, learning those rules themselves instead. We’ve moved from emphasizing particular domain knowledge to emphasizing the availability of data and the ability to draw trends and conclusions from it. This allowed a paradigm shift (in the words of Kuhn) to occur in which we can adequately and appropriately discuss the AI findings of today.
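    Data-driven spelling correction is a nice miniature of this shift: no hand-written rules, only word frequencies learned from text. The sketch below is in the spirit of Peter Norvig’s well-known corrector; the tiny “corpus” is an assumption standing in for the billions of words a real system would use.

```python
from collections import Counter

# "Learn" word frequencies from a (toy, assumed) corpus instead of
# encoding any spelling rules by hand.
corpus = "the quick brown fox jumps over the lazy dog the fox".split()
counts = Counter(corpus)

def edits1(word):
    """All strings one edit (delete, swap, replace, insert) away from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    """Pick the most frequent known word among the word and its near-edits."""
    candidates = [w for w in edits1(word) | {word} if w in counts] or [word]
    return max(candidates, key=counts.get)

print(correct("teh"))  # prints "the"
```

    The “knowledge” here is nothing but a frequency table; swap in a bigger corpus and the corrector gets better with no change to the code, which is precisely the data-over-rules story.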

    In the 2010s, AI research was to a great degree defined by its ability to process large amounts of information and to learn new theories and rules, driven by advances in computer science and mathematics. A series of breakthroughs in applications created a strong business model for a part of AI. It is now possible to transcribe speech, translate text, and recognize faces well enough for real applications. The stories of current AI do not involve sentient robots or Turing tests, but rather efficient processing of digital content, or the modeling of its consumers and producers. They don’t even dig into the investigation of general cognitive capabilities; instead, they focus on the reproduction of very specific behaviors. This is precisely the kind of shift that Kuhn talks about, and it has far-reaching consequences. Success stories tend to act as templates for future research, defining for new researchers which new puzzles should be solved and how. The adoption of new stories changes the very definition of the scope of a research field. And, most strikingly, all these breakthroughs were achieved with the same set of techniques and the same overall approach: the methods of statistical AI.

    Despite his arguments about paradigms and their relation to scientific research, Kuhn is not a relativist. Kuhn admits there’s something objectively out there. But he qualifies this thing-in-itself, as the German philosopher Immanuel Kant described it, as ineffable and undiscussable. Instead, we need to focus on our approach.

    We speak as though we could bring ourselves closer and closer to this truth. As when diving deeper into a dark abyss (the kind philosopher Friedrich Nietzsche might have warned us about), we can only see some of what is in front of us. And, until we can figure out how to interpret these mysteries in science, we must make sure our methods of speculation and of evaluating claims bring us as close to the objective truth as we can possibly get. Through each shift of a paradigm, we soak in new sources of light and adjust our vision accordingly. Only then can we understand the importance of our current paradigm in artificial intelligence. Through that, we can inch closer to the elusive objective truth as Kuhn envisioned.

    “Truth, it is said, consists in the agreement of cognition with its object. In consequence of this mere nominal definition, my cognition, to count as true, is supposed to agree with its object. Now I can compare the object with my cognition, however, only by cognizing it. Hence my cognition is supposed to confirm itself, which is far short of being sufficient for truth.” – Immanuel Kant, Logic Lectures

    Unfortunately, I still find myself bogged down by the technical details and issues of my work in such a way that these philosophical insights come few and far between. Despite how skeptical and relativistic Kuhn may appear, I believe his methods hold insight into how research in biology and neuroscience is conducted at the NIH. As I dig into the methodology behind recent advances in bioinformatics, machine learning, and computational neuroscience, I become more attuned to the rhetoric and logic underlying the insights we derive. Though I stare into the abyss with wonder and fascination at how AI research has evolved since its inception, I don’t look back in anger, only with the solemn hope that we’re making the right decisions in our current paradigm.

    I sit at my laptop staring at a black box on a dimly lit screen. In it run all my programming commands and shell scripts for carrying out my analyses. I can imagine how my methods of carrying out analyses using scripts in Python form part of this bigger search for an objective truth. I sit back and speculate on the future of the field, and especially on the results for computational neuroscience. For Kuhn, Kant, and everyone else, I hope to continue this inquiry.

    October 30, 2018
    Philosophy, Science

  • A modern-day paradigm for computational approaches to psychiatric illness

    Source: https://www.youtube.com/watch?v=lQLsyf64xak

    My research in computational neuropsychiatric genomics on the zebrafish has led me to investigate what sort of methods and inquiries I could put forward using statistics and algebra. Though zebrafish are an inherently helpful model organism for psychiatric disease, I want to extend the nature of zebrafish research so that we can realize the full potential of psychiatric research no matter what species we are studying. The results of artificial intelligence in particular offer promising techniques that extend into biology and neuroscience. As scientists peek into architectures and algorithms like hierarchical filtering and supervised learning, they can create more detailed and elaborate explanations of biological phenomena. Open-source platforms in particular need to establish a framework of fundamental principles by which scientists can draw conclusions on the nature of psychiatric disease itself while accounting for the limits of experimental observation. In this post, we’ll look at some of the latest findings in computational neuroscience as they relate to the questions we’d like to answer.

    As the field of neuroscience reaps the benefits of big data and advances in next-generation sequencing, scholars have raised issues posed by the questions and challenges they wish to address. Modeling data is a crucial step for scientists to understand the nature of disease and neuroscientific phenomena. At present, researchers in psychiatry must often make do with models formulated in prose that rest on a handful of available empirical findings. But psychiatry and neuroscience can borrow principles and techniques from physics and mathematics, particularly those that seek precise and explicit methods of making predictions. The precision used in characterizing gravitational waves and in discovering the Higgs boson offers exceptional examples which show promise for extending these practices into neuroscience. Still, the available models in computational neuroscience and neuropsychiatric genomics remain difficult to falsify and more descriptive than predictive. We see a coming together of disciplines, but not exactly the promising results we had anticipated. For this reason, scientists must develop precise models drawing from modern neuroimaging techniques and mathematics.

    On the bright side, neuroimaging has shown how complex, interconnected brain changes characterize psychiatric illnesses. This has helped researchers select treatment targets for new therapies and form predictions of genetic risk for patients (Mayberg et al., 2005). Many models, however, still lack the appropriate mathematics to justify their use. Further statistical checks, such as cross-validation, would be necessary to justify generalizing a linear model from a set of training and test data. Computational researchers have put forward efforts, though, as shown in the issue of Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. The issue showcases techniques from mathematics, physics, and engineering used in creating more rigorous models of psychiatric disorders. The articles in the issue share the premise that brain connectivity is the cause of psychiatric disorders, and from this scientists are able to borrow from graph theory, network theory, geometry, and even topology in describing networks and their functions. The tools are certainly available to researchers, but putting them into context and using them appropriately remains to some degree elusive. Network science (such as the work of Morgan et al., 2018) has in recent months shown effective results in studying autism, ADHD, and psychosis. Borrowing techniques from one discipline for another requires constant validation against the empirical evidence surrounding psychiatric illness.
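    The cross-validation check argued for above can be sketched in a few lines: fit a simple linear model on part of the data, score it on the held-out part, and rotate the folds. The data here are synthetic, and the one-dimensional least-squares fit is a stand-in for whatever linear model a study actually uses.

```python
# A sketch of k-fold cross-validation for a simple linear model.
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b in one dimension."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def cross_val_mse(xs, ys, k=5):
    """Mean squared error on held-out folds, averaged over k folds."""
    errors = []
    for fold in range(k):
        pairs = list(enumerate(zip(xs, ys)))
        train = [(x, y) for i, (x, y) in pairs if i % k != fold]
        held = [(x, y) for i, (x, y) in pairs if i % k == fold]
        a, b = fit_line([x for x, _ in train], [y for _, y in train])
        errors.extend((y - (a * x + b)) ** 2 for x, y in held)
    return sum(errors) / len(errors)

# Synthetic data: y is roughly 2x + 1 with a small deterministic wobble
# standing in for measurement noise.
xs = list(range(20))
ys = [2 * x + 1 + ((-1) ** x) * 0.5 for x in xs]
print(cross_val_mse(xs, ys))
```

    A held-out error that stays small across folds is the kind of evidence the text says many published models still lack; a large or fold-dependent error would be a warning that the linear model does not generalize.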

    Other advances (such as that of Scholtens et al., 2018) search for unifying principles that can explain many (if not all) data sets. These approaches account for the differences among varying data sets so that the equations and algorithms of network theory can be generalized across problems. Scientists should focus on these fundamentally driven, elegant solutions to uncover the multiscale complexity of brain function. This method of drawing simplicity from complexity, and of moving between different levels of function and organization, is what would allow neuroscience and neurobiology to actualize the full potential of their disciplines, much the same way mathematics and physics do.

    Moving from descriptive to predictive models is also a bottleneck, and some researchers (like Janssen et al., 2018) have laid the foundation for using machine learning techniques to make these predictions. The complexity and sheer volume of neuroimaging data can be simplified (within appropriate computational and physical limits) and used to predict illness outcomes. The authors do warn, though, that overestimation of prediction performance is possible when a model overfits its training data. These machine learning methods still need to account for the multidimensional nature of large data sets and produce realistic measurements of performance.
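    The overestimation the authors warn about is easy to demonstrate with a toy model. A 1-nearest-neighbor classifier effectively memorizes its training data, so scoring it on that same data always reports perfect accuracy, while held-out data reveals the real performance. All data below are synthetic and purely illustrative.

```python
# Toy illustration: training accuracy overestimates real performance
# for a model that memorizes its data (1-nearest-neighbor, 1-D features).
def nearest_label(x, train):
    """Label of the training point whose feature is closest to x."""
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

def accuracy(data, train):
    """Fraction of (feature, label) pairs the 1-NN rule gets right."""
    return sum(nearest_label(x, train) == y for x, y in data) / len(data)

# Class 0 clusters near 0, class 1 near 10, plus overlapping points near 5.
train = [(0.1, 0), (0.5, 0), (9.5, 1), (10.2, 1), (5.1, 0), (4.9, 1)]
held_out = [(0.3, 0), (9.9, 1), (5.0, 0), (5.2, 1), (4.8, 0)]

print(accuracy(train, train))     # 1.0: every training point is its own neighbor
print(accuracy(held_out, train))  # lower: the honest estimate
```

    This is why the cross-validated, held-out numbers, not the in-sample ones, are the figures a predictive psychiatry model should report.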

    Researchers need to engage in further work in determining the soundness and benefits of these models, especially from basic science principles to the clinical setting, to yield the true potential of computational neuroscience. For the complexity of psychiatry and biology to meet the same certainty and precision of mathematics and physics requires constant evaluation of scientific techniques. While the work done in recent years is a good start, there’s much, much more to be discovered.

    Sources
    Janssen, R.J., Mourão-Miranda, J., and Schnack, H.G. Making individual prognoses in psychiatry using neuroimaging and machine learning. Biol Psychiatry Cogn Neurosci Neuroimaging. 2018; 3: 798–808

    Mayberg, H.S., Lozano, A.M., Voon, V., McNeely, H.E., Seminowicz, D., Hamani, C. et al. Deep brain stimulation for treatment-resistant depression. Neuron. 2005; 45: 651–660

    Morgan, S.E., White, S.R., Bullmore, E.T., and Vértes, P.E. A network neuroscience approach to typical and atypical brain development. Biol Psychiatry Cogn Neurosci Neuroimaging. 2018; 3: 754–766

    Scholtens, L.H. and van den Heuvel, M.P. Multimodal connectomics in psychiatry: Bridging scales from micro to macro. Biol Psychiatry Cogn Neurosci Neuroimaging. 2018; 3: 767–776

    October 25, 2018
    Medicine, Science

  • Getting the best of both worlds with data journalism

    Read this article in the Catalyst here…
    October 25, 2018
    Science

  • Under Your Skin: Molecules and Cells for Touch and Pain

    After the Bath, Woman Drying Herself
    Read this article in the Catalyst here…
    October 25, 2018
    Science

  • How to maintain intellectual character in an era of fake news

    “Truth Coming Out of Her Well” – Jean-Léon Gérôme (1896)

    To speak of our own intellectual character requires a tremendous amount of humility and generosity. Broadly speaking, intellectual character is about embracing truth, criticism, and ideas in a way that’s justified, fair, and ethical. It’s easy for scholars and students to treat their own intellects in arrogant, audacious ways that only serve selfish desires. In today’s era of fake news and post-truth, we find abundant examples of people trying to win arguments out of sheer pride and vengeance, or spreading misleading or false information to make themselves appear better. To determine how these intellectual conflicts and conversations reflect our moral character, we must understand what intellectual character is and how to maintain it in today’s society.

    When we interpret information and draw conclusions from it, at the background of all the judgements we make is our intellectual character. If I choose to spread false information, does that make me a liar? Or, perhaps more subtly, if I choose to misrepresent information, what does that say about me? At what point do our accomplishments define who we are, and where do we draw the line about what determines our character?

    Many people would rather live in ignorance of these truths and ideals. It’s much easier to call ourselves paragons of ideas and criticism without putting ourselves through the struggle of living up to them. Ask anyone who engages in serious reflection on their own life and on their ability to act in a manner that reflects the ideals they hold. As courageous as it is for an individual to hold themselves to moral values, it’s far more tempting to shun moral rules and guidelines and put our selfish goals first instead.

    Fake news is at least partly created by the tactics used to distort facts and spread deliberately misleading information. The way we may choose to highlight certain ideas that suit our own agendas instead of representing them as they truly are introduces our subjective biases which twist our thoughts and ideas. In this sense, the transmission of fake news reflects an individual’s ability to behave as a moral human being and account for the responsibility they have as a person. However, it’s this very connection between discourse and moral character which fuels the fake news tactics to begin with.

    On a psychological level, we find ourselves gauging the effectiveness of rhetoric rather than its morality or justification. The “winner” of an argument is deemed to be the person who makes the other look worse, instead of allowing a detachment between thoughts and the humans who hold them. The dimension of these actions is psychological: we would naturally rather invest in ideological self-image and appearance than in a deep, careful understanding of ourselves. The tactics employed become a self-defense mechanism for shunning ideas we find uncomfortable or dangerous to hear. Instead of creating appropriate justifications for difficult-to-digest and contradictory ideas, we engage in delusional habits that cause us to pretend. Fake news emerges from this. It comes in the guise of truth, yet acts counter to everything we hold true and sacred about intellect itself.

    In our efforts to get claims right (in a way that is truthful, justified, and honest), we have reasons that are instrumental and inherent. We may want statisticians to be as accurate as possible in predicting election outcomes for the sake of a clear idea of the future, but we also may want authors to share a story true to their intentions and purposes so we may find aesthetic pleasure in its value. These ways of getting things right govern our behavior in the fake news era, and the way we construct priorities and moral rules shows how those meanings of “rightness” manifest themselves. On top of this, we care about who we are as people. For us to act as noble people, we should want to be noble people. But this puts us at odds with ourselves. We fail to recognize the negative, imperfect aspects of who we are, and many people will go to great lengths to avoid and ignore these dire issues. Any reasonable person would react negatively to the notion that they’re prejudiced or dishonest in what they’re saying. This paradoxical way of trying to be a good, moral human being while engaging in methods to ignore our darker sides is what reveals our true intellectual character. Being able to confront this confusion and represent truth no matter what becomes the most difficult moral endeavor for anyone.

    Some further questions to ponder:

    (1) Does intellectual virtue depend on our social environment? Or is it inherent within us?

    (2) When is it appropriate to make any sort of claim about the character of a human being with regards to the information they communicate?

    (3) How can we fight fake news at any level while respecting our own intellectual character?

    Let’s put our own character on the stand and cross-examine it. That way we can fight fake news in all its forms.

    October 25, 2018
    Education, Philosophy

  • How war shapes a country: a review of Nora Krug’s “Belonging”


    Pondering difficult questions of her own cultural background, German author Nora Krug asks what belonging is and what it means to her. To belong to the culture of the Germans responsible for the unspeakable atrocities of World War II meant challenging the very idea that she should belong to it. Though she was born decades after the fall of the Nazis, those actions cast a shadow on her life. Searching for answers, Krug’s graphic memoir wrestles with home and with her self.


    For a German civilian to recognize and understand the actions of Nazi Germany would shake anyone to the bone. Like a scientist studying her own brain through fMRI or a philosopher recounting his personal story of depression, Krug both detaches herself from who she is and becomes intimately close to it. It’s the delicate balance between self-criticism and appreciation that makes Krug’s story tricky and challenging. Krug approaches these issues and limitations by capturing the images, memories, and stories of Nazi Germany with an empathetic brushstroke. By invoking symbolic images and a subdued art style, Krug invites the reader to join her in asking the intense questions that history demands. “Are Jews evil?” “What is my home?” “Where do I belong?” “Who am I?” These questions accompany stylized pictures of people who appear both fundamentally flawed in their thinking and terrifyingly real. I find myself shocked, yet soothed that my reactions and perceptions are okay to experience.

    To manage and interpret these feelings of guilt and shame mixed with the pride any ordinary individual would hope to have, Krug’s interviews and anecdotes account for the abhorrently evil actions that shaped the past. To be a German is to understand the notion of Heimat, the German word for the place that forms us. As humans, we have a responsibility to society and humanity in general, which means we must account for our decisions. For Germans, this means a humble, gentle remembrance of what mankind is capable of and a reckoning with what that means for the future. Other aspects of the memoir, such as the tender pacing between panels and scenes, allow the reader to become truly close to Krug’s thoughts. The shock and sorrow the reader experiences parallel the shared responsibility Germans have for recollecting and understanding the meaning of their past. Contrasting realistic photographs with comical, nearly bizarre human faces, Krug almost invokes a dark sense of humor: the humor with which one may confront one’s own dark history in order to fully move on and recover as a nation.

    Does war ever leave a country? Or does it plague mankind forever? A German may worry that patriotism might read as a reminiscent eulogy for the days of Nazi Germany. Despite the end of the war and the dismantling of Nazi Germany, the humans of today continue to struggle to understand their purpose and meaning in life. One might even argue that the journey of looking for meaning is much more important than the destination itself. As in the Myth of Sisyphus, we imagine ourselves content in grappling with questions of existence despite never having completely satisfying answers. Nora sets out to find the truth about what her family did, in what seems like a way to absolve her guilt. There’s no deus ex machina or dramatic catharsis of guilt and tragedy. Krug only wants answers. She wants to know what happened even if it doesn’t make her feel better. Putting this paramount truth above all else gives her a far more objective and sublime look at her own past. I hope readers can pick up the book and wonder what their own pasts mean for them.

    October 25, 2018
    Education
