Heuristic

  • Here’s looking at UK, from scientists across the pond

    When I heard the news, I couldn’t believe it. It seemed like something out of a populist fantasy of protecting “the homeland,” or an electorate overthrowing imperial control in an ancient Athenian democracy. With the Brexit referendum, the UK voted to leave the EU, and my worldview was shaken again. Overcome with uncertainty and worry, different groups of people voiced their concerns about the UK leaving. It felt surreal and depressing, and it left people anxious across the globe. Let’s hope the science will stay ablaze.

    Scientists, one of the groups most opposed to Brexit, face problems with funding and with research on international projects now that the UK is leaving the EU. International students at UK universities could face larger hurdles in gaining admission, as EU nationals make up as many as 1 in 5 students at some universities, Times Higher Education reports; Theresa May has said their future is uncertain. Nature, one of the most respected journals, meanwhile reported, falsely, that UK scientists would be put at the “back of the queue” for the International Thermonuclear Experimental Reactor (ITER). Still, big names like Stephen Hawking warned, “we’ve become reliant on EU funding. We get back a little more than we put in, and associated status will need to address this. But the other thing we need to do, and what UK academia needs to do, is get much better at lobbying government.”

    I sincerely hope the science community continues the way it should. While there’s a lot of worrying that can be done, there’s much more science to do. I’ll keep listening to the Sex Pistols and Stone Roses while hoping for the best for the future.

    July 4, 2016
    Science

  • Ethics in cruise control: self-driving dilemmas

    “Nietzsche, take the wheel.”

    Self-driving cars may take us wherever we want to go, but they won’t know where unless we tell them. And with the first death of a driver using autopilot in a self-driving car, we find ourselves facing the same old questions we’ve always had. What should an autonomous car do when human life is at stake?

    The ethical dilemma of whether a car should swerve out of the way to kill one person in order to save others is a form of the trolley problem. Now, engineers, scientists, and policymakers ask the big questions philosophers have debated for centuries.

    The question is driven by our ethical concerns for the general public. We make these decisions with regard to the effects they have on our society as a whole. Self-interest usually takes the backseat, and our idealistic utopias emerge. We’d love to live in a world in which a car can simply calculate the risks and benefits of various decisions to determine which action to take. And, in this sense, the decisions of self-driving cars aren’t too different from our current methods of engineering. When we make airbags, we design them to save as many people as possible while taking into account the costs of manufacturing.
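    The cost-benefit calculation described above can be sketched as a tiny decision rule: among candidate maneuvers, pick the one with the fewest expected deaths. This is only an illustration; the maneuver names and casualty counts are hypothetical, and a real system would weigh far more than a single number.

```python
def choose_maneuver(options):
    """Utilitarian rule: pick the maneuver with the fewest expected deaths.

    options: dict mapping a maneuver name to its expected death count.
    """
    return min(options, key=options.get)

# Hypothetical scenario: staying the course hits a crowd, swerving hits one person.
scenario = {"stay_course": 5, "swerve": 1}
print(choose_maneuver(scenario))  # -> swerve
```

    Note that the rule is trivial once the counts exist; the hard part, as the study below shows, is whether anyone accepts its verdicts.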

    The experimental ethics approach of Jean-François Bonnefon, a researcher at the Toulouse School of Economics, probes these big questions for answers. By surveying the public with experiments that cover various self-driving-car situations, we can get a general idea of how people view them. The researchers ask the public how they would want their cars to act in different scenarios. These include the general case of swerving away from a crowd in order to hit one person, but also more specific variations, such as picturing yourself, rather than someone else, in the self-driving car. Designers of self-driving cars could then program their cars with the public’s opinions in mind.

    It seems reasonable and straightforward. Cars would perform the actions that result in the fewest deaths. But let’s not let the idealism of utilitarian motives get the best of us. The same study showed that, though people preferred cars that drive this way, they wouldn’t want to buy or drive such cars themselves. No one wants to be the driver of the car that decides to swerve and kill a single person in order to save the lives of a greater number of people. The findings illustrate the conclusion quite well. When people have an overall goal for the public in mind, that is, protecting the lives of as many people as possible, they agree it should be pursued. But no individual finds it in their own self-interest to act on it.

    What this means is we need a greater social change in our understanding of ethics before we can put our own solutions into action. Some sort of a collective understanding of individual decisions being part of a bigger picture would lessen the burden on the single consumer in buying a self-driving car. The experimental ethics work should also encompass what the outcomes of those automatic driving decisions are.

    Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576. arXiv: 1510.03346v2

    July 2, 2016
    Philosophy, Science

  • Physics-based biology and blindness to the abstract

    Biologists can have an uneasy relationship with math and theory. The field prioritizes pushing forward the conventional status quo at the expense of abstract theorizing and risky experiments. Bill Bialek, theoretical physicist at Princeton University, said, “we cannot expect that the biology community itself will create a genuinely receptive audience for theory.”

    I’m criticizing the culture of biologists, but, in all fairness, other scientists suffer from their own issues as well; physicists, for instance, fear complexity and ambiguity in their work. But I can testify that the issues biologists suffer from harm research, and that they can be solved with a greater appreciation for the abstract.

    From the first biology course I took in high school, I was bored. Everything was about memorizing vocabulary and information, and it was easy to understand why I quickly ran to physics. Even when tests supposedly focused on “application” instead of “memorization,” the magic was never there: the rigor of the reasoning never rose above superficial knowledge of facts. Even when I entered university, the inquiry, creativity, logic, and other forms of critical thought were gone. Instead, the biology curriculum was always about knowing information and regurgitating it on exams. The consequence is that we’re setting ourselves up for siloed cultures, risk-averse research, and regressive thought.

    Physics, on the other hand, gave me what I wanted. In my physics courses, I could follow how problems made sense from question to answer without worrying about memorizing too much. I could use mathematical equations to explain phenomena, rather than accepting them for the way they were. Thankfully, my upper-level biology course (AP Biology) shifted the focus away from memorization and towards more effective ways of learning, but physics always had a charm that other disciplines lacked. As Bialek explains, a theoretical physicist can sit with a pen, paper, and a computer and perform as much research as he or she wants. An experimental biologist needs all the equipment of a lab, and must meet stringent requirements for conducting research, to get work done. These are generalizations that don’t account for the nuances of individual cases in both physics and biology, but the cultural distinction stands. Physics allows for the exploration and creativity that biology needs to grasp. Physics research builds and re-builds upon theoretical premises much more easily than biology does.

    I still find biology beautiful, with some areas even more interesting than physics. For me, research on evolution and genetics is far more interesting than the high-energy particle collisions at CERN or work on solid-state materials. And the utility of biological research, from medicine to the social sciences, is often viewed as more valuable than that of many areas of physics.

    I don’t believe the field of biology is broken beyond repair, nor that it is in some sort of crisis. It’s more an issue holding the discipline back that should be worked against. We need a universalist ethos that allows for greater cross-cultural transmission between biology and physics. Interdisciplinary fields such as bioinformatics and biophysics are a step in the right direction for both fields, but a more fundamental change in the way biologists approach problems is necessary. Biologists can think through problems the same way physicists do. This could mean using statistical techniques from thermodynamics to simulate genetic evolution, or looking at analogues between elegant physics equations and nature’s tendency for simplicity. Maybe physics can learn from biology as well, if physicists embrace complexity in their work.

    With greater reverence for creativity and the abstract in new ways of looking at biology, scientists can be encouraged to take more risks. And those risks will yield greater results, quelling the fear that risk-taking is harmful to science. And maybe I can enjoy my biology classes a bit more.

    June 27, 2016
    Science

  • Who cares about the ill? The moral grounds of mental health

    Philosophers like to argue about our values. We can’t simply stop at empathizing with the different beliefs and conditions of other people. We must know where those values come from in order to address the thorny ethical dilemmas that plague our lives. Dr. Agnieszka Jaworska at UC Riverside delineates various forms of moral standing through which humans help each other. And, when it comes to mental health, this sort of understanding of moral standing might be just what we need.

    When we ask why we care about the people we value, the answer might appear obvious. If we value something, surely we care for it on account of that value. But caring is complicated. We often care about people out of social ties, and we care about our own selves in different ways. These grounds can be emotional, like our abilities to desire, or more reason-based, such as our abilities to determine what actions and behavior we can perform. We say we should save the life of a human being instead of, for example, the life of a chicken, based on the human being’s ability to reason.

    A grasp of our moral standing would aid our treatment of the mentally ill. A patient with late Alzheimer’s disease might be left with only bodily needs, such as food and water. Babies and even unborn children may be subject to speculation about responsibility, autonomy, integrity, and other factors depending on their physiological structures. And caring can cover the emotional aspects we normally associate with the action. As Jaworska explains, the internal effects of caring on human behavior can’t be ignored. The way we care about things gives us desires and attitudes that we don’t simply experience for a moment or two, but absolutely “own” as part of ourselves. This act of owning an attitude gives rise to the capacities of caring. And Jaworska argues that our capacities for emotion are actually sufficient grounds for the cognitive and reflective capacities of caring. Others follow Kant and claim that, instead of an emotional ground, the capacity for caring comes from a reason-based ability to form decisions. From these capacities, we can talk about those who suffer from disease, especially mental illness, in the right terms.

    This means our understanding of medicine and medical education needs this moral grounding. More power to the fields of philosophy and the rest of the humanities.

    June 22, 2016
    Medicine, Philosophy

  • A genetic moral code


    We may be making progress in understanding the genetic code, but how much of our moral code is under the same scrutiny?

    The scientific community has been at it for decades. Talk of the potential for CRISPR-Cas9 to genetically modify organisms, for better or for worse, has infected our thoughts and discourse almost like a virus. It’s even gotten to the point where I’m somewhat tired of my newsfeed blowing up with claims that genetic engineering is a huge ethical problem, with very little thought put into actually developing solutions for it.

    In this way, genetic enhancement may be seen as an extension of our current methods of breeding for specific traits. It should be viewed with greater regulation, though, given the moral costs that come along with it.

    It’s better to establish a moral code and determine what problems might arise from it. This means that, however we carry out regulations, they should adhere to central ideas and principles that can be enforced and understood. That way, those problems can be addressed in the future with structure and clarity in the way humans carry out actions. We shouldn’t behave in a certain way merely because it produces the best outcome, nor because it might appear to be the safest. Protecting fundamental ideals gives us something to hold onto through precarious and changing innovations.

    To build some sort of system of rules, we must analyze the pieces of knowledge and oft-repeated statements in our discourse on genetic engineering.

    Some have considered comparing genetic engineering to natural selection. It might seem reasonable to think that genetic engineering is an extension of, or in a similar vein as, the natural selection we observe in nature, and can therefore be viewed with less suspicion and fear. But evolution’s dissimilarities with genetic engineering, notably how serendipitous and amoral the former is, show that this comparison doesn’t hold much weight. Darwin himself wrote, “What a book a Devil’s chaplain might write on the clumsy, wasteful, blundering low & horridly cruel works of nature!” An approach that views genetic engineering the same way natural selection works would fail to grasp the implications and power of our artificial tools.
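    The contrast can be made concrete with a toy simulation (purely illustrative; the target string stands in for a trait, and nothing here models real genetics): blind mutation plus selection stumbles toward a goal over many accidental steps, while an engineer simply writes the goal in directly.

```python
import random

TARGET = "GATTACA"   # stand-in for a desired trait
ALPHABET = "ACGT"

def fitness(genome):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(genome, TARGET))

def natural_selection(genome):
    """Blind search: random point mutations, kept only if they don't hurt."""
    steps = 0
    while genome != TARGET:
        i = random.randrange(len(TARGET))
        mutant = genome[:i] + random.choice(ALPHABET) + genome[i + 1:]
        if fitness(mutant) >= fitness(genome):
            genome = mutant
        steps += 1
    return steps

def genetic_engineering(genome):
    """Intentional edit: one deliberate step straight to the target."""
    return 1

random.seed(0)
print(natural_selection("AAAAAAA"), "blind steps vs",
      genetic_engineering("AAAAAAA"), "engineered step")
```

    The point of the sketch is only that the two processes differ in kind, not degree: one is aimless and wasteful, the other deliberate, which is why the analogy gives false comfort.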

    Others have suggested that ethics and scientific progress are in a race with one another. I often hear people say, “Science has so much potential that humanity hasn’t caught up to it.” But this metaphor breaks down considering science and ethics aren’t at odds with one another. Science lacks a moral direction and makes no humanistic statement outside of our own interpretations of scientific knowledge. It’s also a scientistic statement that implies we now face new moral questions, when the moral questions we are asking are the same ones we’ve been asking since the beginning of mankind.

    With these thoughts in mind, such a moral code should be grounded in what philosophy and the humanistic disciplines deem necessary and beneficial to society, as opposed to what science has shown to be beneficial. To avoid the pitfalls of these metaphors and comparisons, we need to understand the moral codes of genetic engineering and enhancement in terms established through critical thought and speculation. Science will be useful in the future, through policy-based models of social research and data-driven theories, but, as of now, we need a firm foundation before we can get there.

    May 21, 2016
    Medicine, Philosophy, Science

  • Mario Chase: a simple game of cat and mouse

    My friend Andrew made a neat little simulator for playing a game of Tag, or, in this case, Mario Chase. Check it out here!
    May 21, 2016
    Science

  • Mind-reading: understanding how the brain learns

    A rendering from da Vinci’s sketch.

    We like to break large problems into smaller ones and look at the whole picture as a sum of its parts. The pieces of a jigsaw puzzle form an image; different letters come together to form words. But sometimes the larger picture isn’t just a zero-sum game: the whole can be greater than the sum of its constituents, or a chain only as strong as its weakest link. However you look at the bigger picture, there are different ways to find a bigger thing among smaller parts.

    The neural networks of the brain are one of those bigger things. Researchers at Google DeepMind have recently created a neural programmer-interpreter (NPI), a type of compositional neural network that uses small and large networks at various scales of complexity to store memory and execute functions. The NPI can train itself to perform tasks on small sets of elements and then generalize to much larger sets with remarkable results. An example would be rotating an entire image to a specific orientation by analyzing the locations of a small set of its pixels. The NPI can run algorithms for sorting, addition, and trajectory planning across different types of neural networks with significant accuracy, and it performs these tasks through long short-term memory networks.

    In a simple example of an LSTM, different tasks (shown with numbers connected to letters) can be performed and concatenated with one another to create a network. These networks are often repetitive and can vary in the ways memory is transferred between each step. 

    In what may initially sound like an oxymoron, long short-term memory (LSTM) networks are built from loops that recur over long sequences. Each loop forms one step of the overall network and passes memory along to the next. In software, these networks have been used for many practical purposes, including speech recognition, translation, and language understanding.
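    The loop-with-memory idea can be sketched as a single LSTM cell step in plain NumPy. These are the standard textbook LSTM equations, not DeepMind’s actual NPI code, and the sizes and random weights below are made up: gates decide what to forget, what to write into the cell, and what to expose, which is how information can survive across long sequences.

```python
import numpy as np

def lstm_step(x, h, c, W, b):
    """One LSTM timestep. x: input, h: hidden state, c: cell (memory) state.

    W has shape (4*n, n + len(x)); b has shape (4*n,), where n = h.size.
    """
    n = h.size
    z = W @ np.concatenate([x, h]) + b
    f = 1 / (1 + np.exp(-z[:n]))        # forget gate: what old memory to keep
    i = 1 / (1 + np.exp(-z[n:2*n]))     # input gate: what new info to write
    o = 1 / (1 + np.exp(-z[2*n:3*n]))   # output gate: what to expose
    g = np.tanh(z[3*n:])                # candidate cell update
    c_new = f * c + i * g               # mix retained memory with new content
    h_new = o * np.tanh(c_new)          # gated view of the cell state
    return h_new, c_new

rng = np.random.default_rng(0)
n, m = 4, 3                             # hidden size, input size (arbitrary)
W = rng.standard_normal((4 * n, n + m)) * 0.1
b = np.zeros(4 * n)
h = c = np.zeros(n)
for t in range(5):                      # unroll over a short sequence
    h, c = lstm_step(rng.standard_normal(m), h, c, W, b)
print(h.shape)  # -> (4,)
```

    The loop at the bottom is the “long sequence” part: the same cell is applied step after step, with `h` and `c` carrying memory forward.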

    What does this all mean for learning? It simply means more efficiency in how computers can process new information. While we can use this to make cooler and better computers, there are still many barriers to understanding how the human mind works with such a technology. We talk a lot about how the brain is like a computer that can be programmed for various functions and tasks, the same way we tell our laptops and phones to update a Facebook status or send a text. But the mind might not be so easily “computerized” after all. Before we can dive into the implications NPIs or LSTMs have for our research on the brain, we need to understand how well we can even model the brain as a computer to begin with.

    Frances Egan, a philosophy professor at Rutgers University, gave a talk at IU a few weeks ago on the computational theory of mind. In philosophy, this theory treats the mental processes of the mind as computational processes: a physical system computes just in case it implements a well-defined function. It’s a mechanical account of thought that can be modeled, and it’s physically realizable as well. We can model or simulate a thought, a network, or anything similar if we know the science behind it. It’s an approach that abstracts away from physical detail and can map physical states (or whatever state the neural network is in) onto something mathematical.

    Computational models seem like common sense. When you’re navigating your house at night with the lights off, you probably have to rely on an inner sense to determine where you are. You may be a couple of feet from your dinner table or a few steps behind the television. In any case, you would probably “add” and “subtract” these distances relative to one another to figure out where you are in your house. Similarly, vector addition over the underlying neural networks can inform our theories of the mind. An LSTM or NPI could use such a process in mapping out networks of the mind and working from there. Through this theory, we can make predictions about future states and explain much of our mind.
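    The “adding and subtracting distances” intuition is just vector addition, exactly the kind of well-defined function the computational theory says a physical system could implement. The household displacements below are made up for illustration:

```python
import numpy as np

# Dead reckoning in the dark: sum displacement vectors to track position.
steps = np.array([
    [2.0, 0.0],   # two feet forward, away from the dinner table
    [0.0, -3.0],  # three feet to the right
    [-1.0, 0.0],  # one foot back
])
position = steps.sum(axis=0)  # net displacement from the starting point
print(position)  # -> [ 1. -3.]
```

    Whether neurons literally compute this sum or merely behave as if they do is the open question the theory leaves us with.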

    Whatever the case may be, science and philosophy will always be at odds with one another in the unraveling of the mind, the brain, and everything in between.

    May 18, 2016
    Philosophy, Science

  • A Sisyphean nightmare


    The following is an excerpt from a piece of writing I’ve been working on…



    Though I wasn’t able to sleep, I did fall into phases of unconscious semi-sleep on the barren concrete floor for times of half an hour every now and then. My dreams were only thin white noise that blocked the music-induced trance that had filled my bus ride. The buzzes and scratches of the melodies returned in ominously steady beeps. The music paraded fear and darkness like a Latin-infused tango that searched for a partner. The skies sang the words along with the rhythm in the language of synthesized melodies. In some way, too, it was all in my head. I had put myself through the song and dance, and my mind only tried to cope with it as best as it could. As I looked above, there were tints of rose and violet that danced around in the night sky. I would frequently see the dull images of prisoners stomped down by the world of security.

    My dreams were a colorfully vivid atmosphere of coal and iron gray that only complemented the dangerous brown shade of my urine when I would use the toilet. I would frequently awaken due to my aching bones and churning stomach. Though I could never get sleep, I would still feel visions in my semi-sleep. On occasion, I would feel brief visions of bright hues, orange, lime, and magenta skies that were the only glimpse into the aesthetic realm that my subconscious would grant me. In the skies flashed pictures of faces, distorted laughing smiles and grins that shivered my spine. I looked left and right. There was nothing. Only shadows that took the form of demons that cackled with the wind. Their words smeared and washed the skies. The muddy, grimy consciousness engineered a landscape before me. These demons flashed with bright white and red, the palette-colors of this dark world. The laughter greyed into song and dance that left me tickled and deranged. I couldn’t make out a word of what the voices were saying, nor could I understand such a language and mind so disturbed. The only thing on me were shackles that kept my arms and legs to the ground.

    When I realized I couldn’t move, I looked straight ahead, like a deer caught in headlights, into the blackness of the night, only to find that a metal mouthpiece strapped to my head kept me from making a sound. My body was completely black as well. I could feel my t-shirt and pants pressed against my body, but I couldn’t see a thing on or near me. Only the skies above my head flashed and shook with euphoric colors.

    My shackles contained the fear that I had built inside of me my whole life. The skies would flash in and out of the shades while I stood at the foot of desolate, decrepit mountains. They taunted my soul as I struggled up the hills, only to fall upon myself in Sisyphean fashion. The buzzing scratches of 8-bit melodies and robotic synthesized drums that resonated in my memory told me I was trapped in this world like a rat at the hands of a game-master, as though I were only being simulated in a world controlled by another being. Except in this world I could understand what was real and what wasn’t. I couldn’t feel a consciousness that could give me the grounds of control, responsibility, deliberation, emotion, or anything else that might have awoken me from a horrible nightmare.

    Time passed by like eternity, and the melodies clicked and clanked like clockwork. I could feel no empathy, sorrow, nor pain. Man was dead. All that was left was the machine. 

    May 2, 2016
    Philosophy

  • A story of serendipity

    Leonid Afremov, “Farewell to Anger”

    Read this article in the Indiana Daily Student here…

    May 2, 2016
    Education

  • Google’s stronghold on our health

    Google’s AI has fun with animal faces, but what about the company’s access to our bodies themselves?

    Just when you thought the tech giant couldn’t get any more powerful, Google is taking greater steps into the game of healthcare information.

    In the aftermath of the Panama Papers leak, journalists, policymakers, and everyone in between have voiced their concerns about who has access to what data. Many fear the potential danger of those in power having access to too much of it. Now it’s been revealed that Google’s artificial intelligence company DeepMind, which recently gained attention for defeating the world champion at the board game Go, has access to the healthcare data of over one million patients in the United Kingdom, the New Scientist reports. Unlike other reports of giant companies holding large amounts of information, Google’s hold on our shared data means it’s securing that information for artificial intelligence programs, something many people need to understand.

    My friends and I have been buzzing about the potential of deep learning. It’s easy to see how a robot can perform a task that comes down to calculation, such as defeating someone in a board game. My buddy Ji-Sung Kim, an undergraduate at Princeton University, developed deepjazz, software that composes jazz music. Composing music is a far more human activity than solving a mathematics problem, so developing an AI that can do so pushes the limits of what computers are capable of.

    With the new data-sharing agreement, Google has access to data on millions of patients. As if the monolith didn’t control enough of our lives already, we need to give new consideration to the rights of patients, physicians, and everyone else when it comes to healthcare information. Unlike, for example, collecting information on whom you follow on social media or how many memes you have, mining our healthcare data raises an issue about our personhood. Our health is part of who we are, so we must be much more careful and use revised notions of responsibility, liberty, autonomy, and other ideals. And unlike accumulating data for the purpose of zeroing in on terror threats, fighting disease and epidemics using big data seems much more grounded in a moral sensibility. It neither encourages xenophobia nor blockades free speech. Many people trust Google. One might feel more comfortable knowing their data is with Google rather than in the hands of a shady politician.

    Regardless of where the answer lies, let’s bring the issue under criticism and enjoy the computerized music.

    April 30, 2016
    Medicine, Science
