Tag: Science
-
Mario Chase: a simple game of cat and mouse
My friend Andrew made a neat little simulator for playing a game of Tag, or, in this case, Mario Chase. Check it out here!
-
Mind-reading: understanding how the brain learns

A rendering from da Vinci’s sketch.
We like to break down large problems into smaller ones and look at the whole picture as a sum of its parts. The pieces of a jigsaw puzzle form an image; different letters come together to form words. Sometimes the larger picture isn’t just the sum of its parts: the whole can be greater than its constituents, or a chain only as strong as its weakest link. However you look at the bigger picture, there are different ways to find a bigger thing among smaller parts.
The neural networks of the brain are one of those bigger things. Researchers at Google DeepMind have recently created a neural programmer-interpreter (NPI), a type of compositional neural network that combines smaller and larger networks at different scales of complexity to store memory and execute functions efficiently. The NPI can train on tasks over small sets of elements and generalize to much larger sets with remarkable results. For example, it can learn to rotate an entire image to a desired orientation by analyzing the locations of a small set of its pixels. The NPI can carry out sorting, addition, and trajectory planning with significant accuracy, and it performs these tasks through long short-term memory networks.
In what may initially sound like an oxymoron, long short-term memory (LSTM) networks process long sequences through recurrent loops. Each pass through the loop handles one step of the sequence while carrying forward an internal memory state. In software, these networks have been put to various practical uses, including speech recognition, translation, and language understanding.
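As a rough sketch of that looping idea (the weights, sizes, and variable names here are illustrative, not DeepMind’s actual model), a single LSTM step combines the new input with the previous short-term state through gates that decide what to forget, what to store in long-term memory, and what to output:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One pass through the LSTM loop: update the cell (long-term)
    and hidden (short-term) states from one input step."""
    z = W @ np.concatenate([x, h_prev]) + b
    n = h_prev.size
    f = sigmoid(z[:n])        # forget gate: what old memory to keep
    i = sigmoid(z[n:2*n])     # input gate: what new info to store
    o = sigmoid(z[2*n:3*n])   # output gate: what memory to reveal
    g = np.tanh(z[3*n:])      # candidate update to the memory cell
    c = f * c_prev + i * g    # new long-term cell state
    h = o * np.tanh(c)        # new short-term hidden state
    return h, c

# Toy dimensions and random weights, purely for demonstration.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):  # unroll the loop over a short sequence
    x = rng.standard_normal(n_in)
    h, c = lstm_step(x, h, c, W, b)
```

Each iteration of the final loop is one of the “steps” in the sequence: the same small cell is reused over and over, which is what lets the network handle sequences far longer than anything it saw during training.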
What does all this mean for learning? Simply put, more efficiency in how computers process new information. While we can use this to make cooler and better computers, there are still many barriers to understanding how the human mind works with such a technology. We talk a lot about how the brain is like a computer that can be programmed for various functions and tasks, the same way we tell our laptops and phones to update a Facebook status or send a text. But the mind might not be so easily “computerized” after all. Before we can dive into the implications NPIs or LSTMs have for our research on the brain, we need to understand how well we can even structure the brain as a computer to begin with.
Frances Egan, a philosophy professor at Rutgers University, gave a talk at IU a few weeks ago about the Computational Theory of Mind. In philosophy, this theory treats the mental processes of the mind as computational processes. A physical system computes just in case it implements a well-defined function. It’s a mechanical account of thought that can be modeled, and it’s physically realizable as well: we can model or simulate a thought, a network, or anything similar if we know the science behind it. It’s a very abstract approach to physical detail, one that can map physical states (or whatever state the neural network is in) onto something mathematical.
Computational models seem like common sense. When you’re navigating your house at night with the lights off, you have to rely on an inner sense to determine where you are. You may be a couple of feet from your dinner table or a few steps behind the television. In any case, you would probably “add” and “subtract” these distances relative to one another to figure out where you are in your house. Similarly, vector addition over the underlying neural networks can ground our theories of the mind. An LSTM or NPI can use such a process in mapping out networks of the mind, and from there we can make predictions about future states and explain a great deal of our mind through this theory.
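That dead-reckoning intuition can be sketched in a few lines; the step values below are made up purely for illustration:

```python
# Dead-reckoning in the dark: sum displacement vectors to
# estimate where you are relative to where you started.
steps = [
    (0.0, 2.0),   # two feet forward from the dinner table
    (-1.5, 0.0),  # shuffle left past a chair
    (0.0, 1.0),   # one more foot toward the hallway
]

x = y = 0.0
for dx, dy in steps:  # "add" each displacement as you move
    x += dx
    y += dy

print((x, y))  # estimated position relative to the start
```

The computational theory’s claim is that something like this running sum is what the brain is doing, whether or not it feels like arithmetic from the inside.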
Whatever the case may be, science and philosophy will always be at odds with one another in the unraveling of the mind, the brain, and everything in between.
-
Google’s stronghold on our health

Google’s AI has fun with animal faces, but what about the company’s access to our bodies themselves? Just when you thought the tech giant couldn’t get any more powerful, Google is taking greater steps into the game of healthcare information.
In the aftermath of the Panama Papers leak, journalists, policymakers, and everyone in between have voiced concerns about who has access to what data. Many fear the danger of letting those in power access too much of it. Now it’s been revealed that Google’s artificial intelligence company DeepMind, which recently gained attention for defeating the world champion at the board game Go, has access to the healthcare data of over one million patients in the United Kingdom, the New Scientist reports. Unlike other stories of giant companies amassing information, Google’s hold on this data means it’s securing that information for the purpose of artificial intelligence programs, something many people need to understand.
My friends and I have been buzzing about the potential of deep learning. It’s easy to see how a machine can win at a board game by calculation. My buddy Ji-Sung Kim, an undergraduate at Princeton University, developed deepjazz, software that composes jazz music. Composing music is a much more human activity than solving a mathematics problem, so developing an AI that can do it pushes the limits of what computers are capable of.
With the new data-sharing agreement, Google has access to the data of over a million patients. As if the monolith didn’t control enough of our lives already, we need to give new consideration to the rights of patients, physicians, and everyone else when it comes to healthcare information. Unlike, for example, collecting information on whom you follow on social media or how many memes you share, mining our healthcare data raises an issue about our personhood. Our health is part of who we are, so we must be much more careful and work with revised notions of responsibility, liberty, autonomy, and other ideals. And unlike accumulating data to zero in on terror threats, fighting disease and epidemics with big data seems much more grounded in a moral sensibility: it neither encourages xenophobia nor blockades free speech. Many people trust Google. One might feel more comfortable knowing their data is with the company rather than in the hands of a shady politician.
Regardless of where the answer lies, let’s bring the issue under criticism and enjoy the computerized music.
-
Chronic discrimination at the IU School of Medicine
-
Overhauling scientific research with teaching
-
The limits of artificial intelligence
-
Scientists are people too

The Zen of Python is a perfect illustration of aesthetics in science.
Fiske, S., & Dupree, C. (2014). Gaining trust as well as respect in communicating to motivated audiences about science topics. Proceedings of the National Academy of Sciences, 111(Supplement 4), 13593–13597. DOI: 10.1073/pnas.1317505111







