
Heuristic


  • Can’t stop the gods from genetic engineering

    The future is now. Genetic engineering, the modification of our biological genomes, has made tremendous strides over the past few years. This power could allow us to find cures for otherwise untreatable diseases. But, as in so many places where science and humanity intersect, we find ourselves in a tangle of ethical conundrums. Who gets to decide whether to significantly alter someone’s genetic offspring?

    A new technology, CRISPR (pronounced “krisper”), has seen so many successes over the past few years that, when a group of scientists in China began using it to experiment on human embryos last April, the world realized we had to stop and tell ourselves, “Wait a second! We’re not so sure this is the right thing to do.”

    When the International Summit on Human Gene Editing convened during the first week of this month, hundreds of professionals, from scientists to ethicists, rigorously debated the ethical issues posed by the newfound genetic engineering technology. After the conference, they posted their conclusions online. Those conclusions are mostly middle-ground strategies, such as allowing gene-editing research on human embryos that will never be carried through pregnancy, or allowing modification of germline cells only in cases of disease for which no other cure or solution is available for the health and safety of the offspring (along with an understanding of safety, risks, broad consensus, and other requirements).

    While the conclusions were general (as they were expected to be, since we are still taking baby steps in discussing the effects of CRISPR), hopefully we will see more detailed, nuanced approaches in the future.

    The summit also concluded that we need an international committee to detail what we can and can’t do when it comes to gene editing. In light of differing cultural norms around enforcing and overseeing CRISPR research, this would definitely be a step in the right direction.

    Most of the solutions seem safe and plausible, but why, in particular, should we need a broad social consensus before proceeding with applications of gene editing? Shouldn’t those decisions be made by a few experts, not by a general majority opinion? We generally look to values of democracy and majority rule when making decisions in political or social theory. And it makes sense on the surface that, if more people agree with a solution, it will be easier and more effective (in a utilitarian sense) to carry out. But why should scientific practice (which has risky yet pertinent health consequences) be decided the same way? The laws of science are never settled by majority opinion but, rather, by academic rigor. If we’re going to take an empirical perspective (weighing risks, benefits, outcomes, etc.) on gene editing, then we should support the decisions that are proven to be better, not the ones that the largest number of people agree with.

    All the while, those who cry for greater scrutiny and more opinions pose unnecessary restrictions that incorporate a larger number of opinions while only slowing down scientific research.

    Give the scientists what they need as soon as possible. Let’s hope we can understand this.

    December 24, 2015
    Science

  • Our fragile minds under surveillance

    Maybe Hippocrates would have envisioned greater security for his patients’ health information.

    When it comes to issues in mental health, the day-to-day problems of the mentally ill seem like they might be more than enough for anyone to handle. Mental illness is stigmatized, on the rise, and difficult to detect and cure; culture plays a role in it; and psychiatry still struggles as the most scientifically backward field of medicine. I’ve written about the privacy of mental health data, the role culture plays in mental health, the nature of disease, and a bit about our stigma of depression, but I’ve yet to tackle one particular mystery: the existential threat to our minds.

    Though every physician’s Hippocratic Oath includes a promise to respect privacy, it would have been difficult for Hippocrates to foresee a future in which we might have the power to know the very minute details of our mental behavior. Brain imaging technology and mental health records would allow anyone, from a sneaky politician to your future employer, to know you inside out. For a person suffering from mental illness, this can mean forfeiting the liberties and luxuries of your privacy. With recent “social experiments” from sites like Facebook and OkCupid to collect information about users, many of us were outraged by how we could be “used” in such a way. Maybe the CEOs behind those experiments were convinced of “Ockham’s Twitter” (among competing sources of information, the one with 140 characters or fewer should be selected), but many of us fear that we are one step away from a dystopian future of government surveillance. And how can anyone feel safe knowing their thoughts are being policed?

    In addition, though our scientific research on the brain can lead us to great understandings, we’re still far from knowing how our physical brains connect to areas of sociology, ethics, and philosophy. It’s hard to find a model of the mind that isn’t either completely dualist or completely reductionist. But that isn’t to say we aren’t making progress. And, with a greater understanding, we can finally answer the tricky questions neurotechnological research poses to us. The issues of who should have access to your mind, who can collect and share that data, under what conditions we might need to control it, and anything similar that makes us feel uncomfortable can finally be addressed. If we can bring our models from neuroscience, cognitive science, philosophy, and everything in between together, then we can get a better picture of who we are and provide for a better future. As neuroethicist Kathinka Evers says, “one of the proposed goals of human brain simulation is to increase our understanding of mental illnesses, and to ultimately simulate them in theory and possibly in silico, the aim being to understand them better and to develop improved therapies, in due course.” Is the black box finally being unveiled?

    Speaking of mental illness, our issues of mental health could provide insight into our cultural understanding of ourselves, as well. Under a re-branding of mental illness (as a cultural phenomenon that can still be treated, rather than a biological defect to be shunned), we put value in the way we think, not just as algorithmic organisms but as beings for whom thought is essential to how we find meaning in our lives. We want to know that what we’re doing has a purpose, a motive, value, or any other sort of meaning that can give us a reason to keep on living. We worry that our efforts aren’t worth it or that we truly have no control over our lives. These existential desires may manifest in mental illness in the form of anxiety, depression, or even PTSD. But we can also look at these existential desires as ways of searching for meaning and truth, and, in that case, maybe our mental issues could be seen the same way.

    “People are always selling the idea that people with mental illness are suffering. I think madness can be an escape. If things are not so good, you maybe want to imagine something better. In madness, I thought I was the most important person in the world.” – John Nash

    If we collect more information about ourselves, are we really more secure? It’s easy to feel insecure, anxious, distracted, or otherwise overloaded by the sheer amount of knowledge and information that we have. But let’s remember that we might actually just have shifted our focus to abstract sources of information (which still need to be verified, justified, and proven before we can really put them to any use), and we can’t let our rhetoric and discourse surrounding information put us at the mercy of information itself. We talk about becoming easily distracted by the 21st-century abundance of information, but similar struggles have been around for centuries, and, instead of accepting that we’ve become more estranged by it, we need to put ourselves above it. Seen this way, our worries about living in the Information Age are very similar to the existential crises we face over how much control we have over ourselves and who we really are.

    With how much progress we are making in understanding the brain, whether it’s under the security camera, the surgical knife, or even the pen of the textbook, we still need to re-evaluate our values before making any big decisions. We’ve seen ways previous initiatives of mental health data collection have failed, and the progress of understanding ourselves is tediously slow, but, in the past few decades, we’ve seen that we can re-emphasize the individual, autonomy, and self-constraint in systems of record-keeping. Transparency and willful control over what we reveal about ourselves might increase our trust, and both have gained popular support. And, with the growing number of ways we can understand ourselves, we need to re-think our privacy as a new sort of autonomy.

    Rather than the old notion of privacy as something entirely separate from what the general public could see, we need a new conception that can incorporate all the different ways our minds are being studied. We should look at the type, quality, and power of the information that anyone collects about our minds. A paternalistic society in which we are entirely controlled by the people above us does no good, neither for our individual mental health nor for society’s plans. And we can find new purposes and values in the setting of the 21st-century surveillance state. Maybe the promising results of the interdisciplinary nature of neuroscience will give us a new definition of identity, and we can feel safer knowing that we can explain who we are.

    December 23, 2015
    Medicine, Philosophy, Science

  • The monsters we fight (and the ones we save) in our simulated horrors

    When we talk about our moral behavior and epistemic access to knowledge, video games would be the last place anyone would expect to find serious discussion. Existentialism, ethics, and bad puns come together.

    While most games barely go beyond traditional tropes of scoring points, solving puzzles, and defeating bad guys, there are a few that really do something different. Philosophically, it seems like video games are nothing but consumption, giving us some sort of reward for work or action and offering a way to experience pleasure from artificially simulated worlds. It might seem as though they offer no insight into the human condition as far as aesthetics, ethics, or anything else is concerned. But, like novels, plays, and poetry, video games give us a simulated atmosphere for looking at our actions, thoughts, and values in a world we create. In video games, we understand why you might want to score points or defeat bad guys. A protagonist might save the world or rescue the princess, but why does he or she stop the bad guys and save the good guys?

    Undertale is a role-playing video game in which you play as a human child lost in a world of monsters. As you meet monsters on your way back home, you learn their deep history, including the war between monsters and humans, the struggle for survival and happiness, and, of course, the bad puns. But, in contrast to the cliché of heroes beating up enemies, it’s up to you to decide whether to kill the monsters or befriend them. The decisions you make influence what happens in the game (a mechanic sketched in code below). If you become a genocidal maniac, everyone fears you in an eerie, ominously wasted atmosphere. Your malevolence insidiously overcomes your soul as you pay less attention to the refined, small beauties of your surroundings and become self-obsessed with your own image and sense of worth. If, however, you progress peacefully and amiably, then you enjoy the colorful characters and vibrant narrative while saving both monsters and humanity through the power of love.
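
    For the programmers in the audience: that route-branching can be imagined as a tiny piece of game state. This is purely a hypothetical sketch in Python (Undertale’s actual implementation is surely more involved, and the function and counters here are my own invention), but it shows how a few counters let a world “remember” your morality:

    ```python
    # Hypothetical sketch of a route-tracking mechanic; not Undertale's real code.
    def determine_route(kills: int, encounters: int) -> str:
        """Pick a narrative route from the player's accumulated choices."""
        if kills == 0:
            return "pacifist"   # befriended everyone: colorful, hopeful ending
        if kills >= encounters:
            return "genocide"   # killed everything: eerie, wasted atmosphere
        return "neutral"        # a mix: the world remembers both mercy and harm

    print(determine_route(kills=0, encounters=20))   # pacifist
    print(determine_route(kills=20, encounters=20))  # genocide
    ```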

    The questions come up pretty quickly. If you choose to kill a monster who wants to kill you, then how do you justify your actions? Are you acting out of self-defense, or is there something else, like pleasure or power, at play? When you become friends with monsters, how do you know you can really trust them? Unlike, for example, gobbling a ghost in Pac-Man, Undertale not only forces you to ponder these questions for yourself; characters in the game directly want to know why you do things the way you do. As you play, you’ll find yourself developing relationships with the game’s lovable characters and questioning the reasons and values behind everything you do. Why are you doing what you do?

    Pictured: Mercy

    Before understanding the ethical premises of Undertale, we must understand how characters in the game actually have knowledge of who they are. After all, in a video game, characters are only computer-controlled algorithms that appear and disappear whenever the game requires. They have no rational choice or free will beyond what the game allows them. But do any of them know this? The epistemic purpose (and, later, the question of motive) comes from a character named Sans, a hoodie-sporting skeleton with powers of space-time manipulation and lame jokes. Sans, unlike most other characters, knows that the player can save and reset the game. And, from this knowledge that everything in his simulated world will only be restarted one day, he has adopted an existentially nihilistic view of the world. He doesn’t see much value in what most other characters do. He plays along with the other characters’ lives as though they were truly meaningful, and he realizes that it’s okay for the protagonist to kill some enemies because you’ve “got to do what you’ve got to do.”

    A video game character knowing he or she is in a game is surprisingly similar to the epistemic “brain in a vat” problem, which philosophers have debated in arguments for and against skepticism, solipsism, and theories of consciousness. Can we truly know anything if we cannot rule out the possibility that we’re all just brains in a vat, with everything we experience and perceive about the world being simulated? While it may seem like a purely theoretical question that doesn’t actually change anything about our everyday experiences, some philosophers argue the “brain in a vat” problem has significance for artificial intelligence and theories of our identity. Could we put individuals into the bodies of others? Daniel Dennett, for example, has argued that it is physically impossible for a brain in a vat to replicate what makes us a human being outside a vat. In his story “Where Am I?” he describes a “transplanted” human brain in a new body retaining its original personality, but with new physical characteristics.

    Someone like Sans would take the “brain in a vat” problem as a serious issue for how one should act towards others, and would understand there is truly no intrinsic purpose to life. The protagonist fights to survive and might kill monsters, and, at the end of the day, that’s all that will happen. Sans still embraces an objective ethical framework, as he shows mercy or cruelty to the protagonist based on their actions, but, because he knows everyone is only living in a computer simulation, he might as well be just a brain in a vat with no control over anything (his health, his memory, or anything else he values). We can explore how the knowledge of a simulated video game affects how characters should or shouldn’t behave.

    Knowing that there are characters in the game well aware of their situation in their universe, and that characters behave differently based on how you act towards them, it’s hard to imagine how there couldn’t be an objective ethical framework in place. Some characters might take a virtue-based approach: that you should do things that lead towards, or are in accordance with, the virtues. The characters exhibit emotions and passions like we do (leading to hilarious in-game quests such as dating skeletons, playing game shows, or flirting with planes), and the actions you choose to take influence what happens later on. These interactions allow the player to gain practical wisdom and, as an Aristotelian might say, would allow you to understand how to be a virtuous person. In a simulated video game, it might appear as though there can’t be any objective source of knowledge, but, as characters interact with one another, they still accumulate practical experience and wisdom of what has happened, and, from that, they can see what might happen. In this sense, a virtue-based ethics would appropriately describe the world of Undertale and, perhaps, our fictional realities in general.

    What about other ethical frameworks? Would a utilitarian framework help us understand how characters behave? If we take the utilitarian notion that we should do what maximizes happiness for the largest number of people, then the game makes it clear we should kill only when necessary (out of self-defense), and never “just because you can.” Killing unnecessarily causes characters to distrust you and show vices towards you, leading to less happiness for you and everyone else. It seems as though this explanation helps us understand the actions in Undertale, as well.

    But, given the characters’ epistemic limits within a video game, any ethical framework has its issues. How could characters have knowledge of their own future against which to compare what happens? They have no free will, no motives, no reasons for doing anything other than the fact that the protagonist has made one decision or another. Since most characters act without knowledge of the outside world (or knowledge that they’re in a computer simulation), they have no way of determining what’s going to happen to them or what might happen; it’s up to the player to decide their fate. Unlike Sans, who acts towards the protagonist knowing they are in a video game, there’s no clear reason why anyone else should behave one way or the other.

    And maybe this explains why existential nihilism works.

    No need to lie on the ground and feel like garbage.

    Since Sans is the only character with knowledge of the outside world, and he still behaves within an ethical framework, maybe we can understand how our systems of ethics have intrinsic value simply because they exist, even if we have no objective way to prove them. Maybe the characters in Undertale behave certain ways towards you based on your actions just because that’s “the way things are.” Or maybe our ethical frameworks still work in some other way we don’t fully understand. This means we shouldn’t let our existential crises and epistemic limits to knowledge hurt our own behavior. Even with the knowledge that we’re in a simulated horror of a video game, we can understand how to act towards each other. We shouldn’t let our limits to knowledge stop us from acting certain ways.

    Now that winter break has started, I’ve finally had time to relax. And, though I hadn’t seriously played a video game in about 5-6 years, the hours I’ve spent on Undertale have made me think about the ethical value video games have, and the rest of life itself. Even if you don’t play video games, it might teach you a thing or two.

    December 23, 2015
    Philosophy

  • Sexism in science

    [Image: xkcd, “How It Works”]
    Pretty much.
    Read this article in the Indiana Daily Student here…
    December 22, 2015
    Science

  • Why learning about ethics doesn’t make you more ethical

    Philosophers “are always on the outside making stupid remarks.” – Richard Feynman

    At the heart of everything we do, we hope there’s a message. We hope there’s a meaning behind what we learn and the work we create. When we learn and contribute to society, we always hope that what we do not only makes the lives of other people better, but adds value or meaning to our own selves. At the end of the day, what are we if we’re not making ourselves better people?

    In my journey of exploring philosophy and the ethical issues people face, many people I’ve met have recommended that students take courses in ethics in order to learn how to become more ethical people. I disagreed. Though we can learn a lot from ethics courses, people with greater knowledge of ethics aren’t necessarily more ethical themselves. Ethics courses might teach you about the ethical frameworks and philosophies upon which we establish systems of justice, law, and similar things, but they aren’t going to tell you that you should give money to charity or call your mother every once in a while. Philosophers of ethics are mostly busy debating the meaning of morality and the foundations of our ethical knowledge rather than telling the general public to stop eating meat because it’s morally wrong. And the idea of an ethics course making students more “ethical” seems downright silly.

    If knowing more about ethics doesn’t make us better people, what does? Well, in the most philosophical sense, we tell ourselves that “ethics” makes people better. But, from our empirical observations, there’s got to be something more than just learning ethics that makes us better human beings.

    Can we take a truly empirical approach to understanding what makes us moral? What if we could measure our understanding of ethics the same way we measure our behavior, health, or information? If we follow the advice of my colleagues and institute more required ethics courses (whether in bioethics, public health ethics, business ethics, or any similar area), then we often run into the Heinz dilemma, an ethical dilemma in which a person’s response dictates his or her stage of “moral development,” according to the model of psychologist Lawrence Kohlberg.

    And it fails.

    If anything, the Kohlberg model is empirical evidence that social scientists suck at philosophy.

    The “levels” or “stages” of moral development in Kohlberg’s model fail to capture how we truly make ethical decisions. As University of Virginia psychologist Jonathan Haidt writes, the way we reason about our motivations is more like “a lawyer defending a client than a judge or scientist seeking truth.” The model posits that we can reason “better” and develop better “ethical skills,” and, even if we could do such things, what makes us so sure that our definitions and ideas of “justice” are “objective” enough to gauge our own ethical behavior against? In the same paper, Haidt continues that moral action is more closely related to “moral emotion” than to our own moral reasoning. The Heinz dilemma demonstrates how easily we take ethics for granted on the surface. We like to think that some of us are just “good” and others are “bad,” and that there’s a scale on which we can easily put everyone.

    Then what good are courses in ethics, if not for teaching us how to live ethically? In a world driven by employability and how much of an effect you will have on the world, ethics courses sell themselves by promising solutions to the questions of tomorrow. They promise professional development through skills in communication, reasoning, analysis and other areas. But, if we aren’t seeing more ethical behavior with more courses in ethics, then we need something different. Something deeper.

    As physicist Richard Feynman said, philosophers are always on the outside making stupid remarks. During his stay at an ethics conference, he pointed out how academics would use big words to try to appear smart. Though Feynman became ridiculously tired of philosophy entirely, he had a point. Apart from any tendency we have to cling to phrases and words that sound good at the expense of actual understanding, we forget about what is actually true, justified, and provable. The philosophers might make stupid remarks on the surface (which is a huge problem), but, deep down, there’s something bigger that we’ve taken for granted. We’ve accepted the surface without the depth. And maybe Feynman’s remark helps us understand the problems in the way we teach ethics.

    We can’t take our ethics courses for granted as ways of making us more ethical people. If being an ethical person were as simple as taking a course in ethics, then all of our problems would be solved by now. We need to dig deeper and think about the reasoning and motives behind what we do. We need to think about the epistemic issues in the way we think. We can become more ethical people by changing our understanding of every form of education, be it English, science, history, or anything else, rather than relying on anything an “ethics course” might teach us.

    Feynman understood this well, and he wasn’t even an ethicist.

    December 14, 2015
    Philosophy

  • Taking risks for a brighter tomorrow

    Chillin’ with Vinton #FatherOfTheInternet

    When I attended the Emerging Researchers National Conference in DC last February, Vinton Cerf, one of the “fathers” of the internet, gave an incredibly inspirational speech about his journey through life. He humorously opened up with, “Well, I’m not sure what you all would love to hear from an old fart like me,” with a playful attitude that eased the tension in the banquet hall.

    Regardless, as I paid close attention, my heart began pounding and my feet tapped when I realized Dr. Cerf used to work with Dr. Geoffrey Fox, a physics professor I used to work under. I began to think, “Oh man! We share a network connection! I’m not sure what that means or if that’s special, but I bet he’d be super impressed if I told him that I used to work with Dr. Fox!”

    As Dr. Cerf spoke about everything from his undergraduate years studying mathematics at Stanford to the revolutionary discoveries that would lead him to work on the biggest and brightest projects of the future (including, well, creating the Internet), he remained nobly humble, yet confident about himself. He described moments in his life when he almost refused teaching positions at universities because he couldn’t see himself doing such an amazing thing, but also situations in which he fought criticism to push forward the “next big thing,” whether that was Massive Open Online Courses (MOOCs) or the beginnings of the telephone.

    One thing Dr. Cerf repeatedly emphasized in his talk was the importance of taking risks. Risk-taking seemed like something a mostly-undergraduate audience could easily get behind. Who wouldn’t want to tell the starry-eyed dreamers to do things that might have horrible consequences? We’ve become so used to getting ourselves hurt in everything we do and say in our two decades of experience that to do so in science, as well, would be a cinch!

    After Dr. Cerf finished his speech, I leaped to the microphone in front of the audience in preparation to ask him a question in front of the rest of the attendees. I could feel the strengthened vibrations of my heart against my chest like a train plowing over several lives in a twisted version of the Trolley problem. I desperately wanted to tell him about my work with Dr. Fox, but also ask an intelligently formed question that would help the rest of us learn something and maybe even get Dr. Cerf to remember who I was.

    When it came my turn to ask a question, I introduced myself (making sure to name-drop Dr. Fox and appreciate Dr. Cerf’s praise that I was working with such an amazing scientist) and asked Dr. Cerf how we could improve the credibility of online courses. He responded that we could improve credibility by improving quality, and that, though many have criticized online courses as a failure, that says nothing about the future of learning once we come to accept it.

    I walked away from the moment buzzing with hype. I couldn’t believe I had actually talked to one of the fathers of the Internet! Not only that, but maybe he’ll remember me or something.

    But, while I wondered how amazing a mind it really took to make a difference in the world, I couldn’t help but ask: how important is risk-taking for scientists (and for the rest of the world)?

    As I looked around, I realized there may be signs that risk-taking is more important now than ever.

    Many have lamented the loss of innovation in scientific research. Michael Hanlon, a London-based science writer, pines for the days in which scientists and inventors made revolutionary leaps and discoveries in the mid-20th century. In the post-WWII decades, we saw televisions, the Internet, space initiatives, environment-friendly revolutions, globalization, expanding aviation, progress for several subjugated groups, and even the discovery of the structure of DNA. The future was beautiful and bright, and fears and worries were far away. But things have changed since then. Many of our scientific endeavors have led us to false hype, such as the claims made in neuroscience research or the lack of discovery from the Human Genome Project. Theoretical physics research has remained in the “doldrums,” as political activist Ron Unz puts it, with the recent discovery of the Higgs boson being something we already knew to exist (only now empirically confirmed) and the talk of the 2014 smoking-gun cosmic inflation discovery being, well, just dust. Meanwhile, our generation’s innovations in social media encourage conformity and groupthink, while student activism has been reduced to attacking free thought and silencing the dissenting opinions necessary for social progress.

    Similarly, Roberta Ness, Vice President for Innovation at the University of Texas School of Public Health, says universities have abandoned the glorious luxury of risk. Ness has spent much time encouraging researchers to take on innovative research that, though prone to failure, can overturn whole scientific paradigms, expand assumptions, and change points of view. With the slowing of innovation, it might be time for us to re-evaluate our tendency to avoid risk.

    Have we truly lost innovation to risk-aversion?

    Researchers Daniel Kahneman and Amos Tversky would say that we human beings are naturally loss-averse. We like to weigh the opportunity costs of different actions and choose the ones with low chances of bad outcomes. That’s why we might avoid listening to new music when we already know what we like, and that’s why it’s so difficult for us to grasp that we’re much more likely to die in a car accident than in a terrorist attack.
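
    To make the loss-aversion claim concrete: in Kahneman and Tversky’s prospect theory, the displeasure of a loss outweighs the pleasure of an equal gain. Here is a minimal sketch in Python, using the commonly cited parameter estimates from Tversky and Kahneman’s 1992 paper (the function name is my own):

    ```python
    # Minimal sketch of the prospect-theory value function.
    # Parameters are the oft-cited Tversky & Kahneman (1992) estimates.
    ALPHA = 0.88   # curvature for gains
    BETA = 0.88    # curvature for losses
    LAMBDA = 2.25  # loss aversion: losses loom ~2.25x larger than gains

    def subjective_value(x: float) -> float:
        """Perceived value of a gain (x > 0) or loss (x < 0)."""
        if x >= 0:
            return x ** ALPHA
        return -LAMBDA * ((-x) ** BETA)

    # A $100 loss hurts far more than a $100 gain pleases:
    print(subjective_value(100))   # ~57.5
    print(subjective_value(-100))  # ~-129.5
    ```

    Under these estimates, a loss stings roughly twice as much as an equivalent gain pleases, which is one way to see why a risky experiment “feels” worse than the discovery it might produce feels good.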

    Others would say that our emphasis on immediate, tangible results blinds us to greater discoveries we can only realize through risk-taking. We want to make sure we see the benefits that are most important and easiest to get. But, with the mind-blowing amount of knowledge and information out there, it’s far too difficult to discern which results are actually most important to us, so it might be better to focus on identifying what types of results we could obtain from our efforts rather than defending the status quo of results we’ve already obtained. In other words, we should always be asking how important our results really are rather than accepting that what we’re already doing is most important. In his 1945 report “As We May Think,” engineer Vannevar Bush wrote about the need to give scientists an environment “most conducive to the creation of new scientific knowledge and least under pressure for immediate, tangible results.”

    But why are we so risk-averse? Risk-taking poses a lot of threats to us. We worry about dangerous outcomes, and, if our current techniques have been working, why switch to something new? It seems ideal, practical, and efficient to avoid taking risks. But these claims miss the mark on the nature of risk.

    Being risky doesn’t mean being stupid.

    One thing’s for sure: we need to clarify that taking risks isn’t about doing things that are dangerous or have high rates of failure. Rather, we need to be mindful of the more abstract, unknowable benefits of the things we do. The future is riddled with randomness, uncertainty, and chance, but we should embrace what we can to account for those discrepancies rather than avoiding them altogether. This means taking into account outliers, hindsight bias, and the limits of what we know.

    We shouldn’t fool ourselves into thinking our lives can be written into narratives or completely told by science. As Nassim Nicholas Taleb puts it, “Scorn of the abstract causes us to favor contextualized thinking over more abstract matters.” Taking risks should go beyond dice rolls and calculations; we should be aware of the context in which we make predictions about the future. By this I mean that, if we understand probability and risk analysis through economic and scientific theories, then we should take into account the epistemological grounds on which those theories were conceived and the nature of their phenomena before applying them to different situations.

    We can’t assume that, just because one research method has been giving us results, we should stop taking the risk of considering other methods. We shouldn’t become so caught up with the immediate, seeable results that we forget about the uncertainty of the grounds on which we based them. And, with all the factors we can account for, taking risks is probably not only good for us but necessary for improvement and innovation.

    As undergraduates, we dream big. We’re idealists, hoping to be the innovators who launch the next big start-up, discover the next theoretical physics model, or invent a sexier cell phone. If there’s one thing in the future that we’re going to change, it should be risk-taking.

    Maybe risky business is just like uncomfortably asking a question to a well-established professional. Just because it’s dangerous doesn’t mean it isn’t good for you.

    December 14, 2015
    Philosophy, Science

  • We need philosophers – and the liberal arts too

    David – The Death of Socrates

    Read this article in the Indiana Daily Student here…

    December 11, 2015
    Education, Philosophy

  • Never Let Schooling Interfere with Education

    “The Education of Jupiter” Jacob Jordaens
    As students, many of us struggle to realize the true gifts of a college education. We might learn and educate ourselves for other purposes, such as preparing for future careers, earning good grades, and developing professional “skills,” but none of these come close to what really matters when we learn. In reality, a career-focused education might not even be the best way to prepare for a career, good grades might not teach you everything you should understand from a course, and professional skills might just replace important skills with marketability. For these reasons, it’s clear we need a better understanding of what it means to learn and of what’s really important in our education. As students, it’s our duty to emphasize the “purpose,” whatever it may be, and to meditate on these values as a 21st-century philosophical inquiry.

    Before I continue, I must clarify what I mean by a “purpose” of college. If we take the “purpose” of college to be some sort of “personal fulfillment,” then the question “what is the purpose of education?” is trivial and becomes a search for a “feeling.” But we can explore a non-personal version of “meaning” that explains our actions as students with an ethical dimension (such as with regard to moral or rational norms). We can talk about what a student “should” or “shouldn’t” do in line with what’s important in college. For example, we could explore a utilitarian approach built on deontological (moral-rule) reasons for action that maximize our benefit from education (i.e., if certain purposes give students the greatest benefit, then those purposes are the “moral rules” students “should” adhere to). This way, we would fulfill the purpose of an education in the most “efficient” way possible, if efficiency is defined this way. In other words, we can lay down different frameworks for decision-making based on our “purpose” of an education.
    In addition, when speaking about these issues, it’s important to be as diplomatic and courteous as possible in our discourse. Inevitably, there are problems students face and harmful attitudes and behaviors we wish to fight, but we do not want to approach these issues in a way that attacks, condemns, or hurts students’ dignity. Unfortunately, everyone interprets things differently. But I do think that if we are as diplomatic as possible (i.e., offering constructive criticism, qualifying viewpoints, acknowledging alternative opinions, etc.), then we can make the best impression.

    But what’s wrong with our purpose?

    If we don’t explore the true reasons why we are learning, we subject ourselves to an atmosphere that lacks humanism, curiosity, aesthetics, ethics, empathy, and the other values necessary for education or the workforce. Classrooms become stripped of values and reduced to places for “handing out a grade.” As students and citizens, we separate into distinct academic cultures, and we become hostile to new ideas and thoughts we’re not familiar with.
    Similarly, training students solely for future careers seems neither possible nor effective (at least, prima facie). We don’t know what problems the future will hold or what skills will make us successful. And, when we search only for things that will help us in the future, we myopically cut off value from things we cannot immediately observe and measure. For example, a student may choose to major in computer science believing it will offer immediately lucrative technical skills for a career, while failing to notice the value of skills from fields whose benefits are less visible. The humanities (and other liberal arts areas) suffer the most, since our utilitarian purposes may not account for those areas enough. And, in the short four years of a college career, there’s no way we can ever learn all the things necessary to address the problems of the future.
    On the most personal level, we suffer from increased burnout, negative mental health effects, and a loss of purpose in our lives. Oh, and also, we become very boring people. No one likes boring people.

    Well, how do we approach this issue?

    The best way to approach the issue is to talk about it. We can promote discussion, writing, ethical questioning, and self-critique through our activities and organizations. We should think about what’s most important and why those things are important to us, as opposed to other things. For example, we say we want good grades while we’re in college, but what makes “good grades” more praiseworthy than, say, knowledge of things beyond the course syllabus (such as the philosophy of physics, which is not covered in a physics course)? And, if your reasons lead you into arguments that are not in line with your purpose of college, then why is that? Does it have any effect on you or others? If all you’re getting from a physics course is knowing how to solve equations, then maybe you need something more, such as knowing how to think like a physicist or what value science has.
    “You must never let schooling interfere with education” – Grant Allen, science writer. (‘Eye versus Ear,’ in ‘Post-Prandial Philosophy,’ p 129) The quote is often mistakenly attributed to Mark Twain.
    We can always emphasize general education requirements and the humanities for the value they should have, as long as that value extends beyond simple “requirements.” If your required ethics course doesn’t teach you the important values and virtues of free thought necessary for growth, then it might as well be another item on the checklist. But if it’s something you’re willing to ponder and reflect upon, then it might have some value to you.
    Also, writing exercises have been shown to promote positive effects, especially among subjugated groups.

    What should we ask ourselves?

    There are questions we can ask ourselves to guide our introspection and reflection. (I know some of these are big questions, but we can start with the smaller, simpler ones.)

    Basic value:

    -What is well-roundedness?
    -What is the purpose of volunteering? 
    -What are your intentions or motives when you volunteer? 
    -What is professionalism? 
    -How does what you do prepare you for the future?
    -What skills do you obtain from your education/experience?

    Self-affirmation:

    -What is important to you?
    -Why are those things important to you? Is there anything in common among them?
    -How are you going to achieve it?
    -Why is it that you can do that?

    Education:

    -What type of experience do you want to gain from your classes?
    -What type of introspection can you do?
    -What larger meaning is there? How do the things that you learn fit into a bigger picture?
    -What is the purpose of your education?

    What can I read to learn more?

    Here are some readings I’ve personally enjoyed.

    Academia:

    Terry Eagleton “The Slow Death of the University”
    Paulo Freire “Pedagogy of the Oppressed”
    Charles Weingartner and Neil Postman “Teaching as a Subversive Activity”
    William Deresiewicz “Excellent Sheep”
    Jackson Lears “Liberal Arts vs. Neoliberalism”
    Martha Nussbaum “Not for Profit”

    Philosophy:

    Terry Eagleton “The Illusions of Postmodernism”
    Wendy Brown “Undoing the Demos: Neoliberalism’s Stealth Revolution”
    Bernard Williams “Philosophy as a Humanistic Discipline”
    And, until next time, keep fighting the good fight.
    December 6, 2015
    Education

  • Complacency in the Medical-Industrial Complex

    Long gone are the days when scientists were locked up in labs, secluded from everything but their microscopes and calculators. Now, more than ever, scientists find themselves writing reports and grant proposals, managing jobs, sitting on committees, and delivering lectures.

    Scientists work on issues at the forefront of policy, ethics, law, and other areas of society. Though these duties may be as fluid as a viscous liquid or as dynamic as biological evolution, scientists and non-scientists alike struggle every day to understand science’s role in society.

    When Lisa M. Lee, Executive Director of the Presidential Commission for the Study of Bioethical Issues, gave her talk “Handling Obstacles to Ethics in Public Health,” she spoke to an audience of professors, physicians, and other professionals about the current state of affairs in public health ethics and bioethics. She spoke from her background in bioethics, including her work on public health surveillance and privacy. But, as I sat in the front row of the lecture hall, I couldn’t help but wonder: if scientists have expanded their roles into other areas, why is there still such a huge gap between science and policy?

    No matter whether you’re a physicist or a lawyer, we’ve taught ourselves to be complacent. With the slow death of the liberal arts education and the reduction of college to a means of manufacturing employees, students have anguished over how to get into medical school or make a successful living, but forgotten the important values of humanism necessary for personal growth. We need to encourage science as a way to seek truth, in both economics and virtue. No doubt science should make money, but it should also teach us values such as wonder, curiosity, and humility in the world.

    Dr. Lee suggested requiring ethics training programs for graduate students. I was dubious of this solution because, while it may help students understand ethics, a requirement can only do so much to foster curiosity and humanism before encouraging complacency and discouraging innovation.

    Students who aspire to become physicians suffer from this complacency. As pre-medical undergraduates, we have long paths in front of us before becoming practicing physicians. We spend four years taking courses like organic chemistry, physics, and biology while completing the Medical College Admission Test (MCAT). Our required courses are rigid, standardized, and structured around what officials dictate to be important. As we sit in crowded lecture halls, hoping for a good grade or a recommendation letter, our pre-medical overlords teach what’s going to be on the exam, nothing more and nothing less. Along the way, we volunteer, shadow, and engage in extracurriculars before entering a four-year medical school program. After that, we have residency and training before becoming fully practicing physicians. With such a long, stringent path, it’s easy to forget what’s really important and how to “live in the moment.” Instead, we succumb to utilitarian, consumeristic motives as we value information over wisdom, marketability over authenticity, and dogmatism over free thought. And, when we aren’t prepared for the future, the “Medical-Industrial Complex,” as Dr. Lee puts it, thrives.

    Compare the path to becoming a doctor with that of future lawyers, who intern for attorneys as soon as they enter law school. While law students see the employable fruits of their efforts almost immediately after college, medical students spend much more time preparing before witnessing the value of what they’re being taught. It’s much more difficult for pre-medical students to truly ponder how their courses will benefit them in the future, and, especially with stringent and demanding science course loads, it’s easy to lose sight of the more valuable goals of methodical inquiry and rational thought in the tense, competitive world of exam scores, GPAs, “informational texts,” and oft-repeated “problem-solving skills.” And, since most pre-medical students take a large number of science courses, the medical curriculum is centered around STEM goals of economic prosperity and political motives.

    Pictured: my organic chemistry class.

    It might sound absurd to describe the current state of affairs in medicine as a “medical-industrial complex,” but it makes much sense in the context of the militarization of education. With the Independent Task Force’s “U.S. Education Reform and National Security” in 2012, our education has been structured in the following areas: “economic growth and competitiveness, physical safety, intellectual property, U.S. global awareness, and U.S. unity and cohesion.” The highly controversial report has been criticized and praised by professors nationwide, and some, most notably in those in the humanities, have expressed concerns for its methods of re-structuring education. In the report, “What is Education?”, Professor of Hebrew and Comparative Literature at Berkeley Robert Alter writes, “Should a teacher’s motives for introducing seventh-graders to science be that she is preparing cadres of future technicians who will be able to design bigger and better defenses against ICBMs?” A future in which students are put in and out of the school-military pipeline is frighteningly grim, and structuring an ethics curriculum would certainly be an effective counter to these woes. And with our war history in Vietnam, Iraq, Afghanistan, and other places, there is a pressing need for us to understand culture, history, language, and other aspects of the human condition in order to address the issues of the future. Sure, we could pump more students into “critical” departments of language and culture based on our political concerns (such as foreign languages of Spanish, Arabic, or Chinese). But, in order to truly address the ethical dilemmas of tomorrow (including political concerns), there must be a change in education that runs much deeper than simply forcing students to take a course in ethics, literature, or history. It must be a change in the way we think about those courses.

    In “What is Education?” James Engell, Professor of English and Comparative Literature at Harvard University, writes:

    Studies show that in our schools, public or private, student cheating, dishonesty, and plagiarism are on the rise. School administrators in many locations have themselves been caught cheating in order to make the performance of their students look better. Federal and other studies indicate that scientific misconduct and falsification of scientific data are increasing problems. A society without ethical education cannot expect either good government or real security, no matter what shape its laws take or how “reformed” its educational system. The damage done may come slowly but the rot is deeper. The Report says nothing about ethical or moral aspects of education.

    Engell continues that these “moral” and “ethical” shortcomings in education have been brought on by a certain “blindness” in accepting results, one that has cost us much in “disease prevention, agricultural production, sustainable resources, and, most troubling, in respect for the procedures and results of science itself.” In order to address the ethical issues of tomorrow, we need a fundamental shift in the way we approach our classes to fight the complacency and individual shortcomings that give rise to this blindness. Though Engell isn’t a scientist, his concerns about addressing scientific misconduct come from an appeal to morals and ethics. These moral and ethical considerations come from elements of an education that the humanities emphasize, including the human narrative, critical speculation, skepticism, and empathy. From a more “ethical” look at the world, by emphasizing education as a humanistic search for truth and justice, we can rise above the typical requirements and capital greed of the militarized education system, whether in the sciences or the humanities, and fight the power.

    We can only address the ethical issues in science, medicine, and public health through a thorough examination of the values we instill in ourselves through education. Those of us who can break from the complacency of everyday life toward higher ideals, including courage, justice, and compassion, will be ready to fight the problems of tomorrow.

    November 28, 2015
    Medicine

  • No One Scientist Should Have All That Power

    Science says so, so it must be true.

    Whether or not we have been aware of it, we put a lot of faith in people who study science. It doesn’t matter if I’m writing for my university newspaper or meeting other students at a party: whenever I tell people about my scientific interests, many are impressed (as they should be). But the praise should stop there. The value of studying science is that you become knowledgeable about science (along with gaining other values, such as empathy, language, and rational thought), but it doesn’t make anyone qualified to speak about other fields. Despite this, we’ve taken too much about science for granted. Too much of the public has come to trust science authoritatively as an objectively true source of knowledge, to the point where scientists are deemed more moral, trustworthy, valuable, and overall better people on these faulty assumptions.

    We’ve been seeking more objectively true ways of learning in several disciplines. From the computer scientists studying history through algorithms and theory to the statisticians studying intelligence differences among different socioeconomic classes, science has infected areas of study beyond what we could have imagined long ago. Even in literature, much of the discussion of the value of a novel has been reduced down to what the story says about society and the human condition rather than the nature of ideas themselves. (As though anyone needed to read Mark Twain to understand why slavery is immoral!) While these utilitarian methods of scientific inquiry may have value in discovering information we wouldn’t otherwise know, we mustn’t let science replace our senses of objectivity and the value of non-science fields. Unfortunately, many of us equate “science” with “objectively true,” and, as a result, we end up fooled by the dazzling allure of science. (Besides, “objective” shouldn’t mean anything other than “lacking personal influences.”) And that’s where the power of science lies. The dominance of STEM fields as sources of undisputed, eternal truths means we let scientists get away with fooling us with non-scientific information.

    Take, for instance, physicist Lawrence Krauss’s “A Universe from Nothing,” which supposedly explains how something could have come out of nothing. While atheists beat their chests and boast of the book as a “victory for science,” Krauss claims no real physicist has voiced objections to it. Maybe he’s never heard of a few examples…

     “Cosmologists sometimes claim that the universe can arise ‘from nothing’. But they should watch their language, especially when addressing philosophers. We’ve realised ever since Einstein that empty space can have a structure such that it can be warped and distorted. Even if shrunk down to a ‘point’, it is latent with particles and forces – still a far richer construct than the philosopher’s ‘nothing’. Theorists may, some day, be able to write down fundamental equations governing physical reality. But physics can never explain what ‘breathes fire’ into the equations, and actualised them into a real cosmos. The fundamental question of ‘Why is there something rather than nothing?’ remains the province of philosophers.” – Martin Rees

     “The concept of a universe materializing out of nothing boggles the mind … yet the state of “nothing” cannot be identified with absolute nothingness. The tunneling is described by the laws of quantum mechanics, and thus “nothing” should be subjected to these laws. The laws must have existed, even though there was no universe. … we now know that the “vacuum” is very different from “nothing”. Vacuum, or empty space, has energy and tension, it can bend a warp, so it is unquestionably something. As Alan Guth wrote, “In this context, a proposal that the universe was created from empty space is no more fundamental than a proposal that the universe was spawned by a piece of rubber. It might be true, but one would still want to ask where the piece of rubber came from.”” -Alexander Vilenkin

    “The fundamental physical laws that Krauss is talking about in “A Universe From Nothing” — the laws of relativistic quantum field theories — are no exception to this. The particular, eternally persisting, elementary physical stuff of the world, according to the standard presentations of relativistic quantum field theories, consists (unsurprisingly) of relativistic quantum fields. And the fundamental laws of this theory take the form of rules concerning which arrangements of those fields are physically possible and which aren’t, and rules connecting the arrangements of those fields at later times to their arrangements at earlier times, and so on — and they have nothing whatsoever to say on the subject of where those fields came from, or of why the world should have consisted of the particular kinds of fields it does, or of why it should have consisted of fields at all, or of why there should have been a world in the first place. Period. Case closed. End of story.” – David Albert

    But, despite these philosophical objections from other scientists, Krauss masquerades science as a reliable answer to philosophy and cons the public into thinking he has the expertise to write such a work. And, in other areas, we see scientists invoking “scientist privileges” on policy, ethics, theology, and even social justice, dumbing down the public in the process.

    In the worst scenarios, scientists sell out and become celebrities. Parroting “empirical objectivity,” Dawkins, Harris, Tyson, Nye, Sagan, and even Krauss have insidiously driven ignorance in quests to push personal agendas thinly veiled as “science.” Much as with any other cult of personality (from Stalin to Mussolini to Martin Luther King Jr.), the public puts a dangerously large amount of trust in their opinions because they are well-respected scholars in their own fields. As a result, we end up with physicists talking philosophy, biologists writing theology, and neuroscientists dictating ethics without the appropriate training to speak about fields outside their areas of expertise. As much as we like to think physics can tell us whether or not God exists, those questions will remain philosophical in nature. Regardless of what neuroscience can explain about our emotions and behavior, the human element of who we are will always be answered by the humanities. Under the STEM pseudo-prestige, the celebrities write books, host T.V. shows, and control the thoughts of the general public while the real scientists laboriously tire in labs.

    Though I don’t mean to compare the lives of scientists such as Carl Sagan to the decisions of political leaders like Benito Mussolini, the scientist’s cult of personality relies on the same tactics for dogmatically influencing masses of people. Muslim minister Malcolm X once said, “I’m sorry to say that the subject I most disliked was mathematics. I have thought about it. I think the reason was that mathematics leaves no room for argument. If you made a mistake, that is all there is to it.” How ironic that, despite the revolutionary’s distaste for mathematics, scientists would use their own “no room for argument” card to persuade others much the same way any politician would. Even had Malcolm X become a scientist, he could easily have made a difference in American history with his own power. During the 1957 Johnson Hinton incident, the activist demanded hospital treatment for Johnson Hinton, a black man beaten by officers and held in custody. Soon enough, Malcolm X found himself surrounded by hundreds of activists.

    Malcolm X addresses a rally in Harlem in 1963. An almost-speechless officer stated, “No one man should have that much power.”

    While figures like Malcolm X exert such powerful influence through the rhetoric of political rallies and historic events, scientists often seize on their own sources of power to sway the thoughts of the public. But how did things end up like this?

    It’s easy to look at our education system as a cause of this scientism, and there might be truth in it. If we’re teaching students the wrong way, the effects will be self-evident. Throughout their education, students are generally placed in standardized, structured roles upon which they build their identity. We specialize ourselves into isolated fields, whether it is math, chemistry, business, English, a pre-professional field, or whatever else we choose. Apart from the general understanding and liberal arts “love of learning” that comes from engaging with a multitude of disciplines, it becomes difficult to instill a respect for, and willingness to defer to, expertise in other fields. When the physicist learns that a philosopher studies fundamental questions of nature, it’s difficult for the physicist to see how the philosopher’s approach might differ from his or her own. Similarly, it’s much easier for the general public to trust a charlatan physicist (who talks about whatever the public wants to hear rather than the truth) who writes a popular science book than a respected academic scholar. Since this issue spreads across disparate fields, we should take a look at our fundamental ways of understanding one another before we rally in the streets screaming “fight the power.”

    Imagine a situation in which a speaker is trying to persuade a listener. Whenever the speaker makes a persuasive claim, the speaker provides references, cites examples, and uses other techniques to explain an argument (even when we’re not trying to force viewpoints on one another). In this environment, there’s a pressure whenever the listener doesn’t completely understand what the speaker is saying, whether because the listener doesn’t know what the speaker means, doesn’t follow his or her logic, or anything else that leaves the listener not knowing something. This pressure permeates our debates and discussions, and, as a result, we fall back on social and psychological behavior. Confirmation bias may lead the listener to believe whatever he or she wants. Ad hominem halo effects run wild. For fear of appearing ignorant, the listener goes along with whatever is good enough to get along with the speaker, the listener is moved, and both parties appear to understand. We end up with a social benefit to appearing as though you value knowledge, but no incentive to actually value it.

    A burden and a blessing, scientists have been thrust into a position of power. That position sends mixed messages: it encourages skepticism and free thought, but also deference to authority and scholasticism. While the public does need a better understanding of science, we need to understand which questions scientists might not be qualified to answer. Give questions about God and the origins of the universe back to the philosophers. Give politics back to the political scientists. Give social justice issues back to whoever owns them (I don’t actually know). And, when we meet unknown or obscure areas of study, we must question the preconditions for accepting expertise in subjects we know very little of. Let scientists speak, but only after they’ve familiarized themselves with whatever they want to speak about. While we must avoid scientism through a more humanistic look at scientists, perhaps it’s best for us to see science as a helpful guide, but not a supreme deity.

    November 25, 2015
    Science
