|Rembrandt’s painting of a woman with breast cancer|
Philosophers like to argue about our values. We can’t simply stop at empathizing with the different beliefs and conditions of other people; we must know where our values come from in order to address the thorny ethical dilemmas that plague our lives. Dr. Agnieszka Jaworska at UC Riverside delineates various forms of moral standing through which humans help each other. And, when it comes to mental health, this kind of understanding of moral standing might be just what we need.
When we ask why we care about the people we value, the answer might appear obvious. If we value something, surely we care for it on account of that value. But caring is complicated. We often care about people because of social ties, and we care about ourselves in different ways. These grounds can be emotional, like our capacity to desire, or more reason-based, such as our ability to determine what actions and behavior to perform. We say we should save the life of a human being instead of, for example, the life of a chicken, based on the human being’s ability to reason.
A grasp of moral standing would aid our treatment of the mentally ill. A patient with late-stage Alzheimer’s disease might be left with only bodily needs, such as food and water. Babies and even unborn children may be subject to speculation about responsibility, autonomy, integrity, and other factors, depending on their physiological development. And caring can cover the emotional aspects we normally associate with the act. As Jaworska explains, the internal dimensions of caring in human behavior can’t be ignored. The way we care about things gives us desires and attitudes that we don’t simply experience for a moment or two, but absolutely “own” as part of ourselves. This act of owning an attitude gives rise to the capacities of caring. Jaworska argues that our capacities for emotion alone are enough to ground the cognitive and reflexive capacities of caring. Others, following Kant, claim instead that the capacity for caring comes from a reason-based ability to form decisions. From these capacities, we can discuss those who suffer from disease (especially mental illness) on the same page.
This means our understanding of medicine and medical education needs this moral grounding. More power to the fields of philosophy and the rest of the humanities.
We may be making progress in understanding the genetic code, but how much of our moral code is under the same scrutiny?
The scientific community has been at it for decades. Talk of the potential for CRISPR-Cas9 to genetically modify organisms, for better or for worse, has infected our thoughts and discourse almost like a virus. It’s even gotten to the point where I’m somewhat tired of my newsfeed blowing up with news of how genetic engineering is such a huge ethical problem, with very little thought or opinion put into developing solutions for it.
In this way, genetic enhancement may be seen as an extension of our current methods of breeding for specific traits. It should be subject to greater regulation, though, given the moral costs that come along with it.
It’s better to establish a moral code first and determine what problems might arise from it. This means that, among the many ways we can and should carry out regulations, they should adhere to central ideas and principles that can be enforced and understood. That way, those problems can be addressed in the future with structure and clarity in how humans carry out actions. We shouldn’t behave a certain way merely because it produces the best outcome, nor because it appears to be the safest. Protecting fundamental ideals gives us something to hold onto through precarious and changing innovations.
To build such a system of rules, we must analyze the pieces of knowledge and oft-repeated claims in our discourse on genetic engineering.
Some have compared genetic engineering to natural selection. It might seem reasonable to think that genetic engineering is an extension of, or in a similar vein to, the natural selection we observe in nature and can, therefore, be viewed with less suspicion and fear. But evolution’s dissimilarities with genetic engineering, notably how serendipitous and amoral the former is, show that this comparison doesn’t hold weight. Darwin himself wrote, “What a book a Devil’s chaplain might write on the clumsy, wasteful, blundering low & horridly cruel works of nature!” An approach that treats genetic engineering the same way natural selection works would fail to grasp the implications and power of our artificial tools.
Others have suggested that ethics and scientific progress are in a race with one another. I often hear people say, “Science has so much potential that humanity hasn’t caught up to it.” But this metaphor breaks down because science and ethics aren’t at odds with one another. Science lacks a moral direction and makes no humanistic statement outside our own interpretations of scientific knowledge. It’s also a scientistic claim: it implies we now face new moral questions, when the moral questions we are asking ourselves are the same ones we’ve been asking since the beginning of mankind.
With these thoughts in mind, such a moral code should be grounded in what philosophy and the humanistic disciplines establish as necessary and beneficial to society, as opposed to what science has shown to be beneficial. To avoid the pitfalls and shortcomings of these metaphors and comparisons, we need to understand the moral codes of genetic engineering and enhancement in terms established through critical thought and speculation. Science will be useful down the road, through policy-based models of social research and data-driven theories, but, as of now, we need a firm foundation before we can get there.
|Google’s AI has fun with animal faces, but what about the company’s access to our bodies themselves?|
Just when you thought the tech giant couldn’t get any more powerful, Google is taking greater steps into the game of healthcare information.
In the aftermath of the Panama Papers leak, journalists, policymakers, and everyone in-between have voiced their concerns about who has access to what data. Many fear the danger of those in power having access to too much data. Now it’s been revealed that Google’s artificial intelligence company DeepMind, which recently gained attention for defeating the world champion at the board game Go, has access to the healthcare data of over one million patients in the United Kingdom, the New Scientist reports. Unlike other reports of giant companies having access to large amounts of information, Google’s hold on our data means it’s securing that information for the purpose of artificial intelligence programs, something many of us need to understand.
My friends and I have been buzzing about the potential of deep learning. It’s easy to see how a computer can perform a calculable task, such as defeating someone at a board game. But my friend Ji-Sung Kim, an undergraduate at Princeton University, developed deepjazz, software that creates jazz music. Composing music is a far more human activity than solving a mathematics problem, so developing an AI that can do so pushes the limits of what computers are capable of.
With the new data-sharing agreement, Google has access to millions of patient records. As if the monolith didn’t control enough of our lives already, we need to give new consideration to the rights of patients, physicians, and everyone else when it comes to healthcare information. Unlike, for example, collecting information on whom you follow on social media or how many memes you share, mining our healthcare data raises an issue about our personhood. Our health is part of who we are, so we must be much more careful and use revised notions of responsibility, liberty, autonomy, and other ideals. And unlike accumulating data for the purpose of zeroing in on terror threats, fighting disease and epidemics with big data seems much more grounded in moral sensibility. It neither encourages xenophobia nor blockades free speech. Many people trust Google; one might feel more comfortable knowing their data is with the company rather than in the hands of a shady politician.
Regardless of where the answer lies, let’s bring the issue under criticism and enjoy the computerized music.
|“Clearly Rosy had to go or be put in her place. … The thought could not be avoided that the best home for a feminist was in another person’s lab.” – James Watson, The Double Helix.|
|Maybe Hippocrates would have envisioned greater security for his patients’ health information.|
When it comes to issues in mental health, the day-to-day problems of the mentally ill seem like more than enough for anyone to handle. Mental illness is stigmatized, on the rise, difficult to detect and cure, and shaped by culture, and psychiatry still struggles as the most scientifically backward field of medicine. I’ve written about the privacy of mental health data, the role culture plays in mental health, the nature of disease, and a bit about our stigma of depression, but I’ve yet to tackle one particular mystery: the existential threat to our minds.
Though every physician’s Hippocratic Oath includes a promise to respect privacy, it would have been difficult for Hippocrates to foresee a future in which we might have the power to know the very minute details of our mental behavior. Brain imaging technology and mental health records could allow anyone, from a sneaky politician to your future employer, to know you inside-out. For a person suffering from mental illness, this can mean forfeiting the liberties and luxuries of privacy. With recent “social experiments” by sites like Facebook and OkCupid to collect information about users, many of us were outraged at being “used” in such a way. Maybe the CEOs behind those experiments were convinced of “Ockham’s Twitter” (among competing sources of information, the one with 140 characters or less should be selected), but many of us fear that we are one step away from a dystopian future of government surveillance. And how can anyone feel safe knowing their thoughts are being policed?
In addition, though scientific research on the brain can lead us to great understanding, we’re still far from knowing how our physical brains connect to areas of sociology, ethics, and philosophy. It’s hard to find a model of the mind that isn’t completely dualist or completely reductionist. But that isn’t to say we aren’t making progress. And, with greater understanding, we can finally answer the tricky questions neurotechnological research poses to us. The issues of who should have access to your mind, who can collect or share that data, and under what conditions we might need to control it, along with anything similar that makes us uncomfortable, can finally be addressed. If we can bring together our models from neuroscience, cognitive science, philosophy, and everything in-between, then we can get a better picture of who we are and provide for a better future. As neuroethicist Kathinka Evers says, “one of the proposed goals of human brain simulation is to increase our understanding of mental illnesses, and to ultimately simulate them in theory and possibly in silico, the aim being to understand them better and to develop improved therapies, in due course.” Is the black box finally being unveiled?
Speaking of mental illness, our issues of mental health could provide insight into our cultural understanding of ourselves as well. Under a re-branding of mental illness (as a cultural phenomenon that can still be treated, rather than a biological defect to be shunned), we put value in the way we think, not just as algorithmic organisms, but as beings who find meaning in our lives. We want to know that what we’re doing has a purpose, a motive, a value, or some other sort of meaning that gives us a reason to keep on living. We worry that our efforts aren’t worth it or that we truly have no control over our lives. These existential desires may manifest in mental illness in the form of anxiety, depression, or even PTSD. But we can also look at these existential desires as ways of searching for meaning and truth, and, in that case, maybe our mental issues could be seen the same way.
|“People are always selling the idea that people with mental illness are suffering. I think madness can be an escape. If things are not so good, you maybe want to imagine something better. In madness, I thought I was the most important person in the world.” – John Nash|
If we collect more information about ourselves, are we really more secure? It’s easy to feel insecure, anxious, distracted, or overloaded by the sheer amount of knowledge and information we have. But let’s remember that we may have simply shifted our focus to abstract sources of information (which still need to be verified, justified, and held to other standards of epistemic certainty before we can put them to any use), and we can’t let our rhetoric and discourse surrounding information put us at the mercy of information itself. We talk about being easily distracted by the 21st-century abundance of information, but similar struggles have been around for centuries; instead of accepting that we’ve become more estranged by it, we need to put ourselves above it. Seen this way, our worries about living in the Information Age are very similar to the existential crises we face over how much control we have over ourselves and who we really are.
With so much progress being made in understanding the brain, whether under the security camera, the surgical knife, or even the pen of the textbook, we still need to re-evaluate our values before making any big decisions. We’ve seen how previous initiatives in mental health data collection have failed, and the progress of understanding ourselves is tediously slow, but, in the past few decades, we’ve seen that we can re-emphasize the individual, autonomy, and self-constraint in systems of record-keeping. Transparency and willful control over what we reveal about ourselves might increase our trust, and these ideas have gained popular support. And, with the greater number of ways we can understand ourselves, we need to re-think privacy as a new sort of autonomy.
Rather than the old notion of privacy as something entirely separate from what the general public could see, we need a new conception that incorporates all the different ways our minds are being studied. We should look at the type, quality, and power of the information anyone collects about our minds. A paternalistic society in which we are entirely controlled by the people above us does no good, neither for our individual mental health nor for society’s plans. And we can find new purposes and values in the setting of the 21st-century surveillance state. Maybe the promising results of the interdisciplinary nature of neuroscience will give us a new definition of identity, and we can feel safer knowing we can explain who we are.
Long gone are the days of scientists only locked up in labs, secluded from everything but their microscopes and calculators. Now, more than ever, scientists find themselves writing reports and grant proposals, managing jobs, sitting on committees, and delivering lectures.
Scientists work on issues at the forefront of policy, ethics, law, and other areas of society. Though these duties may be as fluid as a liquid or as dynamic as biological evolution, scientists and non-scientists alike struggle every day to understand science’s role in society.
When Lisa M. Lee, Executive Director of the Presidential Commission for the Study of Bioethical Issues, gave her talk “Handling Obstacles to Ethics in Public Health,” she spoke to an audience of professors, physicians, and other professionals about the current state of affairs in public health ethics and bioethics. She spoke from her background in bioethics, including her work in public health surveillance and privacy. But, as I sat in the front row of the lecture hall, I couldn’t help but wonder: if scientists have expanded their roles into other areas, why is there still such a huge gap between science and policy?
No matter whether you’re a physicist or a lawyer, we’ve taught ourselves to be complacent. With the slow death of the liberal arts education and the reduction of college to a means of manufacturing employees, students agonize over how to get into medical school or make a successful living, but forget the values of humanism necessary for personal growth. We need to encourage science as a way to seek truth, in matters of both economy and virtue. No doubt, science should make money, but it should also teach us values such as wonder, curiosity, and humility in the world.
Dr. Lee suggested requiring ethics training programs for graduate students. I was dubious of this solution because, while it may help students understand ethics, a requirement can only do so much to foster curiosity and humanism before encouraging complacency and discouraging innovation.
Students who aspire to become physicians suffer from this complacency. As pre-medical undergraduates, we have long paths in front of us before becoming practicing physicians. We spend four years taking courses like organic chemistry, physics, and biology while preparing for the Medical College Admission Test (MCAT). Our required courses are rigid, standardized, and structured upon what the officials dictate to be important. As we sit in crowded lecture halls, hoping for a good grade or a recommendation letter, our pre-medical overlords teach what’s going to be on the exam, nothing more and nothing less. Along the way, we volunteer, shadow, and engage in extracurriculars before entering a four-year medical school program. After that, we have residency and training before becoming fully-practicing physicians. With such a long, stringent path, it’s easy to forget what’s really important and how to “live in the moment.” Instead, we succumb to utilitarian, consumeristic motives as we value information over wisdom, marketability over authenticity, and dogmatism over free thought. And, when we aren’t prepared for the future, the “Medical-Industrial Complex,” as Dr. Lee puts it, thrives.
Compare the path to becoming a doctor with that of future lawyers, who intern for attorneys as soon as they enter law school. While law students see the employable fruits of their efforts almost immediately after college, medical students must spend much more time preparing before witnessing the value of what they’re taught. It’s much more difficult for pre-medical students to truly ponder how their courses will benefit them in the future, and, especially with stringent and demanding science course-loads, it’s easy to lose sight of the more valuable goals of methodical inquiry and reasoning amid the tense, competitive world of exam scores, GPAs, “informational texts,” and oft-repeated “problem-solving skills.” And, since most pre-medical students take a large number of science courses, the medical curriculum is centered around STEM goals of economic prosperity and political motives.
|Pictured: my organic chemistry class.|
It might sound absurd to describe the current state of affairs in medicine as a “medical-industrial complex,” but it makes sense in the context of the militarization of education. Since the Independent Task Force’s “U.S. Education Reform and National Security” in 2012, our education has been structured around the following areas: “economic growth and competitiveness, physical safety, intellectual property, U.S. global awareness, and U.S. unity and cohesion.” The highly controversial report has been criticized and praised by professors nationwide, and some, most notably those in the humanities, have expressed concerns about its methods of re-structuring education. In the response “What is Education?”, Robert Alter, Professor of Hebrew and Comparative Literature at Berkeley, writes, “Should a teacher’s motives for introducing seventh-graders to science be that she is preparing cadres of future technicians who will be able to design bigger and better defenses against ICBMs?” A future in which students are fed in and out of a school-military pipeline is frighteningly grim, and an ethics curriculum might seem an effective counter to these woes. And with our war history in Vietnam, Iraq, Afghanistan, and other places, there is a pressing need for us to understand culture, history, language, and other aspects of the human condition in order to address the issues of the future. Sure, we could pump more students into “critical” departments of language and culture based on our political concerns (such as the foreign languages Spanish, Arabic, or Chinese). But, in order to truly address the ethical dilemmas of tomorrow (including political concerns), there must be a change in education that runs much deeper than simply forcing students to take a course in ethics, literature, or history. It must be a change in the way we think about those courses.
In “What is Education?” James Engell, Professor of English and Comparative Literature at Harvard University, writes:
Studies show that in our schools, public or private, student cheating, dishonesty, and plagiarism are on the rise. School administrators in many locations have themselves been caught cheating in order to make the performance of their students look better. Federal and other studies indicate that scientific misconduct and falsification of scientific data are increasing problems. A society without ethical education cannot expect either good government or real security, no matter what shape its laws take or how “reformed” its educational system. The damage done may come slowly but the rot is deeper. The Report says nothing about ethical or moral aspects of education.
Engell continues that these “moral” and “ethical” shortcomings in education have been brought on by a certain “blindness” in accepting results, which has cost us much in “disease prevention, agricultural production, sustainable resources, and, most troubling, in respect for the procedures and results of science itself.” In order to address the ethical issues of tomorrow, we need a fundamental shift in how we approach our classes to fight the complacency and individual shortcomings that give rise to this blindness. Though Engell isn’t a scientist, his concerns about addressing scientific misconduct come from an appeal to morals and ethics. These moral and ethical considerations come from elements of an education the humanities emphasize, including the human narrative, critical speculation, healthy skepticism, and empathy. From a more “ethical” look at the world, by emphasizing education as a humanistic search for truth and justice, we can rise above the rote requirements and capital greed of the militarized education system, whether in the sciences or the humanities, and fight the power.
We can only address the ethical issues in science, medicine, and public health through a thorough examination of the values we instill in ourselves through education. Those of us who can break from the complacency of everyday life toward higher ideals, including courage, justice, and compassion, will be ready to fight the problems of tomorrow.
When speaking about the rights of an individual to his or her personal information, it’s easy to overlook the “personal” nature of mental health data. And, within the rhetoric of mental health, we spend a lot of time expressing the behavior, feelings, and thoughts of those who suffer from mental illness, but we forget a deeper issue: who should know about it?
How much can more data actually help us? With so much information about ourselves, it’s easy to be misguided. Some, like Jesse Singal of NYMag’s “Science of Us,” have criticized claims that certain mental health issues at universities are tremendously serious as suffering from confirmation bias or similar statistical fallacies. If anything, we may be so overwhelmed by how much we know about ourselves scientifically that we forget the humanistic aspects of ourselves. Basic science research in psychiatry hasn’t reached the goals it has claimed over the past few decades. Will turning to the social sphere, as former NIMH director Thomas Insel suggests, do any better?
As science and medicine call for greater access to information for the purpose of research and clinical treatment, privacy becomes an issue. When we collect information from an individual, whether it’s a medical record from a hospital or a meeting with a school therapist, we have to protect his or her rights. How can we make sure data doesn’t fall into the wrong hands? What if a scientist’s data is used without permission or for unintended purposes?
“But we would want you and your family members…to take part because we want to have that information…And that’s obviously also going to be very sensitive but very important because it’s such a problem in this country.” – Francis Collins