Heuristic

  • Should Competitiveness Drive Education?

    The first Heraean Games began as a foot race among young women competing for the position of priestess of the goddess Hera.

    When my friends and I were discussing possible medical schools to apply to, one friend explained that she chose not to apply to the Pritzker School of Medicine at the University of Chicago after hearing stories of the “competitive” nature of its students. Speaking as a student who loves the excitement and challenge of my courses, however I can get that experience, I would definitely enjoy a great amount of competitiveness among my own peers, even in the setting of grinding doctor-hopefuls. Bear in mind that competitiveness is not the same as rigor, and, given our dissenting beliefs about the situation, it is not immediately self-evident how competitiveness should affect us anyway. This matters because we pre-medical students have become so tunnel-visioned on the goal of entering medical school that “competitive applicant” has become synonymous with “good applicant.” This diction implies that competitiveness is inherent to all good medical school applicants, since we know we must compete against other amazing students. Intrigued, I wondered whether there is a healthy amount of competition that would produce the best doctors.

    I’ve always loved healthy competition. But, of course, I’ve seen the good, the bad, and the ugly. I’ve studied with friends who always try to one-up my responses to scientific questions in the race to find the right answers. I’ve witnessed jealousy and bitterness on my friends’ faces when I mention my summer internship at a big-name college. And I’ve met students who cower in fear at a contrary opinion or an unfamiliar idea.

    As I’ve already mentioned, everyone has a different point of view on competitiveness. David Papineau, Professor of Philosophy at King’s College London and the City University of New York, remarked that women are fewer in number than men in philosophy because “Where many men will relish the competitive challenge and enjoy the game for its own sake, many women will see it as the intellectual equivalent of putting balls in pockets with pointed sticks, and conclude that they could be doing something better with their lives.” Indeed, certain fields, such as philosophy or economics, are much more driven by competition than others. Competition also distinguishes itself from other forms of rigor in that it carries a social component. In light of this, it may seem like competition truly is as trivial as a joust between knights or a foot race among Greeks. If one believes competition is just “playing a game” against others, then he/she might not be inclined to seek it out. Not only can we manufacture competitiveness through economic, social, biological, and scientific behavior, but we may also observe competition as a phenomenon in and of itself.

    Parzival’s tale of jousting and chivalry may exemplify the game of competition that we encounter today. Is education the same way?

    Competitiveness is unavoidably everywhere in academia & education. We compete with each other for positions in graduate programs, different research labs work on the same problems through their respective approaches, and, whether we like it or not, our course grades are usually at least somewhat determined by how well we do relative to the other students in the class. And we pre-medical students have a unique kind of competitiveness among ourselves. More generally, through research, discourse, and teaching, knowledge (at least in academia) is itself a competition among ideas. When we test the conductivity of nanomaterials or inquire about Schopenhauer’s view of Kant, we are inviting our presupposed beliefs to be contested against those of others. Just as the species most apt to survive endure the challenges of a biological environment, any flaw or uncertainty should, and must, be removed from our foundation of knowledge.

    Education is hard. Is competition the only source of this difficulty? If so, could we attain the same rigor necessary to achieve our goals without the harms of competition? Imagine a world in which students sit through exams and classes to receive grades, and anyone who scores above a certain percentile, logs a fixed number of volunteer hours, and earns an adequate score on some external scale is admitted to medical school or receives an appropriate GPA. By this, I mean that, without competition, students could not receive any judgment, or any source of improvement, that comes from their relation to their peers. We would need to construct an objective, external framework to provide feedback and criteria on a student’s performance. Certainly, education would still be very difficult, but, apart from the measure of a student’s performance on this scale, a student would face only an “internal” struggle to find motivation to improve from within, rather than the long-lasting dream of being the best epistemologist or heart surgeon in the world. From the competition to enter medical school, we’ve seen cheating and desperation. It seems almost unavoidable that, if you feel forced to cheat to secure the best grade possible, you will ask your friends for disclosed information about exams and essays. But among philanthropic initiatives, we’ve seen better results. Adam Smith may provide a framework for finding harmony among competing individuals, each “led by an invisible hand to promote an end which was no part of his intention,” but the harmful “two-sidedness” of competition between individuals in a biological environment becomes socially wasteful (there is no biological “social contract” of the kind we find in economic theory). This Pareto-inefficient biological competition only serves to increase the difficulty for other individuals without adding value in and of itself. In light of these studies from different fields, is it possible for us to obtain an objective framework of competitiveness for education? Even with the selfishness, destruction, hatred, jealousy, and violence that stem from competition, those like Gandhi might concede that competition is the aggressor of human behavior.

    I believe we can embrace an ethical approach to a competitive education. Competition in education encourages social ties among peers who share the same goals. There are downsides, such as anxiety and desperation, that promote sly behavior out of self-interest, but if we use competition among students to promote constructive criticism, honest feedback, and the desire to improve oneself, while avoiding the corrupting consequences of competition, we may be okay. An education system without competition would rely too heavily on an “external” objective framework that is difficult to create and inefficient at promoting the qualities a student needs. Competition among students draws out the worst in us, but it is our responsibility to control our own behavior regardless of how the student next to us behaves.

    But if my struggle to improve and get work done is about as intellectual as a sport between combatants, then I should just be happy I’m enjoying it. 

    July 27, 2015
    Education

  • Academic Burnout: Taking Breaks and Breaking Habits

    “It isn’t the mountains ahead to climb that wear you out; it’s the pebble in your shoe.” — Muhammad Ali

    Though this saying has been around since long before Muhammad Ali, I think the sentiment explains how we forget about what’s really important to us during bigger endeavors. It’s very relevant to the burnout that students and faculty face. When my friends and I recall the troubles and trials we have faced over our college years, we often draw upon the long nights with problem sets, lab reports, and essays. When I feel tired after finishing a Logic proof, I sometimes wonder whether I will truly be able to handle much more difficult and demanding work in the future. My initial impression is that the endurance necessary for a graduate-level education or any cool career is much greater than anything I have to sit through during my four years working towards an undergraduate degree. I’ve even been considering taking a gap year or two before graduate/medical school. It becomes apparent, though, that, in order to prepare ourselves for the challenges and problems of the future, we must learn how to adapt to the minor struggles that continuously wear away at us over a long period of time. I think Dr. Richard Gunderman put it best when he wrote, “Burnout is the sum of hundreds and thousands of tiny betrayals of purpose, each one so minute that it hardly attracts notice.”

    My personal habits and behaviors during academic semesters tend to follow a similar pattern: I start out with a type-A attitude about my schedule, in which I wake up early, exercise well, attend classes, do research, and get to bed early. But, between the group work required by problem sets and the poor planning and organization of events, I find myself slowly shifting towards the more erratic, unpredictable schedule of a night owl who can’t fall asleep unless he has drained his mind browsing Reddit. With the unpredictability and instability of college life, complete autonomy over my schedule and daily behavior usually seems impossible, or at least highly unlikely. The semester would finish with my poor habits spilling into summer and winter breaks, only for the cycle to repeat once the semester started again.

    Burnout, certainly emblematic of bad habits molded together over time, has its roots in something deeper than the social phenomenon of how much pressure people face at work. Burnout is more of a personal issue in which one no longer finds meaning or value in what he/she does. Ayala Pines, an Israeli psychologist who has done extensive research on burnout, has written that “The ones who had some traumatic experience related to insurance when they were children—their house burned down or whatever—they can work for a long time without burning out because they came to the profession with a calling. They feel their work is significant.” If students can work to find significance in whatever they do, then they, too, will work optimally in the future and, hopefully, avoid burnout. The college student who fails to find meaning in what he/she does over time is much more likely to suffer burnout than the one who sincerely takes the time to appreciate a true purpose in his/her education. I had assumed that my temporary academic burnout at the end of each college semester was due to poor lifestyle decisions during the school year; but perhaps, if I were to frame my daily and short-term decisions in the context of wider, overarching purposes that give significance to my education, I might develop an endurance that will carry me throughout my career. Instead of studying and working for vapid, lazy reasons, it might be more helpful for us to engage in a healthy self-reflection on the purpose of our daily routine. This would curb the desire to procrastinate, the tendency to plan poorly, and the other unhealthy manifestations of a lost purpose.

    In order to tackle academic burnout, we must understand how habits are formed and how to break the bad ones. Similar to the minor struggles that work away at our purpose over a long period of time, isolated instances of harmful behavior create bad habits that eventually harden into broader behavior or, worse, character. And only a poorly-written blog post that attempts to find value can save me from the meaninglessness of my education.

    July 24, 2015
    Education

  • “What’s in a rose? That which we call a name”: Semiotics in Science

    “What’s the use of their having names,” the Gnat said, “if they won’t answer to them?”
    “No use to them,” said Alice; “but it’s useful to the people that name them, I suppose. If not, why do they have names at all?”
    “I can’t say,” the Gnat replied.
    -Lewis Carroll 
    Through the Looking-Glass 

    Did you know that quarks and gluons have an assigned “color” (red, green, blue), even though those particles don’t actually appear as those colors to our eyes? Physicists chose red, green, and blue to describe them even though the labels have nothing to do with wavelengths of visible light. The names were, more or less, arbitrary, chosen for the pleasure of it. But assigning a name to something in science has a very different meaning depending on which discipline we choose to speak of. The naming conventions and the rationale behind choosing specific names in biology are quite different from the way physicists assign names to things. When the mathematicians of the Islamic Golden Age wanted to construct an expression with a single unknown that could be solved for, they chose what we now render as the variable “x” in part because it echoed the Arabic word (شيء) “shay,” which means “thing.” It was also favored by European mathematicians like Descartes for the significance of “x” in signifying things that are “unknown” or “hidden” (e.g., X-rays, the Greek prefix xeno-). What a beautiful way to express, quite literally, any “thing.” But, more fundamental to our understanding of the universe than the actual names we give to things, we can explore how we give meaning to them. Rather than the study of meaning itself (semantics), an inquiry into naming in science aligns more with how meaning is created (semiotics).

    If our duty as scientists is to make discoveries in the unknown, it is implicitly our duty to assign names to things. Theories, proofs, equations, species, molecules, software, and anything else that has a name in science has a bit of history and humanity in it. I feel as though, as a researcher in the biological sciences, I spend much more time talking about names than a researcher in physics, a field in which names are, more or less, chosen arbitrarily. After all, Eco chose the title of his novel “The Name of the Rose” partly because “the rose is a symbolic figure so rich in meanings that by now it hardly has any meaning left.” This is not to suggest that physicists show no creativity or subjectivity in their naming; Murray Gell-Mann’s “quark” arrived from a verse in Finnegans Wake. Maybe the arbitrariness of naming in physics stems from a physicist’s desire for simplicity and reductionism, a modern “form from function” ideology that pays no attention to semantics. Ironically, it also means that the search for aesthetic beauty and elegance in mathematics and physics causes us to overlook the allure and blessings of meaning in names. This silliness of naming in physics may be reflected in how the common names of phenomena and theories are often misleading. The “God Particle” was originally meant to be the “Goddamn Particle,” but the latter’s inappropriateness, and the former’s resonance with the search for the origins of the universe, caused the change in name. Following this, we found the general public thoroughly confused about the significance of the discovery of the Higgs boson. And the name “Big Bang” was merely chosen “to create a picture in the mind of the listener” (Mitton, Fred Hoyle: A Life in Science, Cambridge University Press), not to suggest that the Big Bang occurred long ago and ceased to have an effect on the universe of today.

    Three quarks for Muster Mark!
    Sure he has not got much of a bark
    And sure any he has it’s all beside the mark.

    While Aristotle was among the first to classify living things, our modern system of biological taxonomy mostly derives from Carl Linnaeus’ work on classifying organisms. Unlike in physics, naming in biology is a big deal. Quite recently, we’ve even witnessed the birth of the name “bioinformatics” for the intersection of computer science, statistics, and biology, while computational physics was never, and likely never will be, called “physinformatics.” When Darwin wrote “On the Origin of Species,” he was certainly speaking about the distinct species of animals and plants he observed from the H.M.S. Beagle, but did we ever really have a good definition of what a “species” is? The nomenclature codes for assigning names to organisms have undergone numerous revisions. “In practical terms, the problem comes down to the need to impose a discrete classification (taxonomy) upon an essentially continuous phenomenon; i.e. biodiversity” (Chambers). Such codes are incredibly important if biologists are to communicate the new, unknown species and phenomena of the living world to one another. Given the daunting nature of this problem, one might be inclined to follow the position of Robert J. O’Hara and write off the question of why we call one group a “species” rather than another as irrelevant and unimportant. As Anna Graybeal summarizes O’Hara’s position, “we cannot know which organisms are best grouped as species because we do not know what will happen in the future.” O’Hara describes the naming systems of modern biology as following from “cartographic generalizations”: we assign names the way a cartographer decides what is important enough to draw on a map. We use advanced and ambitious technology to map the evolutionary trees of all living things the same way we might construct a map of the physical world. We could then sidestep the problems of classification by recognizing that the names given to “species” depend upon predictions about future evolutionary “states,” and that the precise meaning of “species” is therefore beside the point. Graybeal, on the other hand, favors an approach to what a “species” is that accommodates the small-scale processes of biological reproduction that give rise to the relationships and lineages of phylogenetics. Instead of trying to define “species,” Graybeal hopes to recognize the types of descent and interbreeding that give rise to the diversity of life. And through our newfangled “bioinformatics,” we could explore when it is appropriate to add taxa or branches to our trees, and let an examination of the most optimal groupings tell us how to do so, as in the toy sketch below.
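    To make that concrete, here is a toy sketch (my own illustration in Python; the class and the taxon names are invented for the example, not drawn from any real bioinformatics library) of a tree in which “naming” a new species is literally an operation of splitting a branch:

    ```python
    # A toy phylogenetic tree: purely illustrative, not a real
    # bioinformatics API. A node is either a named taxon (leaf)
    # or an unnamed ancestor with children.
    class Node:
        def __init__(self, name=None, children=None):
            self.name = name                # taxon name; None for ancestors
            self.children = children or []  # empty list for leaves

        def add_taxon(self, sibling_name, new_name):
            """Graft a new leaf next to an existing taxon by splitting
            the branch above it: the discrete act of naming a species."""
            for i, child in enumerate(self.children):
                if child.name == sibling_name:
                    # Replace the old leaf with an unnamed ancestor
                    # holding both the old taxon and the new one.
                    self.children[i] = Node(children=[child, Node(new_name)])
                    return True
                if child.add_taxon(sibling_name, new_name):
                    return True
            return False

        def leaves(self):
            if not self.children:
                return [self.name]
            return [leaf for c in self.children for leaf in c.leaves()]


    tree = Node(children=[Node("Pan"), Node("Homo")])
    tree.add_taxon("Homo", "Homo neanderthalensis")
    print(tree.leaves())  # ['Pan', 'Homo', 'Homo neanderthalensis']
    ```

    The point of the toy is exactly O’Hara’s and Graybeal’s: where we choose to split a branch is a modeling decision we impose on a continuous process, not something the organisms hand to us.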

    Will science give us the answers to the best ways to name things? And, if we let the fate of the universe decide how we assign names to scientific phenomena, then that might give science the “objective” truth that it needs to prosper. Through our discoveries in physics, biology, and any field in-between, we can look at what our names of the universe say about us. 

    July 24, 2015
    Science

  • "What’s in a rose? That which we call a name": Semiotics in Science

    “What’s the use of their having names,” the Gnat said, “if they won’t answer to them?”
    “No use to them,” said Alice; “but it’s useful to the people that name them, I suppose. If not, why do they have names at all?”
    “I can’t say,” the Gnat replied.
    -Lewis Carroll 
    Through the Looking-Glass 

    Did you know that quarks and gluons have an assigned “color” (red, green, blue), even though those particles don’t actually appear as those colors to our eyes? Physicists chose red, green, and blue as colors to describe them despite the fact that their emitted wavelength of light does not lie in the visible spectrum. Rather, the names were, more or less, arbitrary and for the sake of pleasure. But assigning a name to something in science has a very different meaning depending on which discipline we choose to speak of. The naming conventions and rationale behind choosing specific names for things in biology is quite different from the way physicists assign names to things. When the mathematicians and alchemists of the Islamic Golden Age and Roman Empire wanted to construct a expression with a single degree of freedom that could be solved for, we chose the mathematical variable “x” because it was the closest sound we had to the Arabic word (شيء) “shay,” which means “thing.” It was also chosen by European mathematicians like Descartes due to the significance of “x” in signifying things that are “unknown” or “hidden” (ie., X-rays, the Latin prefix xeno-). What a beautiful way to express, quite literally, any “thing.” But, more fundamental to our understanding of the universe than the actual names we give to things, we can explore how we give meaning to certain things. Rather than the study of meaning itself (semantics), an inquiry of naming in science may align more with how meaning is created (semiotics).

    If our duty as scientists is to make discoveries into the unknown, it is implicitly our duty to assign names to things. Theories, proofs, equations, species, molecules, software, and anything that has a name in science has a bit of history and human in it. I feel as though, as a researcher in the biological sciences, we spend much more time talking about names than a researcher in which, a field in which names are, more or less, chosen arbitrarily. After all, Eco chose the title for his novel “The Name of the Rose” partly because “the rose is a symbolic figure so rich in meanings that by now it hardly has any meaning left.” This is not to suggest that physicists may not show creativity and subjectivity in their roles, as Murray Gell-Mann’s discovery of the “quark” arrived from a verse from Finnigan’s Wake. Maybe the arbitrariness of naming in physics stems from a physicists desire for simplicity and reductionism and a modern “form from function” ideology that pays no attention to semantics. Ironically, it also means that the search for aesthetic beauty and elegance of mathematics and physics causes us to overlook the allure and blessings of meaning in names. This silliness of naming in physics may be reflected in how the common names of phenomena and theory are often misleading. The “God Particle” was originally meant to be the “Goddamn particle”, but the latter’s lack of appropriateness and the former’s similarities of its discovery to the searching for the origins of the universe caused the change in its name. Following this, we found the general public thoroughly confused about the significance of the discovery of the Higgs Boson. And the name”Big Bang” was merely chosen “To create a picture in the mind of the listener,” (Mitton. Fred Hoyle: A Life in Science. Cambridge University Press), not that the Big Bang model occurred long ago and ceased to have an affect on what the universe of today.

    Three quarks for Muster Mark! Sure he has not got much of a bark And sure any he has it’s all beside the mark.

    While Aristotle was the first to classify all living things, our modern system of biological taxonomy is mostly derived from Carl Linnaeus’ work on classifying organisms. Unlike in physics, naming in biology is a big deal. Quite recently, we’ve even witnessed the birth of “bioinformatics” given to the intersection of computer science, statistics, and biology, while the computational fields of physics and chemistry were never, and never will be called “physinformatics” or “chemoinformatics.” When Darwin spoke “On the Origin of Species,” certainly he was speaking about distinct species of animals and plants he witnessed on the H.M.S. Beagle, but did we ever really have a good definition of what a “species” is? The Nomenclature codes on assigning names to organisms has undergone numerous revisions since the days of Aristotle. “In practical terms, the problem comes down to the need to impose a discrete classification (taxonomy) upon an essentially continuous phenomenon; i.e. biodiversity” (Chambers). This is incredibly important for biologists to communicate the new, unknown species and phenomena of the living world to one another. Given the daunting nature of this problem, one might be inclined to follow the position of Robert J. O’Hara, and write the problem of what reason to choose one thing as a “species” over another reason as irrelevant and unimportant. As Anna Graybeal summarizes O’Hara’s position, that “we cannot know which organisms are best grouped as species because we do not know what will happen in the future.” O’Hara describes the naming systems of modern biology folllowing from “cartographic generalizations”, as in, we give names to the geographical importance of biological phenomena. We use advanced and ambitious technology to map the evolutionary trees of all living things the same way we might construct a map of the physical world. We could easily overlook the problems of classification by recognizing that the names given to “species” and individuals on a small level depend upon future predictions of those evolutionary “states”, and, therefore, the meaning of “species” is irrelevant. Graybeal, on the other hand, herself favors an approach to what a “species” is that would accommodate the small-scale processes of biological reproduction that give rise to relationships and lineages of phylogenetics. But, instead of trying to define “species”, Graybeal hopes to recognize the types of descent and interbreeding that give rise to the diversity of life. In addition, through our newfangled “bioinformatics”, we could explore how to add taxa or branches to the difficult problems of biology when appropriate and an examination of the most optimal ways tell us how to do so.

    Will science give us the answers to the best ways to name things? And, if we let the fate of the universe decide how we assign names to scientific phenomena, then that might give science the “objective” truth that it needs to prosper. Through our discoveries in physics, biology, and any field in-between, we can look at what our names of the universe say about us. 

    July 24, 2015
    Science

  • The (Wrong) Reasons to Become a Doctor: A Medical Ethicist’s Perspective

    This post is written from the point-of-view of pre-medical students, but I believe the issues and topics that I discuss can be applied to any undergraduate student who has a desire to learn. 

    As we search for meaning in our lives, we worry most about the question “Why do we want to become a doctor?” Indeed, as our fragile souls are knocked and swayed by existential crises and by moments of doubt and insecurity at the overture of every Chemistry exam or weekend of volunteering, our searches for meaning and satisfaction ultimately leave us with only our constructed answers. Though it would be ridiculous to make decisions about the rest of our lives in response to the temporary moodiness that marks any neophyte, whether we like it or not, we undergraduates are forced to ask ourselves what we want to do with our lives and why. It’s important to keep the bigger perspective: while we are here to learn about ourselves and the rest of the world, we should not feel pressured to forget about our purpose.

    You’ve probably heard the far-too-oft-repeated buzz-purpose “to work with people” or “to help people.” I mean, it seems like an easy option, right? Despite its banality, who wouldn’t want to work with other people or help others? It seems to be the prime quality of an empathetic, righteous human being. Throughout volunteering, extracurriculars, research, and whatever else we do, we find ourselves doing things out of a love of helping others and working with people. Digging deeper, we must ask ourselves what it really means to “work with people.”

    Yesterday, I had the wonderful opportunity to speak with Dr. Daniel Sulmasy, Professor of Medical Ethics at the University of Chicago Pritzker School of Medicine, and to ask him whether he had made any observations about the motives and purposes of medical students in their work. I explained how I had already spoken with several professors about how our motivations for why we learn and behave may have significant implications. Specifically, I asked Dr. Sulmasy about our pressure to act and do things for “utilitarian” purposes, rather than finding a “truer” meaning behind the things we do. He responded that there was indeed a difference between students pursuing actions for “intrinsic” rather than “instrumental” purposes. One can imagine that studying because learning organic chemistry reactions has an intrinsic value that chemists and scientists truly appreciate may be more helpful, and more successful, than studying for the sake of obtaining a decent grade on an exam. Perhaps we pre-medical students should use the intrinsic values of our academics to find deeper meanings beyond “a desire to work with people.” As another example, we find students who study poetry to satiate a desire to understand the human condition and the beauty of art at odds with students who study how to program software that can detect viruses in human DNA. Sulmasy continued that this battle among our motives and purposes is a test of sincerity that very often shows in graduate & medical school applications; aside from the amusing alliteration of “intrinsic” and “instrumental,” his comparison drew from examples he had seen of students doing things for the purpose of obtaining a reward out of their pre-medical and medical experience. And one does not need to throw away the motive of “utilitarianism” altogether. We could argue that, if students approach their studies for more “intrinsic” purposes, then society will have more motivated and mindful scientists & doctors who provide a greater practical benefit for everyone. And, of course, these issues appear among all students, not just pre-medical students.

    Back to the troubling idea of the proper reasons for becoming a doctor: we could choose to say that “we want to solve the problems of the world.” Whether you’re studying theoretical mathematics or building irrigation systems in Sudan, the world has a lot of problems. And the “problem-solving” rhetoric has pervaded all of society, especially the STEM fields. But what is “problem-solving”? It seems clear enough on the surface. A college education allows students to approach different types of problems. Who wouldn’t want someone who can think through problems for them? We hear about this a lot, especially in mathematics. We talk about how the U.S. needs more problem-solvers, how STEM is going to gift us these important skills, and even how our medical schools place a huge emphasis on them. But what does “problem-solving” really mean?

    While it is true that the pre-medical career and STEM courses give us amazing problem-solving abilities, the rhetoric for pursuing these opportunities is written with a solidly practical undertone. But let’s not write this off too quickly as meaningless materialism. After all, we do want students to become scientists, engineers, and doctors so that they will help solve the problems of society. And this desire to help society may stem from the humane virtues of empathy and love of what humans do. In reality, though, I do not believe it is even possible for students to learn how to solve the problems of society during four years of sitting through lectures and laboratory discussions. Even for the most professional jobs, the administration of academia can’t reasonably give us the specific skills to solve the problems the world will face tomorrow. And I do worry that our overemphasis on the rhetoric of a STEM education causes us to lose sight of the similar values that can be obtained from studying the Arts & Humanities (recall my earlier comparison of a student studying poetry vs. a student studying software engineering). After all, while a Computer Science major may know how to write the lines of code that could develop the next Uber, the English major will understand the complicated “expectations vs. reality” of Silicon Valley that tells us which ideas are good ones. It is still unclear how we should look at our education as a way of developing problem-solving skills, because we don’t have a good idea of where those problems will lie, now or in the future. Without any doubt, we know that the liberal arts education must be protected to ensure students can obtain the full value of the disciplines and material that we choose to study.

    Is our notion of “problem-solving” too broadly defined? If we look at our college education as a way to learn how to solve problems, then it certainly seems to satisfy both a utilitarian purpose that will help us in future careers and a more “scholastic” purpose that does justice to the reasons why we should learn. Is this why we struggle to find meaning in our college education? All we know is that we must go beyond the superficial desires to “work with people” and “help others” that pervade the American education system if we are to know what we really want to know.

    July 24, 2015
    Education, Medicine

  • The Unspoken Harm in the (Pre)-Medical Experience: On History and Education

    When the United States established its medical school system, we could have easily chosen to mimic Europe and create an alternative to the undergraduate degree for students who specifically wanted to become doctors. Instead, we created an idea of a “pre-medical student” who would finish a four-year bachelor’s degree in addition to pre-medical requirements before entering medical school. This would allow a unique, liberating approach in which we embrace the fruits of a liberal arts education while simultaneously preparing for professional school.

    The pre-medical requirements include required coursework, volunteering, extracurriculars, shadowing experience, a good GPA & MCAT score, recommendation letters, and maybe a hobby or two. As information about these requirements disseminates through the advisors, organizations, faculty, and students of college campuses, we perceive strict, superficial standards for what pre-medical students should do that are, in fact, quite arbitrary and variable. By “arbitrary,” I mean that we aim for illusory goals and set standards to ensure we complete X, Y, and Z before we apply to medical school. We follow in the footsteps of the students above us, never stopping to question the effects this dogmatic approach to our education has on us. I will explore what effect pre-medical requirements have on our education; but, since there is very little literature and research on the issues pre-medical students face, it is more helpful to first understand the problems of education in medical school and see whether they are reflected in our behavior. Through a study of the history of medical education, we can understand how we, pre-medical students, can move forward in the uncertain 21st century.

    Before medical schools were firmly established in the States, soon-to-be doctors would serve menial duties under experienced physicians for a required term that culminated in clinical activities. By the late 1800s, the first medical schools, constructed to be intertwined with hospitals, offered a breadth of education in the classics alongside clinical training. After completing a bachelor’s degree (including knowledge of philosophy, mathematics, and Latin), students completed a year of training before an apprenticeship and lectures. The overall structure seems similar to the way a pre-medical student must complete required coursework before entering medical school, residency, and a career.

    The ominous, exciting turn of the 20th century: “Hand mit Ringen” (Hand with Rings), the first medical X-ray.

    But how has medical education changed over time? With advances in technology, science, and epistemology, medical education has always struggled to prepare students for the biggest problems of the future while retaining the human values that we cherish. With progress and breakthroughs in science, medical schools came to emphasize new technology and research-driven approaches in their education. This has forced medical schools to embrace confusing tendencies that attempt to balance the art and science of medicine with one another. We realized we needed a professional ethos to guide us, and students and physicians would reflect on the changes that flew before their eyes. Maybe the course of medical education has been guided by nostalgia. Even as far back as 1953, we find those who “lament the fact that the personal tie between teacher and pupil no longer plays so vital a part in medical education, and they would urge a return to the earlier mode of teaching.” But why stop there? In 1910, years after the establishment of a science-driven medical school at Johns Hopkins, the educator Abraham Flexner recorded the reminiscence of one student:

    “Our teachers were men of fine character, devoted to the duties of their chairs; they inspired us to enthusiasm, interest in our studies, and hard work, and they imparted to us sound traditions of our profession; nor did they send us forth so utterly ignorant and unfitted for professional work as those born of the present greatly improved methods of training and opportunities for practical studies are sometimes wont to suppose.” 

    Despite the progressively aggregating amounts of knowledge and their effects on human health, Flexner recognized that the educators of the past were never as useless as students of the present might think. Although Flexner does not suggest that the medical students of his time were succumbing to an ominous, self-obsessed era of scientism, and although it would be ridiculous to assume that amazing advances in the sciences were driving students of medicine amok, there is still a balance between the sciences and the humanities to be found in order to completely realize Flexner’s vision of American medical education. For example, as the United States embraced the advancements of a science-driven era, some hoped that the study of the history of medicine would create a “counterbalance” to the “reductionist hubris” that plagued a physician’s knowledge in the science-dominated era.

    I would occasionally browse Reddit on my phone as I sat in the crowded lecture hall during my Organic Chemistry II course this past semester. I would only look up at the notes on the projector when the professor had finished writing something new for me to copy into my notebook. Despite my distraction, I would review my notes over and over again until the professor un-paused the streaming of his feature-length film, “Organic Chemistry II Notes.” We glazed over the information, soaking in anything and everything, comfortable that the professor would “only teach the things that would be on the test” and tell us “this is how you solve the problem,” complete with practice exams and sample problems identical to those on the exam. I did pay attention, as I found the puzzles and forces that govern chemical reactions incredibly interesting. But I lamented that the joy of learning science had been reduced to a requirement we were forced to suck up.

    I dropped that course later on in the semester.

    Luciano Nezzo, 1856, A tooth-drawer concealing the dental key from the patient

    Studies as recent as 2004 have examined the role that our modern economy has played in medical education. “Habits of thoroughness, attentiveness to detail, questioning, and listening are difficult to instill when learning occurs in a clinical environment more strongly committed to patient throughput than to patient satisfaction. In addition, it is difficult to imagine how caring attitudes can be developed when medical education is done in a highly commercial atmosphere.” Necessities such as “questioning” and “thoroughness” are similar to the skills that I believe undergraduates need, and I believe these issues of a “commercial atmosphere” among medical students parallel those that undergraduates face. Our business-like emphasis on fulfilling pre-medical requirements to please admissions officers breeds inhumane utilitarianism and dogmatic groupthink.

    Viewing the undergraduate education as a means of gaining something (whether that fools’ gold is skill, experience, recommendation letters, grades, or even the ambiguous buzzword “professional experience”) moves us towards utilitarianism, a motive with harmful consequences. We end up with a materialistic view that loses the humanistic value of an education. This materialism is marked both by a lack of wonder and awe at scientific phenomena (recall Flexner’s emphasis on the combined role of humanism and science) and by greedy, reward-driven pre-medical requirements (recall the mindset of medical education in a modern economy). While it would be ridiculous to say all pre-medical students engage in extracurriculars solely to become better medical school applicants, there is no doubt that many are motivated by the idea that there is a practical benefit in what they do. This practicality-driven view of our education also gives rise to anxiety and envy. These traits of a utilitarian approach to the pre-medical career may cause trouble if we are ever to fulfill Flexner’s dream of finding a human value in medicine.

    Most importantly, if students are given a list of requirements that they must complete, then we will begin to see the fulfillment of those requirements as the purpose of our education. This, as I’ve explained with evidence from Harvard Medical School, should not be the case. Echoing the reductionist attitudes that Flexner criticized over a century ago, we cannot simply view our activities as means of gaining a practical benefit. The list also forces us to aim towards the “end goal” of gaining experience in these requirements before we have given enough thought to the meaning and challenges of those “goals.” Learning how to ace an interview and how to approach ethical issues in medicine become means of preparing for the future rather than methods of discovering the true purposes of those activities. By this, I mean that pre-medical students are often encouraged to approach the requirements directly, without adequate training in or discussion of those subjects.

    All in all, we need more discourse among students. The lack of discussion, debate, and criticism about these issues demonstrates the groupthink that causes us to lose sight of what the value of an education should truly be. We mustn’t reject everything pre-medical students have done, nor devalue whatever benefit we have already obtained from those activities. Rather, we should build upon what we’ve done and ask what the true meaning of a medical education should be.

    As the old saying goes: Those who fail to learn from history will probably become doctors.

    July 16, 2015
    Education, Medicine

  • Rhetoric and Models of Learning: Memorization vs. Application, Bloom’s Taxonomy

    I’m only four weeks into my internship at the Conte Center for Computational Neuropsychiatric Genomics at the University of Chicago, and I’ve already heard three different people tell me the statistical aphorism, “All models are wrong, but some are useful.” Of course, it would be ridiculous to suggest that a model, graph, or diagram of any sort could, by its own nature, completely represent reality. But students and researchers in mathematics and statistics understand very well that data and information can often be misleading. We often look for theory in data without knowing what the data actually represent, and the way we communicate scientific information affects how another person perceives it.

    When I explored the origin of this quote, I discovered it was from the famous statistician George E. P. Box:

    Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.

    (Box invokes William of Occam to reference Occam’s razor: scientists should make the fewest assumptions, an “economical description,” when forming hypotheses.) I take Box’s statement to encourage scientists to focus on elegance alongside the actual information they are attempting to convey when building a model. Especially in the biological sciences, it is easy to be inundated by the sheer amount of information and the complexity of the phenomena being explored. A move towards more deliberate model design, even if only for the utilitarian sake of conveying information effectively, would be welcome.
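
    Box’s warning is easy to demonstrate in a few lines of code. The sketch below is my own toy example (the dataset, seed, and polynomial degrees are invented for illustration, not anything from Box): an economical model and an overelaborated one are fit to the same noisy trend, and the overparameterized fit typically hugs the data it has seen while doing worse on fresh data.

    ```python
    # Toy illustration of "overparameterization is the mark of mediocrity":
    # fit an economical model and an overelaborated one to the same
    # noisy linear trend, then compare errors on unseen data.
    import numpy as np

    rng = np.random.default_rng(0)

    # The "natural phenomenon": a simple trend plus noise.
    x_train = np.linspace(0, 1, 20)
    y_train = 3.0 * x_train + rng.normal(0, 0.3, x_train.size)
    x_test = np.linspace(0, 1, 200)
    y_test = 3.0 * x_test + rng.normal(0, 0.3, x_test.size)

    for degree in (1, 9):  # economical vs. overparameterized
        coeffs = np.polyfit(x_train, y_train, degree)
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
    ```

    Both polynomials are “wrong,” in Box’s sense; the economical one is simply the more useful description.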

    Though the way we structure the models we publish in journals and newspapers is very important, the idea that our models are always wrong points to a deeper truth about communication: the rhetoric we use is never perfectly clear either. On the first day of my Organic Chemistry I lectures, our professor explained the way we were going to learn the material for the course so as to be as successful as possible (both in the course and in our futures). He drew a graph on the chalkboard that looked something like this:

    I’m not even going to calculate the R²-value for this one.

    As students in the STEM fields, we’re all too familiar with this moral lesson: focus on application rather than memorization. I think it is safe to assume that most of the students in the class understood that it is much more useful to learn how to apply problem-solving techniques and other concepts than to memorize definitions and facts in a science course. You should try to understand the reasons why a chemical reaction occurs in an organic chemistry course rather than memorize the placement of electrons and arrows. You should be able to identify how to use theorems to solve a mathematics problem rather than merely memorize equations. Without starting a discussion of whether this is actually a good model (after all, the drawing was never a serious scientific model), there are a few things worth discussing about this distinction between “memorization” and “application.”

    The first issue that comes to mind is that memorization and application are not two opposite, separate entities. To be able to “apply” concepts, one must already have those concepts, or something about them, “memorized.” When I understand how to “apply” Schrödinger’s equation in a physics problem, I must have “memorized” what each variable means. If I were to categorize the facts necessary for an exam into those that must be “memorized” and those that must be “applied,” it would be clear that there is a huge overlap between the two, and we would be left with vaguely defined terms. This ambiguity can be observed in the tests of various classes as well, as we see professors defend their tests and curricula as emphasizing “application” rather than “memorization” without actually exploring the skills necessary to do well in those courses.

    But maybe there are better (or less wrong) models for explaining how students learn? Some of my professors have tried using Bloom’s taxonomy to alleviate some of these issues.

    Named after the educational psychologist Benjamin Bloom, this model views learning as a hierarchy in which one starts with the most basic skills at the bottom and moves up towards the “higher-level” skills. One must first memorize information, then understand it, and so on, until he/she reaches whatever level satisfies him/her. By viewing “Apply” as simply a level above “Remember,” it is clear that one must be able to remember information before applying it. This seems to resolve the first issue I described. But it’s still a shame that so many students and professors pass around Bloom’s taxonomy without looking at what Bloom himself had to say about it:

    The phenomenal growth of the use of the Taxonomy can only be explained by the fact that it filled a void; it met a previously unmet need for basic, fundamental planning in education.

    Indeed, when Bloom created a model of the different ways students learn, it wasn’t built upon any deep logic or rationale; rather, it was only meant to serve as a way for teachers and professors to classify the different ways students learn. It’s also not exactly clear that a given test in a university course requires distinct processes that can be easily placed into this hierarchy, although it would be very difficult to adequately explain those processes through any type of model. And, as the researcher Richard Morshead points out:

    I call attention to it because it functions as an important part of the rationale currently used by the authors to support their entire triparted taxonomic project. Here, they point out that when distinguishing between affective and cognitive objectives, they are not to be interpreted as suggesting that there exists a parallel distinction built into the basic fabric of behavior. (p. 45) They assert that the Taxonomy is purely an analytic abstraction. Its division into three domains, cognitive, affective, and psychomotor, is an arbitrary arrangement that seems to best reflect the way in which educators have traditionally classified teaching objectives. (p. 47) It does not reflect intrinsic separations within behavior….

    As the authors themselves point out, all too frequently our descriptions of the behavior we want our students to achieve are stated as nothing but meaningless platitudes and empty cliches. (p. 4) If our educational objectives, they continue, are actually to give direction to the activities of both students and teachers, we must “tighten” our language by making the terminology with which we express our aims more clear and meaningful.

    In David Flinters’ review of “Bloom’s Taxonomy: A Forty-Year Retrospective,” edited by Lorin W. Anderson and Lauren A. Sosniak, he asserts:

    Moreover, a dilemma common to both communication theory and educational practice is that the more similar a “message” (in this case the Taxonomy) is to the beliefs of its target audience, the more likely it will be embraced. 

    So, should we throw away Bloom’s taxonomy completely? Probably not. But professors and students need to discuss it within the context that Bloom himself laid out for it. It’s important that we not limit our abilities to think and learn based simply on how we want, or believe, human beings to think through course material.

    July 14, 2015
    Education

  • A Natural Talent for the Natural Sciences

    One day, during the first semester of my freshman year at Indiana University, I was sitting among a group of other students at a meeting for an organization. As we went around introducing ourselves, a blonde female student told us that she was majoring in Mathematics. This was met with shock from the other students, complete with audible gasps and double takes. And there’s no doubt that the initial reaction from our group was motivated, at least in part, by the realization that here was a girl who wanted to study mathematics.

    What is really keeping minorities and women from entering certain STEM fields? For the past few months, there has been a lot of buzz around a study in Science showing that our perception of whether success in an academic discipline is due to hard work or to natural talent dictates how easily we can diversify the people in that field. Women and minorities more readily enter academic fields in which one can, supposedly, succeed by working as hard as possible, as opposed to fields thought to require some innate characteristic. Regardless of how true it is that one becomes successful in certain fields through natural talent, the study really only shows what we perceive to be true, rather than anything that is actually true about how to become a successful researcher or scientist.

    But the variable of “hard work vs. natural talent” skews the actual way human beings become good at a discipline, in a way that represents a bigger problem in our society. We’ve clung to the idea of “Nature vs. Nurture” without realizing its shortcomings. Trying to divide abilities into “hard work” and “natural talent” implies that we are not dynamic, growing, ever-shifting beings. When students become good at mathematics, it may often seem “natural,” because mathematics is something that can come to feel natural, but this “natural” ability is only attained through years and years of deliberate practice. The other issue the article ignores is that “hard work” and “natural talent” do not undermine one another. A person with greater natural aptitude does not have less capacity for hard work, and vice versa. In addition, the mere existence of external forces beyond our control (whether a student was born into the right family, the color of a person’s skin, etc.) does not reduce how much control one has over one’s actions. Granted, there may be factors with effects we cannot control (e.g., being African American may cause people to perceive you as unintelligent in a classroom), but at the end of the day, we are still the same rational human beings.

    To more accurately explain the way human beings work, it is better to use the variable “factors out of an individual’s control vs. factors in one’s control.” This is, probably, what the researchers behind the Science study implicitly meant to propose in exploring whether success is due to hard work or natural talent. Seen this way, however, we would have to take an ethical stance on the skills and characteristics of human beings, dubbing certain characteristics as worthy of individualistic autonomy while claiming others are simply the result of uncontrollable forces. The decisions here are tricky because, with poorly defined and unexplored assumptions about the self (and about how much control we have over our actions and lives), we could run into bioethical questions of eugenics, gene manipulation, and totalitarianism akin to those of any dystopian-future novel.

    Perhaps the article in Science exploring our perceptions of success says less about whether one can truly be successful in a field and more about our rationalist ethics. And, thus, Western civilization finds itself at odds with its own autonomy. The more we grant to the free will or control of the individual, the more we distrust the bigger picture.

    July 12, 2015
    Science

  • Re-imagining the Self and Freedom from Distraction

    Last Thursday, I visited the Ryerson and Burnham Libraries of the Art Institute of Chicago, where I meandered through the solitude granted by the bookshelves of cultural theory. Away from the hustle and bustle of the crowded exhibits, I sat against a wall with a book on American history in my lap. I savored the academic freedom of choosing what to study from a myriad of books and the personal freedom of being far from the city and the research lab. But, in the most fundamental sense, what is freedom? I greatly enjoy the freedom given to me by virtue of working in a dry lab. Since I perform all of my scientific research on a computer, I am not burdened by the physical limitations of experimental science, and swaths of knowledge lie only a few keystrokes away. Does having more options and opportunities give us more freedom? Is freedom something that we should strive to achieve at all costs?

    I would love to read some of Matthew Crawford’s new book, The World Beyond Your Head, if I didn’t have six other tabs open at the moment (RescueTime only saves so much of my sanity). Crawford, a fierce critic of distraction culture, draws from the philosophical rhetoric of Locke and Descartes to describe the way we human beings approach the idea of freedom. I am comforted by Crawford’s argument that technology bears only a small part of the responsibility for why we are so enthralled by distraction. As I struggle to find solace amid the bombardment and conformity of information and ideas that prevail in today’s culture, Crawford’s wisdom helps me understand what we human beings truly find important from a humanistic point of view. Though his arguments are too complicated to be fully explained in a blog post, one might be interested in how the anxious worry that our individual autonomy must be saved from the clutches of authoritarian tyranny itself fails to appreciate the individual. Inspired by the advertisements that clutter our urban settings, Crawford chooses to explain social phenomena with heavy, fundamental philosophy.

    When he’s not spending his spare time building motorcycles, Crawford examines everyone from gamblers to chefs as he disentangles the human self from our modern tendencies. Even though the tendencies and currents of contemporary culture are often too complex and rife with external influences to be analyzed philosophically, Crawford’s reasoning describes our current society remarkably well. With a BS in physics and a PhD in philosophy, Crawford is one of the few researchers strong-willed enough to put a question mark at the end of the most basic assumptions about freedom and opportunity ingrained in the Western mind. We can blame the Enlightenment for our notion that “more opportunities” implies “more freedom.” But should we throw away Kantian thought in our current work environment? Not so. According to Crawford, even the most elemental workers (such as welders and construction workers) can find a true sense of individuality under the authorities of society. Similarly, my desire to be free, which the isolation of a library in the Art Institute might grant, may be caused only by the regulations and limitations I perceive from higher-ups.

    The modern discourse at the intersection of philosophy and science is rife with controversy and strong opinions, but one famous physicist had some interesting thoughts on individual freedom or, rather, free will. In his interpretation of Schopenhauer, Einstein writes in “My Credo” (Part I):

    “I do not believe in free will. Schopenhauer’s words: ‘Man can do what he wants, but he cannot will what he wills,’ accompany me in all situations throughout my life and reconcile me with the actions of others, even if they are rather painful to me. This awareness of the lack of free will keeps me from taking myself and my fellow men too seriously as acting and deciding individuals, and from losing my temper.”

    (Schopenhauer’s actual words were clearer: “You can do what you will, but in any given moment of your life you can will only one definite thing and absolutely nothing other than that one thing.” [Du kannst tun was du willst: aber du kannst in jedem gegebenen Augenblick deines Lebens nur ein Bestimmtes wollen und schlechterdings nichts anderes als dieses eine.])

    And, in light of this, Einstein formulated his famous discoveries on, well, light. Despite the enticing appeal of a physicist espousing philosophical ideas that would set the stage for his greatest discovery, we must not jump to the conclusion that questions about free will and human nature can ever be settled by science. Still, those who truly examine the way we approach science and philosophy have something important to say to the rest of society. In tune with the philosophical undercurrents of his time, Einstein’s reflections on the political events of the mid-20th century would later guide movements for peace and humanism that, too, arose from a relentless desire to be free. Under Einstein’s strong influence on social justice and his disdain for extravagance, we entered an age of insecurity and uncertainty about the future. And, as I lean back in my chair and sip coffee while running genetic-analysis software in the comfort of my dry lab, maybe I should be grateful for not being able to do a thing.

    July 12, 2015
    Philosophy

  • Drawing Value from Our Stories: The Mark of the Mature Man

    I want to share a recent experience that sheds some light on the way pre-medical students (and undergraduates in general) perceive their education. For our medical school applications, we are required to write personal statements that show more about who we are and why we are each amazing candidates for the path to becoming a doctor. Unlike the stringent requirements of completing activities (volunteering, research, extracurriculars, etc.), the personal statement gives each of us a unique way to show who we really are. But it raises fundamental questions about us.

    How do our experiences shape who we are? Does the value of a person come from him/herself or from his/her experiences? What gives a person commendable qualities? I believe that it is not the experience that improves a person, but how the person reacts to and acts upon difficult situations; and the most formative experiences are not romanticized tales, but everyday reflections on our lives.

    “The mark of the immature man is that he wants to die nobly for a cause, while the mark of the mature man is that he wants to live humbly for one.” – Wilhelm Stekel

    During my internship at the University of Chicago, I was eating out with a few of the other interns one night. After we ordered our food, I shared a stressful experience of mine from a few years ago. I explained it in great detail and watched the others react in surprise. It was an experience that was shocking, challenging, and unique; it was something that probably no other student had gone through, and it definitely changed me as a person. The other interns suggested it would make a great essay topic for the medical school application. I disagreed.

    The first issue I take with using my story in a medical school application is that the experience was caused by something outside of my control and forced upon me. If I were to mention it in an essay, it would seem as though I were being “opportunistic.” Instead, I wanted an individualistic, empowering approach that would save us from the existential crisis of fate and fortune. I like the idea that value comes from what we make of our stories, not from the experiences themselves, and this may remedy some of the neurotic woes of experience-grinding and resume-padding that plague our education. Instead of searching for meaning and motivation as if they were things we could find and collect in our activities, it might help to view the value of our education as something we create. I’m not sure how far the idea that our interpretation of experiences dominates the experiences themselves can be pushed, but we can revisit it later.

    The other issue with my story is that it draws something “emotional” from the reader. I believe an AdCom would feel uncomfortable, or simply bored, reading sob stories and heart-wrenching tales about how you went to Guatemala and witnessed the “tragic atrocities” that thousands of people face, or about how your perspective changed when your grandmother died of cancer. Any emotion-inducing story can carry a deviant, opportunistic appeal, and, while it may have had a legitimate impact on your life, it is contrived to suggest that it is the reason for your success and worth as a student.

    For example, I was once speaking to another pre-medical student who told me he planned to write his medical school essay about an experience in which he helped save a girl from committing suicide. Setting aside the fact that saving a person from suicide is something any rational human being would do, the story’s appeal lay in the emotional reactions (shock, sympathy) it induced. Sure, you may have learned about the fragility of human health and soul, but does that realization come from the emotional (pathos) appeal or from rational rumination on what is important to human beings? Kant, Sartre, and Freud did not formulate their ideas on the value of human life and health because of emotional experiences; they did so through reason, and the emotional appeal is unnecessary.

    “For it is not enough to know what we ought to say; we must also say it as we ought.” (Aristotle, Rhetoric III.1.9)

    And if the appeal of the situation is that it is “unique,” then one might think it would linger in the memory of AdComs a bit longer. But, ultimately, if it is a subconscious appeal, AdComs will be aware of such appeals (and will, therefore, take them into account), and if it is a conscious appeal, AdComs will not give preference to someone just because he/she is “different” from the rest. We do not choose to venerate Plato and Socrates because we remember them better or because they invoke emotions and unique experiences; we venerate them because they were the best at their profession. Similarly, undergraduates should not draw value from memorability or prose, but from reverence for human nature. Besides, if it truly takes some extraordinary, stressful situation to make you a better person, as opposed to the humble experiences we have day to day, then that only shows how difficult it must be for you to improve as a person in the real world. In light of this, why would anyone think that an emotional or unique situation would enhance his/her application?

    I do not mean to say that challenges are unnecessary or unhelpful for medical school admissions. Rather, as I’ve mentioned, the worth of the student should come from what he/she chooses to make of the challenge, not from the challenge itself. So, for my story, simply having a unique and heart-wrenching experience should not make my application worthy.

    The final, and most important, issue is that stories like these are purely romantic fiction. In Salinger’s “The Catcher in the Rye,” Mr. Antolini quotes the psychologist Wilhelm Stekel, “The mark of the immature man is that he wants to die nobly for a cause, while the mark of the mature man is that he wants to live humbly for one,” as he explains that Holden Caulfield must grow out of his disillusioned, heroic vision of himself. The immature man chases exciting, idealistic adornment from others, while the mature man understands that tragic wisdom is what truly makes people brilliant. Pre-medical students (and undergraduates in general) must realize that we create true wisdom from our normal, everyday experiences, not from flashy tales of epic struggle. Those stories do not accurately represent the complex moral dilemmas and struggles of the human condition.

    Before I end this post, I should clarify that there are some useful skills (how to tell a story, how to find the “human” value in what you do, etc.) that will show how good a medical/graduate school candidate you are, but ultimately I don’t have the power to judge anybody. I can only share the impression I get from the discourse others build around their experiences. To address these issues properly, it is very helpful (though not necessary) to have empirical evidence (i.e., direct evidence from the behavior of other students). But it is very difficult to talk about the behavior of others without succumbing to the pretentious, judgmental position of criticizing the harmful attitudes of those around us. I do not wish to philosophize about and analyze the nature of those around me. But who cares, right? Those people are all phonies.

    We should just take solace in what we can make of ourselves for now.

    July 11, 2015
    Education
