An introduction to ethics

Table of contents

  • What is ethics?
  • Reading

    What is ethics?

    Ethics is, roughly, the study of morality in general: it concerns questions about the nature, content, and application of morality.

    Questions of moral language, psychology, phenomenology, epistemology, and ontology typically fall under metaethics.

    Questions of theoretical content, such as what makes something right, wrong, good, bad, obligatory, or supererogatory, typically fall under normative ethics.

    Questions of conduct on specific real-world issues, concerning business, professional life, society, the environment, bioethics, and personhood, typically fall under applied ethics. These can be things like abortion, euthanasia, the treatment of non-human animals, marketing, and charity.

    Ethics, then, has traditionally been divided into these three areas concerning how we ought to conduct ourselves.

    Meta-ethics (Metaethics)

    Metaethics is occasionally referred to as a “second-order” discipline, to distinguish it from areas less concerned with questions about what morality itself is. Questions about the most plausible metaphysical account of moral facts, or about the link between moral judgment, motivation, and knowledge, are questions of this kind, and so are metaethical questions. Several rough divisions have been drawn to introduce metaethics adequately. Either of the two below should be sufficient for getting a rough sense of what metaethics is.

    Metaethics as the systematic analysis of moral language, psychology, and ontology

    In Andrew Fisher’s Metaethics: An Introduction, an intro book Fisher at one point playfully thought of as “An Introduction to An Introduction to Contemporary Metaethics,” we get this:

    Looking at ethics we can see that it involves what people say: moral language. So one strand of metaethics considers what is going on when people talk moral talk. For example, what do people mean when they say something is “wrong”? What links moral language to the world? Can we define moral terms?

    Obviously ethics also involves people, so metaethicists consider and analyse what’s going on in people’s minds. For example, when people make moral judgements are they expressing beliefs or expressing desires? What’s the link between making moral judgements and motivation?

    Finally, there are questions about what exists (ontology). Thus meta-ethicists ask questions about whether moral properties are real. What is it for something to be real? Could moral facts exist independently of people? Could moral properties be causal?

    Metaethics, then, is the systematic analysis of:

    (a) moral language;
    (b) moral psychology;
    (c) moral ontology.

    This classification is rough and does not explicitly capture a number of issues that are often discussed in metaethics, such as truth and phenomenology. However, for our purposes we can think of such issues as falling under these broad headings.

    Metaethics as concerned with meaning, metaphysics, epistemology and justification, phenomenology, moral psychology, and objectivity

    In Alex Miller’s Contemporary Metaethics: An Introduction (the book Fisher playfully compared his own introduction to), Miller provides perhaps the most succinct description of the discipline:

    [Metaethics is] concerned with questions about the following:

    (a) Meaning: what is the semantic function of moral discourse? Is the function of moral discourse to state facts, or does it have some other non-fact-stating role?
    (b) Metaphysics: do moral facts (or properties) exist? If so, what are they like? Are they identical or reducible to natural facts (or properties) or are they irreducible and sui generis?
    (c) Epistemology and justification: is there such a thing as moral knowledge? How can we know whether our moral judgements are true or false? How can we ever justify our claims to moral knowledge?
    (d) Phenomenology: how are moral qualities represented in the experience of an agent making a moral judgement? Do they appear to be ‘out there’ in the world?
    (e) Moral psychology: what can we say about the motivational state of someone making a moral judgement? What sort of connection is there between making a moral judgement and being motivated to act as that judgement prescribes?
    (f) Objectivity: can moral judgements really be correct or incorrect? Can we work towards finding out the moral truth?

    Obviously, this list is not intended to be exhaustive, and the various questions are not all independent (for example, a positive answer to (f) looks, on the face of it, to presuppose that the function of moral discourse is to state facts). But it is worth noting that the list is much wider than many philosophers forty or fifty years ago would have thought. For example, one such philosopher writes:

    [Metaethics] is not about what people ought to do. It is about what they are doing when they talk about what they ought to do. (Hudson 1970)

    The idea that metaethics is exclusively about language was no doubt due to the once prevalent idea that philosophy as a whole has no function other than the study of ordinary language and that ‘philosophical problems’ only arise from the application of words out of the contexts in which they are ordinarily used. Fortunately, this ‘ordinary language’ conception of philosophy has long since ceased to hold sway, and the list of metaethical concerns – in metaphysics, epistemology, phenomenology, moral psychology, as well as in semantics and the theory of meaning – bears this out.

    Two small notes are worth making:

    “Objectivity” is standardly taken to mean mind-independence. Here it almost seems as if Miller is describing cognitivism, but his observation that (f) presupposes that moral discourse states facts makes clear that when he says “correct,” he means “objectively true.” This is a somewhat unorthodox usage, but careful reading makes it clear what Miller is trying to say.

    “Moral phenomenology” is often categorized as falling under normative ethics as well, but this has little impact on the accuracy of this description of metaethics.

    Applied ethics

    Applied ethics is concerned with what is permissible in particular practices. In Peter Singer’s Practical Ethics, Singer provides some examples of what sorts of things this field might address.

    Practical ethics covers a wide area. We can find ethical ramifications in most of our choices, if we look hard enough. This book does not attempt to cover this whole area. The problems it deals with have been selected on two grounds: their relevance, and the extent to which philosophical reasoning can contribute to a discussion of them.

    I regard an ethical issue as relevant if it is one that any thinking person must face. Some of the issues discussed in this book confront us daily: what are our personal responsibilities towards the poor? Are we justified in treating animals as nothing more than machines, producing flesh for us to eat? Should we be using paper that is not recycled? And why should we bother about acting in accordance with moral principles anyway? Other problems, like abortion and euthanasia, fortunately are not everyday decisions for most of us; but they are issues that can arise at some time in our lives. They are also issues of current concern about which any active participant in our society’s decision-making process needs to reflect.

    ….

    This book is about practical ethics, that is, the application of ethics or morality — I shall use the words interchangeably — to practical issues like the treatment of ethnic minorities, equality for women, the use of animals for food and research, the preservation of the natural environment, abortion, euthanasia, and the obligation of the wealthy to help the poor.

    So what does the application of ethics to practical issues look like?

    We can take a look at two of the issues that Singer brings up — abortion and animal rights — to get a sense of what sort of evidence might be taken into consideration with these matters. Keep in mind that this is written with the intention of providing a sense of how discussions in applied ethics develop rather than a comprehensive survey of views in each topic.

    Abortion

    In Virtue Theory and Abortion, Rosalind Hursthouse summarizes the discussion of abortion as a struggle between facts about the moral status of the fetus and women’s rights.

    As everyone knows, the morality of abortion is commonly discussed in relation to just two considerations: first, and predominantly, the status of the fetus and whether or not it is the sort of thing that may or may not be innocuously or justifiably killed; and second, and less predominantly (when, that is, the discussion concerns the morality of abortion rather than the question of permissible legislation in a just society), women’s rights.

    In A Defense of Abortion, Judith Jarvis Thomson addresses a common version of the former consideration, refuting a slippery slope argument.

    Most opposition to abortion relies on the premise that the fetus is a human being, a person, from the moment of conception. The premise is argued for, but, as I think, not well. Take, for example, the most common argument. We are asked to notice that the development of a human being from conception through birth into childhood is continuous; then it is said that to draw a line, to choose a point in this development and say “before this point the thing is not a person, after this point it is a person” is to make an arbitrary choice, a choice for which in the nature of things no good reason can be given. It is concluded that the fetus is, or anyway that we had better say it is, a person from the moment of conception. But this conclusion does not follow. Similar things might be said about the development of an acorn into an oak tree, and it does not follow that acorns are oak trees, or that we had better say they are. Arguments of this form are sometimes called “slippery slope arguments”–the phrase is perhaps self-explanatory–and it is dismaying that opponents of abortion rely on them so heavily and uncritically.

    Nonetheless, Thomson is willing to grant the premise, addressing instead whether we can make the case that abortion is impermissible given that the fetus is, indeed, a person. Thomson thinks the argument that the fetus’s right to life outweighs the right of the individual carrying the fetus to do as they wish with their body is faulty, but notes a limitation.

    But now let me ask you to imagine this. You wake up in the morning and find yourself back to back in bed with an unconscious violinist. A famous unconscious violinist. He has been found to have a fatal kidney ailment, and the Society of Music Lovers has canvassed all the available medical records and found that you alone have the right blood type to help. They have therefore kidnapped you, and last night the violinist’s circulatory system was plugged into yours, so that your kidneys can be used to extract poisons from his blood as well as your own. The director of the hospital now tells you, “Look, we’re sorry the Society of Music Lovers did this to you–we would never have permitted it if we had known. But still, they did it, and the violinist is now plugged into you. To unplug you would be to kill him. But never mind, it’s only for nine months. By then he will have recovered from his ailment, and can safely be unplugged from you.” Is it morally incumbent on you to accede to this situation? No doubt it would be very nice of you if you did, a great kindness. But do you have to accede to it? What if it were not nine months, but nine years? Or longer still? What if the director of the hospital says. “Tough luck. I agree, but now you’ve got to stay in bed, with the violinist plugged into you, for the rest of your life. Because remember this. All persons have a right to life, and violinists are persons. Granted you have a right to decide what happens in and to your body, but a person’s right to life outweighs your right to decide what happens in and to your body. So you cannot ever be unplugged from him.” I imagine you would regard this as outrageous, which suggests that something really is wrong with that plausible-sounding argument I mentioned a moment ago.

    In this case, of course, you were kidnapped, you didn’t volunteer for the operation that plugged the violinist into your kidneys.

    Thomson goes on to address this limitation, weighing the fetus’s rights against the carrier’s, but Hursthouse (see above) rejects this framework, noting in more detail that we can suppose women have a legal right to abortion and still have to wrestle with whether abortion is morally permissible. As for the status of the fetus, Hursthouse claims this too can be bypassed with virtue theory.

    What about the consideration of the status of the fetus – what can virtue theory say about that? One might say that this issue is not in the province of any moral theory; it is a metaphysical question, and an extremely difficult one at that. Must virtue theory then wait upon metaphysics to come up with the answer?

    ….

    But the sort of wisdom that the fully virtuous person has is not supposed to be recondite; it does not call for fancy philosophical sophistication, and it does not depend upon, let alone wait upon, the discoveries of academic philosophers. And this entails the following, rather startling, conclusion: that the status of the fetus – that issue over which so much ink has been spilt – is, according to virtue theory, simply not relevant to the rightness or wrongness of abortion (within, that is, a secular morality).

    Or rather, since that is clearly too radical a conclusion, it is in a sense relevant, but only in the sense that the familiar biological facts are relevant. By “the familiar biological facts” I mean the facts that most human societies are and have been familiar with – that, standardly (but not invariably), pregnancy occurs as the result of sexual intercourse, that it lasts about nine months, during which time the fetus grows and develops, that standardly it terminates in the birth of a living baby, and that this is how we all come to be.

    It is worth noting that Hursthouse’s argument is more centrally a statement of her conception of what virtue ethics ought to look like than a position on abortion, and so, to avoid clouding her paper, she never takes a stance on whether abortion is or is not permissible.

    Thomson’s argument appears to be rather theory-agnostic whereas Hursthouse is committed to a certain theory of ethics. A third approach is intertheoretical, an example of which can be found in Tomasz Żuradzki’s Meta-Reasoning in Making Moral Decisions under Normative Uncertainty. Here, Żuradzki discusses how we might deal with uncertainty over which theory is correct.

    For example, we have to act in the face of uncertainty about the facts, the consequences of our decisions, the identity of people involved, people’s preferences, moral doctrines, specific moral duties, or the ontological status of some entities (belonging to some ontological class usually has serious implications for moral status). I want to analyze whether these kinds of uncertainties should have practical consequences for actions and whether there are reliable methods of reasoning that deal with the possibility that we understand some crucial moral issues wrong.

    Żuradzki at one point considers the seemingly obvious “My Favorite Theory” approach, but concludes that the approach is problematic.

    Probably the most obvious proposition how to act under normative uncertainty is My Favorite Theory approach. It says that “a morally conscientious agent chooses an option that is permitted by the most credible moral theory”

    ….

    Although this approach looks very intuitive, there are interesting counter-examples.

    Żuradzki also addresses a few different approaches, some of which seem to make abortion impermissible so long as there is uncertainty, but perhaps this gives a good idea of three approaches in applied ethics.

    Animal rights

    In the abortion section, the status of the fetus falls into the background. Thomson says even given a certain status, the case against abortion must do more, Hursthouse says the metaphysical question can be bypassed altogether, and Żuradzki considers how to take multiple theories about an action into account. But it seems this strategy of moving beyond the status of the patient in question cannot be done when it comes to the question of how we ought to treat non-human animals, for there’s no obvious competing right that might give us pause when we decide not to treat a non-human animal cruelly. In dealing with animal rights, then, it appears we are forced to address the status of the non-human animal, and there seem to be many ways to address this.

    In Tom Regan’s The Case for Animal Rights, Regan, who agrees with Kant that those worthy of moral consideration are ends-in-themselves, argues that what grounds that worthiness in humans also grounds it in non-human animals.

    We want and prefer things, believe and feel things, recall and expect things. And all these dimensions of our life, including our pleasure and pain, our enjoyment and suffering, our satisfaction and frustration, our continued existence or our untimely death – all make a difference to the quality of our life as lived, as experienced, by us as individuals. As the same is true of those animals that concern us (the ones that are eaten and trapped, for example), they too must be viewed as the experiencing subjects of a life, with inherent value of their own.

    Christine Korsgaard, who also takes a Kantian view, argues against Regan’s position and holds that non-human animals are not like humans. In Fellow Creatures: Kantian Ethics and Our Duties to Animals, Korsgaard makes the case that humans are rational in a sense that non-human animals are not, and that this rationality is what grounds our moral obligations.

    an animal who acts from instinct is conscious of the object of its fear or desire, and conscious of it as fearful or desirable, and so as to-be-avoided or to-be-sought. That is the ground of its action. But a rational animal is, in addition, conscious that she fears or desires the object, and that she is inclined to act in a certain way as a result.

    ….

    We cannot expect the other animals to regulate their conduct in accordance with an assessment of their principles, because they are not conscious of their principles. They therefore have no moral obligations.

    Korsgaard, however, thinks that this difference, which makes humans and non-human animals fundamentally distinct, still leaves room for animals to be worthy of moral consideration.

    Because we are animals, we have a natural good in this sense, and it is to this that our incentives are directed. Our natural good, like the other forms of natural good which I have just described, is not, in and of itself, normative. But it is on our natural good, in this sense, that we confer normative value when we value ourselves as ends-in-ourselves. It is therefore our animal nature, not just our autonomous nature, that we take to be an end-in-itself.

    ….

    In taking ourselves to be ends-in-ourselves we legislate that the natural good of a creature who matters to itself is the source of normative claims. Animal nature is an end-in-itself, because our own legislation makes it so. And that is why we have duties to the other animals.

    So Regan thinks that we can elevate the status of non-human animals up to something like the status of humans, but Korsgaard thinks there is a vast difference between the two categories. Before we consider which view is more credible, we should consider an additional, non-Kantian view which seems to bypass the issue of status once more.

    Rosalind Hursthouse (again!), in Applying Virtue Ethics to Our Treatment of the Other Animals, argues that status need not be relevant for roughly the same reasons as the case of abortion.

    In the abortion debate, the question that almost everyone began with was “What is the moral status of the fetus?”

    ….

    The consequentialist and deontological approaches to the rights and wrongs of the ways we treat the other animals (and also the environment) are structured in exactly the same way. Here too, the question that must be answered first is “What is the moral status of the other animals…?” And here too, virtue ethicists have no need to answer the question.

    So Hursthouse once again reframes the debate, grounding her argument in terms of virtue.

    So I take the leaves on which [Singer describes factory farming] and think about them in terms of, for example, compassion, temperance, callousness, cruelty, greed, self-indulgence—and honesty.

    Can I, in all honesty, deny the ongoing existence of this suffering? No, I can’t. I know perfectly well that although there have been some improvements in the regulation of factory farming, what is going on is still terrible. Can I think it is anything but callous to shrug this off and say it doesn’t matter? No, I can’t. Can I deny that the practices are cruel? No, I can’t.

    ….

    The practices that bring cheap meat to our tables are cruel, so we shouldn’t be party to them.

    Żuradzki’s argument in Meta-Reasoning in Making Moral Decisions under Normative Uncertainty becomes relevant once more as well. In it, he argues that if between the competing theories, one says something is wrong and one says nothing of the matter, it would be rational to act as if it were wrong.

    Comparativism in its weak form can be applied only to very specific kinds of situations in which an agent’s credences are not divided between two different moral doctrines, but between only one moral doctrine and some doctrine (or doctrines) that does not give any moral reasons. Its conclusion says that if some theories in which you have credence give you subjective reason to choose action A over action B, and no theories in which you have credence give you subjective reason to choose action B over action A, then you should (because of the requirements of rationality) choose A over B.

    Once again, we see a variety of approaches that help give us a sense of the type of strategies that applied ethicists might use. Here, we have arguments that accept and reject a central premise of the debate, an argument that bypasses it, and an argument that considers both views. Some approaches are theory-specific, some are intertheoretical, and while it was not discussed here, Singer’s argument from marginal cases is theory-neutral.

    Other issues will differ wildly: they will rely on different central premises, involve arguments such that intertheoretical approaches are impossible, or vary in any number of other ways from the two topics just discussed. However, this hopefully gives some idea of how discussions in applied ethics proceed, enough to build on if one chooses to look deeper into the literature.

    Normative ethics

    Normative ethics deals very directly with the question of conduct. Much of the discipline is dedicated to developing ethical theories capable of describing what we ought to do. But what does “ought” mean? While “ought” generally concerns normativity and value, it does not always concern ethics: the oughts that link aesthetics and normativity, for instance, are not obviously the same as the oughts we are dealing with here. The oughts of normative ethics have a great deal to do with concepts like what is “permissible” or “impermissible,” what is “right” or “wrong,” and what is “good” and “bad.”

    Normative ethics should also be contrasted with descriptions of how people do act, and with the moral code of some particular person or group. These are not what normative ethics is about; it concerns what genuinely is correct when it comes to how we ought to live our lives. For now, we can roughly divide the main theories of this area into three categories: consequentialism, deontology, and virtue theory. These are not the only categories, and there are other problems in normative ethics besides, but these three types of theories will be detailed below, along with what we should take from an understanding of them.

    Ethics as grounded in outcomes: Consequentialism

    Consequentialism is a family of theories that are centrally concerned with consequences. In ordinary practice, “consequentialism” is used to refer to theories rooted in classical utilitarianism (even when the theory is not utilitarianism itself), setting aside certain theories, such as egoism, that also seem grounded solely in consequences. The classical utilitarianism that serves as the historical and conceptual root of this discussion entails a number of claims, laid out in Shelly Kagan’s Normative Ethics:

    • that goodness of outcomes is the only morally relevant factor in determining the status of a given act.
    • the agent is morally required to perform the act with the best consequences. It is not sufficient that an act have “pretty good” consequences, that it produce more good than harm, or that it be better than average. Rather, the agent is required to perform the act with the very best outcome (compared to alternatives); she is required to perform the optimal act, as it is sometimes called.
    • the optimal act is the only act that is morally permissible; no other act is morally right. Thus the consequentialist is not making the considerably more modest claim that performing the act with the best consequences is—although generally not obligatory—the nicest or the most praiseworthy thing to do. Rather, performing the optimal act is morally required: anything else is morally forbidden.
    • the right act is the act that leads to the greatest total amount of happiness overall.
    • the consequences [are evaluated] in terms of how they affect everyone’s well-being…

    And of course, these claims can be divided even further, but what’s salient is that classical utilitarianism entails a great many more claims than one might think at first glance: it is an agent-neutral theory in which the act that actually results in the optimal amount of happiness for everyone is obligatory. By understanding all of these points, we can understand how consequentialism differs from classical utilitarianism and thus what it means to be a consequentialist.

    The limits of contemporary consequentialism

    Many of these claims don’t seem necessary to the label “consequentialism” and give us an unnecessarily narrow sense of what the word could mean.

    It seems desirable, then, to broaden the scope of the term, and in fact this has been done not only to help us understand consequentialism but also to defend it against criticism. In Campbell Brown’s Consequentialize This, we get a brief description of one motivation behind radical consequentializing:

    You—a nonconsequentialist, let’s assume—begin with your favorite counterexample. You describe some action…[that] would clearly have the best consequences, yet equally clearly would be greatly immoral. So consequentialism is false, you conclude; sometimes a person ought not to do what would have best consequences. “Not so fast,” comes the consequentialist’s reply. “Your story presupposes a certain account of what makes consequences better or worse, a certain ‘theory of the good,’ as we consequentialists like to say. Consequentialism, however, is not wedded to any such theory…In order to reconcile consequentialism with the view that this action you’ve described is wrong, we need only to find an appropriate theory of the good, one according to which the consequences of this action would not be best. You say you’re concerned about the guy’s rights? No worries; we’ll just build that into your theory of the good. Then you can be a consequentialist too.”

    So, Brown says, this is what has just occurred:

    Instead of showing that your nonconsequentialism is mistaken, the consequentialist shows that it’s not really nonconsequentialism; instead of refuting your view, she ‘consequentializes’ it. If you can’t beat ’em, join ’em. Better still, make ’em join you.

    Is this a good strategy? Brown thinks not, for it weakens the consequentialist’s claim.

    It might succeed in immunizing consequentialism against counterexamples only at the cost of severely weakening it, perhaps to the point of utter triviality. So effortlessly is the strategy deployed that some are led to speculate that it is without theoretical limits: every moral view may be dressed up in consequentialist clothing…But then, it seems, consequentialism would be empty—trivial, vacuous, without substantive content, a mere tautology. The statement that an action is right if and only if (iff) it maximizes the good would entail nothing more substantive than the statement that an action is right iff it is right; true perhaps, but not of much use.

    So: not too broad, not too narrow, and not too shifty. We want a solid and only sufficiently broad meaning to build from. Brown goes on to define what he thinks consequentialism minimally is, along with three limits that must be placed upon it.

    whatever is meant by ‘consequentialism’, it must be intelligible as an elaboration of the familiar consequentialist slogan “Maximize the good.” The non-negotiable core of consequentialism, I shall assume, is the claim that an action is right, or permissible, iff it maximizes the good. My strategy is to decompose consequentialism into three conditions, which I call ‘agent neutrality’, ‘no moral dilemmas’, and ‘dominance’.

    • As usually defined, a theory is agent-relative iff it gives different aims to different agents; otherwise it’s agent-neutral.
    • By a moral dilemma, I mean a situation in which a person cannot avoid acting wrongly…Consider, for example, a theory which holds that violations of rights are absolutely morally forbidden; it is always wrong in any possible situation to violate a right. Suppose, further, that the catalog of rights endorsed by this theory is such that sometimes a person cannot help but violate at least one right. Then this theory cannot be represented by a rightness function which satisfies NMD, and so it cannot be consequentialized.
    • [Dominance] may be the least intuitive of the three. It requires the following. Suppose that in a given choice situation, two worlds x and y are among the alternatives. And suppose in this situation, x is right and y wrong. Then x dominates y in the following sense: y cannot be right in any situation where x is an alternative; the presence of x is always sufficient to make y wrong.

    And there we have it, a definition of consequentialism. Not only that, but this definition is formalized in the paper as well. Can we safely say, then, that this is the definition of consequentialism, the most comprehensive, elucidating, and uncontroversial in the field? Certainly not! In fact, it leaves out several significant forms of consequentialism, but it captures many concepts important to consequentialism, sufficient for further discussion of the three families of theories. This disagreement over the definition might bring a new set of worries to the mind of any reader; the problem of disagreement will be discussed in another section.

    Ethics as grounded in moral law: Deontology

    Deontology is another family of theories whose definition can wiggle through our grasp (there’s a pattern here to recognize that will become important in a later section). Once more, Shelly Kagan’s Normative Ethics offers us a definition of deontology as it is used in contemporary discourse: a theory that places value on additional factors that would forbid certain actions independently of whether or not they result in the best outcomes.

    In defining deontology, I have appealed to the concept of a constraint: deontologists, unlike consequentialists, believe in the existence of constraints, which erect moral barriers to the promotion of the good…it won’t quite do to label as deontologists all those who accept additional normative factors, beyond that of goodness of results: we must add further stipulation that in at least some cases the effect of these additional factors is to make certain acts morally forbidden, even though these acts may lead to the best possible results overall. In short, we must say that deontologists are those who believe in additional normative factors that generate constraints.

    Kagan goes on to explain why of the various definitions, this one is best. That explanation will not be detailed here, but let’s keep this tenuously in mind as we dive into one of the deontological theories to give us a sense of what deontology entails. It would be absurd if these constraints were arbitrary, nothing more than consequentialism combined with “also, don’t do these specific things because they seem icky and I don’t like them,” so we will take a look at one of the prominent deontological theories: Kantianism.

    Kant’s First Formula

    In Julia Driver’s Ethics: The Fundamentals, Driver introduces us to deontology through Kant’s moral theory, saying this of the theory:

    Immanuel Kant’s theory is perhaps the most well-known exemplar of the deontological approach…whether or not a contemplated course of action is morally permissible will depend on whether or not it conforms to what he terms the moral law, the categorical imperative.

    There’s a tone here that seems noticeably different from consequentialist talk. Permissibility as conforming to moral law could still be consequentialist if that law is something like “maximize the good,” but this description seems to indicate something else. To figure this out, we need an explanation of what “the categorical imperative” means. In Christine Korsgaard’s Creating the Kingdom of Ends:

    Hypothetical imperatives [are] principles which instruct us to do certain actions if we want certain ends…

    ….

    Willing something is determining yourself to be the cause of that thing, which means determining yourself to use the available causal connections — the means — to it. “Willing the end” is already posited as the hypothesis, and we need only analyze it to arrive at willing the means. If you will to be able to play the piano, then you already will to practice, as that is the “indispensably necessary means to it” that “lie in your power.” But the moral ought is not expressed by a hypothetical imperative. Our duties hold for us regardless of what we want. A moral rule does not say “do this if you want that” but simply “do this.” It is expressed in a categorical imperative. For instance, the moral law says that you must respect the rights of others. Nothing is already posited, which can then be analyzed.

    We now have a fairly detailed description of the distinction between hypothetical and categorical imperatives, with fine examples to boot. Note that already, it’s clear this theory can’t be consequentialized according to Brown, but we must go further to remove any doubt arising from controversy over Brown’s formulation. Korsgaard goes on to explain what is necessarily entailed as part of the categorical imperative in her description of Kant’s first formula.

    If we remove all purposes — all material — from the will, what is left is the formal principle of the will. The formal principle of duty is just that it is duty — that it is law. The essential character of law is universality. Therefore, the person who acts from duty attends to the universality of his/her principle. He or she only acts on a maxim that he or she could will to be universal law (G 402).

    ….

    But how can you tell whether you are able to will your maxim as a universal law? On Kant’s view, it is a matter of what you can will without contradiction…you envision trying to will your maxim in a world in which the maxim is universalized — in which it is a law of nature. You are to “Ask yourself whether, if the action which you propose should take place by a law of nature of which you yourself were a part, you could regard it as possible through your will” (C2 69)

    Already, upon encountering this first formulation of the categorical imperative, we can see that Kant’s moral theory resists consequentialization. For one, the rightness or wrongness of actions is a matter of conformity to moral law, such that outcomes are no longer a central point of consideration. This does not mean we have deprived ethics of consequences, as Kagan points out in Normative Ethics:

    [The goodness of outcomes] is a factor I think virtually everyone recognizes as morally relevant. It may not be the only factor that is important for determining the moral status of an act, but it is certainly one relevant factor.

    Kantianism nevertheless does not decide the status of actions on the basis of outcomes alone. It also fails Brown’s dominance condition.

    The two other formulas are not within the scope of this section, nor is the evidence for Kant’s theory. The purpose of detailing Kantianism at all was to demonstrate deontology as conformity to moral law in a manner distinct from consequentialism. It is sufficient to remind ourselves that there is a massive amount of evidence for each of these types of theories without detailing it here for this theory in particular. There are also other types of deontological theories with a great deal of evidence behind them; Scanlon’s moral theory and Ross’s moral theory are other prominent examples of deontology.

    We are now left with a fairly strong sense of what deontological theories look like. There is some imprecision in that sense; this will be discussed in another section. For now, we must move on to virtue ethics.

    Ethics as grounded in character: Virtue Ethics

    Virtue ethics, the final family of theories described in the section on normative ethics, is predictably concerned primarily with virtue and practical intelligence.

    Virtue

    A virtue is described as lasting, reliable, and characteristic in Julia Annas’s Intelligent Virtue:

    A virtue is a lasting feature of a person, a tendency for the person to be a certain way. It is not merely a lasting feature, however, one that just sits there undisturbed. It is active: to have it is to be disposed to act in certain ways. And it develops through selective response to circumstances. Given these points, I shall use the term persisting rather than merely lasting. Jane’s generosity, supposing her to be generous, persists through challenges and difficulties, and is strengthened or weakened by her generous or ungenerous responses respectively. Thus, although it is natural for us to think of a virtue as a disposition, we should be careful not to confuse this with the scientific notion of disposition, which just is a static lasting tendency…

    ….

    A virtue is also a reliable disposition. If Jane is generous, it is no accident that she does the generous action and has generous feelings. We would have been surprised, and shocked, if she had failed to act generously, and looked for some kind of explanation. Our friends’ virtues and vices enable us to rely on their responses and behaviour—to a certain extent, of course, since none of us is virtuous enough to be completely reliable in virtuous response and action.

    ….

    Further, a virtue is a disposition which is characteristic—that is, the virtuous (or vicious) person is acting in and from character when acting in a kindly, brave or restrained way. This is another way of putting the point that a virtue is a deep feature of the person. A virtue is a disposition which is central to the person, to who he or she is, a way we standardly think of character. I might discover that I have an unsuspected talent for Sudoku, but this, although it enlarges my talents, does not alter my character. But someone who discovers in himself an unsuspected capacity to feel and act on compassion, and who develops this capacity, does come to change as a person, not just in some isolated feature; he comes to have a changed character.

    Virtue ethics, then, is centered around something roughly like this concept. Note that any plausible theory is going to incorporate all of the concepts we’ve covered in normative ethics. We can go back to Kagan’s Normative Ethics from above, where he notes the relevance of consequences in every theory.

    all plausible theories agree that goodness of consequences is at least one factor relevant to the moral status of acts. (No plausible theory would hold, for example, that it was irrelevant whether an act would lead to disaster!)

    Similarly, other theories will have an account of virtue, as Jason Kawall’s In Defense of the Primacy of the Virtues briefly describes:

    Consequentialists will treat the virtues as character traits that serve to maximize (or produce sufficient quantities of) the good, where the good is taken as explanatorily basic. Deontologists will understand the virtues in terms of dispositions to respect and act in accordance with moral rules, or to perform morally right actions, where these moral rules or right actions are fundamental. Furthermore, the virtues will be considered valuable just insofar as they involve such tendencies to maximize the good or to perform right actions.

    So it is important to stress that virtue is the central concept for virtue ethics: it is no more simply a theory that includes an account of virtue than consequentialism is simply any theory that includes an account of consequences. One way we can come to understand virtue ethics better is by understanding a specific kind of virtue ethics: theories satisfying four conditions laid out by Kawall:

    (i) The concepts of rightness and goodness would be explained in terms of virtue concepts (or the concept of a virtuous agent).

    (ii) Rightness and goodness would be explained in terms of the virtues or virtuous agents.

    (iii) The explanatory primacy of the virtues or virtuous agents (and virtue concepts) would reflect a metaphysical dependence of rightness and goodness upon the virtues or virtuous agents.

    (iv) The virtues or virtuous agents themselves – as well as their value – could (but need not) be explained in terms of further states, such as health, eudaimonia, etc., but where these further states do not require an appeal to rightness or goodness.

    It should be emphasized again that this describes only some theories in this family, but they are good theories to focus on because much of the discussion around these theories would be representative of discussion around virtue ethics in general.

    It is worth stressing that not all theories that could plausibly be understood as forms of virtue ethics would satisfy the above conditions; the current goal is not to defend all possible virtue ethics. Rather, we are examining what might be taken to be among the more radical possible forms of virtue ethics, particularly in treating the virtues as explanatorily prior both to rightness and to goodness tout court. Why focus on these more radical forms? First, several prominent virtue ethics can be understood as satisfying the above conditions, including those of Michael Slote, Linda Zagzebski, and, perhaps (if controversially), Aristotle’s paradigmatic virtue ethics. Beyond this, many of the arguments presented here could be taken on board by those defending more moderate forms of virtue ethics, such as Rosalind Hursthouse or Christine Swanton (against those who would attempt to argue for the explanatory primacy of the right or of the good, for example). Thus the range of interest for most of these arguments will extend beyond those focusing on the more radical approaches.

    Practical intelligence

    Practical intelligence can be described much more briefly. In Rosalind Hursthouse’s Applying Virtue Ethics to Our Treatment of the Other Animals, we get a brief description of its role.

    Of course, applying the virtue and vice terms correctly may be difficult; one may need much practical wisdom to determine whether, in a particular case, telling a hurtful truth is cruel or not, for example…

    Julia Annas elaborates in greater detail in “Intelligent Virtue”:

    The way our characters develop is to some extent a matter of natural endowment; some of us have traits ‘by nature’—we will tend to act bravely or generously without having to learn to do so, or to think about it. This is ‘natural virtue’, which we have already encountered. Different people will have different natural virtues, and one person may be naturally endowed in one area of life but not others—naturally brave, for example, but not naturally generous. However, claims Aristotle, this can’t be the whole story about virtue. For one thing, children and animals can have some of these traits, but in them they are not virtues. Further, these natural traits are harmful if not guided by ‘the intellect’, which in this context is specified as practical wisdom or practical intelligence (phronesis). Just as a powerfully built person will stumble and fall if he cannot see, so a natural tendency to bravery can stumble unseeingly into ethical disaster because the person has not learned to look out for crucial factors in the situation. Our natural practical traits need to be formed and educated in an intelligent way for them to develop as virtues; a natural trait may just proceed blindly on where virtue would respond selectively and in a way open to novel information and contexts.

    Ethics as maximizing happiness: Utilitarianism

    In the famous trolley problem, which philosopher Philippa Foot introduced in the 1960s, you have the ability to pull a lever to divert a runaway trolley from running over five tied-up people lying on the tracks. If you pull the lever, the trolley will be redirected onto a side track, and the five people on the main track will be saved. However, there is a single person lying on the side track.

    According to classical utilitarianism, pulling the lever would be permissible and indeed the better action. English philosophers Jeremy Bentham and John Stuart Mill introduced utilitarianism, the view that our sole moral obligation is to maximize happiness, as an alternative to divine, religious theories of ethics. Utilitarianism suffers from the problem of “utility monsters”: individuals who derive much more happiness (and therefore utility) than average. Maximizing total happiness would then skew actions toward the monster’s happiness in such a way that others would suffer. Since philosopher Robert Nozick introduced the “utility monster” idea in 1974, it has been discussed in politics as driving the ideas of special interest groups and free speech – as though securing these interests would serve the interests of the few experiencing much more happiness than the general population.
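    As a toy illustration (my own, not drawn from any of the texts cited here), classical utilitarianism can be caricatured as ranking actions by the total happiness they produce, which is what makes both the trolley verdict and the utility monster problem fall out so directly:

```python
# Toy model of classical utilitarianism: rank outcomes by total utility.
def total_utility(outcome):
    """Sum happiness over everyone affected (1 = survives, 0 = dies)."""
    return sum(outcome.values())

# Trolley case, schematically: five on the main track, one on the side track.
do_nothing = {"side_person": 1, "main_1": 0, "main_2": 0, "main_3": 0,
              "main_4": 0, "main_5": 0}
pull_lever = {"side_person": 0, "main_1": 1, "main_2": 1, "main_3": 1,
              "main_4": 1, "main_5": 1}
assert total_utility(pull_lever) > total_utility(do_nothing)  # 5 > 1

# Nozick's utility monster: one agent converts resources into vastly more
# happiness than everyone else, so maximizing skews toward feeding it.
share_equally = {"monster": 1, "alice": 1, "bob": 1, "carol": 1}
feed_monster = {"monster": 100, "alice": 0, "bob": 0, "carol": 0}
assert total_utility(feed_monster) > total_utility(share_equally)
```

    The point of the sketch is only that a single aggregate number decides everything; the deontological constraints discussed above are precisely what such a model cannot represent.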

    Are these taxonomic imperfections bad? How do we get over vague definitions?

    It might be tempting to read all of this and think there’s some sort of difficulty in discussing normative ethics. In general, academic discourse does not hinge on definitions, and so definitions are not a very large concern. And yet, it might appear upon reading this that ethics is some sort of exception. When philosophers talk about adaptationism in evolution or causation in metaphysics, the definitions they provide seem a lot more precise, so why is ethics an exception?

    The answer, uninterestingly, is that ethics is not an exception. It is important not to mistake what has been read here for some sort of fundamental ambiguity in these theories. Consider Brown’s motive for resisting consequentialization as a response to Dreier’s motive for consequentializing.

    I’ll close by drawing out another moral of my conclusion, related to something Dreier says. Dreier’s motivation for consequentializing is that he wants to overcome a certain “stigma” which he says afflicts defenders of “common sense morality” when they try to deny consequentialism. To deny consequentialism, he says, they must claim that we are sometimes required to do less good than we might, but that claim has a “paradoxical air.” So defenders of commonsense morality, who deny consequentialism, are stigmatized as having a seemingly paradoxical position.

    ….

    Dreier thinks the way to avoid the stigma is to avoid denying consequentialism. If we consequentialize commonsense morality, then defenders of commonsense morality need not deny consequentialism. If I’m right, however, this way of avoiding the stigma doesn’t work…

    Note that this is entirely orthogonal to the plausibility of any particular theory. Whatever stigmas exist make no difference to whether some particular theory happens to be correct. The distinction may prove useful in helping beginners gain a sense of what they’re talking about, but beyond pedagogical utility, it is disputed that it actually tells us, at a fundamental level, what these theories are all about.

    In Michael Ridge’s Reasons for Action: Agent-Neutral vs. Agent-Relative, Ridge points out one of the alternative distinctions that might have a more prominent role in describing what fundamentally distinguishes these theories.

    The agent-relative/agent-neutral distinction is widely and rightly regarded as a philosophically important one.

    ….

    The distinction has played a very useful role in framing certain interesting and important debates in normative philosophy.

    For a start, the distinction helps frame a challenge to the traditional assumption that what separates so-called consequentialists and deontologists is that the former but not the latter are committed to the idea that all reasons for action are teleological. A deontological restriction forbids a certain sort of action (e.g., stealing) even when stealing here is the only way to prevent even more stealing in the long run. Consequentialists charge that such a restriction must be irrational, on the grounds that if stealing is forbidden then it must be bad but if it is bad then surely less stealing is better than more. The deontologist can respond in one of two ways. First, they could hold that deontological restrictions correspond to non-teleological reasons. The reason not to steal, on this account, is not that stealing is bad in the sense that it should be minimized but rather simply that stealing is forbidden no matter what the consequences (this is admittedly a stark form of deontology, but there are less stern versions as well). This is indeed one way of understanding the divide between consequentialists and deontologists, but the agent-relative/agent-neutral distinction, and in particular the idea of agent-relative reasons, brings to the fore an alternative conception. For arguably, we could instead understand deontological restrictions as corresponding to a species of reasons which are teleological after all so long as those reasons are agent-relative. If my reason not to steal is that I should minimize my stealing then the fact that my stealing here would prevent five other people from committing similar acts of theft does nothing to suggest that I ought to steal.

    ….

    If Dreier is right [that in effect we can consequentialize deontology] then the agent-relative/agent-neutral distinction may be more important than the distinction between consequentialist theories and non-consequentialist theories.

    The section goes on to detail several ways we can look at this issue so we can understand the importance of this distinction and what it can tell us about the structure and plausibility of certain theories. So while the typical division between consequentialist, deontological, and virtue ethical theories can be superficially valuable to those getting into ethics, it is important to not overstate the significance of these families and their implications.

    Reading

    Normative ethics

    Includes a minimal definition of normative ethics as a whole.

    In this entry, Ridge lays out another way of categorizing theories in normative ethics in an accessible manner.

    Issues in normative ethics

    • Christopher Heathwood Welfare. 2010.
    • Roger Crisp Stanford Encyclopedia of Philosophy entry on Well-being. 2017.
    • Michael Zimmerman Stanford Encyclopedia of Philosophy entry on Intrinsic vs. Extrinsic Value. 2014.
    • Dana Nelkin Stanford Encyclopedia of Philosophy entry on Moral Luck. 2013.
    • Stephen Stich, John Doris, and Erica Roedder Altruism. 2008.
    • Robert Shaver Stanford Encyclopedia of Philosophy entry on Egoism. 2014.
    • Joshua May Internet Encyclopedia of Philosophy entry on Psychological Egoism. 2011.

    Consequentialism

    About the best introduction that one can find to one of the consequentialist theories: utilitarianism.

    An introduction to the debate over utilitarianism.

    An influential work that lays out a decent strategy for keeping consequentialist theories of ethics distinct from other theories.

    • Walter Sinnott-Armstrong’s Stanford Encyclopedia of Philosophy entry on Consequentialism. 2015.
    • William Haines Internet Encyclopedia of Philosophy entry on Consequentialism. 2006.
    • Chapter 3 and 4 of Driver (see above). 2006.

    Deontology

    A good introduction to and strong defense of Kantianism.

    Rawls’s revolutionary work in both ethics and political philosophy in which he describes justice as fairness, a view he would continue to develop later on.

    A significant improvement and defense of one of the most influential deontological alternatives to Kantianism: Rossian deontology.

    Scanlon, one of the most notable contributors to political and ethical philosophy among his contemporaries, provides an updated and comprehensive account of his formulation of contractualism.

    • Larry Alexander and Michael Moore Stanford Encyclopedia of Philosophy entry on Deontological Ethics. 2016.
    • Chapter 5 and 6 of Driver (see above). 2006.

    Virtue ethics

    Hursthouse’s groundbreaking and accessible work on virtue theory.

    Meta-ethics (Metaethics)

    This is probably a more difficult read than the others, but it is incredibly comprehensive and helpful. There are many things in this handbook that I’ve been reading about for a long time that I didn’t feel confident about until reading this. Certainly worth the cost.

    Moral judgement

    A must read for those who want to engage with issues in moral judgment, functioning both as a work popularly considered the most important in the topic as well as a great introduction.

    • Chapter 3 of Miller (see above). 2013.
    • Connie S. Rosati Stanford Encyclopedia of Philosophy entry on Moral Motivation. 2016.

    Moral responsibility

    Moral realism and irrealism

    A very popular Philosophy Compass paper that lays out very simply what moral realism is without arguing for or against any position.

    An obligatory text laying out the popular companions-in-guilt argument for moral realism.

    • Smith (see above). 1998.
    • Enoch (see above). 2011.
    • Chapter 8, 9, and 10 of Miller (see above). 2013.
    • Shafer-Landau (see above). 2005.
    • Katia Vavova Debunking Evolutionary Debunking. 2013.

    Here, Vavova provides a very influential, comprehensive, and easy to read overview of evolutionary debunking arguments, in which she also takes the liberty of pointing out their flaws.

    Korsgaard’s brilliant description, as well as her defense, of a form of Kantian constructivism.

    Research Ethics

    Websites

    National Center for Professional and Research Ethics (NCPRE) – https://www.nationalethicscenter.org/

    National Science Foundation Office of Inspector General – http://www.nsf.gov/oig/index.jsp

    Office for Human Research Protections (OHRP) – http://www.hhs.gov/ohrp/

    Office of Research Integrity (ORI) – http://ori.dhhs.gov/

    Online Ethics Center for Engineering and Research – http://onlineethics.org/

    Project for Scholarly Integrity – http://www.scholarlyintegrity.org/

    Resources for Research Ethics Education – http://research-ethics.net/

    Email lists

    RCR-Instruction, Office of Research Integrity – send a request to askori@hhs.gov to subscribe

    Journals

    Accountability in Research – http://www.tandf.co.uk/journals/titles/08989621.asp

    Ethics and Behavior – http://www.tandf.co.uk/journals/titles/10508422.asp

    Journal of Empirical Research on Human Research Ethics – http://www.ucpressjournals.com/journal.asp?j=jer

    Science and Engineering Ethics – http://www.springer.com/philosophy/ethics/journal/11948#8085218705268172855

    News publications

    The Chronicle of Higher Education – http://www.chronicle.com/

    Nature – http://www.nature.com/

    Science – http://www.sciencemag.org/

    The Scientist – http://www.thescientist.com

    Ethical theory

    Frankena, William K. 1988. Ethics. 2nd ed. Prentice-Hall, Inc.

    Rachels, James, and Stuart Rachels. 2009. The Elements of Moral Philosophy. 6th ed. McGraw-Hill Companies.

    Books

    Beach, Dore. 1996. Responsible Conduct of Research. John Wiley & Sons, Incorporated.

    Bebeau, Muriel J., et al. 1995. Moral Reasoning in Scientific Research: Cases for Teaching and Assessment. Poynter Center for the Study of Ethics and American Institutions. Source: Order or download in PDF format at http://poynter.indiana.edu/mr/mr-main.shtml.

    Bulger, Ruth Ellen, Elizabeth Heitman, and Stanley Joel Reiser, eds. 2002. The Ethical Dimensions of the Biological and Health Sciences. 2nd ed. Cambridge University Press.

    Elliott, Deni, and Judy E. Stern, eds. 1997. Research Ethics: A Reader. University Press of New England. See also Stern and Elliott, The Ethics of Scientific Research.

    Erwin, Edward, Sidney Gendin, and Lowell Kleiman, eds. 1994. Ethical Issues in Scientific Research: An Anthology. Garland Publishing.

    Fleddermann, Charles B. 2007. Engineering Ethics. 3rd ed. Prentice Hall.

    Fluehr-Lobban, Carolyn. 2002. Ethics and the Profession of Anthropology: Dialogue for Ethically Conscious Practice. 2nd ed. AltaMira Press.

    Goodstein, David L. 2010. On Fact and Fraud: Cautionary Tales from the Front Lines of Science. Princeton University Press.

    Harris, Charles E., Jr., Michael S. Pritchard, and Michael J. Rabins. 2008. Engineering Ethics: Concepts and Cases. 4th edition. Wadsworth.

    Israel, Mark, and Iain Hay. 2006. Research Ethics for Social Scientists: Between Ethical Conduct and Regulatory Compliance. SAGE Publications, Limited.

    Johnson, Deborah G. 2008. Computer Ethics. 4th ed. Prentice Hall PTR.

    Korenman, Stanley G., and Allan C. Shipp. 1994. Teaching the Responsible Conduct of Research through a Case Study Approach: A Handbook for Instructors. Association of American Medical Colleges. Source: Order from http://www.aamc.org/publications/

    Loue, Sana. 2000. Textbook of Research Ethics: Theory and Practice. Springer.

    Macrina, Francis L. 2005. Scientific Integrity: Text and Cases in Responsible Conduct of Research. 3rd ed. ASM Press.

    Miller, David J., and Michel Hersen, eds. 1992. Research Fraud in the Behavioral and Biomedical Sciences. John Wiley & Sons, Incorporated.

    Murphy, Timothy F. 2004. Case Studies in Biomedical Research Ethics. MIT Press.

    National Academy of Sciences. 2009. On Being a Scientist: A Guide to Responsible Conduct in Research. 3rd edition. National Academy Press. Source: Order from http://www.nap.edu/catalog.php?record_id=12192

    National Academy of Sciences. 1992. Responsible Science, Vol. 1: Ensuring the Integrity of the Research Process. Source: Order from http://www.nap.edu/catalog.php?record_id=1864

    National Academy of Sciences. 1992. Responsible Science, Vol. 2: Background Papers and Resource Documents. Source: Order from http://www.nap.edu/catalog.php?record_id=2091

    Oliver, Paul. 2010. The Students’ Guide to Research Ethics. 2nd ed. McGraw-Hill Education.

    Orlans, F. Barbara, et al., eds. 2008. The Human Use of Animals: Case Studies in Ethical Choice. 2nd ed. Oxford University Press.

    Penslar, Robin Levin, ed. 1995. Research Ethics: Cases and Materials. Indiana University Press.

    Resnik, David B. 1998. The Ethics of Science: An Introduction. Routledge.

    Schrag, Brian, ed. 1997-2006. Research Ethics: Cases and Commentaries. Seven volumes. Association for Practical and Professional Ethics. Source: Order from http://www.indiana.edu/~appe/publications.html#research.

    Seebauer, Edmund G., and Robert L. Barry. 2000. Fundamentals of Ethics for Scientists and Engineers. Oxford University Press.

    Seebauer, Edmund G. 2000. Instructor’s Manual for Fundamentals of Ethics for Scientists and Engineers. Oxford University Press.

    Shamoo, Adil E., and David B. Resnik. 2009. Responsible Conduct of Research. Oxford University Press.

    Shrader-Frechette, Kristin S. 1994. Ethics of Scientific Research. Rowman & Littlefield Publishers, Inc.

    Sieber, Joan E. 1992. Planning Ethically Responsible Research: A Guide for Students and Internal Review Boards. SAGE Publications, Inc.

    Sigma Xi. 1999. Honor in Science. Sigma Xi, the Scientific Research Society. Source: Order from http://www.sigmaxi.org/resources/merchandise/index.shtml

    Sigma Xi. 1999. The Responsible Researcher: Paths and Pitfalls. Sigma Xi, the Scientific Research Society. Source: Order from http://www.sigmaxi.org/resources/merchandise/index.shtml or download in PDF format at http://sigmaxi.org/programs/ethics/ResResearcher.pdf

    Steneck, Nicholas H. 2007. ORI Introduction to the Responsible Conduct of Research. Revised ed. DIANE Publishing Company. Source: Order from http://bookstore.gpo.gov/collections/ori-research.jsp or download in PDF format at http://ori.dhhs.gov/publications/oriintrotext.shtml.

    Stern, Judy E., and Deni Elliott. 1997. The Ethics of Scientific Research: A Guidebook for Course Development. University Press of New England. See also Elliott and Stern, eds., Research Ethics: A Reader.

    Vitelli, Karen D., and Chip Colwell-Chanthaphonh, eds. 2006. Archaeological Ethics. 2nd ed. AltaMira Press.

The epistemology and metaphysics of causality

The epistemology of causality

There are two epistemic approaches to causal theory. Under a hypothetico-deductive account, we hypothesize causal relationships and deduce predictions from them; we then test these hypotheses by comparing their predictions against empirical phenomena and other knowledge about what actually happens. Alternatively, under an inductive approach, we make a large number of appropriate, justified observations (such as a set of data) from which we can induce causal relationships directly.

Hypothetico-Deductive discovery

The testing phase of this account of discovery uses views on the nature of causality to determine whether we support or refute the hypothesis. We search for physical processes underlying the hypothesized causal relationships. We can use statistics and probability to determine which consequences of hypotheses are verified, for example by comparing our data to a distribution such as a Gaussian or Dirichlet. We can further probe these consequences at a probabilistic level and show that changing hypothesized causes can predict, determine, or guarantee effects.

Philosopher Karl Popper advocated this approach for causal explanations of events that appeal to natural laws, which are universal statements about the world. He designated initial conditions, single-case statements, from which we may deduce outcomes and form predictions of various events. These initial conditions yield effects that we can determine, such as whether a physical system will approach thermodynamic equilibrium or how a population might evolve under the influence of predators or external forces. Popper delineated the method of hypothesizing laws, deducing their consequences, and rejecting laws that aren’t supported as a cyclical process. This is the covering-law account of causal explanation.

Inductive learning

Philosopher Francis Bacon promoted the inductive account of scientific learning and reasoning. From a very high number of observations of some phenomenon or event, with experimental, empirical evidence where appropriate, we can compile a table of positive instances (in which the phenomenon occurs), negative instances (it doesn’t occur), and partial instances (it occurs to a certain degree). This gives a multidimensionality to phenomena that characterizes causal relationships from both a priori and a posteriori perspectives.

Inductivist artificial intelligence (AI) approaches share the assumption that causal relationships can be determined from statistical relationships. We assume the Causal Markov Condition holds of physical causality and physical probability: conditional on its direct causes, each variable is probabilistically independent of its non-effects. A causal net takes the Causal Markov Condition as an assumption or premise. For structural equation models (SEMs), the Causal Markov Condition follows from representing each variable as a function of its direct causes and an associated error variable, where we assume the error variables are probabilistically independent of one another.

We then find the class of causal models, or a single best causal model, whose probabilistic independences are justified by the Causal Markov Condition. These should be consistent with the independences we can infer from the data, and we might make further assumptions: minimality (no submodel of the causal model also satisfies the Causal Markov Condition), faithfulness (all independences in the data are implied via the Causal Markov Condition), and linearity (all variables are linear functions of their direct causes and uncorrelated error variables). We may also define causal sufficiency (all common causes of measured variables are measured) and context generality (every individual or node in the model has the causal relations of the population). These features let us describe models and methods of scientific reasoning as causal in nature and, from there, apply appropriate causal models such as Bayesian, frequentist, or similar methods of prediction. We may even draw a causal diagram or model elements under various conditions, such as those given by independence or constraints on variables.
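To make the screening-off idea concrete, here is a minimal sketch of a linear SEM over a chain X → Y → Z with independent error terms; the coefficients (0.8) and sample size are arbitrary illustrative choices, not from any particular study. Under the Causal Markov Condition, Z is independent of X conditional on its direct cause Y, which in a linear-Gaussian model shows up as a vanishing partial correlation:

```python
import random
from math import sqrt

# Linear structural equation model for the chain X -> Y -> Z.
# Each variable is a function of its direct causes plus an
# independent Gaussian error term (the SEM assumption above).
random.seed(0)
n = 20000
data = []
for _ in range(n):
    ex, ey, ez = (random.gauss(0, 1) for _ in range(3))
    x = ex
    y = 0.8 * x + ey   # Y depends only on its direct cause X
    z = 0.8 * y + ez   # Z depends only on its direct cause Y
    data.append((x, y, z))

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / sqrt(va * vb)

xs, ys, zs = zip(*data)
r_xz, r_xy, r_yz = corr(xs, zs), corr(xs, ys), corr(ys, zs)

# Partial correlation of X and Z given Y.  The Causal Markov
# Condition implies Y screens X off from Z, so this should be
# near zero even though X and Z are marginally correlated.
r_xz_y = (r_xz - r_xy * r_yz) / sqrt((1 - r_xy**2) * (1 - r_yz**2))
print(round(r_xz, 3), round(r_xz_y, 3))
```

With a large sample, the marginal correlation between X and Z is substantial while the partial correlation given Y hovers near zero; this is exactly the independence pattern a causal-discovery method would take as evidence for the chain structure rather than, say, a direct X → Z edge.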

This way, given the intercorrelatedness of the graph or model, we can’t change the value of a variable without affecting the way it relates to other variables, though there may be conditions under which we construct models that have autonomous nodes or variables. How these features and claims of inductivist AI interact with one another is debated in terms of the underlying assumptions, justification, and methods of reasoning behind these models.

Metaphysics of causality

We can pose questions about the mathematization of causality even alongside the research and methods that have dominated work on probability and its consequences. We can ask what causality is and how opinions on its nature relate to the axioms and definitions that have remained stable in the theories of probability and statistics.

We can distinguish three approaches to causality. The first is that causality is only a heuristic and has no role in scientific reasoning and discourse, as philosopher Bertrand Russell argued: science depends upon functional relationships, not causal laws. The second position is that causality is a fundamental feature of the world, a universal principle, and we should therefore treat it as a scientific primitive. This position evolved out of conflict with purported philosophical analyses that appealed to the asymmetry of time (that it moves in one direction) to explain the asymmetry of causation (that causes produce effects in one direction only), which raises concerns about how to interpret time in terms of causality. The third is that we can reduce causal relations to other concepts that don’t involve causal notions. Many philosophers support this position, and there are four divisions within it.

The first division holds that causality is a relation between variables that are single-case or repeatable, according to the interpretation of causality in question. We interpret causality as mental in nature if it’s a feature of an agent’s epistemic state and as physical if it’s a feature of the external world. We interpret it as subjective if two agents with the same relevant knowledge can disagree on a conclusion about the relationships with both positions correct, as though it were a matter of arbitrary choice; otherwise we interpret it as objective. The subjective-objective schism raises issues of how different positions could both be regarded as correct and of what determines the subjective element, or what role subjectivity plays, in these two positions.

The second division is the mechanistic account of causality: physical processes link cause and effect, and we interpret causal statements as giving information about these processes. Philosophers Wesley Salmon and Phil Dowe advocate this position, arguing that causal processes transmit or possess a conserved physical quantity. We may describe the relation between energy and mass (E = mc²) as a causal relation running from a start (cause) to a finish (effect). One may argue against this position on the grounds that such relations in science are symmetrical, with no specific direction, and so not subject to causality. The account does, however, relate single cases linked by physical processes, even if we can induce causal regularities or laws from these connections in an objective manner: if two people disagree on the causal connections, one or both are wrong.

This approach is difficult to apply. The physics of these quantities isn’t determined by the causal relations themselves. Although the conservation of physical quantities may suggest causal links to physicists, those quantities aren’t relevant in the fields that emerge from physics, such as chemistry or engineering. This would lead one to believe the epistemology of causal concepts is irrelevant to their metaphysics; if that were the case, knowledge of a causal relationship would have little to do with the causal connection itself.

The third division is probabilistic causality, in which we treat causal connections in terms of probabilistic relationships among variables. We can debate which probabilistic relationships among variables determine or create causal relationships. One candidate is the Principle of Common Cause: if two variables are probabilistically dependent, then either one causes the other or they’re effects of common causes that render them conditionally independent of one another. Philosopher Hans Reichenbach applied this principle to causality to provide a probabilistic analysis of time’s single direction. More recent philosophers use the Causal Markov Condition as a necessary condition for causality, alongside other less central conditions. We normally apply probabilistic causality to repeatable variables so that probability theory can handle them, but critics argue the Principle of Common Cause and the Causal Markov Condition admit counterexamples showing they don’t hold under all conditions.

Finally, the fourth division is the counterfactual account, as advocated by philosopher David Lewis. Here we reduce causal relations to subjunctive conditionals: an effect depends causally on a cause if and only if (1) were the cause to occur, the effect would occur (or its chance of occurring would rise significantly), and (2) were the cause not to occur, the effect wouldn’t occur. Causation is then the transitive closure of this causal dependence (a cause either raises the probability of a direct effect or, if it’s a preventative, makes the effect less likely, so long as the effect’s other direct causes are held fixed). The causal relationships are a matter of what goes on in possible worlds that are similar to our own. Lewis introduced counterfactual theory to account for the causal relationships between single-case events, relationships that are mind-independent and objective. We may still press this account by arguing that we have no physical contact with these possible worlds, or that there’s no objective way to determine which worlds are closest to our own or which worlds we should analyze in determining causality. The counterfactualist may respond that the worlds in which the cause-and-effect relationship occurs count as closer to our own and, from there, determine which appropriate world is closest.
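In the notation commonly used for this account (a sketch; the symbol for the counterfactual conditional varies by author), Lewis’s two conditions can be written compactly:

```latex
% Causal dependence of event e on event c (counterfactual account).
% O(c) abbreviates "c occurs"; the boxed arrow is the counterfactual
% conditional "if it were that ..., it would be that ...".
\begin{align*}
  (1)\quad & O(c) \mathrel{\square\!\!\rightarrow} O(e)
    && \text{if $c$ were to occur, $e$ would occur} \\
  (2)\quad & \neg O(c) \mathrel{\square\!\!\rightarrow} \neg O(e)
    && \text{if $c$ were not to occur, $e$ would not occur}
\end{align*}
% Causation is the transitive closure of this dependence relation.
```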

What makes us special

Short answer: thinking. Why? Turning to analytic philosophy, you’ll find reasons stretching across consciousness and souls for why thinking makes us special. Evolutionary scientists explain how cognition and the ability to reflect, contemplate and ponder let humans overcome obstacles and struggle against nature. Thought that transcends the world around us toward truth, validity and other principles of reason seems to exist nowhere in nature and, instead, only in our minds. “I think, therefore, I am human” resonates. Israeli philosopher Irad Kimhi begs to differ. That humans separate themselves from nature using thought is not only misguided but leads to false conclusions throughout philosophy, Kimhi argues in “Thinking and Being.”

Pre-Socratic philosopher Parmenides argued it’s impossible to think or say what is not. In his poem “On Nature,” he meant that what is not is nothing. To think nothing is to not think at all, and the “not”-ness of thought doesn’t differentiate it from nature and the universe itself. To think that the Earth is flat is to think from nothing in the world, because there is nothing in the world that would let you think it. Though nothingness would continue to figure in debates among later thinkers, including French philosopher Jean-Paul Sartre’s argument that our nothingness gives rise to consciousness, Parmenides’ reasoning that thought cannot follow from nothing doesn’t seem so appealing.

We think about what is “not” all the time. Negating anything to figure out what something isn’t is key in many lines of reasoning to figure out what something is. Rejecting hypotheses and determining truth mean testing theory and detecting falsehood. But even if we reject Parmenides’s conclusion, we still need to figure out how to think of the “not.” Kimhi says understanding the nature of thought reveals why it doesn’t make humans so special after all.

Plato’s dialogue Sophist explores how the sophist differs from a real philosopher: the Eleatic Stranger and Theaetetus discuss how discovering falsehoods lets you figure out what something is. What makes thought special to the sophist is categorizing and systematizing what something is by clarifying what it is not until you figure out what it is. Thinking about what something is not eliminates the confusion. Sophistry, then, is a productive art, the Eleatic Stranger concludes, involving imitating and copy-making to deceive and communicate with insincerity.

Philosophy in the analytic tradition means overcoming confusion in a way similar to the sciences. German philosopher Gottlob Frege and British mathematician-philosopher Bertrand Russell established its methods through logic. Yet the principles of logic and the appeal to science have, Kimhi believes, locked thought’s specialness away from philosophy. Frege’s belief that thought itself is fundamentally the same as nature meant thought exists independent of humans. These “propositions” stand on their own, lending credence to the idea that thought itself is part of nature in just the same way “The Earth is flat” is false. Thinking, then, doesn’t set humans apart from the universe. When a philosopher debates Parmenides’ question, her thoughts of what is “not” are false, not nothing.

Kimhi believes, however, that Frege’s way of thinking about propositions is flawed. Kimhi’s argument rests on the negation of propositions. If she wanted to argue that it is raining, a philosopher could draw a picture of the sky and say “Things are as this picture shows.” To indicate that it is not raining, though, she couldn’t just draw a sky without rain. She would need the picture of rain and say “Things are not as this picture shows.” The picture, a metaphor for the proposition, needs this negation made explicit, so you might conclude that the picture itself, like a proposition, doesn’t say anything about how things are: propositions mean nothing by themselves as far as stating things about the world. Kimhi attacks this idea and believes that, because the picture figures in both the affirmation and the negation, a proposition says things are a certain way without anyone asserting it. The same way we can’t say “Yes” or “No” to a claim without the claim being there to begin with, Kimhi argues the propositions Frege promotes cannot be.

From a scientific perspective, if the study of nature were an investigation of things that, by themselves, have no meaning, then meaning itself would not be part of nature. As Kimhi explains, it doesn’t follow that thought’s place in the world separates humans from nature. Thoughts can be asserted and unasserted, as a philosopher can say “It is raining, and it is not raining,” but there must be something both propositions have in common. Thinking, Kimhi believes, means representing how things are by combining elements like “the Earth” or “raining,” but the ability to put these elements together is also the ability to think of what these things aren’t. The difference between “It is raining” and “It is not raining” comes from our ability to think of it raining right now. Negating the claim doesn’t add any content to the thought; the two claims have a repeatable sign in common.

Kimhi further argues that, just as negating a thought doesn’t add content to it, attributing thoughts to people doesn’t add content either. Though the judgments “It is raining” and “It is not raining” differ, the same claim is either affirmed or denied. Language doesn’t convey things in the world; it conveys the different ways we make claims about those things. Thought itself is unique this way. The human capacity for language is part of the capacity to think. Language is the method of understanding the world and sets humans apart from everything else.

I sit and meditate on what makes us who we are. That thought runs so close to language makes intuitive sense. Language is the foundation for communication and expression. Its role is inherent, and to remove language from thought would be to lose thought itself. I worry that separating thinking from nature doesn’t do justice to the question Parmenides raised.

Though thinking isn’t something in nature, Kimhi believes the linguistic form of human life constitutes thinking. Different from the austerity of the “I” in German Idealism, philosophy is the apprehension of humans as creatures of nature and as thinkers not of nature. Thinking of what is not remains a puzzle, but, on Kimhi’s view of thought, it doesn’t arise. Philosophy progresses by getting rid of confusion, clarifying what we already knew in some way or another.

The link between cognition and emotion

It’s easy to think of cognition and emotion as separate from one another, but research in cognitive science and neuroscience has suggested the two are more closely linked than we’d like to believe. Cognition can be defined as the activities related to thought processes that let us gain knowledge about the world, while emotions are what we feel, involving physiological arousal, evaluation of what we experience, the behavior that expresses them, and the conscious experience of emotions themselves. To understand how cognition and emotion interact in the brain, we may view cognitive behaviors and neuroscientific phenomena as the result of both cognition and emotion, rather than simply one or the other. With research spanning philosophy, cognitive science, and neuroscience, emotions are no longer considered antagonistic to reason the way ancient Greek and Roman scholars treated them. Now, philosophers are much more inclined to view the two as closely linked, through ideas such as reason being a slave to the passions or reason giving way to passion through subjective experience.

Evidence of the mere-exposure effect (that people prefer things merely because they’re more familiar with them), reported in 1980 by psychologists William Raft Kunst-Wilson and R. B. Zajonc, as well as other findings in behavioral research, shifted debates to focus on affect as a feature primary to yet independent of cognition. Affect could be related to unconscious processing and subcortical activity, with cognition related to conscious processing and cortical involvement.

Researchers generally agree on what constitutes cognition. Cognitive functions, including memory, attention, language, problem-solving, and planning, often involve controlled neurological processes that respond to stimuli in the environment. This may include maintaining information while an external stimulus attempts to distract the mind. When cells in the dorsolateral prefrontal cortex of a monkey maintain information in mind for brief periods of time, we can describe this link as a neural correlate of the cognitive process. With functional MRI (fMRI), we can identify which parts of the brain are involved in these cognitive processes. Emotion, on the other hand, is much more subject to debate among scientists and philosophers.

Emotions are arguably the most important part of our mental life for maintaining the quality and meaning of existence. We find meaning in emotions and rely on them to make sense of the world, sometimes in ways cognitive processes don’t offer. When researching emotion, some incorporate the drive, motivation, and intention behind these states of mind.

Other researchers treat emotions as the conscious or unconscious assessment of events, such as a feeling of disgust in the mouth. Subcortical parts of the brain such as the amygdala, ventral striatum, and hypothalamus are often linked to emotions. These brain structures are conserved through evolution and operate in a fast, sometimes automatic way. Still, how the different parts of the brain’s complex circuitry mediate specific emotions remains under research and debate. Neuropsychologists, neurologists and psychiatrists are only recently coming to understand the role of emotional processing in more complicated brain functions like decision-making and social behavior.

But there’s much more to emotions than the physical phenomena in the brain. 

Imagine coming across a terrifying bear while hiking. In our most immediate reaction of fear, we can identify an evaluation (the bear is dangerous), a bodily change (increased heart rate), a phenomenological perception (feeling unpleasant), an expression of fear (eyelids raised and mouth open), a behavioral component (wanting to run away), and a mental evaluation (focused attention on our surroundings). The phenomenological part involves our subjective experience as we respond to the world around us. All of these features come together in our emotions, and their necessity and sufficiency for emotion can be debated to different degrees. On top of that, emotions may be directed towards objects of our intention (such as feeling angry at someone rather than just feeling anger on its own) and can carry motivation with respect to behavior (such as acting out of anger). Researchers have also debated whether emotions describe ourselves or express ourselves imperatively, how the brain implements different types of emotions, and how neural mechanisms explain emotional phenomena.

Cognitive theories of emotions, which became popular in the latter half of the 20th century, can be differentiated into constitutive and causal theories. Constitutive theories treat emotions as cognitions or evaluations, while for causal theories, emotions are caused by cognitions or evaluations. For example, being frightened by a grizzly bear involves a judgement that the bear is scary; the fear may be the judgement itself or the result of the judgement. These theories let us differentiate the complicated interactions of cognition and emotion, such as determining whether someone’s anger in response to a situation is the result of a cognitive evaluation of the situation or a more natural, automatic reaction. In the mid-twentieth century, philosophers C. D. Broad and Errol Bedford emphasized constitutive approaches to emotion, which would become dominant in philosophy while causal ones became more popular in psychology. These philosophers argued that, if emotions had intentionality, there would be internal standards of appropriateness by which an emotion is appropriate. Cognitive evaluations identifying emotions with judgements have since been used by philosophers such as Robert Solomon, Jerome Neu, and Martha Nussbaum. Identifying emotions with judgements, judgementalism, has been pivotal in cognitive theories of emotions.

Judgementalism in this form, however, doesn’t explain how emotions motivate, the subjective phenomenal experience of emotions, how one can experience an emotion without being able to identify a judgement with it, or “recalcitrance to reason,” how we experience emotions even when they go against judgements that contradict them. Judgementalists may counter these issues by specifying what kind of judgements emotions are, such as “enclosing a core desire,” as Solomon has argued, to let them motivate, or “dynamic,” as Nussbaum has argued, so they may account for these issues. Through these moves, they may allow for accepting how the world seems even in the face of contradictory judgements.

Other work in the 1960s showed how the cognitive component of emotions directly interacts with the bodily changes that occur alongside them. Psychologists Stanley Schachter and Jerome Singer developed a theory of emotion, known as the two-factor or Schachter-Singer theory, in which emotion is how we cognitively evaluate our bodily arousal. Schachter and Singer injected participants with epinephrine to arouse them, telling them the drug would improve their eyesight, with some additionally told about the side effects. When witnessing other people act either happily or angrily, the participants who didn’t know about the side effects were more likely to feel happier or angrier than the ones who did. The two theorized that, if people experienced arousal without an explanation, they’d label their feelings using the cues of the moment, suggesting participants without an explanation were susceptible to the emotional influences of others. The theory has faced the criticism that it confuses emotions with how we label them, as though we needed complete knowledge of our emotions to label them, and it has difficulty explaining how we may experience emotions even before we think about them. Research in neuroscience has shown that thinking about stimuli in ways that increase an emotion may boost prefrontal or amygdala activity, while decreasing the emotion may reduce it.

Integrating data and research from various parts of the brain, as they provide the basis for cognitive phenomena, would illustrate a greater picture of emotion and cognition. There are many structures involved in each function and many functions for each individual structure of the brain. The neural computations that underlie those phenomena also have affective and cognitive components, as described by cognitive scientists and philosophers. Viewing the relationship between emotion and cognition as a tug-of-war doesn’t accurately capture the relationship between emotions and how we think about them. A combination of research in neuroscience, cognitive science, and philosophy would do it justice.

Time and Dreams in Political Unrest

With every tick of the clock I

awake, escape the shock as I

exit the dark of my dreams, as Jung would

remark. Yet not understood.
Now I ain’t sayin’ she’s a Heidegger, but she ain’t messin’ with no alt-right thinkers.

They say time flies. With age, the days feel shorter. Life speeds up, and it doesn’t slow down. The years start coming, and they don’t stop coming. However we look at it, we can understand how our perception has sped up in making these observations. It may be the result of memory. Every moment that passes and feels faster in our lives lets us view the present and the near-present with greater and greater detail while losing the memories of what has gone long ago. We watch time speed up as we remember less.

Writing and other forms of immortalizing our words can fight against this. Whether it’s art, music, poetry or any other way of recording the tangible and conceivable into permanence, we can escape the fleeting visions of this world. As though we were waking up from a dream and recounting what had just happened, we can recognize dream states as part of our reality, as Heidegger’s “Being-there” of Dasein would describe.

Dasein is what makes our existence more than a point in space-time that brings being from nothing. With death distinguishing existence, Dasein is the “being-toward-death” that gives our lives temporality. When Heidegger examined classical metaphysics with the hope of creating a new ontological philosophy, he differentiated between being and reality. All things have being, while reality does not exist: reality has no awareness of the world around it, and existing is what lets us determine what lies beyond ourselves. He described the technological advances of the 1930s and 1940s as threatening the world of ideas – poetry, intellectual thought, forms of art, and whatever we need to preserve who we are. Humanity becomes an object with an instrumental purpose through information and communication. Appreciating art and posing questions of who we are counteract these forces.

Much the same way Dennett wrote about his own dreams taking a long time yet, in retrospect, seeming to have taken no time at all, we may hypothesize that there is no dream experience. Instead, when we awaken, our memory banks play the dreams back to us. Heidegger might respond to this claim by arguing that the times of dreams are consistent with the experience of dreams themselves.

With time moving faster, the present and the near-present become punctuated by events with less and less time between them. We find disparate events – whether it’s a meme about raiding Area 51 or the dispersion of fake news – coming and moving closer to one another. Our near-present perception enters a hypersensitive state that responds to the chaos and frenzy, and we can pick our poison: international turmoil, threats to the planet’s climate, the rise of fringe political groups, or whatever keeps us from falling asleep, as though we were trying to wake up from a nightmare. Even something as benign as a mock competition between YouTube channels can turn messy when a shooter tells his audience to “subscribe to PewDiePie” before massacring a mosque.

It’s possible, though, that things had always been like this. The rise of Nazism during Heidegger’s time would lead historians to associate the philosopher and his views with the fascist movement. Heidegger watched rationalism, scientism, and market-centric forces overtake wonder, liberation, and freedom. Machines themselves reduced humans to the darkness they had created, and the fascists began attacking the mind-body dualism of Jews and liberals. The alt-right echoes Heidegger’s yearning for certainty and fixed values in modern life as well as nationalism and the interconnectedness of humans and the land. Trump’s former chief strategist Steve Bannon held up a biography of Heidegger and said “That’s my guy,” when he was interviewed by Der Spiegel.

Heidegger soon denounced Nazism. After he saw Hitler’s worship of efficiency and mythologized machines as though they were part of nature itself – part of who we are and how things should be – he condemned the anti-intellectualism running rampant. The racism and anti-Semitism followed an “I do not think, therefore I am,” inversion of Descartes’s famous proclamation.

Even after Horace wrote Caelum non animum mutant qui trans mare currunt (“those who rush across the sea change the sky above them, not their soul”), our souls still desire a connection to something permanent and fixed. By Aristotle’s observation that we can only benefit from studying ethics when we already have “noble habits,” the philosopher must already have an idea of what she wants to learn. Heidegger believed the philosopher’s main idea is that she is a rooted being, tied to time and place and living within and through a land and language; her only biography is that she was born, worked, and died.

If only modern political discourse could heed the guidance of Aristotle. The philosopher’s first treatise on politics described a middle class that would anticipate the liberal ideals of later intellectuals like Locke: the free rule because of their virtue and responsibility to rule. The commitment to philosophical thought, at the very least, eases the burden of time.

The Weil Conjectures: A tale of mathematics, philosophy, and art

The real part (red) and imaginary part (blue) of the Riemann zeta function along the critical line Re(s) = 1/2. The first non-trivial zeros can be seen at Im(s) = ±14.135, ±21.022 and ±25.011. The Riemann hypothesis, a famous conjecture, says that all non-trivial zeros of the zeta function lie along the critical line.
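As a rough numerical illustration of the caption, one can evaluate the zeta function near the critical line; the sketch below uses Borwein’s alternating-series acceleration of the Dirichlet eta function (the coefficients are standard, but the truncation n = 40 is an arbitrary choice, and the method degrades for large imaginary parts):

```python
from fractions import Fraction
from math import factorial, pi

def zeta(s, n=40):
    """Riemann zeta via Borwein's eta-series acceleration (undefined at s = 1)."""
    # Exact integer coefficients d_k from Borwein's algorithm;
    # Fraction keeps the intermediate arithmetic exact.
    d = [int(n * sum(Fraction(factorial(n + j - 1) * 4**j,
                              factorial(n - j) * factorial(2 * j))
                     for j in range(k + 1)))
         for k in range(n + 1)]
    # Accelerated alternating series for the Dirichlet eta function,
    # then convert: zeta(s) = eta(s) / (1 - 2^(1-s)).
    eta = -sum((-1)**k * (d[k] - d[n]) / (k + 1)**s for k in range(n)) / d[n]
    return eta / (1 - 2**(1 - s))

# Sanity check at s = 2, where zeta(2) = pi^2 / 6:
print(abs(zeta(2) - pi**2 / 6))
# Magnitude near the first non-trivial zero on the critical line:
print(abs(zeta(0.5 + 14.134725j)))
```

Evaluated at 0.5 + 14.134725i, the magnitude drops to nearly zero, matching the first zero the caption reports at Im(s) ≈ ±14.135.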

For some, mathematics is much more than a matter of solving problems. It transcends abstraction and intellectual pursuit into a way of drawing meaning from life. For a brother and sister, it can mean a relentless search for truth that reads like a Romantic fable. In a history of settings across time and space punctuated by individual actions and events, the novelist creates a narrative that sheds light on a new meaning of truth. Truth may be elusive, especially in a post-truth society, but, in a metamodernist manner, it’s closer to reality – an authentic, original reality – than it seems.

In The Weil Conjectures: On Math and the Pursuit of the Unknown, Karen Olsson intertwines the stories of French brother and sister André and Simone Weil in a Europe in the midst of World War II: the former a mathematician known for his contributions to number theory and algebraic geometry, the latter a philosopher and Christian mystic whose writing would go on to influence intellectuals like T.S. Eliot, Albert Camus, Iris Murdoch, and Susan Sontag. Hearkening back to the siblings’ childhood, we follow their stories studying poetry, mathematics, tragedies, and other artists and scientists. Between these glimpses of their lives, Olsson throws in her personal anecdotes from studying mathematics as an undergraduate at Harvard. She describes a “euphoria” from thinking hard about mathematics such that, while knowledge itself is the goal, it’s a disappointment to reach it: you lose your pleasure and sensation in seeking truth once you find it. André characterizes his own search for happiness through this search for truth. Drawing parallels between herself and the siblings, Olsson makes their stories depend less on the context that surrounds them and more on the similarities in their narratives.

With multiple stories happening at once, the reader feels a sense of timelessness in the writing. The plot has less to do with one event following another and more with a grand narrative carrying each part of the story along with the others. Mixing elements of modernism and postmodernism, Olsson’s book serves as a sign of the next step: metamodernism. In separate directions, mathematics and philosophy, the two venture after a truth that seems to lie just outside their reach. Olsson tells the narratives through letters between the siblings and the notebooks in which Simone scribbled her thoughts: philosophic, mathematical, and religious. On the purposes of mathematics and philosophy, Olsson questions how mathematics became disconnected from the world around it. So focused on attacking problems in an abstract, self-referential setting, the field’s myopic focus on truth had strayed from meaning, she believed. Simone’s story, through working in factories and a Resistance network with a wish to free herself from the biases of her own self, would lead to her death by starvation in solidarity with war victims.

If the labor of machinery is so oppressive, Simone wondered, how does one create a successful technological, economic, and political revolution? The pain she sought through suffering made her who she was. It humanized her as she wrote about the German army defeating France. The evil in the world was God revealing, not creating, the misery inside us. Through philosophy and Christian theology, Simone sought a state of mind that liberated her from the material pursuits of the world. She wanted an asceticism through which she could live a morally principled life by her self-imposed rules. This included donating money during her career as a teacher so that she would earn the same amount as the lowest-paid teachers.

In his essay “Tell Me I’m OK,” D. McClay, senior editor of The Hedgehog Review, wrote that Simone’s struggle with Catholicism partly had to do with her anti-Semitism. “Though Weil was herself Jewish, she did not identify as Jewish in any significant sense, and her sense of solidarity with the oppressed did not extend to other Jews,” McClay said. The feminist philosopher Simone de Beauvoir, who, according to her memoir, didn’t get along with Weil when they met, offers a contrast to Weil in how to live a good life.

In The Ethics of Ambiguity, Beauvoir argued that existentialist ethics are rooted in the recognition of freedom and contingency, McClay said. Beauvoir wrote, “Any man who has known real loves, real revolts, real desires, and real will knows quite well that he has no need of any outside guarantee to be sure of his goals; their certitude comes from his own drive…. If it came to be that each man did what he must, existence would be saved in each one without there being any need of dreaming of a paradise where all would be reconciled in death.” Beauvoir’s atheism created friction with Weil, McClay said. Yet both Simones define a reality through what we do in the world, a “drive” of one’s own that reads like a response to the threat of existential nothingness.

McClay went on to compare the two Simones to give an account of how to live a moral life, one that involves abandoning the idea of being a “good person” in favor of goodness without regard to how others judge us. “It might mean living more like Weil—taking what you need, and giving away the surplus—”, McClay said, “with the caveat that one takes what one actually needs.” Beauvoir and Weil, moral philosophers who describe how “we are always, simultaneously, together and alone,” may even be guides for the crises of our age. Living together and alone, through the community of one another and the isolation of intellectual work, we can live as Weil intended. McClay’s writing also shows this mix of modernity’s unified, centralized identity with postmodernism’s decentered self.

Interspersed in Olsson’s book are stories of Archimedes’ “eureka” moments, René Descartes’ search for the “unknown” (the x of algebra), L. E. J. Brouwer’s work in topology, and the mathematician Sophie Germain, who studied mathematics in secrecy and corresponded with male mathematicians under a pseudonym. Tracing the foundations of mathematics, language, and other tenets of society back to the Babylonians, Olsson carefully compares methods of problem solving and invention, using language to reveal the deeper nature of the phenomena (“Negative numbers infiltrated Europe during the Middle Ages,” making mathematics seem deceptive or insidious) or the method of discovery (“Are numbers real or not? Were they discovered or invented? We pursue this question for a couple of minutes.”). The figures comment on the deeper meaning and purpose of their own work, as when Georg Cantor says, “I see it, but I do not believe it.” Olsson drops these quotes and glimpses of history in between the trials of her other characters.

When the early 20th-century Jewish-born mathematician Felix Hausdorff set the grounds for modern topology, an anti-Semitic mob claiming they would send him to Madagascar, where he could “teach mathematics to the apes,” gathered around his house. Olsson then switches to her perception that she always read André and Simone Weil’s last name as “wail,” despite its actually being pronounced “vay.” Then Olsson returns to Hausdorff’s story of taking a lethal dose of poison after failing to find a way to escape to America. In a farewell letter to his friend Hans Wollstein, who would later die in Auschwitz, Hausdorff wrote, “Forgive us our desertion! We wish to you and all our friends to experience better times.” Olsson’s juxtaposition of the “wail” reading alongside Kristallnacht, a systematic attack on Jews, presents the personal struggles of André and Simone as inseparable from the Nazis’ persecution of Jews, as though the siblings were “wailing” in response to their persecution. It also emphasizes that, no matter how hard she tries, Olsson still has her own take on the story. Even when she recounts the rise of the Nazis in Europe, Olsson’s limited perspective preserves a postmodern disunity of culture alongside a modern master narrative. The art of narration is both a record of Olsson’s own struggles to share and an authenticated, objective authority of knowledge that can forgive Hausdorff’s suicide and promise a better future for everyone.

With his discovery of the “unknown,” Descartes also introduced the standard notation of mathematics that lets researchers use superscripts (x² as “x squared”) and subscripts (x₀ as “x naught”). Olsson demonstrates how the same methods of reasoning behind mathematical invention also drive the creation of science, art, and literature, as the French mathematician Jacques Hadamard explained. Hadamard’s interest in what goes on in a mathematician’s mind was itself a response to the crisis of modernity, having witnessed the horrors of both world wars. He constantly sought new ways of looking at mathematical problems as researchers came to visit during his twice-weekly seminars. The pieces of each story come together in a flow that varies in style, length, and meaning to create the multidimensional work of art that is the book. Each passage flows seamlessly in the interplay between exposition and narrative, description and action, showing and telling.

At one point, Simone and André’s reading habits are interrupted by the narrator of Clarice Lispector’s Água Viva proclaiming mathematics the “madness of reason.” The rational, coherent, commonsense nature of mathematics would seem to contradict the foolish wildness of madness. But, set against Simone’s childhood love of Kant and Chardin and André’s college interest in the Bhagavad Gita, this “madness of reason” becomes more apparent. In Why This World: A Biography of Clarice Lispector, Benjamin Moser wrote:

“My passion for the essence of numbers, wherein I foretell the core of their own rigid and fatal destiny,” was, like her meditations on the neutral pronoun “it,” a desire for the pure truth, neutral, unclassifiable and beyond language, that was the ultimate mystical reality. In her late works, bare numbers themselves are conflated with God, now without the mathematics that binds them, one to another, to lend them a syntactical meaning. On their own, numbers, like the paintings she created at the end of her life, were pure abstractions, and as such connected to the random mystery of life itself. In her late abstract masterpiece Água Viva she rejects the meaning that her father’s mathematics provide and elects instead the sheer “it” of the unadorned number: “I still have the power of reason—I studied mathematics which is the madness of reason—but now I want the plasma—I want to feed directly from the placenta.”

The Renaissance depiction of madness as an intrinsic part of man’s nature runs through the literature and philosophy of the period. An imbalance, or excess, of reason could lead to the madness that seeks this mysterious “pure truth” transcending language itself, much the same way Simone and André seek that essence through different forms of this “madness.” Simone’s personal battles with health and existential questions come to resemble a mathematician’s search for reason. Olsson later mentions the “madness of reason” as she narrates her own lonely experience “trying to demonstrate small truths” as an undergraduate in her dorm room on a cold, wintry day. It’s a localized truth that Olsson finds in her work, but it still remains part of a grander narrative connecting their stories. The interjecting quote from Lispector’s text highlights this search for truth in the stories of Simone, André, and Olsson herself.

According to Olsson, Descartes used “x” to refer to the unknown because his printer was running out of letters, though there may have been an aesthetic choice in addition to the pragmatic one. “x” would come to mean that which we don’t know in other contexts, too, from sex shops to invisible rays. Olsson continues her personal story by asking, “What is my unknown? My x?” She narrates her return to mathematics after years of writing novels since graduating from Harvard University.

Olsson emphasizes Simone’s inferiority complex toward her brother as one of the primary causes of her perspective on the world. Simone’s desire to be a boy and to use the name “Simon,” and her lack of any lover while André proposed the Weil Conjectures, married, and had children, show this context. She found truth in suffering and in disregard for material pleasures, even chasing states of mind in which she could perceive the world in purity, without any biases of her own self. The conjectures, meanwhile, would become a foundation for modern algebra, geometry, and number theory.

When Olsson took a course under the Harvard mathematician Barry Mazur, she didn’t dare speak to him. A conjecture, Mazur explained, lays down the basis of a theory: expectations believed to be true, driven by analogy. Olsson still recalls her feeling of awe when she first learned geometry and the power of understanding the world without memorizing it. After André was arrested while on vacation in Finland in 1939 on suspicion of spying, he barely escaped execution when a Finnish mathematician suggested to the chief of police, at a dinner the night before, that he be deported instead. While André is taken by train to Sweden and England, Olsson returns to her childhood excitement in middle school learning about “math involving letters.” She then recalls teaching her two-year-old daughter how to count as the child asks, “Where are numbers?” When Olsson returns to André’s story, now with him transferred to a prison in France and craving intellectual activity, she comments that escaping France was a more pressing problem than anything in mathematics.

As André longs to do research even in the cloistered sepulchre of a prison cell, he writes to Simone comparing mathematics to art. Simone is allowed to visit him a few days a week, and the two reassure each other that they’re okay. André tells Simone he has asked an editor to send page proofs of his article to her so she can copyedit them. The writing between the two goes into stories of Babylonians and Pythagoreans reminiscent of the dialogues the two siblings had as children. Olsson’s own story, intertwined with the correspondence between Simone and André, serves as a parallel to demonstrate that she, too, can make mathematics accessible to the common person the same way Simone did with André’s work. André’s colleagues would even start to envy the quiet solitude of prison in which he could produce work undisturbed. Comparing mathematics to art, though, André described the material essence of a sculpture that limits a mathematician’s objectivity while remaining an explanation in and of itself. In this sense, it has both objective and subjective value, the same way a mix of a modern and postmodern story would. Simone doubted this, though. Works of art that relied on a physical material didn’t translate directly to a material for the art of mathematics. Though the Greeks spoke of the material of geometry as space, André’s work, Simone argued, was an inaccessible system built on previous mathematical work, not a connection between man and the universe.

Her brother responded that the role of analogy in mathematics goes far beyond a mental activity. It was something you felt, a version of eros, “a glimpse that sparks desire,” Olsson wrote. Tracing the history of mathematics from the nineteenth-century watershed in which questions of numbers were solved using equations, the mathematician feels “a shiver of intuition” in connecting different theories. Simone would imagine societies built upon mathematics, mysticism, and existential loneliness. Through this, all of Olsson’s jumping between stories becomes clear. She had set the reader up to view mathematics as an art the way André did and, through the world Simone created, as something a general audience could understand. Olsson continues with her experience as an undergraduate, when a professor recommended she write about mathematics for a general audience as a career, alongside dream sequences of André and Simone.

In 1938, Simone attended a conference of Bourbaki, a group of French mathematicians André had helped found with the purpose of reformulating mathematics on an abstract and formal, yet self-contained, basis. The mathematicians signed their papers collectively as “Bourbaki” as they attempted to unify contemporary mathematics with a common language, just as Euclid had done two millennia earlier. While the group members yelled hard-hitting, at times threatening, questions at one another, Simone began to believe that mathematics should be made more accessible to a mass audience. The Bourbaki group’s vision led them to write hundreds of pages of set theory before defining the number 1. They sought to create an idea of mathematics as a system of maps and relationships that mattered more than the intrinsic qualities of numbers and other mathematical objects themselves. Scientific American would call André “the last universal mathematician.” This method of universalizing while still emphasizing relationships among objects shows a modernist tendency, the former, interacting with a postmodernist one, the latter.

Olsson’s own stories of studying mathematics as a student and teaching her children ground the book. She explains that the highlight of her mathematics career was finding the answer to a course problem before one of her classmates did. Her humility and sense of humor make her writing all the more approachable and relatable.

The book’s weakness is that the individual stories feel abbreviated at times. Olsson switches back and forth between so many narratives that the reader may feel confused, or even frustrated that the characters’ desires and beliefs go unexpanded. It can be difficult to invest in events or feel connected to characters when their moments are so brief and spread out across the book. Still, the short snippets of stories across time and space, and Olsson’s juxtaposition of them, make the reading easy to follow for anyone without a strong background in either mathematics or philosophy. And, much the same way Olsson describes the search for truth, the book leaves the reader in a perpetual search: we feel the excitement of being bound to reach the correct answer or find meaning in research while never quite achieving it.

Olsson’s book serves as a beacon of the power of evidence and justification in a post-truth world. Olsson addresses our society’s constant searches for truth and meaning by capturing opposites and extremes in her writing. The empirical, hypothesis-driven mathematics and the speculative, argument-driven philosophy contrast one another on the meandering search for truth. The isolation of intellectual work for both André and Simone contrasts with the warmth of community and social engagement the two find in their respective environments. Truth becomes less something we must obtain by standing on one side or the other and more a matter of finding appropriate methods of addressing problems. It’s objective in that it lies in the techniques of various disciplines, but constructed in that it comes from the individual’s choices. In a typical mix of modernism and postmodernism, Olsson’s personal story of turning back to mathematics to answer her own curiosities demonstrates this mix of the personal with the impersonal.

Like postmodern stories, Olsson’s book is non-linear and reveals truth as a series of localized, fragmented pieces. Like modernist ones, it finds greater purposes and narratives between the different stories as a testament to the power of science and technology. It switches between the progressive, exalted story of André and the melancholic tragedy of Simone, drawing parallels between them. The grand themes of the power and style of mathematics and philosophy dictate the rules and principles that set the foundation for the stories. André’s story may even be read as a modernist tale of the triumph of science, and Simone’s as a postmodern warning about society’s so-called “progress.” In metamodernist fashion, Olsson uses elements of both, and the two searches for truth become one and the same. Philosophy may ask “Why?” but, for mathematics, the question is “y?”

Neuralink: the allure of brain-computer interfaces

“You better be careful telling him something’s impossible. It better be limited by a law of physics or you’re going to end up looking stupid.” – Max Hodak, Neuralink president

As the gap between humans and computers shrinks every day, the startup Neuralink, backed by figures including Elon Musk and Vanessa Tolosa, recently hosted a public conference revealing its efforts to create neural interfaces between brains and computers. A brain-computer interface for the mutual exchange of information between humans and artificial intelligence may sound like science fiction, but the neural interface, a device enabling communication between the human nervous system and computers, would include invasive brain implants and noninvasive sensors on the body.

During the livestream on July 16, 2019, Neuralink revealed its work to the public for the first time, with the pressing goal of treating neurophysiological disorders and a long-term vision of merging humans with artificial intelligence. With $158 million in funding and nearly 100 employees, the team has made advances in flexible electrodes, bundled into threads thinner than a human hair, that are inserted into the brain. The first product, “N1,” is meant to help quadriplegic individuals through brain implants whose chip processes brain signals, a Bluetooth device, and a phone app.

In their paper “An Integrated Brain-Machine Interface Platform with Thousands of Channels,” Musk and other team members noted that electrode impedances after coating were low, allowing for efficient information transmission. Each electrode uses pixels with a bandwidth starting at 3 Hz to measure spikes, a neuron’s responses to stimuli, whose signals are generally about 200 Hz but can reach up to 10 kHz at times. The dense web the team creates would let them feed the entirety of a brain’s activity to a deep learning program for creating artificial intelligence with greater accuracy, study the neuroscientific basis of various phenomena, or even decode the basics of features such as language. For the Human Connectome Project, an initiative to create a complete map of the human brain, Neuralink’s scale would offer more precision than the project has achieved before.
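To make the idea of extracting spikes from an electrode’s voltage trace concrete, here is a minimal threshold-crossing detector in Python. This is a generic textbook approach with illustrative parameters (the median-based noise estimate, the factor k, and the refractory window are common conventions, not values from the Neuralink paper):

```python
import numpy as np

def detect_spikes(signal, fs, k=5.0, refractory_ms=1.0):
    """Detect negative threshold crossings in a voltage trace.

    The threshold is k times a robust noise estimate
    (median(|x|) / 0.6745); both k and the refractory window
    are illustrative choices, not Neuralink's parameters.
    """
    noise_sigma = np.median(np.abs(signal)) / 0.6745
    threshold = k * noise_sigma
    # Indices where the trace first dips below -threshold.
    crossings = np.flatnonzero(
        (signal[1:] < -threshold) & (signal[:-1] >= -threshold)
    ) + 1
    # Enforce a refractory period so one spike isn't counted twice.
    refractory = int(fs * refractory_ms / 1000)
    spikes, last = [], -refractory
    for idx in crossings:
        if idx - last >= refractory:
            spikes.append(idx)
            last = idx
    return np.array(spikes)

# Synthetic 30 kHz trace: unit-variance noise plus three injected spikes.
rng = np.random.default_rng(0)
fs = 30_000
trace = rng.normal(0, 1, fs)
for t in (5_000, 15_000, 25_000):
    trace[t] -= 12.0  # large negative deflections, i.e. "spikes"
print(detect_spikes(trace, fs))  # indices near the injected spikes
```

A real pipeline would first band-pass filter the raw trace into the spike band and often sort detected events by waveform shape; this sketch shows only the detection step.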

This precision could address the ethical issues raised when the cognitive response read by a brain-computer interface doesn’t match what a patient communicates. Neuralink’s work should take into account the risks that come with such a fine level of precision. Most strikingly, brain-computer interfaces so intimate to who we are raise the question of whether neurologically compromised patients can make informed decisions about their own care. As the philosopher Walter Glannon noted in his paper “Ethical issues with brain-computer interfaces,” the capacity to make decisions is a spectrum of cognitive and emotional abilities, without a specific threshold indicating how much constitutes the ability to make an informed decision. Just as philosophers and ethicists have studied the ethical frameworks of decision-making among physicians, patients, and other roles in health care, the complex semantic processing of a brain-computer interface may not be enough to show a patient has the cognitive and emotional capacity to make an informed, autonomous decision about life-sustaining treatment. Some behavioral interaction between the patient and the health care professional would be needed so that the brain-computer interface’s response reflects only what the patient is capable of communicating.

Tim Urban of “Wait But Why” described Neuralink as Musk’s effort to reach the “Wizard Era,” in which everyone could have an AI extension of themselves: “A world where AI could be of the people, by the people, for the people.” The promise of cyborg superpowers as humans step into the digital world calls back to science fiction stories such as 2001: A Space Odyssey and Jason and the Argonauts. The electrode array joining the limbic system and cortex of the human brain gives Neuralink information from those regions. It creates a reality in which information and the metaphysical nature of what we are depend less on the physical structures of the brain itself and more on the information of the human body. Before artificial intelligence, the brain evolved communication, language, emotions, and consciousness through the slow, steady, aimless walk of natural selection; now, a collective intelligence could contribute to machine learning systems like Keras and IBM Watson. The Neuralink interface would let us communicate effortlessly with anyone else in the collective intelligence. An AI extension of who we are means the machines built upon this information are part of us as much as they are machines. With machines connecting all humans, we would achieve a collective intelligence that goes against how human and animal minds have evolved over the past hundred million years.

A machine learning approach to Traditional Chinese Medicine


Modern science can uncover ancient wisdom. While it may seem regressive or pseudoscientific to study concepts from Traditional Chinese Medicine (TCM), they reveal deeper meanings about who we are as humans when subjected to scrutiny by the scientific method. The herb formulas and plant-derived natural products of TCM are still used in disease prevention and treatment despite the dominance of modern science. When medical researchers applied machine learning classification methods to 646 herbs categorized by organ systems, known as Meridians, they found the 20 molecular features most important for predicting a herb’s Meridian. These included structure-based fingerprints and absorption, distribution, metabolism, and excretion (ADME) properties. As the first time molecular properties of herb compounds have been associated with Meridians, this provides molecular evidence for the Meridian system.
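The study’s pipeline, classifying herbs by Meridian from molecular features and then ranking which features matter most, can be sketched with scikit-learn. Everything below is a hypothetical stand-in: random bits in place of real fingerprints and ADME values, and an illustrative subset of Meridian labels, purely to show the shape of such an analysis:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-in data: the real study used structure-based
# fingerprints and ADME properties for 646 herbs; here random binary
# "fingerprint" bits illustrate the pipeline only.
rng = np.random.default_rng(42)
n_herbs, n_features = 646, 128
X = rng.integers(0, 2, size=(n_herbs, n_features))
meridians = ["Lung", "Heart", "Spleen", "Liver", "Kidney"]  # illustrative subset
y = rng.choice(meridians, size=n_herbs)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)

# Rank features by importance, analogous to how the study
# identified its 20 most predictive molecular features.
top20 = np.argsort(clf.feature_importances_)[::-1][:20]
print("Top-20 feature indices:", top20)
```

With random labels the model of course learns nothing real; in the actual study, cross-validated accuracy on held-out herbs is what supports the claim that molecular features carry Meridian information.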

The Meridian system dictates how the life-energy qi flows through the body. Qi includes actuation of the body, warming, defense against excesses, containment of body fluids, and transformation between qi and food, drink, and breath. Each Meridian corresponds to a yin-yang quality, an extremity (hand or foot), one of the five elements (metal, fire, earth, wood, or water), an organ (such as the heart or kidney), and a time of day. The yin-yang qualities describe how complementary, opposite forces of the universe interact, such as Greater Yin or Lesser Yang. Given these roots in traditional, non-scientific thought, scholars have debated the scientific justification for why and how TCM works. In their paper “Predicting Meridian in Chinese Traditional Medicine Using Machine Learning Approaches,” the researchers assumed Meridians can be found through scientific methods to begin with. The five elements and qi are metaphysical concepts, not modern physiological or medical phenomena. The researchers emphasized the need to examine herb medicine actions as they relate to disease etiology to create a formal understanding of TCM.

Qi and yin yang as they relate to human health date back to texts of discussion and debate from the Warring States period (475–221 BC) of ancient China. The philosopher Zhuangzi noted that qi was the basis of the body’s physical being, with the six qi (wind, cold, summer heat, fire, dryness, and damp) in harmony with one another as they affect the seasons. These theories would be used in medicine to describe relations and analogies between the body, the state, and the cosmos.

The neuroscientific basis of consciousness

Part of “Furious Dreams” by Marc Garrison

Scientists and philosophers alike have pondered consciousness, one of the most central problems in our experience of the world. The two approaches use different methods, science empirical and philosophy speculative, but both are relevant to any discussion of such a complicated phenomenon. Understanding how neural correlates of conscious experience correspond to various parts of the brain lets scientists take “bottom-up” approaches: beginning with empirical phenomena and determining what cognitive and psychological behaviors result. Beginning instead with the philosophy of mental states, including beliefs, intents, desires, emotions, and knowledge, and figuring out how those can be attributed to the mind, moves in the opposite direction. These mental states are whatever constitutes the mental phenomena the mind occupies; they are the way the brain makes sense of the world. Combining empirical neural data from computational and psychological models with philosophical analysis would let us bridge the gap between the brain and consciousness. Either way, researchers end up with a neurobiologically accurate view of how consciousness emerges in the brain. Neuroscience faces many challenges in explaining consciousness, and its findings carry philosophical significance for ideas in epistemology, ethics, and metaphysics.

The subjectivity of experience presents a challenge to the study of consciousness. Any person’s experience is an external phenomenon to everyone else. Philosopher Edmund Husserl’s notions of the phenomenological and natural attitudes capture this relationship between first- and third-person experience. When someone perceives a car, their consciousness is directed at the car; the person is not necessarily aware of the details of the experience, such as what the steel car feels like. This is Husserl’s natural attitude. When that person focuses on the experience of perceiving the car as an experience in its own right, that is the phenomenological attitude. A neuroscientist would generally concern themselves with the external phenomena they research and endorse the natural attitude. But the concepts of a scientific model (such as water being composed of two hydrogen atoms and one oxygen) are themselves conscious phenomena the neuroscientist uses to understand the model. The neuroscientist can take the phenomenological attitude toward those models as experiences, which shows we can’t escape our own point of view. Even a scientific model is a representation in one’s mind.

Consciousness as a neuroscientific phenomenon requires studying this relationship between an experience and the scientific model of it. The gap between the two depends on whether one endorses a realist view of science or an antirealist one. Under realism, mature scientific theories can be true and describe the world, so the gap is non-existent or narrow. Scientific realism means all natural phenomena can be modeled using structures and the relations among them. We may model qualia, our subjective qualitative feelings, using these structures and relations. That still leaves certain experiences difficult to capture, such as how one sees the color red as a structural relationship, given that introspection about redness doesn’t reveal its nature. Antirealism, on the other hand, holds that we can only say whether a neuroscientific theory of consciousness is compatible with observations, not whether it captures nature. On this view there is a large epistemic gap between any concrete phenomenon and the corresponding scientific models; experiences are no different in this respect.

Before the 20th-century split between philosophy and science, philosophers generally studied consciousness through both philosophical and scientific means. The French philosopher René Descartes performed research in mathematics, neuroscience, and philosophy. The psychologist-philosopher William James created philosophical theories in light of empirical psychology research. With the rise of logical positivism, the idea that only statements that can be empirically verified are meaningful, in the early 20th century, philosophers focused on the semantic content of arguments instead of empirical scientific results. During this time, the three traditional perspectives on the mind-brain question emerged: physicalism, mentalism, and dualism. Physicalism is the thesis that everything can be reduced to physical phenomena; mentalism, to mental phenomena; and dualism holds that everything is either mental or physical. The rise of physicalism in the late 19th century meant consciousness was taken, on some interpretations, as unscientific. Philosopher Galen Strawson’s realistic physicalism holds that the physical nature of the nervous system can manifest consciousness through mental activity. This can be illustrated with philosopher Frank Jackson’s knowledge argument.

The argument uses the fictional story of a famous neuroscientist, Mary, who learns everything about the world through a computer but remains confined to a room that is entirely black and white. If physicalism were true, one might argue Mary knows everything about the world; but because she has never experienced color, she does not know what it is like. Upon seeing color, the argument goes, she would have a subjective conscious experience she had never encountered before and would therefore learn something new, showing she did not, in fact, know everything about the world. One may conclude from this line of reasoning that physicalism doesn’t capture everything. A possible response is the ability hypothesis: Mary learns how to see color but doesn’t learn a new fact about what color is, so what she learns doesn’t contradict her having known everything about the world.

These historical trends meant changes not only for consciousness but for science as a whole. Scientific terms began taking on new meanings. With the rise of thermodynamics around the turn of the 20th century, heat went from evoking boiling water to denoting a specific dimension of temperature variation. Consciousness, likewise, became an empirical phenomenon of variation in brain activity.

Consciousness presents researchers with the problem of explaining when a mental state is conscious rather than not, as well as what the content of a conscious state is, given that everyone’s consciousness is subjectively experienced. From a philosophical perspective, we rely on behavior and introspective testimony about the nature of consciousness, searching for common features from which we can deduce knowledge about it. From the neuroscientific angle, we study the central nervous system and the neural properties of the cerebral cortex. Psychologists Jussi Jylkka and Henry Railo have argued that scientific models of consciousness need to explain consciousness’ constituents, contents, causes, and causal power. Though consciousness has many aspects, this essay focuses on perception. Consciousness can further be differentiated into phenomenal consciousness, the properties of experience that correspond to what it is like to have those experiences, and access consciousness, the availability of a state’s content for use in reasoning and guiding behavior.

The global neuronal workspace theory explains a mental state as conscious when its content is accessible to systems related to memory, attention, and perception. Accessing a state means the brain can use its content in computations and further processing. Consciousness resides in these accessed states, carried by the cortical structures involved with perceptual, mnemonic, attentional, evaluational, and motoric systems. Whatever neurons are involved in someone’s current state constitute the workspace neurons, and they activate such that neural activation spreads between workspace systems. It is tempting to say the cortical workspace network correlates with the phenomenon of consciousness itself, especially since imaging results can show which areas of the brain activate when a subject is conscious, but this correlation does not tell us whether the brain activity belongs to phenomenal or to access consciousness.

Another approach, recurrent processing theory, ties perceptual consciousness to processing without a workspace, focusing instead on activity connecting sensory areas of the brain. It explains consciousness through first-order neural representation: perceptually representing the content of a mental state just is perceiving that content. Interconnected sensory systems use feedforward and feedback connections; feedforward connections, such as those leaving the first cortical visual area, V1, carry information to higher-level processing areas, while feedback connections return signals to earlier visual areas. Global workspace theory and recurrent processing theory differ in the stages of visual processing they depend upon. The four stages of visual processing are superficial feedforward processing (visual signals are processed locally in the visual system), deep feedforward processing (the signals can influence action), superficial recurrent processing (information travels back to previously visited visual areas), and widespread recurrent processing (activity spans broad areas, as in global workspace access). According to recurrent processing theory, recurrent processing at the third stage, superficial recurrent processing, is necessary and sufficient for consciousness; for the global workspace theory, it is the fourth stage, widespread recurrent processing.
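The contrast between the two theories can be summarized schematically. The snippet below is only an illustrative encoding of the four stages just described and where each theory locates the threshold for consciousness; the labels and function are this essay’s invention, not anyone’s published model:

```python
# The four stages of visual processing described above, in order.
STAGES = [
    ("superficial feedforward", "visual signals processed locally in the visual system"),
    ("deep feedforward", "signals can influence action"),
    ("superficial recurrent", "information returns to previously visited visual areas"),
    ("widespread recurrent", "activity spans broad areas such as the global workspace"),
]

# Index of the stage each theory takes to suffice for consciousness.
SUFFICIENT_STAGE = {
    "recurrent processing theory": 2,  # superficial recurrent processing
    "global workspace theory": 3,      # widespread recurrent processing
}

def is_conscious(theory: str, stage_index: int) -> bool:
    """Does the given theory count processing at this stage as conscious?"""
    return stage_index >= SUFFICIENT_STAGE[theory]

print(is_conscious("recurrent processing theory", 2))  # True
print(is_conscious("global workspace theory", 2))      # False
```

The disagreement between the theories thus reduces, on this schematic view, to a one-step difference in where the threshold sits.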

Neuroscientist Victor Lamme has argued that superficial recurrent processing is sufficient for consciousness because features of widespread recurrent processing are also found in superficial recurrent processing. Recurrent processing is found in both stages, and the global neuronal workspace theory allows superficial recurrent processing to correlate with widespread processing. Lamme believes that, in response to visual stimuli, there is first a fast feedforward sweep of processing proceeding through the cortex. This first stage is nonconscious. Only a second stage of recurrent processing, when earlier parts of the visual cortex are activated once more by feedback from later parts, is taken to be conscious, a claim Lamme again supports empirically.

Another approach to consciousness holds that one is in a conscious state if and only if one represents oneself as being in that state. If one is in a conscious visual state of seeing a moving object, one must represent oneself as being in that visual state. The higher-order state represents the first-order state of the world, and it is this higher-order representation that makes the first-order state conscious: one must be aware of a state for it to be a conscious one. This lets neuroscientists correlate empirical work on higher-order representations of states with prefrontal cortex activity. On some higher-order theories, one can be in a conscious state by representing oneself as being in it even if there is no corresponding visual system activity.

Neuroscientists can test higher-order theory empirically against other accounts, but neurologist Melanie Boly has argued that individuals with the prefrontal cortex removed can still have perceptual consciousness. This may suggest prefrontal cortical activity is not necessary for consciousness, though one may counter that the experiments did not remove all of the prefrontal cortex, or that the prefrontal cortex is necessary but operates within a more complicated system than previously suggested. Psychologist Hakwan Lau and philosopher Richard Brown have used experimental results to suggest consciousness can exist without the corresponding sensory processing, as predicted by some higher-order accounts.

Finally, the Information Integration Theory of Consciousness (IIT) uses integrated information to explain whether one is in a state of consciousness. Integrated information is the effective information the parts of a system carry in light of the system’s causal profile. If the information a system carries is greater than the sum of the information of its individual parts, then the information of that system is integrated information. IIT holds that integrated information makes a neural system conscious, and the more integrated information there is, the more conscious the system is. Neuroscientist Giulio Tononi has argued the cerebellum has a low amount of consciousness compared to the cortex because it has far fewer connections, even though it has more neurons. IIT suffers from treating many things as conscious even when they do not seem to be. Tononi has proposed that a loop connecting the thalamus and cortex forms a dynamic core of functional neural clusters, varying over time. This core is assumed to integrate and differentiate information in such a way that consciousness results.
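The core claim, that a whole can carry more information than the sum of its parts, can be sketched with a toy calculation. This is only an entropy-based illustration of integration (the mutual information between two parts), not Tononi’s actual Φ measure, and the state history below is invented:

```python
import math
from collections import Counter

def entropy(states):
    """Shannon entropy (in bits) of an empirical distribution over states."""
    n = len(states)
    return -sum((c / n) * math.log2(c / n) for c in Counter(states).values())

# Invented state history of a two-part system: each sample is (part A, part B).
# The parts are correlated: they usually take the same value.
history = [(0, 0), (1, 1), (0, 0), (1, 1), (0, 1), (1, 0), (0, 0), (1, 1)]

h_whole = entropy(history)  # joint entropy of the whole system
h_parts = entropy([a for a, _ in history]) + entropy([b for _, b in history])

# Positive when the parts taken separately "carry more" than the whole,
# i.e. when knowing one part tells you something about the other.
integration = h_parts - h_whole
print(round(integration, 3))  # 0.189 bits of integration
```

On this crude measure, a system whose parts are statistically independent scores zero, while the correlated toy history scores positive, mirroring IIT’s emphasis that connectivity rather than sheer neuron count matters, as in the cerebellum-versus-cortex comparison above.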

Neuroscientist J. H. van Hateren has presented a computational theory of consciousness in which the neurobiology of the brain lets it compute a fitness estimate through a specific inversion mechanism that also causes the feeling of consciousness. He conjectures that consciousness is a transient and distinct cause the individual produces when preparing to communicate, externally or internally. Citing the thalamocortical feedback loop, he argues that the internal variables involved in this process estimate the individual’s evolutionary fitness.

Despite how flashy it sounds to say researchers can completely understand consciousness, the challenges neuroscientists and philosophers face mean things are far from figured out. There remains a lot to be discovered and examined from both scientific and philosophical angles.

A metamodernist narrative of genetic engineering


With the ethical concerns raised by gene editing of human embryos, academic ethics research has set the foundation for discussing the bioethical threats mankind faces. Alongside artificial intelligence (AI) and related issues such as data-science privacy and the power of social media, the steps toward baby manufacturing are illustrated through a mix of modernist and postmodernist ideologies and require a revised notion of a biological-digital autonomy that can account for the changing self. The CRISPR-Cas9 gene editing technology has already shocked and disgusted scholars in science and philosophy around the world. Questions of how much of who we are we should be able to change, and what we should do with the rapid power of artificial intelligence on the horizon, have taken center stage. With the newfound metamodernist approach to science, reality, and existence, we step into gene editing the same way we jump off the deep end of a lake and hold our breath until we rise to the surface.

The oft-repeated truism that “science is moving so fast that ethics just can’t keep up” couldn’t be farther from the truth. Setting aside the baseless assumption that science and ethics are racing one another, the scientistic idea that philosophers and ethicists have not addressed the power and potential of science disregards decades of ethics research on genetic engineering. The claim also treats science as an uncontrollable force that must be braced against because we can’t do anything to stop it. Mankind does not have complete control over nature, but the notion inaccurately portrays mankind as weak and vulnerable to the world, when we can take a metamodernist approach that rests somewhere in between. Researchers in ethics have been paying close attention all along.

Society and individuals have been shifting from postmodernism into metamodernism. Through gene editing, we create the self as something between a postmodernist and a modernist notion of reality. As opposed to the postmodernist tradition that nothing is real and the modernist one that reality lies out there beyond media, language, and symbols, in metamodernism we hold that reality sits somewhere in the middle. We are both modernist believers in the power of science and technology and postmodernist skeptics of the reality we find. Genetic engineers have begun using pluripotent stem cells, which have the same properties as embryonic stem cells but come from manipulating ordinary adult cells rather than destroying embryos, and which are more effective in providing dozens or hundreds of offspring per parent cell. As more stem cell research goes into how male sex cells can be derived from female cells and vice versa, this could even allow single parents or same-sex couples to produce biological children. Researchers have even predicted scenarios in which children result from the DNA of more than two biological parents, known as “multiplex parenting.”

Recent success in both cloning and CRISPR technologies has let scientists better understand the embryology and developmental physiology of human embryos, building on pluripotent stem cell advancements and in vitro fertilization (IVF). We must warn of the issues that may arise as stem cell reproduction methods gear toward manufacturing embryos for desirable traits. Couples who choose to keep their unwanted embryos frozen, or to donate them to further research or to other couples, need to be aware of how those embryos are being used in order to assess their share of responsibility in stem cell research.

This stem cell method has the advantage that the daughter cells closely resemble the adult ones, and researchers have posed solutions for the issues of eugenic control that would result. We can critique these ideas for their shortcomings in characterizing the eugenics movement: they do not thoroughly emphasize the social forces that would govern how individuals are manufactured. To address the issues raised by gene editing, we need a deeper, more multidimensional view of the moral problems raised by eugenic control, one that accounts for the changing self and reality in a metamodernist world. We can engage these subjects through personal narratives and humanized ideas of who we are that embrace ethics and confront the threats of existentialism.

Researchers have, however, brought up solutions that derive from dangerous principles of eugenics and extreme notions of individual autonomy. In the transition to metamodernism, these solutions prevent mankind from pushing back against the looming threat of a full-fledged surveillance state and instead disregard the idea that a particular line of research can be inherently morally wrong. Transhumanist thinkers such as philosopher Nick Bostrom (who has also warned about the threat of superintelligence) propose using stem-cell-derived sex cells to perform eugenic selection for intelligence, partially as a method for combating superintelligent AI. If humans can replicate natural selection on a group of embryos over the course of several generations, the thinking goes, they can produce the most intelligent humans possible. This eugenics approach isn’t uncommon, either. Ethics professor Nicholas Agar wants prospective parents to choose how they can improve their children in a “liberal eugenics” fashion. This sort of scientific perfection fails to capture how these humans would relate to the rest of society, given that it attacks the fundamental ideals of community and sharedness. Bostrom does suggest there may be religious or moral grounds to prohibit this method of creating genetically enhanced children, but claims that having children at a disadvantage to those around them would cause everyone eventually to pick up the technique. This reasoning rests not on morally reasoned principles or the virtues of humanism and research, but on a dangerous egalitarian notion of human success and morality driven by competition, one that forces those who disagree to accept the technique. Much the same way individuals would rapidly evolve under Bostrom’s scheme, the entirety of society would have to follow suit regardless of choice.

Bostrom’s idea also doesn’t recognize how natural selection and evolution actually work. Like natural selection, this method wouldn’t automatically and instantly choose the most optimal DNA. Instead, it would choose a heritable trait without regard to the underlying DNA, the same way nature influences which traits are optimal for survival and reproduction. Yet Bostrom’s method relies on knowing which parts of the DNA are responsible for the trait, and those could be edited directly without the need for cycling through generations. Besides, the genetic basis for these traits has been shown to be limited: the traits themselves are a complicated amalgamation of environmental interactions, genetic pathways, the epigenetic factors that activate throughout an individual’s lifetime, how nature would “select” for certain traits, and the resulting phenotypes. Selection alone wouldn’t dictate how a superintelligent human might emerge. The usual problems of artificial selection, its proneness to error and the natural selection inherent within it, raise concerns as well. All of this rests on the inhumane assumption that we may find human perfection through genes, regardless of how an objectified individual may control their own fate and what right they have to do so. Indeed, the vacuous claim that “science is moving so fast that ethics just can’t keep up” measures only unethical, unjustified notions of how fast science is moving to begin with.

The ethics of some reproductive technologies become blurrier in light of this newly complex understanding of heredity’s cross-currents. A maternal surrogate, for example, will likely exchange stem cells with the fetus she carries, opening the door to claims that baby and surrogate are related. If the surrogate later carries her own baby, or that of a different woman, are the children related? Parenthood becomes even stranger with so-called mitochondrial-replacement therapy. If a woman with a mitochondrial disorder wants a biological child, it is now possible to inject the nucleus of one of her eggs into a healthy woman’s egg (after removing its nucleus) and then perform in vitro fertilization. The result is a “three-parent baby,” the first of which was born in 2016.

More established voices in bioethics, such as Stanford law professor Henry Greely, have made similar arguments. Greely has argued that insurance companies and government agencies can help fund effective DNA sequencing methods to fight genetic disease. In his book The End of Sex and the Future of Human Reproduction, he predicts how we may perceive and judge the potential of stem cell technologies. He notes he wouldn’t ban embryos created from a single parent, but would still require pre-implantation genetic diagnosis to select for optimal offspring. Above all, Greely emphasizes that these techniques should be closely scrutinized by a standing commission grounded in principles all people would believe in. What principles all individuals would agree upon, such as the four common principles of medical ethics (autonomy, beneficence, non-maleficence, and justice), is up for debate. The principles of parenthood should be upheld even in extreme future scenarios that select for the most desirable traits to the point that biological parenthood becomes meaningless. We must protect every notion of humanity embodied in our current methods of reproduction, both biological and artificial, to address these issues.

With the growing threat of AI, namely that computers may become more and more human-like, our notion of autonomy should reflect how the self has been changing through these innovations. The self can be changed artificially much the same way robots and computers are programmed, but selves are not so completely fragmented that humans share nothing with one another. These stem cell methods can give humanity a more unified individualistic self that, when appropriately regulated, allows even modified individuals to exercise appropriate rights and responsibilities. Greely’s ideas still disgust, though, in suggesting that market-driven factors should influence human reproductive choices. The principles that doctors and scientists stand upon are far too likely to become cold, calculated treatments for dehumanized problems. Professor of public health Annelien L. Bredenoord and professor of bioethics Insoo Hyun argued in “Ethics of stem cell-derived gametes made in a dish: fertility for everyone?” that multiplex parenting will shake the very notions of responsibility and autonomy, more so than other reproductive techniques. It will disgust individuals, a reaction explored in professor of philosophy Martha Nussbaum’s book Hiding from Humanity: Disgust, Shame, and the Law. Physician Leon Kass noted there is wisdom in this repugnance in his paper “The Wisdom of Repugnance: Why We Should Ban the Cloning of Humans.” No amount of sociological or psychological research into the well-being of multiplex children can prevent the natural sense of disgust we feel at this idea, and with good reason. We must hold onto this disgust and other aesthetic, physiological responses and assess them to the extent to which they provide us with moral clarity. From there, a metamodernist view of gene editing can take shape.
Writer Carl Zimmer noted in She Has Her Mother’s Laugh that he doesn’t presume to make ethical judgments about procedures such as multiplex parenting, but warned that “informed consent” in such cases can be unexpectedly difficult to determine.

Reading, learning, and writing about these issues is the first step. Learning more about science and technology in this age helps spread humanism and fight ignorance. These issues need to enter the sphere of public debate and discussion, rather than being governed only by scientists, ethicists, and philosophers. We need laws in place to prevent catastrophic consequences long before they occur. In our metamodernist society, we need not reject science and technology entirely. We may remain skeptical of the notions of progress and reality, but only to the point where we can begin a new direction for scientific research.

Given the existential crises of genetic engineering and artificial intelligence, we may imagine a moral society through personal development and psychological growth in wrestling with and understanding these struggles. We need a humanized notion of reproduction to address the psychological needs of individuals in a society that has the power of gene editing. We can create a grand narrative, an overarching worldview connecting all humans to one another, but hold it lightly enough to recognize the limits of what we know and should do. We can create an idea of “reality” and methods of understanding the world that respect scientific research while still questioning the authority of problematic research techniques. By creating such a “reality,” we can embrace the truth that we can build a moral society that determines who we are despite the changing self brought upon us by genetic engineering and the digital age. We can determine what is honest, authentic, and true without being cynical, showing contempt for the beliefs and sensibilities of others, or turning to eugenics for solutions. Metamodernism, as it spreads to all areas of life, means our scientific research should seek elegant, morally refined methods that we embrace for knowledge.

Only then will we come closer to our selves in a metamodernist future. Intellectual work countering the doomsday dystopia scenarios of the future can involve selecting for desirable traits in offspring through trustworthy, verified methods that acknowledge the rights and responsibilities of the individual. We must remain skeptical of harmful progress, but remain open to gene editing technologies insofar as they may help mankind without raising ethical concerns.