I take a sip from my coffee mug and lean back as I stare at my writing. Through libraries, coffee shops, hospitals, and other venues, I write and hack away on my laptop. At the intersection of neuroscience and philosophy, I present An introduction to ethics, An introduction to philosophy, and Contextual emergence. I hope these resources prove useful to others.
How can we create frameworks of practical moral reasoning in the absence of free will? Can neuroscience research shed light on how we make moral judgements? What are the general implications of neuroscience research itself? How can we even distinguish the study of the mind from the study of the brain? As neuroscience research develops, the landscape has changed. Researchers are beginning to uncover new knowledge about personal identity, emotions, awareness, and free will, all key pieces in the puzzle of the human mind. These issues, once seemingly alien to science, now come into view in the arena of neuroethics: the ethical issues raised by neuroscience, as well as the neuroscience of ethics itself. As presented by Kathinka Evers, principal investigator at the Centre for Research Ethics and Bioethics at Uppsala University, in her book “Neuroética,” we can investigate a slew of questions born at this interface between the sciences of the human spirit and the natural sciences. In the face of this reconciliation between science and ethics, it should be remembered that writing on neuroethics has faced challenges and struggles. Understanding ethics as “the analysis of the concepts involved in practical moral reasoning” (p. 21) and science, according to Robert Hooke, as “knowledge of natural things, and of all useful arts, manufactures, and mechanical practices, artifacts and experimental inventions” (p. 22), it is easy to come to incorrect conclusions about these ethical issues.
Fortunately, throughout history not all modern thinkers have seen science in this way. As Evers points out, following philosopher Francis Bacon’s view of science as the well-organized and detailed study of nature, science should be much more than the mere scholastic pursuit of knowledge. The sciences have to fulfill a fundamental function, namely, to allow human beings to improve their life on earth (p. 21), an objective that would be difficult to achieve if one insisted on excluding the philosophical, political, moral, and metaphysical questions that arise within the sciences themselves, in this particular case the neurosciences.
Now, although the ethical problems initially raised in neuroscience concerned the practice and use of brain imaging technologies, neuropharmacology, and the interests of researchers and their sponsors, neuroscientific research itself currently also concentrates on constructing the “adequate theoretical foundations that are required to be able to deal appropriately with the problems of application” (p. 28). This establishes a clear distinction between an applied neuroethics and a theoretical neuroethics, the latter concerned with the capacity natural science might have to improve our understanding of moral thinking. We can determine whether the former really matters for the latter by considering both concerns as part of a greater question: whether human consciousness can be addressed in biological terms at all.
It should be mentioned that any attempt to lay out the complete set of ideas that run through neuroethics, and their development, would be foolish. We can still refer to a small but representative set, beginning with the idea of unifying different levels and types of knowledge, drawing on both the techniques and the methodologies of each discipline in order to build bridges. Fragile as they may be, these bridges would allow knowledge from the neurosciences to flow to other sciences and disciplines, integrating that knowledge, in turn, into the conception human beings have of themselves, the world, and morality within a shared theoretical framework (pp. 30, 57). The materialist position, aptly illustrated and proposed in chemistry by French philosopher Gaston Bachelard in 1953 and extended to present-day neuroscience by neuroscientist Jean-Pierre Changeux, may answer this need. Far from any naive reductionism or (ontological) dualism, we can understand the brain as “a plastic, projective and narrative organ, which results from a sociocultural, biological symbiosis that appeared in the course of evolution…” (p. 69), judging emotion as the characteristic mark of consciousness from an evolutionary perspective.
Next comes a rather striking idea: a neurophilosophical model of free agency that tries to explain how, even though free will is or can be “1) a construction of the brain, 2) causally determined, or 3) initiated unconsciously” (p. 80), it is not something “illusory.” As Evers argues, first, the fact that free will is a construction of the brain does not necessarily mean that it is an illusion; if it is an illusion, it is perhaps for other reasons (p. 86). Second, “causality is a prerequisite for free agency” (p. 88); otherwise behavior would be totally random, and in any case causal determinism does not imply an invariable and necessary relationship between cause and effect, since this relationship can be variable and contingent. Third, although non-conscious processes appear to be far from conscious control, the relationship and influence between the two are “to a certain extent mutual, and not unilateral” (p. 104). Of course, to understand the development and integration of each argument for thinking of free will as “the ability to acquire a causal power, combined with the ability to influence the use of said power” (p. 107), you need to read chapter II of the book, where Evers draws on different authors (Changeux, LeDoux, Libet, Freeman, Churchland, Pinker, Blakemore, Pylyshyn, among others) to recreate the scenario in which all this discussion, and each of her ideas, is situated.
Finally, we note the normative relevance of the neurosciences for understanding the neural bases of the development of moral thinking and behavior. We can mention four closely related innate tendencies that appeared over the course of evolution: 1) self-interest, 2) the desire for control and security, 3) the dissociation of whatever is considered unpleasant or threatening, and 4) selective sympathy. Regarding the latter, the author ventures to say that the human being is a xenophobe with natural empathy insofar as it is “empathic by virtue of [its] understanding of a relatively large set of creatures; but […] sympathetic in a much more narrow and selective way towards the restricted group [into which it is born or has chosen to join]” (p. 132). Although understanding (empathy) can be extended to broad groups (e.g., foreigners), the affective bond that unites human beings is restricted to their closest group; there is an indifference to the foreigner, or to whatever is considered different.
Keeping these innate preferences in mind, there is no doubt about the difficult situation of current moral discussions. It becomes a priority, then, to establish a diagnosis in neurobiological terms in order to intervene in human behavior, recognizing that the structure of the brain determines to some degree social behavior, moral dispositions, and the type of society that is created, although the latter in turn influences brain development (p. 149). At the same time, we can raise the question of the scientific responsibility of neuroscience at the socio-political level in terms of its adequacy (the formulation of real problems), conceptual clarity, and application of methods and techniques, without forgetting its origins and interests. It must be made clear that no categorical imperatives can be derived from a finding or fact (if it is one) of neuroscience: from knowing that we have an innate preference, it does not follow that the preference is right, or that we must conceive of this fact as good or bad.
In short, “Neuroethics” is an excellent introduction both for the uninitiated reader and for professionals in different areas of health (psychology, psychiatry, neuropsychology, medicine) and other fields, such as philosophers, lawyers, and politicians, concerned about the participation of the neurosciences in the understanding of the mind, behavior, socio-cultural organizations, mental health, and education, but above all in the perception of human existence and its future. It may serve as “a critique of neuroscientific reason”: a clear demarcation of the limits of this knowledge and its uses in society, a judgment rendered by the other disciplines, to the extent that knowledge about the brain seems to give neuroscientists a certain power to expand their ideas beyond the laboratory, extending their horizons and their explanatory power into the domains already mentioned. Sometimes this is quite assertive, plotting new research paths; at other times it amounts to attacking different fields of knowledge while ignoring the limits of its own frame of reference, unable to purge its investigations of their own cognitive biases, and responding more to the interests of certain ideologies than to the objective of improving human life on earth.
Memory storage in the language learning process shows how we make inferences about language in our communication.
If our brains were like computers, we could measure the information we process the way we would for a machine. It takes about 1.5 megabytes of memory to learn a language, roughly what a computer needs to save an image, according to researchers Francis Mollica of the University of Rochester and Steven Piantadosi of the University of California, Berkeley.
The researchers estimated how much information we use to learn semantics, grammar rules, word choice, and other language features, calculating the number of bits needed to represent each of them. The majority of the information goes to word meaning, suggesting theories of language learning should focus on meaning rather than on areas like grammar structure. For grammar structure alone, the researchers counted 10^210 possible representations, more than the number of atoms in the observable universe; humans must have powerful methods of inference to reason through so many possibilities, they noted.
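To make the scale of that estimate concrete, here is a minimal back-of-the-envelope sketch of how such an information budget adds up. The vocabulary size is a commonly cited rough figure for adult speakers, and the per-item bit counts are illustrative assumptions of mine, not the paper’s actual estimates:

```python
# Back-of-the-envelope information budget for language learning,
# in the spirit of Mollica & Piantadosi (2019). All per-item bit
# counts below are assumed, illustrative values.

VOCAB_SIZE = 40_000           # rough adult vocabulary size (estimate)
BITS_PER_WORD_MEANING = 300   # assumed bits to pin down one word's meaning
BITS_FOR_SYNTAX = 700         # assumed bits for grammar rules (a tiny share)
BITS_FOR_PHONEMES = 750       # assumed bits for the sound system

total_bits = (VOCAB_SIZE * BITS_PER_WORD_MEANING
              + BITS_FOR_SYNTAX
              + BITS_FOR_PHONEMES)
total_mb = total_bits / 8 / 1_000_000  # 8 bits per byte, 10**6 bytes per MB

print(f"~{total_bits:,} bits = about {total_mb:.1f} MB")
# ~12,001,450 bits = about 1.5 MB, the size of one compressed image
```

Whatever the exact figures, the structure of the sum illustrates the point: word meanings dominate the budget, while grammar and the sound system contribute comparatively little.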
The research holds potential for refining theories of how we process meaning and learn. The study had limitations, such as relying on estimates of the size of the adult vocabulary and simplifying the learning process.
Truth is elusive, nowhere to be found.
Footprint and forecast, through reason and verse,
through scars and marks that style the ground.
Memory and reason, fade to the bland.
Glimpse of light, the sight of truth. We converse
scratched in concrete or scribbled in sand.
From birthmark or gravestone, the discourse abound,
of dialogue, debating, counted controverse,
through scars and marks that style the ground.
Through mystery, the truth we don’t understand.
We pursue a cure, if truth were a curse,
scratched in concrete or scribbled in sand.
It evades, it leaves our own selves earthbound,
Like supernova, particles spread out dispersed,
through scars and marks that style the ground.
The highest of truths, we seek heights grand.
washed like waves, without sleight of hand,
through scars and marks that style the ground,
scratched in concrete or scribbled in sand.
“Only passions, great passions can elevate the soul to great things.” – Denis Diderot, Pensées Philosophiques
I believe the ways we become better researchers come only through self-reflection and meditating upon the arguments and principles behind what we do – not the simple acts of doing those things themselves. What makes for good work – work we find satisfying, engaging, morally clear, and even effective for whatever purpose or value we put forward – can only come as we contemplate and fully realize the effects of what we’re doing.

As the French philosopher Denis Diderot sought to learn about a variety of fields, from philosophy to art to religion, he advocated strongly for the emancipatory power of philosophy. Overturning the received convictions of the 1700s, Diderot’s Encyclopédie showed that philosophy should trample underfoot prejudice, tradition, antiquity, shared covenants, authority, and everything that controls the mind of the common herd. Much the same way I fell in love with philosophy as an undergraduate student, I took these challenges upon myself. I wanted to figure out what it meant to be a good researcher, no matter the field.

Being a good researcher, whether in science, philosophy, mathematics, or anything else, requires taking apart our notions of skills, talents, abilities, and all the other arguments and claims we put forward about what we do, and re-framing them in appropriate ways to address the solutions we decide upon. I’ve always believed that success requires nurturing these values and virtues in such a way that I can not only prepare for the next step in my life but also address the issues I want to address. This is how I search for a purpose. As I look for these purposes, I attach motives, intentions, and other moral characteristics to them. I don’t rely only on simple purposes like getting into a good graduate school, because I know that’s not the most effective way to work. I need to understand what it means to be good at my craft in general and apply that to what I do.

When Diderot condemned asceticism, he argued for lifestyles in search of pleasure through cultivating passions. In response to the abstinence and celibacy of the priesthood, Diderot argued that the passions our bodies experience drive us to achieve great things. I believe these ways of understanding passion tie inherently into becoming a good researcher; yet as Diderot sought to restructure knowledge itself and attack fundamental beliefs of his society, he was thrown in prison.

Because this task of taking apart what it means to be a good researcher is so arduous and complex, even simple things I do on a day-to-day basis can be incredibly difficult. My methods of thinking through these problems and becoming the best researcher I can possibly be don’t align so neatly with the tasks I’m assigned each day. It simply doesn’t make sense to me that, if I want to become the best researcher I can possibly be, I need to follow the simple directions put in front of me every day. Nor does it make sense that factors such as how many hours I work should be relevant to success when there are far more telling, nuanced factors, such as what effect my work has had on the world. Instead, I absolutely need to take apart arguments and claims about these notions so that I can figure out what it means to be a good researcher.

I notice minute differences in the way we reason about becoming better researchers. These little things can be as small as the difference between asking “What would the best researcher possible do in this situation?” and “What can I do in this situation to become the best researcher possible?” We can see this difference in running a protocol that hasn’t been used before on the grounds that the best researcher possible would do so, versus running a new protocol because it will make me a better researcher. The former shows courage and audacity in trying new things, because the best researcher already has those traits established and would act on them. The latter implies we’re not yet the best researcher but that, if we value the willpower in carrying out the task, performing it would make us the best researcher possible. Each method of reasoning is suited to different purposes and goals in what we do. That’s why it’s essential we understand these methods of reasoning for the purpose of becoming a better researcher.

If my boss tells me, “Do this because you need to do it to get a good recommendation for graduate school,” it’s very difficult for me to convince myself to do that thing. I see that sort of motive as empty, selfish, and even contrary to how researchers should perform. Besides, it becomes trivial and almost nonsensical to reason that “If I do X, Y, and Z, then I’ll get a good recommendation.” A good recommendation cannot be built from actions performed for the sake of getting a good recommendation; there needs to be authenticity and genuine moral agreement in it. Even if it were true that my boss would write about those actions in my recommendation, this still doesn’t show much, as they are things I can write about in my graduate school applications myself. There’s no deeper meaning or theoretical idea my boss puts forward.

As a result of the way I reason through these issues, it’s often incredibly difficult for me to follow simple, straightforward directions, because I’m so busy taking apart the justification, validity, and other characteristics of anything we do so that I can figure out what they should mean. The way I discern these differences in attempting to address these questions has caused me to become confused about what I should do in the present moment. It shows that, even though I’m always trying to be the best researcher I can possibly be, what I do in the present moment is not a direct statement on how good of a researcher I am. What I do in the present moment is a mixture of all of these thoughts about what it means to be a good researcher burning within me.

Not having these methods of discerning these issues took its toll on me. When I was an undergraduate student at Indiana University-Bloomington, I could barely see the purpose in much of my work, to the point where I nearly dropped out. I had faced so many obstacles from other individuals in my attempts to address these issues, and I was discouraged that almost no one else was posing these questions to begin with. My justifications and motivations for doing things in the present moment are complicated, as I’ve explained, due to my interest in these issues.

Challenging the very notion of knowledge itself, Diderot worked with the mathematician-philosopher Jean le Rond d’Alembert to create the Encyclopédie, which they described as a theater of war in which Enlightenment intellectuals desiring social change rallied against the French Church and state. Allowing free thought, especially through atheism, the scholars laid down the fundamentals of fields such as mathematics, physics, and philosophy themselves.
Reasoning through the inquiry and scope of these fields, d’Alembert wrote that memory gives rise to history, imagination to poetry, and reason to philosophy. I continue to turn to philosophy to find truth in science as I work.
The truth is I’ve been struggling with these issues for maybe four years now, and I still struggle with them. They affect me in ways that I detect through everything I do. When I wake up, go to work, contemplate my actions, and even dream while I sleep, I find these questions on my purpose shaking me in ways I can barely articulate.
When I attended the 2019 meeting of the American Association for the Advancement of Science (AAAS), I couldn’t help but feel déjà vu. At my second AAAS conference, I found familiar faces among the scientists and journalists. I also felt the conference’s theme, “Science Transcending Boundaries,” resonating with centuries-old writing that remains relevant to this day.
At the AAAS meeting, Erika Check Hayden, director of the Science Communication program at the University of California, Santa Cruz, and I discussed how science writers should tell stories with history in mind. This would not only let writers put current findings in context, but also transcend the boundaries of research. Looking at the work of philosophers and mathematicians in the 1950s, we can address the ethical issues of automation and predict how artificial intelligence will change the workforce. Referencing 19th-century novelist Mary Shelley’s Frankenstein can warn of the dangers of genetic engineering. I also discussed how engaging the public with history and literature can instill more faith in readers.
I spoke with researchers and journalists about my website A History of Artificial Intelligence, as well as my other writing on scientific history. I mentioned how my work had opened my audience’s eyes to the nuanced, complicated history of science. This can sometimes stand in stark contrast to journalism’s principles of concise, straightforward writing, but, by writing with a historical perspective in mind, scientists and science writers can at least find well-reasoned, humanistic answers to age-old questions. These answers ring true to the lives, virtues, and values that human beings seek to instill within research. A historical account of science lets scientists and writers draw from fields such as ethics, art, and philosophy – a true transcendence of boundaries. Much as Bill Nye and Carl Sagan capture the public’s imagination today, popular science first emerged from the tens of thousands of popular science books published in France throughout the 1700s. Today’s scientists and writers can study this history of science writing to put their roles and purposes in context and transcend boundaries.
Throughout the conference I spoke with journalists, researchers, and other professionals about the best ways to engage the public as a science communicator. Reflecting upon historical works, I spoke with others about how the French author Bernard le Bovier de Fontenelle wrote about science in his Conversations on the Plurality of Worlds in a way a wide audience could understand. Exemplifying the theme “Science Transcending Boundaries,” he introduced readers to Cartesian philosophy nearly a century and a half before the word “scientist” was even coined. I spoke with journalists about the principles of journalism and how they came about through historical events such as the French Revolution and the Dreyfus Affair. Through these events, journalists developed principles of writing in an investigative manner, independent of external forces, that can, in some ways, revolutionize society’s ways of thinking. Around the same time as Fontenelle, French philosopher Voltaire’s poems, short stories, critical essays, plays, letters, and histories covering physics, chemistry, and botany would also redirect future scientific research. Imagining our work in these greater contexts of history gave others a deeper appreciation of their writing and research. With the past in mind, we would speculate on the future of issues such as artificial intelligence and genetic engineering.

With Fontenelle and Voltaire’s writing, scientific books went from being read by hundreds to hundreds of thousands. As intellectualism flourished in 18th-century France, science itself became more professionalized: scientific institutions received more support, and individuals took more distinct professional research paths, re-defining the scientist. In 1795, French philosopher Nicolas de Condorcet advocated scientific reasoning in democratic governance. From the lab bench to the living room, science entered the hearts of the masses and laid the foundation for the intellectual revolution of the Enlightenment to change reason and inquiry itself. Science writers can learn about the purpose and value of scientific research through these historical trends. In learning from Fontenelle, Voltaire, and other historical writers, scientists can put their findings in greater contexts, writers can share more accurate stories of science, and the world can become better for the sake of humanity.
It’s hard for me to remember the time before the internet became such a pervasive part of daily life. I work online to earn money, watch Netflix to relax, scroll YouTube for advice on anything from personal finance to cooking, and read push notifications from my favorite news outlets to keep up-to-date. I’m part of the generation in which proper computer use was taught in school. Our digital literacy began with typing classes in grade school, then turned to learning about the dangers of Wikipedia in high school, and, by the time I was in college, people wrote class papers using the internet more often than physical books from the library. But one area where I think our digital education was lacking is in determining how to spot a ‘credible’ source.
Sure, people have always known that anyone can say whatever they want on the internet, and we’ve all heard that it’s important to question what you read before accepting it as fact. But very little was actually said about how to determine if something is credible, or what to do if you come across websites with suspect information. If anything, this was further confused in college, where only peer-reviewed academic articles were considered credible—a wealth of information that, by and large, you lose access to after graduation.
One solution to this problem is to review how audiences are persuaded in the first place. By understanding how arguments are created, it can be easier to recognize flaws in logic, or failures in the speaker’s character. I’m talking about Aristotle, and his three modes of persuasion: pathos, logos, and ethos. You likely touched on these in school when studying persuasive writing and political speeches, but I don’t think nearly enough emphasis is placed on how methods of persuasion can influence perceptions of credibility, the spread of viral stories, and belief in factually unsound statements. Although all three modes are equally vital for a strong, sound argument, we as human beings are predisposed to focus on some factors more than others. I believe with the rise of misinformation, it’s more important now than ever before to understand exactly how we are influenced by persuasion, and what our weaknesses are in recognizing good arguments.
Let’s start with the easiest, pathos. Pathos is an appeal to an audience’s emotion, whether negative or positive. This encompasses both evoking a particular emotion from an audience and invoking that emotion as justification for a certain behavior or action. There’s a strong connection between emotion and persuasion: in fact, there’s evidence that people naturally include more emotionality in their language when they are trying to be persuasive, even when they are specifically advised against doing so.

Perhaps appeals to emotion are frowned upon as unscientific or misleading, but they’re pervasive for a simple reason: appealing to emotion works. In 2012, researchers at the University of Pennsylvania found that the most-emailed New York Times articles were ones which prompted emotional responses, especially if the emotion was positive or associated with high energy, for example anger or anxiety, as opposed to sadness. A study in 2016 found that when people are forced to make quick decisions about an object, they are more likely to rely on their emotional response to the item than on objective information: a person will quickly classify a cookie as something positive because it makes them happy, while with more time for consideration, they may classify it as negative because it’s unhealthy. Finally, a meta-analysis of 127 previous studies concluded that appeals to fear were nearly always effective at influencing an audience’s attitude and behavior, especially when the proposed solution is seen as achievable and only requires one-time action.

We know that people use emotion to make quick judgements, can be strategically influenced by arguments which appeal to emotion (in particular, fear), and are more likely to share articles which elicit emotion. All of this is strong evidence that emotion is an integral part of how humans perceive and interact with the world. The problem with pathos is that if it is used without logos and ethos, the proposed solution to a problem may not be very effective, and there’s no guarantee that the problem being addressed is even real. For example, in 1998, Dr. Andrew Wakefield purported to have found a link between autism and vaccines. There is no such link, but the report garnered enough fear to spark the anti-vax movement, which is now responsible for the reemergence of preventable diseases like measles and whooping cough. Fearmongering about marijuana in the 1930s led to the drug being outlawed in 1937 and classified under the strictest designation by the Controlled Substances Act in 1971, despite contemporaneous recommendations from within the U.S. government to decriminalize its use. What’s more, there’s evidence that decisions made during stressful situations are less logically sound than decisions made in calm situations. The lesson? People suffer when pathos alone prevails.
Logos is usually framed as the antidote to pathos. An appeal to logos is an appeal to logic: cold numbers, rational solutions, statistical significance. In theory, this sounds great. The problem is, the human brain isn’t wired to think purely logically: we have trouble conceptualizing large numbers, we seek patterns in random smatterings of data points, we’re quick to claim causation where chance or other variables are involved, and we’re easy victims of logical fallacies. Even within the scientific community, there are plenty of examples of seemingly logical, scientific arguments that turned out to be bad science. A paper published in 1971 asserted that women’s periods will “sync up” if they spend enough time together; although this is still widely believed today, it has been thoroughly debunked by the scientific community. More troubling, the idea that low-fat diets are an effective way to lose weight without any regard for sugar consumption was introduced to the American consciousness in 1967, sponsored by representatives of the sugar industry. This no doubt altered the standard American diet and likely contributed to the rise in obesity across the U.S. (although the culpability of the sugar industry is up for debate). Even with good intentions, scientists can make mistakes: in 2018, researchers tried to replicate 21 previously published social science experiments, but got the same results for only 13, all with weaker correlations than in the original studies. While it’s important that scientists review and revise their original conclusions, correcting common misconceptions is difficult once a myth has entered the popular consciousness. Think you’re immune? Check out this infographic.
What’s more, without pathos and a focus on morality, seemingly “logical” solutions can be plain cruel. This is demonstrated beautifully in Jonathan Swift’s satirical A Modest Proposal, in which eating children is proposed as a rational solution to poverty in Ireland. Decisions made with a total disregard for emotions shouldn’t be our gold-standard for sound reasoning, and an argument lacking in pathos can be just as bad as one lacking in logos.
If appeals to both pathos and logos can lead to mistakes in reasoning, where does that leave modern thinkers in their quest for credible arguments? The answer lies in Aristotle’s third mode of persuasion, ethos. Today, ethos is sort of a fuzzy idea; it means both having knowledge about a topic and establishing yourself to your audience as a credible speaker, two things that don’t necessarily go hand in hand. Aristotle himself split the idea into three parts: good sense, good moral character, and goodwill. Good sense, or phronesis, comes from having experience in one’s field, especially with a track record of rational, moral decision making. Good moral character, arete, is gained by practicing virtuous behaviors until they become habits. Finally, goodwill, eunoia, is earned by convincing the audience of one’s knowledge and intentions. Having ethos is vital to a sound argument. In fact, Aristotle grants only three reasons for unsound arguments to exist: either the speaker is wrong due to lack of good sense, the speaker is lying due to lack of moral character, or the speaker is silent, because they don’t care if the audience hears good advice. The problem is, it’s difficult for readers to judge the ethos of a speaker, particularly over the internet. Unlike pathos and logos, the root of ethos comes from outside the argument itself: the audience must know the speaker’s experience (good sense) and moral character to avoid falling for unsound advice. To make matters worse, if speakers want to persuade an audience, they will go through the trouble of appearing credible whether they are offering sound advice or not. Today, that can mean anything from verbally assuring the audience of their credibility and good intentions, to selecting appropriate clothing for a given situation, or even hiring a web designer to make sure content looks clean and professional: all things which signal competence in the modern world. But the appearance of credibility alone isn’t enough to judge a speaker as credible.
Where does that leave us? Finding reliable, credible information can seem daunting at first: emotions cloud our judgement, logic is uncertain, and we often must know whether a speaker has reliable experience in a field without ever having met them. However, with a little practice, I think we can all improve our sense of what’s credible and what’s not. To that end, I offer the following advice: First, after hearing an argument or statement – be it online or in person – consider your own position relative to the piece. How did it make you feel? Does it confirm what you want to be true? Do you have any stake in the events at play? All of these factors cloud judgement, so take extra care when evaluating an argument. Second, consider the logic of the piece. Does everything make sense? Do the numbers add up? Does it align with a wider context, or does it seem out of place? If the argument does not make sense logically – and you have enough knowledge in the field to judge it appropriately – then it probably isn’t sound. Finally, consider the position of the speaker. Do they have experience with the topic being discussed? Do they have a history of honesty? Do they benefit from your support of their argument? If more information is needed to answer these questions, then do some digging. Look to others with more experience in the field to help determine if the speaker can be trusted. If you can’t find enough information to make a solid judgement, consider the source not credible. And when in doubt, err on the side of caution: given that Rhetoric was written in the 4th century B.C., speakers have had a loooooong time to develop ways to manipulate audiences, whether intentions are pure or not!
Sources

Alvergne (2016). “Do women’s periods really synch when they spend time together?” The Conversation: Academic rigour, journalistic flair.

Aristotle. Rhetoric, Book II. Translated by W. Rhys Roberts.

Berger and Milkman (2012). “What Makes Online Content Viral?” Journal of Marketing Research, 49(3): 192–205. For a summary, see Tierney (2010). “Will You Be E-Mailing This Column? It’s Awesome.” The New York Times.
It overwhelms me. It is everything. Waking up, walking outside, working in this world, waiting for time to pass, Everything launches a flurry of questions and thoughts through my head, “Who am I?” “What is my purpose?” “What can I achieve?” They taunt me, vex me, take form of darkness, “What is this world?” “What does it mean for me?” “Why can’t I get out of bed?” They dominate my thoughts and run through my head, control me. And every decision becomes an escape, A reaction to avoid and tune out reality, to ignore the dark truths that may be difficult, Tactics fail. Only addressing these dark fears that govern my thoughts. They seek to control as if they own, fear every mistake amounts to nothing, fear there is no deeper purpose in my life, fear of the terrible decisions I’ve made leading up to now, It all comes tumbling down.