We live in strange times. In the public sphere, we talk more than ever about ‘truth’ and ‘facts’, and yet in a civilisation that has developed some pretty good methods of getting at the truth, particularly in the sciences, medicine and law, nothing is more contested today. We have at our disposal the internet and other technologies that one might have expected to lead to ever-greater popularisation of science and the arts, enabling a concomitant shift to considered, rational debate of topics in politics and society. However, the same tools have turned out to be just as good at purveying misinformation, diatribe and invective, and our collective ability to solve social problems or even talk about them appears to be quickly eroding rather than improving. Of course, the tools of the information age are not innately at fault; they are merely amplifiers of states of mind.
The primary symptom has been a steady replacement of objective thinking by emotional subjectivism, to the point where the lack of objectivity may be seriously harming our societies and democracies. This is not just a simple psychological or logical error occurring in the minds of certain individuals, but the result of a culture, first academic and more recently popular, that rejects a reasonable relationship with reality in favour of endlessly contestable stories and arbitrary social categories. Major political decisions such as Brexit and the election of Trump as US president have been made on the basis of extremely thin justification, by a voting public seemingly more interested in venting resentment than in choosing sensible paths for the future. This creates the danger not just of being out of touch with reality, but of constructing new social and political pseudo-realities with no foundations. In the most extreme form, what we think has become how we feel.
In this subjective world, objectivity may be replaced at any level, including the most mundane. In the UK, the labels of products sold in the major supermarkets now carry phrases such as ‘tart and refreshing’ (blended juice) and ‘sweet and crunchy’ (snap peas), where they used to just name the food in the packet. When did marketing people decide that the name of a product should be replaced by how it tastes? In the education system, the imperative of never hurting anyone’s feelings has led to a culture of gold stars for everyone just for turning up, to the demonisation of any kind of punishment as cruelty, and to the dismissal of streamed classes as elitist. It has become fashionable to reject basic science, as we see in the anti-vaxxer and climate change wars. But the pro-science side’s diatribes against the scientifically illiterate often have an ideological ring of their own (a sign of scientism). How is a person with no scientific training to know the actual truth about anything when every ‘fact’ is claimed by one or other camp in these wars? If there can be no unclaimed facts, there can be no unclaimed minds.
In higher education, university student groups routinely demand the ‘no-platforming’ of speakers whose views they find offensive; law professors are told they cannot use the word ‘violate’ in lectures because it may offend victims of rape; English professors are told they must provide ‘trigger warnings’ on texts as horrific as The Great Gatsby and Mrs Dalloway; academics generally wonder how they can teach challenging ideas when students now demand ‘safe spaces’ from such ideas. The latter concept is so strange that it puts the likes of Slavoj Žižek, Noam Chomsky and Ann Coulter on the same side. In the social sciences, discussions about slavery and colonisation have become dirges of white liberal self-flagellation, with no regard to objective facts such as differences in technology, the existence of indigenous war and slavery, and the complete absence of democracy, and therefore of representativeness, behind colonial invasion.
The political sphere, where the quality of discourse was already catastrophically poor in recent decades, has become a surreal cartoon show of nearly fact-free extremist ranting in the Trump/Brexit era. Decisions about immigration in Europe or the future of Greece, or the Euro itself, simply can’t be openly discussed in evidential terms. Indeed, the contemporary pinnacle of emotional subjectivism must surely be the inability of Western countries to rationally discuss the topics of immigration and Islam. If a question or criticism is raised with respect to the first, the speaker is castigated as a racist; with respect to the second, as an ‘Islamophobe’.
The result of the culture of emotional subjectivism is an extreme polarisation of discourse on any topic into opposing ideological camps, each specialising in its own brand of insults and leaving no space between them for those interested in calmly and objectively talking a question over.
In this blog, I aim to examine some of the symptoms and causes and to consider ways forward. Much from philosophy, psychology and other disciplines may be brought to bear on these topics, and I’ll include some references along the way, but as the intention of this blog is practical rather than academic, such discussion will generally be brief. Nevertheless, I’ll use a few key philosophical terms, which I’ll take the time to explain in this first post.
One very useful tool comes from the school of scientific realist philosophy, whose position is that a) there is a mind-independent reality and b) it is possible, for most practical purposes, to approximately determine its nature. I’ll take this stance as a given, since regardless of objections from academic philosophers, it’s the only tenable one given the reality of mobile phones, modern medicine and mag-lev trains. The tool is simply the concept of grounding, which roughly means that the terms used in statements, for example in a newspaper, a YouTube clip or a medical theory, refer to real things. ‘Terms’ essentially boils down to references to entities and to relationships between entities; grounded terms are said to have referents, i.e. targets in the real world.
The sentence “smoking causes cancer” refers to three things in the real world – ‘smoking’ (meaning habitual smoking), ‘cancer’, and a causal relationship between them, today accepted as real thanks to decades of medical research since Richard Doll first sounded the alarm. We can thus say it is ‘grounded’. A sentence doesn’t have to be true to be grounded. The statement “mothers who don’t breastfeed are bad” clearly refers to mothers and babies in reality, and to a common public notion that not breastfeeding is bad motherhood. However, there are many reasons (good and bad) why some mothers don’t breastfeed, and it’s easy to show that babies who were not breastfed turn out, generally speaking, just as well as those who were (if this were not so, failure to breastfeed might be illegal). So the statement is highly debatable, despite being grounded, which is indeed the point: one can only have meaningful debates and do meaningful investigations about grounded propositions.
The notion of groundedness can be juxtaposed against a different kind of philosophy (some would say pseudo-philosophy): postmodernism. That school of thought is based around the idea that assertions about reality are not reliable, and that the only things we can work with are texts, narratives and stories; ‘facts’ are really just socio-cultural constructions. In the modern world, narratives come in all forms, including film, music, and everything we watch or read online, as well as historical documents. In this view, we are supposed to see the world as consisting of competing narratives, each with its own local truth, none privileged over the others, all to be respected. This might sound like a good approach for a cop interviewing the drivers of cars involved in a traffic accident, but in postmodernism, narratives also include science, religion and myth, with none allowed to claim any better relation to the truth than the others. Consequently, there are people who believe that fundamentalist creationism should be taught in schools on an equal footing with evolution as a theory of how the natural world came to be. The typical justification given is that “children should be allowed to question everything” – coming from religious fundamentalists, a somewhat amusing irony. Elsewhere, there are social scientists who argue with straight faces that gender preferences and even gender itself are solely social constructs with no biological reality behind them.
The interesting thing about narratives treated by postmodernism as valid is that while a narrative may cohere internally, when compared objectively against reality many of its terms, and particularly its references to relationships, are not in fact grounded – they don’t refer to anything real. An example comes from the 2014 Ken Ham v Bill Nye creationism-versus-evolution debate, in which Ham repeatedly refers to ‘all the species of dogs [coming from the one original dog pair on Noah’s Ark]’. But his notion of ‘species’ is spurious and doesn’t refer to anything real: all the dogs he has in mind are just one species – Canis lupus. There is a good reason why Labradoodles exist – because they can. The postmodernist perspective does however teach us something very important: to regard narrative (in any mediatic form) as a real thing – something to be sceptically inspected rather than assumed to be grounded.
Two more useful concepts from philosophy are the ontological and epistemological perspectives on human knowledge. The ontological view consists of descriptions (sometimes formalised and computable) of the way we understand the world to actually be – particularly of the kinds of things in it and their defining characteristics. An example of such a description is a database of species of living organisms, arranged in the well-known hierarchy originally described by Carl Linnaeus. Reliable ontologies consist of reliable facts about kinds of entities and relationships found in the world – known as universals by philosophers – such as chemical reactions, animal biology and weather processes. They result from extensive scientific study of numerous examples of some entity or process. We can think of the ontological view of the world as impersonal textbook knowledge; in other words, the way the world is as best we know it today.
Where ontological knowledge has been developed by application of the scientific method, we can usually trust it, since it will normally indicate holes in knowledge rather than make the unfounded claims common in the pre-scientific era, such as those concerning ‘phlogiston’ and ‘aether’. A good example comes from genetics. Until the early 2000s, non-coding DNA (the DNA that does not directly code for the proteins making up your body, or phenotype) was called ‘junk DNA’, but in the last decade research has found that significant amounts of it act to regulate DNA-to-protein transcription. Genetics textbooks until about 2004 didn’t pay much attention to junk DNA, other than to make vague but nevertheless correct statements that it didn’t appear to have any function, or that its function was unknown. Recent textbooks have started filling in those holes with the results of epigenetics research. Ontological knowledge is thus not perfect, but it does a reasonable job of identifying its own limitations.
The epistemological view is quite different: it relates to what we think we know in a particular situation, about an event or thing. In contrast to the ontological viewpoint, which relates to universals, the epistemic viewpoint relates to particulars: this mouse rather than the species Mus musculus; this shooting rather than the category ‘accidental gunshot injuries’. Epistemology is about the ways such immediate knowledge may be obtained, how we come to know anything, how it may be justified, and the kinds of errors that can render our knowledge fallible or even just plain wrong. It distinguishes various sources of knowledge, including the empirical (external evidence from the senses), introspection, emotion and faith – categories that must clearly be treated quite differently in the public sphere. The epistemic viewpoint is of practical importance in journalism and other kinds of reportage, where phrases like “he said …”, “they believe that …” and “scientists now think that …” are used in an attempt to convince the reader to believe or disbelieve whatever follows, depending on whether the source is implied to be credible. This construction can be called an ‘epistemic wrapping’.
We can connect these two types of knowledge to understand the basic function of reasoning, or what most people would understand by the term ‘rational thinking’. Formally, reasoning is inferencing, that is, inferring a conclusion from premises, such as the following:
- A: medical errors in hospitals are more common on weekends [in the UK] (an ontological claim)
- B: Diane is booked for admission to hospital on Saturday (an epistemic claim)
- => Diane is more likely to experience a problem than if she were admitted on a Wednesday (an epistemic inference)
The above illustrates a very common reasoning structure. One of the premises (B) is a specific fact about a particular event, whereas the other (A) is a statement of a universal nature, expressed in (implied) statistical terms. Assuming we believe B uncontroversially, if we also believe A, we should agree with the conclusion. However, A is likely to be contestable and also non-uniform – it might be that only hospitals in the north of the UK, or only A&E wards, show this deviation; if Diane is going to the cardiac ward of a hospital in Kent, the conclusion probably doesn’t hold.
This is an example of inductive inferencing. The key feature is the comparison of a (claimed) fact about a particular (something from the epistemic realm) against a (claimed) truth about a category (something from the ontological realm) to obtain a particular conclusion. The latter ‘truth’ has been obtained due to numerous instances of the same conclusion being associated with the same kind of particular in the past – the previous ‘N’ instances. The current occurrence of the same kind of particular constitutes the N+1st instance, implying that the same conclusion can be drawn this time as well, which is the essence of induction. This is the only way reliable opinions about real world phenomena can be formed – by comparison of reliable facts against reliable general knowledge. Unreliable opinions (arguably the majority found in opinion columns and social media these days) are typically formed when either the facts or the general knowledge are not reliable, or when the reasoning process is faulty.
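To make the dependence on premise A concrete, here is a minimal sketch in Python. The error rates in it are invented purely for illustration (they are not real NHS figures); the point is only that the inductive conclusion about Diane stands or falls with whether the general statistical claim actually applies to her particular subgroup.

```python
# A minimal sketch of the inductive step above, with made-up illustrative
# rates. The conclusion about Diane only follows if the weekend effect
# claimed in premise A holds for her particular ward.

# Hypothetical error rates per 1,000 admissions, broken down by subgroup.
error_rates = {
    ("A&E", "weekend"): 4.1,
    ("A&E", "weekday"): 2.9,
    ("cardiac", "weekend"): 2.2,
    ("cardiac", "weekday"): 2.2,   # no weekend effect in this (made-up) subgroup
}

def relative_risk(ward: str) -> float:
    """Weekend risk relative to weekday risk for a given ward."""
    return error_rates[(ward, "weekend")] / error_rates[(ward, "weekday")]

# Diane, admitted on Saturday: the inference only adds risk if her ward shows the effect.
for ward in ("A&E", "cardiac"):
    print(f"{ward}: weekend/weekday risk ratio = {relative_risk(ward):.2f}")
```

If the weekend effect turned out to exist only in A&E wards, the inference about Diane’s cardiac admission in Kent simply wouldn’t go through, even though both premises looked reasonable at first sight.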
Inductive reasoning may be compared with deductive reasoning, in which the conclusions necessarily follow from the premises, as in mathematics, logic and Sherlock Holmes stories. In deductive inferencing, premises referring to categories (such as ‘all humans are mammals’) have the status of scientific laws or mathematical rules. We are thus entitled to consider a conclusion incontrovertible, or approximately so, if the general premise is one of: a) an axiomatic rule (such as from mathematics or a language grammar); b) a law of physics (e.g. Newton’s law of gravitation); or c) an extremely solid scientific theory (e.g. the theory of type I diabetes mellitus, or that of evolution). Such premises are either defined to be true or, in the last case, have the strength of a natural law due to being a theory with solid explanatory and predictive power.
More typically however, general claims about the social, political and economic spheres of life are not underpinned by any proper theory, and indeed, as in the example above, take the form of statistical statements with no currently available explanation of cause. In fact, in the majority of situations where no axiom, physical law or theory can be brought to bear, most of our efforts at reasoning are centred around finding explanations for particular results.
This brings us to a particular kind of inductive reasoning called inference to the best explanation, or more formally abductive reasoning, a term coined by C. S. Peirce. It is the most common kind of reasoning used in everyday affairs to derive explanations of events. In terms of the structure above, premise A would be considered an explanation of the conclusion. The term ‘best explanation’ implies that in any situation there can be a number of possible explanations (none of which are axiomatic or law-like), and that we should be in the habit of considering all of them and then determining which is most likely on the existing evidence, which is often statistical. The better TV detectives such as Columbo and Wallander always consider multiple explanations, and usually solve the crime by pursuing the one that initially has the least evidential support but ultimately leads the investigation to incontrovertible evidence. Participants in public debates and the media do this less often, and frequently come to faulty conclusions.
The following inference demonstrates the failure to consider explanations other than (A) for the fact (B), and why the conclusion is not valid (a rough numerical sketch follows the list):
- A: Drunk driving causes accidents.
- B: Marco had an accident.
- => Therefore, Marco was driving drunk.
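Put in rough probabilistic terms, the fallacy is that even a strong causal link from drunk driving to accidents does not make drunkenness the most likely explanation of any given accident; the base rates of the competing explanations matter. The numbers in the sketch below are invented purely for illustration.

```python
# Why "drunk driving causes accidents" + "Marco had an accident" does not
# yield "Marco was drunk": competing explanations matter. All numbers here
# are invented purely for illustration.

p_drunk = 0.02                     # prior: fraction of drivers who are drunk
p_accident_given_drunk = 0.010     # accident probability per trip if drunk
p_accident_given_sober = 0.001     # accident probability per trip if sober

# Bayes' rule: P(drunk | accident)
p_accident = (p_accident_given_drunk * p_drunk
              + p_accident_given_sober * (1 - p_drunk))
p_drunk_given_accident = p_accident_given_drunk * p_drunk / p_accident

print(f"P(accident | drunk) = {p_accident_given_drunk:.3f}")
print(f"P(drunk | accident) = {p_drunk_given_accident:.3f}")
# Even though drunk driving raises accident risk tenfold in this scenario,
# most accidents still involve sober drivers.
```

With these made-up figures, being drunk multiplies the risk of an accident tenfold, yet the probability that Marco was drunk given only the fact of an accident comes out at around 17%; the ‘best explanation’ still has to be weighed against the alternatives.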
There is a third step in many thought processes, one which takes on great significance in the public sphere: the act of moral judgment. Moral judgments are made by comparing particular claims to a moral value system; for example, murder is morally wrong according to most moral codes. The chain of information processing is as follows (a schematic sketch in code follows the list):
- obtain facts
- make inferences to obtain conclusion(s)
- compare conclusions to universal moral values to obtain moral judgment.
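As a purely schematic illustration (not a serious model of cognition), the three stages can be laid out as a small pipeline in which a failure injected at any stage propagates through to the final verdict. The scenario and keyword matching below are entirely made up for the sake of the sketch.

```python
# A schematic sketch of the three-stage chain: facts -> inference -> moral
# judgment. A failure injected at any stage propagates to the final verdict.

def gather_facts(source_reliable: bool) -> dict:
    """Stage 1: observation. An unreliable source yields a distorted 'fact'."""
    if source_reliable:
        return {"claim": "the shop raised prices by 5% this year"}
    return {"claim": "the shop doubled its prices overnight"}

def infer_conclusion(facts: dict) -> str:
    """Stage 2: inference to the best explanation (crude and purely illustrative)."""
    return "price gouging" if "doubled" in facts["claim"] else "ordinary price rise"

def moral_judgment(conclusion: str) -> str:
    """Stage 3: compare the conclusion to a moral value system."""
    return "public outrage" if conclusion == "price gouging" else "no moral issue"

for reliable in (True, False):
    verdict = moral_judgment(infer_conclusion(gather_facts(reliable)))
    print(f"facts reliable={reliable}: verdict = {verdict}")
```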
As with the first two steps, serious errors are possible in the third, particularly if the moral value system itself is suspect. When combined, errors in all three stages of the cognitive process often lead to a complete failure of public discourse, resulting in outrage with no basis in reason or fact. Unfortunately for the truth, moral judgment, the last step in a reliable cognitive process, has become the first step in today’s mediatic culture of outrage, with only a passing engagement with facts or solid reasoning.
A recent example illustrates a series of such failures in the real world. In August 2017, a Swiss hotel owner was accused of antisemitism for having posted a sign at the hotel pool reading “To our Jewish guests, women, men and children, please take a shower before you go swimming”, and another notice in the kitchen instructing “our Jewish guests” to limit use of the facility’s freezer to between 10 and 11 am and between 4.30 and 5.30 pm, so as to avoid constantly disturbing staff. These notices, clearly somewhat offensive, seemed to be obvious evidence of an antisemitic hotel owner, and elicited responses including the following:
- condemnation by Israel’s deputy foreign minister, Tzipi Hotovely, describing the notices as “an antisemitic act of the worst and ugliest kind”;
- the Simon Wiesenthal Centre published a letter asking Switzerland to “close [the] hotel of hate and penalise its management”;
- booking.com removed the hotel from its site on the basis of non-discrimination.
In fact the explanation of the hotelier’s behaviour was quite different: the hotel is a favoured destination of Jewish travellers, particularly Orthodox Jews. The manager habitually made the kitchen freezer available to these customers so that they could store kosher food, but probably felt her largesse was being slightly abused and posted what she thought was a polite sign to that effect. One can assume that she posted the sign at the pool in response to (easily identifiable) Orthodox Jewish visitors failing, presumably out of simple ignorance, to observe the very standard European rule about showering before swimming. The only problem was that she did not think carefully about the wording of the signs or how they might be interpreted. [Guardian report].
Understood in the light of the facts, this incident would once have seemed almost comic, and one might have expected an article with a picture of the sheepish-looking hotelier flanked by understanding Jewish guests, making up after an embarrassing misunderstanding. Indeed the Times of Israel did publish a very reasonable article (with a blackly humorous punchline), but no one will find it unless they go looking; in the mainstream press we see only hysteria. There are serious problems here: if the Israeli government, NGOs like the Simon Wiesenthal Centre and a major online hotel booking site react to this silly mistake in the same way as they would react to real antisemitism, how are we to know when real antisemitism is occurring? How can such serious organisations evince outrage so quickly, when the true, unremarkable explanation was openly available? They appear not to be aware of the credibility problem this creates. And how careful must we all be, if such simple mistakes can result in the blackening of one’s name and the possible ruin of a business?
As this incident shows, a lot of what goes wrong in public discourse and the media has to do with failures in gathering specific facts, incorrectly invoking general knowledge, incorrect reasoning, and ultimately unfounded moral accusations.
Another simple example serves to illustrate:
- [some] vaccines contain aluminium as an ‘adjuvant’ (helper compound)
- aluminium is a neuro-toxin in humans
- therefore, being vaccinated (with these vaccines) is dangerous.
The above conclusion is faulty according to contemporary evidence. The main error in reasoning concerns the amount of aluminium in a vaccination, which according to current science is far below a toxic dose. It might be the case that for a very small segment of the population even the tiny amounts of aluminium used in vaccinations are in fact toxic, but to date no study has shown this in any reliable way. Even if it were shown that (say) people with some particular gene had an acute sensitivity to aluminium, it still would not make the above inference true for the vast majority of cases (a toy sketch of this point follows below). It could, however, be used to modify the second premise to something like:
- aluminium is not a neuro-toxin even in tiny doses, except to people with the VAXX gene.
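To make the dose-dependence explicit, here is a toy sketch; every number in it is a hypothetical placeholder (as is the imaginary VAXX gene), not real pharmacological data.

```python
# Toy illustration of why the inference fails for the vast majority even if a
# hypersensitive subgroup existed. All numbers are hypothetical placeholders.

ADJUVANT_DOSE_MG = 0.5          # hypothetical aluminium content of one vaccine dose
GENERAL_THRESHOLD_MG = 50.0     # hypothetical toxic dose for the general population
SENSITIVE_THRESHOLD_MG = 0.1    # hypothetical toxic dose for carriers of the imaginary VAXX gene

def vaccination_risky(dose_mg: float, vaxx_carrier: bool) -> bool:
    """Is this dose above the relevant (hypothetical) toxicity threshold?"""
    threshold = SENSITIVE_THRESHOLD_MG if vaxx_carrier else GENERAL_THRESHOLD_MG
    return dose_mg > threshold

print("General population:", vaccination_risky(ADJUVANT_DOSE_MG, vaxx_carrier=False))   # False
print("Hypothetical VAXX carrier:", vaccination_risky(ADJUVANT_DOSE_MG, vaxx_carrier=True))  # True
```

Even under this contrived scenario, the original conclusion (‘being vaccinated is dangerous’) would hold at most for a tiny subgroup, not in general.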
One might expect that doing more research on small-dose aluminium toxicity would be the obvious way forward. However, a cursory internet search shows that the few researchers who do follow this path are without exception dismissed as anti-vaxxers, and their research debunked in various pro-science blogs. The latter may have it right, but that’s not the issue. The real issue is this: could a respectable scientist embark on such research without fear of being assigned to the anti-vaxxer camp and having her reputation destroyed? My impression is that most would not take the risk.
If we move on to bigger topics such as terrorism, gender politics, immigration and climate change, the vehemence of the entrenched positions only becomes more pronounced, and the fear of honestly sceptical research may be so great as to be skewing science. I’ll get into this question in later posts.
In summary, I would describe the kind of critical thinking needed for useful public discourse as the careful execution, in order, of the three stages of cognitive processing: observation, inference and moral evaluation. In other words, we need to think scientifically.
Thus there are two things I want to look at in this blog: the mechanical failure to think scientifically, and the normalisation of emotional subjectivism that appears to be one of the main causes.
I’ll finish here with an observation that I think is worth keeping in mind at all times: the only enquiry that can be trusted is one conducted by someone with no stake in the outcome.