Studying the reliability of thinking and decision-making leads us into the slightly complex world of System 1 (fast) and System 2 (slow) thinking and heuristics. Teaching cognitive biases is straightforward, and less is more. The key point is that we are inclined to base our current thinking and decision-making on past experiences and present perceptions. Our memories distort the past, and the media and our selective attention distort our present, especially if we are being pushed into a fast decision.
Tversky & Kahneman (1974) review a range of their own research testing different heuristics, looking for evidence of how System 1 thinking (effortless, fast, a short-cut to the answer) operates under specific conditions. They describe three heuristics, each of which can lead to cognitive bias.
The representativeness heuristic is based on judging how probable something is by how closely it resembles a familiar category, drawing on prior knowledge rather than on the actual statistics. Even though participants knew that 70% of the descriptions of people they had been given referred to engineers and 30% to lawyers, when faced with a description of a man who could have been either, they judged that there was an equal chance of John being an engineer or a lawyer, ignoring the base rates. Similarly, a shy, quiet person was immediately judged most likely to be a librarian, even though the list of possible occupations included several that were far more statistically probable. This can be seen as the basis for stereotypes – taking a shortcut based on prior knowledge and assumptions.
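A quick way to show why the 'equal chance' answer ignores the base rates is to work the normative calculation through. The sketch below does this in Python, assuming (purely for illustration) that the written description fits engineers and lawyers equally well; the assumption and the function name are ours, not part of the original study.

```python
# Illustrative sketch only (not from Tversky & Kahneman): what a judgement that
# respects the base rates would look like, assuming the description is equally
# consistent with both professions (likelihood ratio of 1).

def posterior_engineer(prior_engineer, p_desc_given_engineer, p_desc_given_lawyer):
    """Bayes' rule for P(engineer | description)."""
    prior_lawyer = 1 - prior_engineer
    numerator = p_desc_given_engineer * prior_engineer
    denominator = numerator + p_desc_given_lawyer * prior_lawyer
    return numerator / denominator

# An uninformative description should leave the judgement at the base rate,
# not at 50:50.
print(posterior_engineer(0.70, 0.5, 0.5))  # 0.7 when 70% were engineers
print(posterior_engineer(0.30, 0.5, 0.5))  # 0.3 when only 30% were engineers
```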
The availability heuristic works by people judging the likelihood of an event according to how easily examples of it come to mind: e.g. a middle-aged man with chest pains might be assumed to be having a heart attack, but a four-year-old child with similar pains would not elicit the same response, because four-year-old children do not tend to have heart attacks and so such cases do not spring to mind. This can lead to bias in diagnosis, as clinicians base their diagnoses on previous examples that come readily to mind; they are cognitively available.
The anchoring (and adjustment) heuristic involves an initial value, or starting point, in an information-processing task shaping the final value arrived at. The researchers asked high school students to estimate, within a few seconds, one of the following products: 8x7x6x5x4x3x2x1 or 1x2x3x4x5x6x7x8. Of course, both answers are the same – 40,320 – since the numbers in each list are identical. What Tversky and Kahneman found was that the descending list (8x7x6 etc.) produced a much higher median estimate than the ascending list (1x2x3 etc.), around 2,250 versus around 512 in the original study. They concluded that the first few values anchored the estimate as either high or low, and that subsequent adjustment away from that anchor was insufficient. This is related to our first judgements about people: if we judge them in a positive light because of their friendly behaviour, this can ‘anchor’ our appraisal of their subsequent behaviour.
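To make the arithmetic concrete – and to show why the first few numbers might act as an anchor when time is short – a brief illustrative snippet follows; the choice to compare the first three partial products is ours, not something reported in the study.

```python
# Illustrative sketch only: both orderings give the same product, but the
# partial product after the first few steps (a plausible anchor under time
# pressure) is very different.
from functools import reduce
from operator import mul

descending = [8, 7, 6, 5, 4, 3, 2, 1]
ascending = list(reversed(descending))

print(reduce(mul, descending))  # 40320
print(reduce(mul, ascending))   # 40320 -- identical, of course

print(8 * 7 * 6)  # 336 -- early partial product of the descending list
print(1 * 2 * 3)  # 6   -- early partial product of the ascending list
```

A student who only gets through the first two or three multiplications before the time runs out is left with a very different anchor from which to adjust.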
Use these examples as the basis for discussing how stereotypes develop, or how diagnoses can lack validity; they are also useful for discussing the laboratory experiment as a method. I am sure students can think of many more examples of how these heuristics can occasionally (not always) distort our thinking and decision-making in real life. But that might take some time and some logical, patient reasoning using System 2 thinking!