January is not the kindest month, but it doesn’t have to be as cruel as we are popularly led to believe (which is good news for those of us whose birthdays fall in this gloomy month – thank you, mum and dad…). Research has shown that depression can be held at bay by engaging not only in physical activity but also in a positive mindset – by simply not giving in to the feeling that ‘it’s all bad’.
Our fabulous textbook, ‘Psychology Sorted, Book 1’, includes research which considers the role of biology – brain chemicals, specifically the neurotransmitter serotonin – in the experience of depression. Book 2 will look in even more detail at the etiology of depression (Abnormal Psychology) and the consequences of depression for health and well-being (Health Psychology). There is some validity to the idea that if you think you’re not going to enjoy something, then you won’t enjoy it. So – here’s to January, the most rockin’, joy-giving month of the year! Just keep telling yourself that and, you never know, you might start to believe it!
Most students and teachers of psychology are familiar with Bandura, Ross & Ross’s classic study into the role of social modelling in aggression.* It showed that children who observed aggressive acts committed by adults in one setting would, through play, reproduce those acts in another setting when the adult role model was absent. Bandura extended this social learning model in the 1980s into what is now a complex and comprehensive social cognitive theory, further developing and exploring concepts underpinning social behaviour: performance feedback, modelling and – most importantly of all – moral disengagement.
Moral disengagement is the process by which we disengage our moral self in order to distance ourselves from our actions. It can be seen in soldiers who need to disconnect themselves from their actions in order to live with themselves, and in us every time we buy our food in non-recyclable plastic packaging. The decision to go to war in a just cause can be a moral one, but it still involves killing fellow human beings. The desire for conveniently packaged food is an understandable one, but it still involves environmentally degrading our planet.
Bandura uses social cognitive theory to investigate our moral disengagement from harmful activities. He applies it particularly effectively to drone warfare and to the arms trade. In class I use it to explain how we dehumanise the homeless in order to ignore homelessness.
For Bandura, it is not enough to explain moral disengagement. He believes that if we can understand the processes underlying it, then we can begin to change them, and this is why he promotes social change through locally-distributed films in Africa, Asia and South America, making the abstract explanations of social cognitive theory concrete to people’s lives. Nearer home, we need to use storytelling and media to keep advertising the environmental dangers of uncontrolled consumption. Psychology has a vital role in social change as well as social explanation, for “As a society, we enjoy the benefits left by those before us who collectively worked for social changes that improved our lives. Our own collective efficacy will determine whether we pass on a habitable planet to our grandchildren and future generations.” (Bandura, 2009)
*For those of you who have already bought our book (thank you!), the description of the study design has been changed from a matched pairs design to a ‘matched triads’ design, as the children were matched by measured levels of aggression across the three experimental groups. The effect is the same: to control for this variable. We will publish this change in an updated edition in the future.
The question of when to use non-human animal studies as evidence for human behaviour is a tricky one. Because it remains unethical to lesion the brain of a live human to look for a correlation between brain damage and behaviour (at the moment!), animal studies are used in large numbers in the biological approach. However, many people are increasingly disturbed by this, as over the years we have come to realise that animals suffer pain, fear and anxiety just as we do, and perhaps other ways should be sought to conduct animal studies.
In Psychology Sorted, this is part of the Biological extension: the British Psychological Society guidelines for working with animals (2012) state that researchers should:
Replace animals with other alternatives.
Reduce the number of animals used.
Refine procedures to minimise suffering.
But isn’t how they are used a large part of the problem? After all, observation under natural conditions should be no problem. Xu et al. (2015) researched naturally-occurring depression in macaque monkeys by observing monkeys living at a research base in China, in environmental conditions closely resembling those they would experience in the wild, for nearly three years. The monkeys were housed in colonies, usually of two males and 16–22 females, with offspring under six months old.
Instead of unnaturally separating infants from their mothers, as researchers such as Harlow did with monkeys, causing distress, Stanton et al. (2015) ‘picked up poo’: they investigated the effect of maternal stress on the glucocorticoid levels of infant chimpanzees by examining and measuring faecal glucocorticoid metabolite (FGM) concentrations of mothers and babies in the wild. Much less stress for the chimpanzees, though maybe not for the researchers!
Bearing in mind that we are animals too, it is time empathy stretched to our non-human cousins, and these methods seem to be a first step on the way. See Psychology Sorted for more examples of ethical animal research.
Bit of a controversial idea, this one, but I’m going to put it out there: is using a questionnaire to gather data at all helpful in the quest to measure behaviour? Lots of psychologists use this research method, probably because:
a) they’re quick to fill in (usually, although I wouldn’t recommend starting Eysenck’s personality inventory unless you have the whole day to spare!)
b) they’re easy to produce and to replicate, particularly in these days of the world wide web and SurveyMonkey
c) they generate quantitative data which can be fashioned into handy little graphs and neat percentages
d) lots of people are aware of and regularly fill in questionnaires, so there’s no danger of participants scratching their heads and muttering, ‘What is this strange and unusual item before me?’
But the above reasons are not really enough to justify the use of a measure that is so darned unscientific, imprecise and prone to the mood of the person filling it in. Social desirability bias, outright lying, deluding oneself, trying to mess with the researcher’s results, not understanding or misinterpreting questions: none of these can ever be fully ruled out when analysing questionnaires. I rest my case: questionnaires are a bit of a cop-out in terms of psychological research. Now, would you like to fill in a questionnaire to indicate your level of agreement with this opinion…?
How we develop our social identity is still a hot topic today, and for those of you studying the effect of technologies, especially social media, on social identity, there is a developing literature on the subject. But we should start with the classic minimal groups paradigm from Tajfel (1971), found in our new book Psychology Sorted, as it is still so relevant today.
The predominant 1960s theory of social identity formation came from Sherif et al.’s (1961) study, which led to the development of his 1966 realistic conflict theory: that competition for scarce resources is the foundation of group (social) identity, and also one cause of conflict. Think of the worldwide competition for water and oil on a large scale, and sporting competitions on a smaller scale. Why do you think schools have ‘houses’, ‘sporting colours’ and ‘house badges’?
However, Tajfel’s research contradicted this, demonstrating that only minimal conditions were necessary for group identity to form. His experiment allocated schoolboys to two groups: the boys thought they had been grouped according to their preference for a painting by either Klee or Kandinsky, but this was a deception and the allocation was random. This perception of belonging to a certain group was enough for the boys to show in-group favouritism when allocating virtual money via a complex matrix of rules. The minimal groups paradigm formed the basis of Tajfel and Turner’s social identity theory, which remains a powerful explanation of in-group favouritism and out-group discrimination, built on three processes:
social categorisation – we understand that people (and things) can be grouped
social identification – we identify with a group
social comparison – we compare ourselves favourably with another group
Social comparison underlies stereotyping, gang fights (though these can also be seen as competition for scarce resources), between-class competitions, girl/boy competition, online identities…how many more can you think of?
Tajfel’s theory can be used extensively across the curriculum: from his lab experiments in the 1970s (research methods), to an argument for the formation of stereotypes (sociocultural approach), to an explanation of how competition and maybe even conflict are generated in human relationships, to how images are cultivated socially on Snapchat, Instagram and (amongst us oldies) Facebook (cognitive psychology). This is an example of a classic theory that can be easily accessed through Psychology Sorted.
Studying the reliability of thinking and decision-making leads us into the slightly complex world of System 1 (fast) and System 2 (slow) thinking and heuristics. Teaching cognitive biases is straightforward, and less is more. The key point is that we are inclined to base our current thinking and decision-making on past experiences and present perceptions. Our memories distort the past, and the media and our selective attention distort our present, especially if we are being pushed into a fast decision.
Tversky & Kahneman (1974) review a range of research in which they themselves tested different heuristics, looking for evidence of ways in which System 1 thinking (effortless, fast, a short-cut to the answer) may operate under specific conditions. They describe three different heuristics, each of which can lead to cognitive bias.
The representativeness heuristic is based on the idea that one event is judged to be representative of other, similar events: we estimate how probable something is according to our prior knowledge of things like it. Even though participants knew that 70% of the descriptions of people they had been given referred to engineers, while 30% referred to lawyers, when faced with a description of a man (‘John’) who could have been either, they judged that there was an equal chance of John being an engineer or a lawyer, ignoring the base rates. Similarly, a shy, quiet person described to participants was immediately judged most likely to be a librarian, even though the list of possible occupations included some that were much more statistically probable. This can be seen as the basis for stereotypes – taking a shortcut based on prior knowledge and assumptions.
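The engineer/lawyer result is an example of base-rate neglect, and the arithmetic can be made concrete with Bayes’ rule. Here is a minimal Python sketch (the function name and the likelihood-ratio framing are my own illustration, not Tversky & Kahneman’s):

```python
def p_engineer(prior_engineer, likelihood_ratio=1.0):
    """Posterior probability that a described person is an engineer.

    likelihood_ratio = P(description | engineer) / P(description | lawyer).
    A description equally typical of both professions has ratio 1.0.
    """
    prior_lawyer = 1.0 - prior_engineer
    evidence_for_engineer = prior_engineer * likelihood_ratio
    return evidence_for_engineer / (evidence_for_engineer + prior_lawyer)

# With a 70% base rate of engineers and a description that fits both
# professions equally well, the base rate should dominate the answer:
print(p_engineer(0.70))  # 0.7 -- yet participants answered roughly 0.5
```

The point for students: rational (System 2) updating leaves the 70% base rate intact when the description is uninformative, but System 1 throws the base rate away.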
The availability heuristic works by people judging the frequency or probability of an event by how easily examples of it come to mind. For instance, a middle-aged man with chest pains might be assumed to be having a heart attack, but a four-year-old child with similar pains would not elicit the same response, as four-year-olds do not tend to have heart attacks. This can lead to bias in diagnosis, as clinicians base their diagnoses on previous examples that come readily to mind; they are cognitively available.
Anchoring bias involves an initial value, or starting-point, in an information-processing task determining the final value arrived at. The researchers asked high school students to estimate one of the following: 8x7x6x5x4x3x2x1 or 1x2x3x4x5x6x7x8. Of course, the two answers are the same, as the lists contain identical numbers. What Tversky and Kahneman found was that the descending list (8x7x6 etc.) produced a much higher estimate than the ascending list (1x2x3 etc.), and they concluded that the first few values anchored the estimate as either high or low, with subsequent adjustment being insufficient. This is related to our first judgements about people: if we judge someone in a positive light because of their friendly behaviour, this can ‘anchor’ our appraisal of their subsequent behaviour.
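Both orderings are, of course, the same product – 8! = 40,320 – which is easy to verify (a purely illustrative check, not part of the original study):

```python
import math

# The two lists students saw contain the same eight numbers,
# so multiplying them in either order gives the same product.
ascending = math.prod(range(1, 9))      # 1x2x3x4x5x6x7x8
descending = math.prod(range(8, 0, -1))  # 8x7x6x5x4x3x2x1

print(ascending, descending)  # 40320 40320
```

In the original study both groups underestimated the true product badly, consistent with anchoring on the first few terms and adjusting insufficiently from there.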
Use these examples as the basis for discussing how stereotypes are developed, or how diagnoses can lack validity, and they are also useful for discussing the lab experiment method. I am sure students can think of many more examples of how these heuristics can occasionally (not always) work to distort our thinking and decision-making in real life. But that might take some time and some logical, patient reasoning using System 2 thinking!
There has been a lot in the news recently about the effect of social media on mental health, but less about the effect on school and university students of reading or responding to texts during lectures. As students expect to be ‘connected’ throughout the day, gradually mobile phones have been finding their way into classrooms and lecture halls. Students often argue this makes no difference to their learning, as they can disregard texts and interruptions. But is this true? Another study from Psychology Sorted is explored today, with examples of how it may be used.
Rosen et al. (2011) conducted a field experiment to examine the direct impact of text-message interruptions on memory in a classroom environment, and found the effect to be a slight, but significant, reduction in memory. This is an example of a study that can be used to illustrate research into the influence of technology, and also to explore a common method used to research it – the field experiment.
The researchers conducted their experiment in a classroom during a lecture. The independent variable was the number of texts received and sent (three groups: no/low, medium and high), and the dependent variable was the score on a test based on the lesson content. A total of 185 college students (148 female and 37 male) were told that they were going to view a 30-minute videotaped lecture relevant to their course, and that during the session some of them would receive texts from the researchers, to which they should respond as promptly as possible. They were informed that they would be tested on the material after the lecture.
The results were that the no/low texting group performed 10.6% better than the high texting group in their tests. The test score was significantly negatively correlated with the total number of words sent and received. Those participants who chose to wait more than 4-5 minutes to respond to a text message did better than those who responded immediately. But in all cases the difference was only just significant. This led the researchers to suggest that metacognitive skills (including learning to wait before responding to disturbances that make us lose focus) should be explicitly taught and that it might be wise for teachers and lecturers to use strategies that focus on when it is appropriate to take a break and when it is important to focus without distractions.
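The negative correlation reported above can be illustrated with a Pearson correlation. The numbers below are invented for demonstration only – they are not Rosen et al.’s data – but they show the shape of the relationship: more words texted, lower test score.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical figures: total words texted during the lecture vs. test score (%)
words_texted = [0, 15, 40, 75, 120, 160]
test_scores = [88, 86, 80, 74, 70, 62]

r = pearson_r(words_texted, test_scores)
print(round(r, 2))  # negative: more texting goes with lower scores
```

A coefficient near −1 would indicate a strong negative relationship; the study’s real correlation was significant but, like the group differences, modest.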
Some schools have opted to require all mobile phones to be turned off or left in lockers, but the problem is that just because a student’s technology is ‘out of sight’, it is not necessarily ‘out of mind’. Maybe teachers should share the results of this study with their students?