Not the psychology of Joe average terrorist

News reports have been covering a fascinating study on the moral reasoning of ‘terrorists’ published in Nature Human Behaviour but it’s worth being aware of the wider context to understand what it means.

Firstly, it’s important to highlight how impressive this study is. The researchers, led by Sandra Baez, managed to complete the remarkably difficult task of getting access to, and recruiting, 66 jailed paramilitary fighters from the Colombian armed conflict to participate in the study.

They compared this group to 66 matched ‘civilians’ with no criminal background and 13 jailed murderers with no paramilitary connections, on a moral reasoning task.

The task involved 24 scenarios that varied in two important ways: whether harm occurred or not, and whether the action was intended or unintended. This meant the researchers could compare four situations – no harm, accidental harm, unsuccessfully attempted harm, and successfully attempted harm.

A consistent finding was that the paramilitary participants judged accidental harm as less acceptable, and intentional harm as more acceptable, than the other groups did – indicating a distortion in moral reasoning.

They also measured cognitive function, emotion recognition and aggressive tendencies and found that when these measures were included in the analysis, they couldn’t account for the results.

One slightly curious thing in the paper though, and something the media has run with, is that the authors describe the background of the paramilitary participants and then discuss the implications for understanding ‘terrorists’ throughout.

But some context on the Colombian armed conflict is needed here.

The participants were right-wing paramilitaries who took part in the demobilisation agreement of 2003. This makes them members of the Autodefensas Unidas de Colombia, or AUC – a now-defunct organisation initially formed by drug traffickers and landowners to combat the extortion and kidnapping carried out by left-wing Marxist guerrilla organisations, most notably the FARC.

The organisation was paramilitary in the traditional sense – with uniforms, a command structure, local and regional divisions, national commanders, and written statutes. It involved itself in drug trafficking, extortion, torture, massacres, targeted killings, and ‘social cleansing’ of civilians assumed to be undesirable (homeless people, people with HIV, drug users etc) and killings of people thought to support left-wing causes. Fighters were paid and most signed up for economic reasons.

It was indeed designated a terrorist organisation by the US and EU, although within Colombia they enjoyed significant support from mainstream politicians (the reverberations of which are still being felt) and there is widespread evidence of collusion with the Colombian security forces of the time.

Also, considering that a great deal of military and paramilitary training is about re-aligning moral judgements, it’s not clear how well you can generalise these results to terrorists in general.

It is probably unlikely that the moral reasoning of people who participated in this study is akin to, for example, the jihadi terrorists who have mounted semi-regular attacks in Europe over the last few years. Or alternatively, it is not clear how ‘acceptable harm’ moral reasoning applies across different contexts in different groups.

Even within Colombia you can see how the terrorist label is not a reliable classification of a particular group’s actions and culture. Los Urabeños are the biggest drug trafficking organisation in Colombia at the moment. They are essentially the Centauros Bloc of the AUC, who didn’t demobilise and just changed their name. They are involved in very similar activities.

Importantly, they are not classified as a terrorist organisation, despite being virtually the same organisation from which members were recruited into this study.

I would guess these results are probably more directly relevant in understanding paramilitary criminal organisations, like the Sinaloa Cartel in Mexico, than more ideologically-oriented groups that claim political or religious motivations, although it would be fascinating if they did generalise.

So what this study provides is a massively useful step forward in understanding moral reasoning in this particular paramilitary group, and the extent to which this applies to other terrorist, paramilitary or criminal groups is an open question.
 

Link to open access study in Nature Human Behaviour.

An alternative beauty in parenthood

Vela has an amazing essay by a mother of a child with a rare chromosomal deletion. Put aside all your expectations about what this article will be like: it is about the hopes and reality of having a child, but it’s also about so much more.

It’s an insightful commentary on the social expectations foisted upon pregnant women.

It’s about the clash of folk understanding of wellness and the reality of genetic disorders.

It’s about being with your child as they develop in ways that are surprising and sometimes troubling and finding an alternative beauty in parenthood.
 

Link to Vela article SuperBabies Don’t Cry.

Rational judges, not extraneous factors in decisions

The graph tells a dramatic story of irrationality, presented in the 2011 paper Extraneous factors in judicial decisions. What it shows is the outcome of parole board decisions, as ruled by judges, against the order in which those decisions were made. The circles show the meal breaks taken by the judges.

As you can see, the decisions change the further the judge gets from his/her last meal, dropping from around a 65% chance of a favourable decision if you are the first case after a meal break to close to 0% if you are the last case in a long series before a break.

In their paper, the original authors argue that this effect of order really is due to the judges’ hunger, and not a confound introduced by some other factor affecting the order of cases and their chances of success (the lawyers sit outside the closed doors of the court, for example, so can’t time their best cases to come just after a break – they don’t know when the judge is taking a meal; the effect survives additional analyses in which the severity of the prisoner’s crime and the length of sentence are factored in; and so on). The interpretation is that as the judges tire they fall back more and more on a simple heuristic – playing safe and refusing parole.

This seeming evidence of the irrationality of judges has been cited hundreds of times, in economics, psychology and legal scholarship. Now, a new analysis by Andreas Glöckner in the journal Judgment and Decision Making questions these conclusions.

Glöckner’s analysis doesn’t prove that extraneous factors weren’t influencing the judges, but he shows how the same effect could be produced by entirely rational judges interacting with the protocols required by the legal system.

The main analysis works like this: we know that favourable rulings take longer than unfavourable ones (~7 mins vs ~5 mins), and we assume that judges are able to guess how long a case will take to rule on before they begin it (from clues like the thickness of the file, the types of request made, the representation the prisoner has and so on). Finally, we assume judges have a time limit in mind for each of the three sessions of the day, and will avoid starting cases which they estimate will overrun the time limit for the current session.

It turns out that this kind of rational time-management is sufficient to generate the drops in favourable outcomes. How this occurs isn’t straightforward, and it interacts with a quirk of the original authors’ data presentation (specifically, their graph plots cases by their order within a session, even though the number of cases in each session varied from day to day – so, for example, it shows that the 12th case after a break is least likely to be judged favourably, but there wasn’t always a 12th case in each session. Sessions containing more unfavourable cases were therefore more likely to contribute to this data point).
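For the curious, the mechanism can be sketched in a few lines of code. This is a toy version of the idea, not Glöckner’s actual simulation – the session budget, case durations and favourable-case rate below are made-up round numbers, and the function names are mine – but it reproduces the qualitative pattern: a judge who rationally avoids starting a case that would overrun the session produces favourable decisions clustered just after breaks and thinning out later in a session.

```python
import random

def simulate_day(n_cases=50, session_limit=60, p_fav=0.5,
                 t_fav=7, t_unfav=5, rng=random):
    """One day of parole hearings under rational time management.

    Each case is either favourable (takes ~t_fav minutes to rule on)
    or unfavourable (~t_unfav minutes). The judge can estimate a
    case's duration in advance, and takes a break rather than start
    a case that would overrun the current session's time budget.
    Returns (position_in_session, is_favourable) pairs.
    """
    cases = [rng.random() < p_fav for _ in range(n_cases)]
    results = []
    elapsed, position = 0.0, 1
    for fav in cases:
        duration = t_fav if fav else t_unfav
        if elapsed + duration > session_limit:  # would overrun: break now,
            elapsed, position = 0.0, 1          # case is heard after the break
        results.append((position, fav))
        elapsed += duration
        position += 1
    return results

# Aggregate many simulated days: fraction favourable by position in session.
rng = random.Random(42)
by_position = {}
for _ in range(2000):
    for pos, fav in simulate_day(rng=rng):
        by_position.setdefault(pos, []).append(fav)

for pos in sorted(by_position)[:10]:
    outcomes = by_position[pos]
    print(pos, round(sum(outcomes) / len(outcomes), 3))
```

In this toy set-up the first case after a break comes out favourable more than half the time (long, favourable cases are the ones most likely to be deferred past a break), while a case heard tenth in a session comes out favourable less than half the time – a decline produced entirely by scheduling, with no hunger involved.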

This story of claim and counter-claim shows why psychologists prefer experiments, since only then can you truly isolate causal explanations (if you are a judge and willing to go without lunch, please get in touch). It also shows the benefit of simulations for extending the horizons of our intuition. Glöckner’s achievement is to show in detail how some reasonable assumptions – including that of a rational judge – can generate a pattern which hitherto seemed explicable only by the influence of an irrelevant factor on the judges’ decisions. This doesn’t settle the matter, but it does mean we can’t be so confident that this graph shows what it is often claimed to show. The judges’ decisions may not be irrational after all, and the timing of their meal breaks may not be influencing parole decision outcomes.

Original finding: Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011). Extraneous factors in judicial decisions. Proceedings of the National Academy of Sciences, 108(17), 6889-6892.

New analysis: Glöckner, A. (2016). The irrational hungry judge effect revisited: Simulations reveal that the magnitude of the effect is overestimated. Judgment and Decision Making, 11(6), 601-610.

Elsewhere I have written about how evidence of human irrationality is often over-egged: For argument’s sake: evidence that reason can change minds

 

Serendipity in psychological research

Dorothy Bishop has an excellent post, ‘Ten serendipitous findings in psychology’, in which she lists ten celebrated discoveries that occurred by happy accident.

Each discovery is interesting in itself, but Prof Bishop puts the discoveries in the context of the recent discussion about preregistration (declaring in advance what you are looking for and how you’ll look). Does preregistration hinder serendipity? Absolutely not, says Bishop, not least because the context of ‘discovery’ is never a one-off experiment.

Note that, in all cases, having made the initial unexpected observation – either from unstructured exploratory research, or in the course of investigating something else – the researchers went on to shore up the findings with further, hypothesis-driven experiments. What they did not do is to report just the initial observation, embellished with statistics, and then move on, as if the presence of a low p-value guaranteed the truth of the result.

(It’s hard not to read into these comments a criticism of some academic journals which seem happy to publish single experiments reporting surprising findings.)

Bishop’s list contains 3 findings from electrophysiology (recording brain cell activity directly with electrodes), which I think is notable. In these cases neural recording acts in the place of a microscope, allowing fairly direct observation of the system the scientist is investigating at a level of detail hitherto unavailable. It isn’t surprising to me that given a new tool of observation, the prepared mind of the scientists will make serendipitous discoveries. The catch is whether, for the rest of psychology, such observational tools exist. Many psychologists use their intuition to decide where to look, and experiments to test whether their intuition is correct. The important serendipitous discoveries from electrophysiology suggest that measures which are new ways of observing, rather than merely tests of ideas, must also be important for psychological discoveries. Do such observational measures exist?

Images of ultra-thin models need your attention to make you feel bad

I have a guest post over at the BPS Research Digest, covering research on the psychological effects of pictures of ultra-thin fashion models.

A crucial question is whether the effect of these thin-ideal images is automatic. Does the comparison to the models, which is thought to be the key driver in their negative effects, happen without our intention, attention or both? Knowing the answer will tell us just how much power these images have, and also how best we might protect ourselves from them.

It’s a great study from the lab of Stephen Want (Ryerson University). For the full details of the research, head over: Images of ultra-thin models need your attention to make you feel bad

Update: Download the preprint of the paper, and the original data here

CBT is becoming less effective, like everything else

‘Researchers have found that Cognitive Behavioural Therapy is roughly half as effective in treating depression as it used to be’ writes Oliver Burkeman in The Guardian, arguing that this is why CBT is ‘falling out of favour’. It’s worth saying that CBT seems as popular as ever, but even if it was in decline, it probably wouldn’t be due to diminishing effectiveness – because this sort of reduction in effect is common across a range of treatments.

Burkeman is commenting on a new meta-analysis reporting that more recent trials of CBT for depression find it to be less effective than older trials did. But this pattern is common as treatments are more thoroughly tested: it has been reported for antipsychotics, antidepressants and treatments for OCD, to name but a few.

Interestingly, one commonly cited reason treatments become less effective in trials is because response to placebo is increasing, meaning many treatments seem to lose their relative potency over time.

Counter-intuitively, for something considered to be ‘an inert control condition’ the placebo response is very sensitive to the design of the trial, so even comparing placebo against several active treatments rather than one can affect the placebo response.

This has led people to suggest lots of ‘placebo’ hacks. “In clinical trials,” noted one 2013 paper in Drug Discovery, “the placebo effect should be minimized to optimize drug–placebo difference”.

It’s interesting that it is still not entirely clear whether this approach is ‘revealing’ the true effects of the treatment or just another way of ‘spinning’ trials for the increasingly worried pharmaceutical and therapy industries.

The reasons for the declining treatment effects over time are also likely to include different types of patients being selected into trials; more methodologically sound research practices, meaning less chance of optimistic measuring and reporting; the fact that a trial which, by chance, produces a falsely inflated treatment effect is more likely to be re-tested than an initially less impressive one; and the fact that older, known treatments bring a whole load of expectations with them that brand-new treatments don’t.

The bottom line is that lots of our treatments, across medicine as a whole, have quite modest effects when compared to placebo. But if the placebo response represents the benefit of simply attempting to address the problem, it adds quite a boost to the modest effects that the treatment itself brings.

So the reports of the death of CBT have been greatly exaggerated but this is mostly due to the fact that lots of treatments start to look less impressive when they’ve been around for a while. This is less due to them ‘losing’ their effect and more likely due to us more accurately measuring their true but more modest effect over time.

Phantasmagoric neural net visions

A startling gallery of phantasmagoric images generated by a neural network technique has been released. The images were made by some computer scientists associated with Google who had been using neural networks to classify objects in images. They discovered that by using the neural networks “in reverse” they could elicit visualisations of the representations that the networks had developed over training.

These pictures are freaky because they look sort of like the things the network had been trained to classify, but without the coherence of real-world scenes. In fact, the researchers impose a local coherence on the images (so that neighbouring pixels do similar work in the image) but put no restraint on what is globally represented.
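The underlying technique is, roughly, gradient ascent on the image itself: nudge the pixels to increase a chosen unit’s activation, while a smoothness penalty keeps neighbouring pixels doing similar work. The sketch below is a deliberately minimal stand-in – one linear ‘unit’ over an 8×8 image rather than a deep trained network, with penalty, parameters and names all invented for illustration – just to show the shape of the optimisation loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "unit" to visualise: a single linear feature detector over an
# 8x8 image -- a stand-in for one neuron of a trained network. Real
# networks are deep and nonlinear, so this only shows the loop's shape.
w = rng.normal(size=(8, 8))

def smoothness_grad(img):
    """Gradient of a penalty on squared differences between
    neighbouring pixels -- the 'local coherence' prior."""
    g = np.zeros_like(img)
    d_vert = img[1:, :] - img[:-1, :]
    d_horz = img[:, 1:] - img[:, :-1]
    g[1:, :] += d_vert
    g[:-1, :] -= d_vert
    g[:, 1:] += d_horz
    g[:, :-1] -= d_horz
    return g

img = rng.normal(scale=0.01, size=(8, 8))  # start from faint noise
lr, lam = 0.05, 0.5
start_activation = float((w * img).sum())
for _ in range(200):
    # Ascend the unit's activation (the gradient of w.img with respect
    # to img is just w), while descending the smoothness penalty.
    img += lr * (w - lam * smoothness_grad(img))
end_activation = float((w * img).sum())
print(start_activation, end_activation)
```

The real ‘inceptionism’ images come from running this kind of loop through many layers of a trained convolutional network, with extra tricks on top, but the core move is the same: follow the activation gradient uphill under a local-coherence prior.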

The obvious parallel is to images from dreams or other altered states – situations where ‘low level’ constraints in our vision are obviously still operating, but the high-level constraints – the kind of thing that tries to impose an abstract and unitary coherence on what we see – are loosened. In these situations we get to observe something that reflects our own processes as much as what is out there in the world.

Link: The researchers talk about their ‘dreaming neural networks’
Gallery: Inceptionism: Going deeper into Neural Networks

Explore our back pages

At our birthday party on Thursday I told people how I’d crunched the stats for the 10 years of mindhacks.com posts. Nearly 5000 posts, and over 2 million words – an incredible achievement (for which 96% of the credit should go to Vaughan).

In 2010 we had an overhaul (thanks JD for this, and Matt for his continued support of the tech side of the site). I had a look at the stats, which only date back to then, and pulled out our all-time most popular posts. Here they are:


Something about the enthusiasm of last Thursday inspired me to put links to the top ten posts on a wiki. Since it is a wiki anyone can jump in and edit, so if there are any bits of the mindhacks.com back catalogue that you think are worth leaving a placeholder for, feel free to add them. Vaughan and I will add links to a few of our favourite posts, so check back and see how it is coming along.

Link: Mind Hacks wiki

Quasi-stability

Yesterday, before I got here, my dad was trying to fix an invisible machine. By all accounts, he began working on the phantom device quite intently, but as his repairs began to involve the hospice bed and the tubes attached to his body, he was gently sedated, and he had to leave it, unresolved.

This was out-of-character for my father, who I presumed had never encountered a machine he couldn’t fix. He built model aeroplanes in rural New Zealand, won a scholarship to go to university, and ended up as an aeronautical engineer for Air New Zealand, fixing engines twice his size. More scholarships followed and I first remember him completing his PhD in thermodynamics, or ‘what heat does’, as he used to describe it, to his six-year-old son.

When he was first admitted to the hospice, more than a week ago, he was quite lucid – chatting, talking, bemoaning the slow pace of dying. “Takes too long,” he said, “who designed this?” But now he is mostly unconscious.

Occasionally though, moments of lucidity dodge between the sleep and the confusion. “When did you arrive?” he asked me in the early hours of this morning, having woken up wanting water. Once the water was resolved he was preoccupied with illusory teaspoons lost among the bedclothes, but then chatted in faint, short sentences to me and my step-mum before drifting off once more.

Drifting is a recent tendency, but in the lucidity he has remained a proud engineer. It’s more of a vocation, he always told his students, than a career.

Last week, when the doctors asked if he would speak to medical trainees, he was only too happy to have a final opportunity to teach. Even the consultants find his pragmatic approach to death somewhat out of the ordinary, and they funnelled eager learners his way, where he engaged with their questions and demonstrated any malfunctioning components.

“When I got here”, he explained to them, “I was thermodynamically unstable but now I think I’m in a state of quasi-stability. It looks like I have achieved thermal equilibrium but actually I’m steadily losing energy.”

“I’m not sure”, I said afterwards, “that explaining your health in terms of thermodynamics is exactly what they’re after.”

“They’ll have to learn,” he said, “you can’t beat entropy.”


Postscript

My dad finally returned to entropy on the afternoon of Friday 31st October, with his family and a half-read book on nanoscience by his side.

Dr Murray Alan Bell, 30th January 1945 – 31st October 2014, Engineer (by vocation as much as by career)

Hallucinating astronauts

I’ve got a piece in The Observer about the stresses, strains and mind-bending effects of space flight.

NASA considers behavioural and psychiatric conditions to be one of the most significant risks to the integrity of astronaut functioning and there is a surprisingly long history of these difficulties adversely affecting missions.

Perhaps more seriously, hallucinations have been associated with the breakdown of crew coherence and space mission stress. In 1976, crew from the Russian Soyuz-21 mission were brought back to Earth early after they reported an acrid smell aboard the Salyut-5 space station. Concerns about a possible fluid leak meant the replacement crew boarded with breathing equipment, but no odour or technical problems were found. Subsequent reports of “interpersonal issues” and “psychological problems” in the crew led Nasa to conclude the odour was probably a hallucination. Other Russian missions were thought to have been halted by psychological problems, but the US space programme has not been without difficulties. During the Skylab 4 mission, long hours, exhaustion and disagreements with mission control resulted in the crew switching off their radio and spending a day ignoring Nasa while watching the Earth’s surface pass by.

The piece also tackles a curious form of hallucination caused by cosmic rays and the detrimental effects of zero-gravity on brain function, as well as some curious Freudian theories from the pre-space flight 1950s about the potential psychological consequences of leaving ‘Mother Earth’.

Enjoy!
 

Link to Observer article on psychological challenges of astronauts.

Why do we bite our nails?

It can ruin the appearance of your hands, could be unhygienic and can hurt if you take it too far. So why do people do it? Biter Tom Stafford investigates

What do ex-British prime minister Gordon Brown, Jackie Onassis, Britney Spears and I all have in common? We all are (or were) nail biters.

It’s not a habit I’m proud of. It’s pretty disgusting for other people to watch, ruins the appearance of my hands, is probably unhygienic and sometimes hurts if I take it too far. I’ve tried to quit many times, but have never managed to keep it up.

Lately I’ve been wondering what makes someone an inveterate nail-biter like me. Are we weaker willed? More neurotic? Hungrier? Perhaps, somewhere in the annals of psychological research there could be an answer to my question, and maybe even hints about how to cure myself of this unsavoury habit.

My first dip into the literature turns up the medical name for excessive nail biting: ‘onychophagia’. Psychiatrists classify it as an impulse control problem, alongside things like obsessive compulsive disorder. But this is for extreme cases, where psychiatric help is beneficial, as with other excessive grooming habits like skin picking or hair pulling. I’m not at that stage, falling instead among the majority of nail biters who carry on the habit without serious side effects. Up to 45% of teenagers bite their nails, for example; teenagers may be a handful but you wouldn’t argue that nearly half of them need medical intervention. I want to understand the ‘subclinical’ side of the phenomenon – nail biting that isn’t a major problem, but still enough of an issue for me to want to be rid of it.

It’s mother’s fault

Psychotherapists have had some theories about nail biting, of course. Sigmund Freud blamed it on arrested psycho-sexual development, at the oral stage (of course). As is typical of Freudian theories, oral fixation is linked to myriad causes, such as under-feeding or over-feeding, breast-feeding too long, or a problematic relationship with your mother. It also has a grab-bag of resulting symptoms: nail biting, of course, but also a sarcastic personality, smoking, alcoholism and love of oral sex. Other therapists have suggested nail-biting may be due to inward hostility – it is a form of self-mutilation after all – or nervous anxiety.

Like most psychodynamic theories these explanations could be true, but there’s no particular reason to believe they should be true. Most importantly for me, they don’t have any strong suggestions on how to cure myself of the habit. I’ve kind of missed the boat as far as extent of breast-feeding goes, and I bite my nails even when I’m at my most relaxed, so there doesn’t seem to be an easy fix there either. Needless to say, there’s no evidence that treatments based on these theories have any special success.

Unfortunately, after these speculations, the trail goes cold. A search of the scientific literature reveals only a handful of studies on the treatment of nail-biting. One reports that any treatment which made people more aware of the habit seemed to help, but beyond that there is little evidence to report. Indeed, several of the few articles on nail-biting open by commenting on the surprising lack of literature on the topic.

Creature of habit

Given this lack of prior scientific treatment, I feel free to speculate for myself. So, here is my theory on why people bite their nails, and how to treat it.

Let’s call it the ‘anti-theory’ theory. I propose that there is no special cause of nail biting – not breastfeeding, chronic anxiety or a lack of motherly love. The advantage of this move is that we don’t need to find a particular connection between me, Gordon, Jackie and Britney. Rather, I suggest, nail biting is just the result of a number of factors which – due to random variation – combine in some people to create a bad habit.

First off, there is the fact that putting your fingers in your mouth is an easy thing to do. It is one of the basic functions for feeding and grooming, and so it is controlled by some pretty fundamental brain circuitry, meaning it can quickly develop into an automatic reaction. Added to this, there is a ‘tidying up’ element to nail biting – keeping them short – which means in the short term at least it can be pleasurable, even if the bigger picture is that you end up tearing your fingers to shreds. This reward element, combined with the ease with which the behaviour can be carried out, means that it is easy for a habit to develop; apart from touching yourself in the genitals it is hard to think of a more immediate way to give yourself a small moment of pleasure, and biting your nails has the advantage of being OK at school. Once established, the habit can become routine – there are many situations in everyone’s daily life where you have both your hands and your mouth available to use.

Understanding nail-biting as a habit has a bleak message for a cure, unfortunately, since we know how hard bad habits can be to break. Most people, at least once per day, will lose concentration on not biting their nails.

Nail-biting, in my view, isn’t some revealing personality characteristic, nor a maladaptive echo of some useful evolutionary behaviour. It is the product of the shape of our bodies, how hand-to-mouth behaviour is built into (and rewarded in) our brains and the psychology of habit.

And, yes, I did bite my nails while writing this column. Sometimes even a good theory doesn’t help.

 

This was my BBC Future column from last week

It’s your own time you’re wasting

British teachers have voted to receive training in neuroscience ‘to improve classroom practice’, according to a report in the Times Educational Supplement, and the debate sounded like a full-on serial head-desker.

The idea of asking for neuroscience training at all sounds a little curious but the intro seemed like it could be quite reasonable:

Members of the Association of Teachers and Lecturers (ATL) at the union’s annual conference narrowly voted for a motion calling for training materials and policies on applying neuroscience to education and for further research on how technology can be used to develop better teaching.

Now, this could be just a request to be kept up-to-date with the latest educational neuroscience developments. Sounds fascinating but probably not that practically useful as neuroscience doesn’t really have much to offer your average classroom teacher.

Enter Julia Neal, a member of the council for the union’s leadership division and leading member of the head-desk working group:

“It is true that the emerging world of neuroscience presents opportunities as well as challenges for education, and it’s important that we bridge the gulf between educators, psychologists and neuroscientists.”

Neuroscience could also help teachers tailor their lessons for creative “right brain thinkers”, who tend to struggle with conventional lessons but often have more advanced entrepreneurial skills, Ms Neal said.

Entrepreneurial skills being a well known function of the ‘right brain’. It’s why Bill Gates always veers slightly to the left when he walks. So why this sudden interest in neuroscience in the classroom I wonder?

Earlier this year, the government-backed Education Endowment Foundation and the Wellcome Trust launched a £6 million scheme that will fund neuroscientific research into learning.

Kerching! But the best bit of the debate is where a neuropsychologist stands up and goes ‘well, I don’t think it’s as simple as you’re making out’:

However Joanne Fludder, a classroom teacher in Reading with a doctorate in neuropsychology, opposed the motion.

She told the conference that the field was “very complicated” and theories were “still in flux” as research was carried out.

Boo! Get her off!
 

Link to article in the Times Educational Supplement

Does the unconscious know when you’re being lied to?

The headlines
BBC: Truth or lie – trust your instinct, says research

British Psychological Society: Our subconscious mind may detect liars

Daily Mail: Why you SHOULD go with your gut: Instinct is better at detecting lies than our conscious mind

The Story
Researchers at the University of California, Berkeley, have shown that we have the ability to unconsciously detect lies, even when we’re not able to explicitly say who is lying and who is telling the truth.

What they actually did
The team, led by Leanne ten Brinke of the Haas School of Business, created a set of videos using a “mock high-stakes crime scenario”. This involved asking 12 volunteers to be filmed while being interrogated about whether they had taken US$100 from the testing room. Half the volunteers had been asked to take the $100, and had been told they could keep it if they persuaded the experimenter that they hadn’t. In this way the researchers generated videos of both sincere denials and people who were trying hard to deceive.

They then showed these videos to experimental participants who had to judge if the people in the videos were lying or telling the truth. As well as this measure of conscious lie detection, the participants also completed a task designed to measure their automatic feelings towards the people in the videos.

In experiment one this was a so-called Implicit Association Test, which works by comparing the ease with which participants associated the faces of the people in the videos with the words TRUTH or LIE. Experiment two used a priming test, in which the faces of the people in the videos changed the speed at which participants subsequently made judgements about words related to truth-telling and deception.
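To make the logic of these reaction-time measures concrete, here is a sketch of how an IAT-style score might be computed from one observer’s response times for a single face. The numbers and variable names are invented for illustration – they are not the study’s data – but the scoring idea is standard: if a face is faster to pair with LIE than with TRUTH, the difference (scaled by variability) yields a positive score.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented reaction times (ms) for one observer responding to one face:
# blocks pairing the face with TRUTH versus blocks pairing it with LIE.
rt_face_with_truth = rng.normal(650, 80, size=40)  # slower: face "feels" dishonest
rt_face_with_lie = rng.normal(580, 80, size=40)    # faster

# An IAT-style D score: positive means the face was easier (faster)
# to associate with LIE than with TRUTH.
pooled_sd = np.sqrt((rt_face_with_truth.var(ddof=1) +
                     rt_face_with_lie.var(ddof=1)) / 2)
d_score = (rt_face_with_truth.mean() - rt_face_with_lie.mean()) / pooled_sd
print(round(float(d_score), 2))
```

In the study, scores like this were compared across people who were actually lying versus actually telling the truth; the point here is only how a response-time asymmetry becomes a single number per face.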

The results of the study showed that people were no better than chance in their explicit judgements of who was telling the truth and who was lying, but the measurements of their other behaviours showed significant differences. Specifically, for people who were actually lying, observers were slower to associate their faces with the word TRUTH or quicker to associate it with the word LIE. The second experiment showed that after seeing someone who was actually telling the truth people made faster judgements about words related to truth-telling and slower judgements about words related to deception (and vice versa after a video of someone who was actually lying).

How plausible is this?
The result that people aren’t good at detecting lies is very well established. Even professionals, such as police officers, perform poorly when formally tested on their ability to discriminate lying from truth telling.

It’s also very plausible that the way in which you measure someone’s judgement can reveal different things. For example, people are in general notoriously bad at reasoning about risk when they are asked to give estimates verbally, but measurements of behaviour show that we are able to make very accurate estimates of risk in the right circumstances.

It fits with other results in psychological research which show that overthinking certain judgements can reduce their accuracy.

Tom’s take
The researchers are trying to have it both ways. The surprise of the result rests on the fact that people don’t score well when asked to make a simple truth vs lie judgement, but their behavioural measures suggest people would be able to make this judgement if asked differently. Claiming the unconscious mind knows what the conscious mind doesn’t is going too far – it could be that the simple truth vs lie judgement isn’t sensitive enough, or is subject to some bias (participants afraid of being wrong for example).

Alternatively, it could be that the researchers’ measures of the unconscious are only sensitive to one aspect of the unconscious – and it happens to be an aspect that can distinguish lies from an honest report. How much can we infer from the unconscious mind as a whole from the behavioural measures?

When reports of this study say “trust your instincts” they ignore the fact that the participants in this study did have the opportunity to trust their instincts – they made a judgement of whether individuals were lying or not, presumably following the combination of all the instincts they had, including those that produced the unconscious measures the researchers tested. Despite this, they couldn’t guess correctly if someone was lying or not.

If the unconscious is anything it will be made up of all the automatic processes that run under the surface of our conscious minds. For any particular judgement – in this case detecting truth telling – some process may be accurate at above chance levels, but that doesn’t mean the unconscious mind as a whole knows who is lying or not.

It doesn’t even mean there is such a thing as the unconscious mind, just that there are aspects to what we think that aren’t reported by people if you ask them directly. We can’t say that people “knew” who was lying, when the evidence shows that they didn’t or couldn’t use this information to make correct judgements.

Read more
The original paper: “Some evidence for unconscious lie detection”

The data and stimuli for this experiment are freely available – a wonderful example of “open science.”

A short piece I wrote about how articulating your feelings can get in the way of realising them.

The Conversation

This article was originally published on The Conversation.
Read the original article.

Ghost psychiatry

The Australian Journal of Parapsychology has an article about post-traumatic stress disorder in people who have been murdered.

I suspect diagnosing mental disorder in those who have passed on to another plane of existence isn’t the easiest form of mental health assessment, but it seems this gentleman is determined to give it a go.

Psychological phenomena in dead people: Post-traumatic stress disorder in murdered people and its consequences to public health

Australian Journal of Parapsychology, Volume 13 Issue 1 (Jun 2013)

Wasney de Almeida Ferreira

The aims of this paper are to narrate and analyze some psychological phenomena that I have perceived in dead people, including evidence of post-traumatic stress disorder (PTSD) in murdered people. The methodology adopted was “projection of consciousness” (i.e., a non-ordinary state of consciousness), which allowed me to observe, interact, and interview dead people directly as a social psychologist. This investigation was based on Cartesian skepticism, which allowed me a more critical analysis of my experiences during projection of consciousness. There is strong evidence that a dead person: (i) continues living, thinking, behaving after death as if he/she still has his/her body because consciousness continues in an embodied state as ‘postmortem embodied experiences’; (ii) may not realize for a considerable time that he/she is already dead since consciousness continues to be embodied after death (i.e., ‘postmortem perturbation’ – the duration of this perturbation can vary from person to person, in principle according to the type of death, and the level of conformation), and (iii) does not like to talk, remember, and/or explain things related to his/her own death because there is evidence that many events related to death are repressed in his/her unconscious (‘postmortem cognitive repression’). In addition, there is evidence that dying can be very traumatic to consciousness, especially to the murdered, and PTSD may even develop.

It is worth noting that the concept of post-mortem PTSD was largely invented by Big Parlour as a way of selling seances, when what spirits really need is someone to help them understand their experiences.

Link to abstract for article (via @WiringTheBrain)

Do violent video games make teens ‘eat and cheat’ more?

By Tom Stafford, University of Sheffield

The Headlines

Business Standard: Violent video games make teens eat more, cheat more

Scienceblog.com: Teens ‘Eat more, cheat more’ after playing violent video games

The Times of India: Violent video games make teens cheat more

The story

Playing the violent video game Grand Theft Auto made teenagers more aggressive, more dishonest and lowered their self control.

What they actually did

172 Italian high school students (aged 13-19), about half male and half female, took part in an experiment in which they first played a video game for 35 minutes. Half played a non-violent pinball or golf game, and half played one of the ultra-violent Grand Theft Auto games.

During the game they had the opportunity to eat M&M’s freely from a bowl (the amount they scoffed provided a measure of self-control), and after the game they had the opportunity to take a quiz to earn raffle tickets (and the opportunity to cheat on the quiz, which provided a measure of dishonesty). They also played a game during which they could deliver unpleasant noises to a fellow player as punishments (which was used as a measure of aggression).

Analysis of the results showed that those who played the violent video game had lower scores when it came to the self-control measure, cheated more and were more aggressive. What’s more, these effects were most pronounced for those who had high scores on a scale of “moral disengagement” – which measures how loose your moral thinking is. In other words, if you don’t think too hard about right and wrong, you score highly.

How plausible is this?

This is a well designed study, which uses random allocation to the two groups to try to properly assess causation (does the violent video game cause immoral behaviour?).

The choice of control condition was reasonable (the other video games were tested and found to be enjoyed just as much by the participants), and the measures are all reasonable proxies for the things we are interested in. Obviously you can’t tell if weakened self-control for eating chocolate will mean weakened self-control for more important behaviour, but it’s a nice specific measure which is practical in an experiment and which just might connect to the wider concept.

The number of participants is also large enough that we can give the researchers credit for putting in the effort. Getting about 85 people in each group provides a reasonable degree of statistical power, which means any effects found are more likely to be reliable.

As an experimental psychologist, there’s lots for me to like about this study. The only obvious potential problem that I can see is that of demand effects: subtle cues that can make participants aware of what the experimenter expects to find, or how they should behave. The participants were told they were in a study which looked at the effects of video games, so it isn’t impossible that some element of their behaviour was playing up to what they reasonably guessed the researchers were looking for, and it doesn’t look like the researchers checked whether this might be the case.

Tom’s take

You can’t leap to conclusions from a single study, of course – even a well designed one. We should bear in mind the history of moral panics around new technology and media. Today we’re concerned with violent video games, 50 years ago it was comic books and jazz. At least jazz is no longer corrupting young people.

Is our worry about violent video games just another page in the history of adults worrying about what young people are up to? That’s certainly a factor, but unlike jazz, it does seem psychologically plausible that a game where you enjoy reckless killing and larceny might encourage players to be self-indulgent and nasty.

Reviews suggest violent media may be a risk factor for violent behaviour, just as cigarette smoke is a risk factor for cancer. Most people who play video games won’t commit violent acts, just as most people exposed to passive smoke won’t get cancer.

The problem is that other research reviews into the impact of violent entertainment on our behaviour suggest the evidence for a negative effect is weak and contradictory.

Video games are a specific example of the general question of whether and how media affect our behaviour. Obviously, we are more than complete zombies, helpless to resist every suggestion or example, but we’re also less than completely independent creatures, immune to the influence of other people and all forms of entertainment. Where the balance lies between these extremes is controversial.

For now, I’m going to keep an open mind, but as a personal choice I’m probably not going to get the kids GTA for Christmas.

Read more

The original paper: Interactive Effect of Moral Disengagement and Violent Video Games on Self-Control, Cheating, and Aggression

@PeteEtchells provides a good summary of the scientific (lack of) consensus: What is the link between violent video games and aggression?

Commentary by one researcher on the problems in the field of video game research: The Challenges of Accurate Reporting on Video Game Research

And a contrary research report: A decade long study of over 11,000 children finds no negative impact of video games

Tom Stafford does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

The Conversation

This article was originally published at The Conversation.
Read the original article, or other columns in the series