The effect of diminished belief in free will

Studies have shown that people led to believe their actions are determined by forces beyond their control often behave worse than those who believe the opposite.

Are you reading this because you chose to? Or are you doing so as a result of forces beyond your control?

After thousands of years of philosophy, theology, argument and meditation on the riddle of free will, I’m not about to solve it for you in this column (sorry). But what I can do is tell you about some thought-provoking experiments by psychologists, which suggest that, regardless of whether we have free will or not, whether we believe we do can have a profound impact on how we behave.

The issue is simple: we all make choices, but could those choices be made otherwise? From a religious perspective it might seem as if a divine being knows all, including knowing in advance what you will choose (so your choices could not be otherwise). Or we can take a physics-based perspective. Everything in the universe has physical causes, and as you are part of the universe, your choices must be caused (so your choices could not be otherwise). In either case, our experience of choosing collides with our faith in a world which makes sense because things have causes.

Consider for a moment how you would research whether a belief in free will affects our behaviour. There’s no point comparing the behaviour of people with different fixed philosophical perspectives. You might find that determinists, who believe free will is an illusion and that we are all cogs in a godless universe, behave worse than those who believe we are free to make choices. But you wouldn’t know whether this was simply because people who like to cheat and lie become determinists (the “Yes, I lied, but I couldn’t help it” excuse).

What we really need is a way of changing people’s beliefs about free will, so that we can track the effects of doing so on their behaviour. Fortunately, in recent years researchers have developed a standard method of doing this. It involves asking subjects to read sections from Francis Crick’s book The Astonishing Hypothesis. Crick was one of the co-discoverers of DNA’s double-helix structure, for which he was awarded the Nobel prize. Later in his career he left molecular biology and devoted himself to neuroscience. The hypothesis in question is his belief that our mental life is entirely generated by the physical stuff of the brain. One passage asserts that neuroscience has killed the idea of free will – an idea which, Crick claims, most rational people, including most scientists, now regard as an illusion.

Psychologists have used this section of the book, or sentences taken from it or inspired by it, to induce feelings of determinism in experimental subjects. A typical study asks people to read and think about a series of sentences such as “Science has demonstrated that free will is an illusion”, or “Like everything else in the universe, all human actions follow from prior events and ultimately can be understood in terms of the movement of molecules”.

The effects on study participants are generally compared with those of other people asked to read sentences that assert the existence of free will, such as “I have feelings of regret when I make bad decisions because I know that ultimately I am responsible for my actions”, or texts on topics unrelated to free will.

And the results are striking. One study reported that participants who had their belief in free will diminished were more likely to cheat in a maths test. In another, US psychologists reported that people who read Crick’s thoughts on free will said they were less likely to help others.

Bad taste

A follow-up to this study used an ingenious method to measure aggression towards strangers. Participants were told a cover story about helping the experimenter prepare food for a taste test to be taken by a stranger. They were given the results of a supposed food preference questionnaire which indicated that the stranger liked most foods but hated hot food. Participants were also given a jar of hot sauce. The critical measure was how much of the sauce they put into the taste-test food. Putting in less sauce, when they knew that the taster didn’t like hot food, meant they scored more highly for what psychologists call “prosociality”, or what everyone else calls being nice.

You’ve guessed it: Participants who had been reading about how they didn’t have any free will chose to give more hot sauce to the poor fictional taster – twice as much, in fact, as those who read sentences supporting the idea of freedom of choice and responsibility.

In a recent study carried out at the University of Padova, Italy, researchers recorded the brain activity of participants who had been told to press a button whenever they wanted. People whose belief in free will had taken a battering thanks to reading Crick’s views showed a weaker signal in areas of the brain involved in preparing to move. In another study by the same team, volunteers carried out a series of on-screen tasks designed to test their reaction times, self-control and judgement. Those told free will didn’t exist were slower, and more likely to go for easier and more automatic courses of action.

This is a young research area. We still need to check that individual results hold up, but taken all together these studies show that our belief in free will isn’t just a philosophical abstraction. We are less likely to behave ethically and kindly if our belief in free will is diminished.

This puts an extra burden of responsibility on philosophers, scientists, pundits and journalists who use evidence from psychology or neuroscience experiments to argue that free will is an illusion. We need to be careful about what stories we tell, given what we know about the likely consequences.

Fortunately, the evidence shows that most people have a sense of their individual freedom and responsibility that is resistant to being overturned by neuroscience. Those sentences from Crick’s book claim that most scientists believe free will to be an illusion. My guess is that most scientists would want to define what exactly is meant by free will, and to examine the various versions of free will on offer, before they agree whether it is an illusion or not.

If the last few thousand years have taught us anything, it is that the debate about free will may rumble on and on. But whether the outcome is inevitable or not, these results show that how we think about the way we think could have a profound effect on us, and on others.

This was published on BBC Future last week. See the original, ‘Does non-belief in free will make us better or worse?’ (it is identical apart from the title, and there’s a nice picture on that site). If neuroscience and the free will debate float your boat, you can check out this video of the Sheffield Salon on the topic “‘My Brain Made Me Do It’ – have neuroscience and evolutionary psychology put free will on the slab?”. I’m the one on the left.

Jazz no longer corrupting young people

A study published in the journal Pediatrics looked at the link between music preferences at age 12 and adolescent delinquency – finding that an early liking for ‘rebellious’ music predicted small scale anti-social behaviour like shoplifting, petty theft and vandalism.

Rebellious music turns out to be defined as variations of rock, hip-hop and electronica. But one of the interesting findings was that kids who liked jazz music were slightly less likely to be antisocial:

Although some music preferences were positively associated with delinquency, liking jazz at age 12 correlated negatively with delinquency (r = –0.12), but did not relate to age 16 delinquency.
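For a sense of scale, a correlation of that size is tiny. Squaring r gives the proportion of variance the two measures share – simple arithmetic you can check yourself (a minimal sketch in Python; the r value is the study’s, the rest is illustration):

```python
# How much do jazz liking and delinquency overlap, given the reported
# correlation? r is taken from the study; r squared is the shared variance.
r = -0.12
print(f"shared variance: {r**2:.1%}")  # ~1.4% -- a weak association
```

In other words, liking jazz “explains” less than 2% of the variation in delinquency.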

Historically, this is interesting because jazz, at the height of its popularity, was widely linked to delinquency, drug-taking, insanity and sexual promiscuity:

Dig this:

The human organism responds to musical vibrations. This fact is universally recognized. What instincts then are aroused by jazz? Certainly not deeds of valor or martial courage, for all marches and patriotic hymns are of regular rhythm and simple harmony; decidedly not contentment or serenity, for the songs of home and the love of native land are all of the simplest melody and harmony with noticeably regular rhythm. Jazz disorganizes all regular laws and order; it stimulates to extreme deeds, to a breaking away from all rules and conventions; it is harmful and dangerous, and its influence is wholly bad.

That’s from the August 1921 edition of the Ladies’ Home Journal, one of the most widely read magazines of the time.

You’ll notice a lot of racism around the ‘jazz is bad’ vibe. As it happens, this was picked up at the time.

However, I would guess that 1950s jazz, like modern rebellious music, could have had a statistical link to minor delinquency, but, as today, the link was probably not causal. Rebellious people gravitate toward rebellious music.

Jazz, however, ain’t no longer rebellious. In the popular imagination, it’s a soundtrack for beard-strokers, and the days of banging up a quarter gram of snow and downing a couple of bennies before losing it in a sweaty jazz joint are long gone.

In fact, it’s so no-longer-rebellious that it was the only type of music to predict lower delinquency.

One thing hasn’t changed though. The slightly awkward tone that sounds like your dad lamenting the death of proper music. With a tune. And lyrics. You know, proper lyrics, that tell a story.

The opening sentence of the Pediatrics study:

During the 1980s and 1990s, the loudest and most rebellious forms of rock (eg, heavy metal, gothic), African American music (hip-hop, particularly gangsta rap), and electronic dance music (house, techno, hardhouse) were labeled by adults as “problem” music and perceived as promoting violence, substance use, promiscuous sex, blasphemy, and depression

Can I get a nice cup of tea from the massive tonight?
 

Link to study ‘Early Adolescent Music Preferences and Minor Delinquency’.

Madness and hallucination in The Shining

Roger Ebert’s 2006 review of Stanley Kubrick’s The Shining turns out to be a brilliant exploration of hallucination, madness and unreliable witnessing in a film he describes as “not about ghosts but about madness and the energies it sets loose”.

Kubrick is telling a story with ghosts (the two girls, the former caretaker and a bartender), but it isn’t a “ghost story,” because the ghosts may not be present in any sense at all except as visions experienced by Jack or Danny.

The movie is not about ghosts but about madness and the energies it sets loose in an isolated situation primed to magnify them. Jack is an alcoholic and child abuser who has reportedly not had a drink for five months but is anything but a “recovering alcoholic.” When he imagines he drinks with the imaginary bartender, he is as drunk as if he were really drinking, and the imaginary booze triggers all his alcoholic demons, including an erotic vision that turns into a nightmare. We believe Hallorann when he senses Danny has psychic powers, but it’s clear Danny is not their master; as he picks up his father’s madness and the story of the murdered girls, he conflates it into his fears of another attack by Jack. Wendy, who is terrified by her enraged husband, perhaps also receives versions of this psychic output. They all lose reality together.

A psychologically insightful piece on one of the classics of psychological horror.
 

Link to Roger Ebert’s 2006 review of The Shining.

18 minutes of trauma

I’ve just found one of the best discussions of the importance and limits of the concept of post-traumatic stress disorder, in a programme from The Why Factor on BBC World Service.

It’s a brief programme, only 18 minutes long, but packs in a remarkably incisive look at PTSD that tackles its causes, its cultural limits and its increasing use as an all-purpose folk description for painful reactions to difficult events.

Both compassionate and critical, it’s one of the best discussions of post-trauma and its diagnosis I have heard for a while.

As is typical for the internet-impaired BBC radio pages, the podcast is on an entirely different page, so you might want to download the mp3 directly.
 

Link to programme page and streamed audio.
mp3 of programme audio.

A war of biases

Here’s an interesting take on terrorism as a fundamentally audience-focused activity that relies on causing fear to achieve political ends, and on whether citizen-led community monitoring schemes actually amplify those effects rather than making us feel safer.

It’s from an article just published in the Journal of Police and Criminal Psychology by political scientist Alex Braithwaite:

A long-held premise in the literature on terrorism is that the provocation of a sense of fear within a mass population is the mechanism linking motivations for the use of violence with the anticipated outcome of policy change. This assumption is the pivot point upon and around which most theories of terrorism rest and revolve. Martha Crenshaw, for instance, claims, the ‘political effectiveness of terrorism is importantly determined by the psychological effects of violence on audiences’…

Terrorists prioritize communication of an exaggerated sense of their ability to do harm. They do this by attempting to convince the population that their government is unable to protect them. It follows, then, that any attempt at improving security policy ought to center upon gaining a better understanding of the factors that affect public perceptions of security.

States with at least minimal historical experience of terrorism typically implore their citizens to participate actively in the task of monitoring streets, buildings, transportation, and task them with reporting suspicious activities and behaviors… I argue that if there is evidence to suggest that such approaches meaningfully improve state security this evidence is not widely available and that, moreover, such approaches are likely to exacerbate rather than alleviate public fear.

In the article, Braithwaite presents evidence, from opinion polls taken close to terrorist attacks, that attacks genuinely do exaggerate our fear of danger.

For example, after 9/11 a Gallup poll found that 66% of Americans reported believing that “further acts of terrorism are somewhat or very likely in the coming weeks” while 56% “worried that they or a member of their family will become victim of a terrorist attack”.

With regard to community monitoring and reporting schemes (‘Call us if you see anything suspicious in your neighbourhood’) Braithwaite notes that there is no solid evidence that they make us physically safer. But unfortunately, there isn’t any hard evidence to suggest that they make us more fearful either.

In fact, you could just as easily argue that even if they are useless, they might build confidence through the illusion of control, where we feel we are having an effect on external events simply because we are participating.

It may be, of course, that authorities don’t publish effectiveness figures for community monitoring schemes because, whether or not the schemes genuinely make a difference, terrorists have the same difficulty judging them as the public does, and so may over-estimate their effectiveness.

Perhaps the war on terror is being fought with cognitive biases.
 

Link to locked academic article on fear and terrorism.

It is mind control but not as we know it

The Headlines

The Independent: First ever human brain-to-brain interface successfully tested

BBC News: Are we close to making human ‘mind control’ a reality?

Visual News: Mind Control is Now a Reality: UW Researcher Controls Friend Via an Internet Connection

The story

Using the internet, one researcher remotely controls the finger of another, using it to play a simple video game.

What they actually did

University of Washington researcher Rajesh Rao watched a very simple video game, which involved firing a cannon at incoming rockets (and avoiding firing at incoming supply planes). Electrical signals from his scalp were recorded using a technology called EEG and processed by a computer. The resulting signal was sent over the internet, and across campus, to a lab where another researcher, Andrea Stocco, sat in front of the same video game with his finger over the “fire” button.

Unlike Rao, Stocco wore a magnetic coil over his head, designed to induce electrical activity rather than record it. When Rao imagined pressing the fire button, the coil activated the area of Stocco’s brain that makes his finger twitch, thus firing the cannon and completing a startling demonstration of “brain to brain” mind control over the internet.

You can read more details in the University of Washington press release or on the “brain2brain” website where this work is published.

How plausible is this?

EEG recording is a very well established technology, and takes advantage of the fact that the cells of our brain operate by passing around electrochemical signals which can be read from the surface of the scalp with simple electrodes. Unfortunately, the intricate details of brain activity tend to get muffled by the scalp, and by the fact that you are recording at one specific point in space, so the technology’s strength is more in telling us that brain activity has changed, rather than in saying how or exactly where brain activity has changed.

The magnetic coil which made the receiver’s finger twitch is also well established, and known in the business as Transcranial Magnetic Stimulation (TMS). An alternating magnetic field is used to alter brain activity underneath the coil. I’ve written about it here before.

The effect is relatively crude. You can’t make someone play the violin, for example, but activating the motor cortex in the right region can generate a finger twitch. So, in summary, the story is very plausible. The researchers are well respected in this area and open about the limitations of their research. Although the experiment wasn’t published in a peer-reviewed journal, we have every reason to believe what we’re being told here.

Tom’s take

This is a wonderful piece of “proof of concept” research, which is completely plausible given existing technology, yet hints at the possibilities which might soon become available.

The real magic is in the signal processing. The dizzying complexities of brain activity are compressed into an EEG signal which is still highly complex, and pretty opaque as to what it means – hardly mind reading.

The research team then managed to find a reliable change in the EEG signal which reflected when Rao was thinking about pressing the fire button. The signal – just a simple “go”, as far as I can tell – was then sent over the internet. This “go” signal then triggered the TMS, which is either on or off.
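To make that concrete, here is a minimal sketch in Python of the general shape of such a pipeline – my reconstruction under assumptions, not the team’s actual code. A standard motor-imagery feature is suppression of the mu rhythm (roughly 8–12 Hz) over motor cortex, so one plausible decoder simply thresholds mu-band power; the sampling rate and threshold below are invented for illustration:

```python
# A minimal sketch (not the researchers' pipeline): reduce an EEG window
# to a single binary "go" signal by thresholding mu-band power.
import numpy as np
from scipy.signal import welch

FS = 250                # sampling rate in Hz (assumed)
MU_BAND = (8.0, 12.0)   # mu rhythm; imagined movement suppresses it
THRESHOLD = 0.5         # would be calibrated per subject in practice

def mu_band_power(eeg_window: np.ndarray) -> float:
    """Average spectral power in the mu band for one window of EEG."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=len(eeg_window))
    mask = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
    return psd[mask].mean()

def decode_go(eeg_window: np.ndarray) -> bool:
    """One bit out: True ("fire") when mu power drops below threshold."""
    return mu_band_power(eeg_window) < THRESHOLD

# Stand-in data: one second of noise in place of a real recording.
if decode_go(np.random.randn(FS)):
    print("send 'go' over the network -> trigger the TMS pulse")
```

Everything downstream of `decode_go` can be as dumb as a single network message, which is exactly why the demonstration works.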

In information terms, this is close to as simple as it gets. Even producing a signal which said what to fire at, as well as when to fire, would be a step change in complexity and wasn’t attempted by the group. TMS is a pretty crude device. Even if the signal the device received was more complex, it wouldn’t be able to make you perform complex, fluid movements, such as those required to track a moving object, tie your shoelaces or pluck a guitar. But this is a real example of brain to brain communication.

As the field develops the thing to watch is not whether this kind of communication can be done (we would have predicted it could be), but exactly how much information is contained in the communication.
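Basic information theory makes that point crisp: a signal that picks one of N equally likely options carries log2(N) bits, so a bare go/no-go is at most one bit per decision. (This is textbook arithmetic, not the study’s own analysis.)

```python
# Bits carried by a forced choice among N equally likely options.
from math import log2

print(log2(2))  # go vs no-go: 1 bit per decision
print(log2(4))  # also choosing among 4 targets: 2 bits, twice as much
```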

A similar moral holds for reports that researchers can read thoughts from brain scans. This is true, but misleading. Many people imagine that such thought-reading gives researchers a read out in full technicolour mentalese, something like “I would like peas for dinner”. The reality is that such experiments allow the researchers to take a guess at what you are thinking based on them having already specified a very limited set of things which you can think about (for example peas or chips, and no other options).
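A toy example makes the forced-choice nature of this obvious. Here is a deliberately crude sketch (my illustration, not any study’s actual method) of template-matching decoding: the experimenters store an average activity pattern for each pre-specified thought, and “reading your mind” means picking whichever stored pattern a new scan most resembles:

```python
# Toy "thought reading": the decoder can only ever answer with one of
# the options it was given in advance. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical average activity patterns (100 voxels) recorded while
# the subject thought about each of two pre-specified options.
templates = {
    "peas":  rng.normal(0.0, 1.0, 100),
    "chips": rng.normal(0.5, 1.0, 100),
}

def decode(scan: np.ndarray) -> str:
    """Guess the thought by choosing the nearest stored template."""
    return min(templates, key=lambda k: np.linalg.norm(scan - templates[k]))

# A new "scan": a noisy version of the peas pattern.
print(decode(templates["peas"] + rng.normal(0, 0.5, 100)))  # -> "peas"
```

Ask the subject to think about pizza and the decoder will still answer “peas” or “chips”.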

Real progress on this front will come as we identify with more and more precision the brain areas that underlie complex behaviours. Armed with this knowledge, brain interface researchers will be able to use simple signals to generate complex responses by targeting specific circuits.

Read more

The original research report: Direct Brain-to-Brain Communication in Humans: A Pilot Study

Previously at The Conversation, another column on TMS: Does brain stimulation make you better at maths?

Thinking about brain interfaces is helped by a bit of information theory. To read a bit more about that field I recommend James Gleick’s book The Information: A History, a Theory, a Flood

The Conversation

This article was originally published at The Conversation.
Read the original article.

The rise of the circuit-based human

Image from Sociedad Española de NeuroCiencias

I’ve got a piece in The Observer about how we’re moving towards viewing the brain as a series of modifiable brain circuits, each responsible for distinct aspects of experience and behaviour.

The ‘brain circuit’ aspect is not new but the fact that neuroscience and medicine, on the billion-dollar global level, are reorienting themselves to focus on identifying and, crucially, altering brain circuits is a significant change we’ve seen only recently.

What many people don’t realise is the extent to which direct stimulation of brain circuits by implanted electrodes is already happening.

Perhaps more surprising for some is the explosion in deep brain stimulation procedures, where electrodes are implanted in the brains of patients to alter electronically the activity in specific neural circuits. Medtronic, just one of the manufacturers of these devices, claims that its stimulators have been used in more than 100,000 patients. Most of these involve well-tested and validated treatments for Parkinson’s disease, but increasingly they are being trialled for a wider range of problems. Recent studies have examined direct brain stimulation for treating pain, epilepsy, eating disorders, addiction, controlling aggression, enhancing memory and for intervening in a range of other behavioural problems.

More on how we are increasingly focussed on hacking our circuits in the rest of the article.
 

Link to ‘Changing brains: why neuroscience is ending the Prozac era’.

This complex and tragic event supports my own view

As shots rang out across the courtyard, I ducked behind my desk, my adrenaline pumping. Enraged by the inexplicable violence of this complex and multi-faceted attack, I promised the public I would use this opportunity to push my own pet theory of mass shootings.

Only a few days have passed since this terrible tragedy and I want to start by paying lip service to the need for respectful remembrance and careful evidence-gathering before launching into my half-cocked ideas.

The cause was simple. It was whatever my prejudices suggested would cause a mass shooting and this is being widely ignored by the people who have the power to implement my prejudices as public policy.

I want to give you some examples of how ignoring my prejudices directly led to the mass shooting.

The gunman grew up in an American town and had a series of experiences, some common to millions of American people, some unique to him. But it wasn’t until he started to involve himself in the one thing that I particularly object to, that he started on the path to mass murder.

The signs were clear to everyone but they were ignored because other people haven’t listened to the same point-of-view I expressed on the previous occasion the opportunity arose.

Research on the risk factors for mass shootings has suggested that there are a number of characteristics that have an uncertain statistical link to these tragic events but none that allow us to definitively predict a future mass shooting.

But I want to use the benefit of hindsight to underline one factor I most agree with and describe it as if it can be clearly used to prevent future incidents.

I am going to try and convince you of this in two ways. I am going to selectively discuss research which supports my position and I’m going to quote an expert to demonstrate that someone with a respected public position agrees with me.

Several scientific papers in a complex and unsettled debate about this topic could be taken to support my position. A government report also has a particular statistic which I like to quote.

Highlighting these findings may make it seem like my position is the most probable explanation despite no clear overall conclusion, but a single quote from one of the experts will seal the issue in my favour.

“Mass shootings” writes forensic psychiatrist Anand Pandya, an Associate Clinical Professor in the Department of Psychiatry and Behavioral Neurosciences at the UCLA School of Medicine, Los Angeles, “have repeatedly led to political discourse”. But I take from his work that my own ideas, to quote Professor Pandya, “may be useful after future gun violence”.

Be warned. People who don’t share my biases are pushing their own evidence-free theories in the media, but without hesitation, I can definitely say they are wrong and, moreover, biased.

It is clear that the main cause of this shooting was the thing I disliked before the mass shooting happened. I want to disingenuously imply that if my ideas were more widely accepted, this tragedy could have been averted.

Do we want more young people to die because other people don’t agree with me?

UPDATE: Due to the huge negative reaction this article has received, I would like to make some minor concession to my critics while accusing them of dishonesty and implying that they are to blame for innocent deaths. Clearly, we should be united in the face of such terrible events and I am going to appeal to your emotions to emphasise that not standing behind my ideas suggests that you are against us as a country and a community.

A comic repeat with video games and violence

An article in the Guardian Headquarters blog discusses the not very clear evidence for the link between computer games and violence and makes a comparison to the panic over ‘horror comics’ in the 1950s.

The Fifties campaign against comics was driven by a psychiatrist called Fredric Wertham and his book Seduction of the Innocent.

We’ve discussed before on Mind Hacks how Wertham has been misunderstood. He wasn’t out to ban comics, just to keep adult themes out of kids’ magazines.

However, his idea of what ‘adult themes’ might be was certainly pretty odd. This is Wertham’s testimony to a hearing in the US Senate.

I would like to point out to you one other crime comic book which we have found to be particularly injurious to the ethical development of children and those are the Superman comic books. They arouse in children fantasies of sadistic joy in seeing other people punished over and over again, while you yourself remain immune. We have called it the “Superman complex.” In these comic books, the crime is always real and Superman’s triumph over [evil] is unreal. Moreover, these books like any other, teach complete contempt of the police…

I may say here on this subject there is practically no controversy… as long as the crime comic books industry exists in its present form, there are no secure homes. …crime comic books, as I define them, are the overwhelming majority of all comic books… There is an endless stream of brutality… I can only say that, in my opinion, this is a public-health problem.

The ‘Superman causes sadism’ part aside, this is a remarkably similar argument to the one used about violent video games. It’s not a matter of taste or decency, it’s a public health problem.

In fact, an article in the Mayo Clinic Proceedings includes a neat side-by-side comparison of arguments about 1950s comic books and modern-day video games – they turn out to be very similar.

Moral of the story: wait sixty years until the debate about violent holograms kicks off, and they’ll leave you to play your video games in peace.
 

Link to ‘What is the link between violent video games and aggression?’
Link to article on video games and comic panics in Mayo Clinic Proceedings.

A coming revolution walks a fine line

The Chronicle of Higher Education has an excellent in-depth article on the most likely candidate for a revolution in mental health research: the National Institute of Mental Health’s RDoC or Research Domain Criteria project.

The article is probably the best description of the project this side of the scientific literature and considering that the RDoC is likely to fundamentally change how mental illness is understood, and eventually, treated, it is essential reading.

It is also interesting for the fact that the leaders of the RDoC project continue to hammer the reputation of the long-standing psychiatric diagnosis manual – the DSM.

First though, it’s worth knowing a little about what the RDoC actually is. Essentially, it’s a catalogue of well-defined neural circuits that are associated with specific cognitive functions or emotional responses.

If you look at the RDoC matrix you can see how everything has been divided. For example, stress regulation is associated with the raphe nuclei circuits and serotonin system.

The idea is that mental illnesses would be better understood as dysfunctions in various of these core components of behaviour rather than the traditional collections of symptoms that have been rather haphazardly formed into diagnoses.

Conceptually, it’s like a sort of neuropsychological Lego that should allow researchers to focus on agreed components of the brain and see how they map on to genes, behaviour, experience and so on.

It’s meant to be updated over time so ‘bricks’ can be modified or added as they become confirmed. An advantage is that it may give those not trained in neuroscience a more accessible structure for understanding the brain, but there is a danger that it will over-simplify the components of experience and behaviour in some people’s minds (and, of course, research).
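As a loose sketch of that Lego idea (my illustration, built only from the stress-regulation example above; the real matrix lives on the NIMH site), you can picture each construct as a row whose cells, at different units of analysis, get filled in or revised as evidence accumulates:

```python
# A loose sketch of the RDoC matrix as a data structure. The single
# entry mirrors the example in the text; everything else is placeholder.
rdoc_matrix = {
    "stress regulation": {
        "circuits":  ["raphe nuclei"],
        "molecules": ["serotonin"],
        "behaviour": [],   # cells wait for confirmed findings
    },
}

# "Bricks" can be modified or added as results firm up:
rdoc_matrix["stress regulation"]["behaviour"].append("a new finding")
```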

The RDoC has been around for several years but it recently hit the headlines when NIMH director Thomas Insel wrote a blog post promoting it two weeks before the launch of the American Psychiatric Association’s diagnostic manual, the DSM-5, saying that the DSM ‘lacks validity’ and that “patients with mental disorders deserve better”.

After a media storm, Insel wrote another piece jointly with the president of the American Psychiatric Association that involved some furious backpedalling, where the DSM was described as a “key resource for delivering the best available care” and “complementary” to the RDoC approach.

But in the Chronicle article, the head of the RDoC project, Bruce Cuthbert, makes no bones about the DSM’s faults:

“If you think about it the way I think about it, actually the DSM is sloppy in both counts. There’s no particular biological test in it, but the psychology is also very weak psychology. It’s folk psychology without any quantification involved.”

The DSM will not, of course, suddenly disappear. “As flawed as the DSM is, we have no substitute for the clinical realm for insurance reimbursement,” ex-NIMH Director Steven Hyman says in the article.

It’s worth noting that this is not actually true. In the US, diagnosis is usually made according to DSM definitions but insurance reimbursement is charged by codes from the ICD-10 – the free diagnostic manual from the World Health Organisation.

But the fact that the most senior psychiatric researchers in the US are now openly and persistently highlighting that the DSM is not fit for the purpose of advancing science and psychiatric treatment is a damning condemnation of the manual – no matter how they try and sugar-coat it.

The fact is, the NIMH have to walk a fine line. They need to condemn the DSM for being a mess while trying not to shatter confidence in a system used to treat millions of patients every year.

Best of luck with that.
 

Link to Chronicle article ‘A Revolution in Mental Health’.

Drug addiction: The complex truth

We’re told studies have proven that drugs like heroin and cocaine instantly hook a user. But it isn’t that simple – little-known experiments from over 30 years ago tell a very different tale.

Drugs are scary. The words “heroin” and “cocaine” make people flinch. It’s not just the associations with crime and harmful health effects, but also the notion that these substances can undermine the identities of those who take them. One try, we’re told, is enough to get us hooked. This, it would seem, is confirmed by animal experiments.

Many studies have shown rats and monkeys will neglect food and drink in favour of pressing levers to obtain morphine (the opiate from which heroin is made). With the right experimental set-up, some rats will self-administer drugs until they die. At first glance it looks like a simple case of the laboratory animals losing control of their actions to the drugs they need. It’s easy to see in this a frightening scientific fable about the power of these drugs to rob us of our free will.

But there is more to the real scientific story, even if it isn’t widely talked about. The results of a set of little-known experiments carried out more than 30 years ago paint a very different picture, and illustrate how easy it is for neuroscience to be twisted to pander to popular anxieties. The vital missing evidence is a series of studies carried out in the late 1970s in what has become known as “Rat Park”. Canadian psychologist Bruce Alexander, at Simon Fraser University in British Columbia, suspected that the preference of rats for morphine over water in previous experiments might be affected by their housing conditions.

To test his hypothesis he built an enclosure measuring 95 square feet (8.8 square metres) for a colony of rats of both sexes. Not only was this around 200 times the area of standard rodent cages, but Rat Park had decorated walls, running wheels and nesting areas. Inhabitants had access to a plentiful supply of food, and, perhaps most importantly, the rats lived in it together.

Rats are smart, social creatures. Living in a small cage on their own is a form of sensory deprivation. Rat Park was what neuroscientists would call an enriched environment, or – if you prefer to look at it this way – a non-deprived one. In Alexander’s tests, rats reared in cages drank as much as 20 times more morphine than those brought up in Rat Park. 

Inhabitants of Rat Park could be induced to drink more of the morphine if it was mixed with sugar, but a control experiment suggested that this was because they liked the sugar, rather than because the sugar allowed them to ignore the bitter taste of the morphine long enough to get addicted. When naloxone, which blocks the effects of morphine, was added to the morphine-sugar mix, the rats’ consumption didn’t drop. In fact, their consumption increased, suggesting they were actively trying to avoid the effects of morphine, but would put up with it in order to get sugar.

‘Woefully incomplete’

The results are catastrophic for the simplistic idea that one use of a drug inevitably hooks the user by rewiring their brain. When Alexander’s rats were given something better to do than sit in a bare cage they turned their noses up at morphine because they preferred playing with their friends and exploring their surroundings to getting high.

Further support for his emphasis on living conditions came from another set of tests his team carried out in which rats brought up in ordinary cages were forced to consume morphine for 57 days in a row. If anything should create the conditions for chemical rewiring of their brains, this should be it. But once these rats were moved to Rat Park they chose water over morphine when given the choice, although they did exhibit some minor withdrawal symptoms.

You can read more about Rat Park in the original scientific report. A good summary is in this comic by Stuart McMillen. The results aren’t widely cited in the scientific literature, and the studies were discontinued after a few years because they couldn’t attract funding. There have been criticisms of the study’s design and the few attempts that have been made to replicate the results have been mixed.

Nonetheless the research does demonstrate that the standard “exposure model” of addiction is woefully incomplete. It takes far more than the simple experience of a drug – even drugs as powerful as cocaine and heroin – to make you an addict. The alternatives you have to drug use, which will be influenced by your social and physical environment, play important roles, alongside the brute pleasure delivered via the chemical assault on your reward circuits.

For a psychologist like me it suggests that even addictions can be thought of using the same theories we use to think about other choices – there isn’t a special exception for drug-related choices. Rat Park also suggests that when stories about the effects of drugs on the brain are promoted to the neglect of the discussion of the personal and social contexts of addiction, science is servicing our collective anxieties rather than informing us.

This is my BBC Future article from Tuesday. The original is here. The Foddy article I link to in the last paragraph is great – read that. As is Stuart’s comic.

A furious infection but a fake fear of water

RadioLab has an excellent short episode on one of the most morbidly fascinating of brain infections – rabies.

Rabies is a virus that can very quickly infect the brain. When it does, it causes typical symptoms of encephalitis (brain inflammation) – headache, sore neck, fever, delirium and breathing problems – and it is almost always fatal.

It also has some curious behavioural effects. It can make people hyper-reactive and can lead to uncontrolled muscle spasms due to its effect on the action coordination systems in the brain. With the pain and distress, some people can become aggressive.

This is known as the ‘furious’ stage, and when we describe someone as ‘rabid with anger’ it is a metaphor drawn from exactly this.

Rabies can also cause what is misleadingly called ‘hydrophobia’ or fear of water. You can see this in various videos that have been uploaded to YouTube that show rabies-infected patients trying to swallow and reacting quite badly.

But rabies doesn’t actually instil a fear of water in the infected person but instead causes dysphagia – difficulty with swallowing – due to the same disruption to the brain’s action control systems.

We tend to take swallowing for granted but it is actually one of our most complex actions and requires about 50 muscles to complete successfully.

Problems swallowing are not uncommon after brain injury (particularly after stroke) and speech and language therapists can spend a lot of their time on neurorehabilitation wards training people to reuse and re-coordinate their swallow to stop them choking on food.

As we know from waterboarding, choking can induce panic, and it’s not so much that rabies creates a fear of water, but a difficulty swallowing and hence experiences of choking. This makes the person want to avoid trying to swallow liquids.

Bathing, for example, wouldn’t trigger this aversion and that’s why rabies doesn’t really cause a ‘fear of water’ but more a ‘fear of choking on liquids due to impaired swallowing’.

The RadioLab episode discusses the case that launched the controversial Milwaukee protocol – a technique for treating rabies that involves putting you into a drug-induced coma to protect your brain until your body has produced the anti-rabies antibodies.

It’s a fascinating and compelling episode, so well worth checking out.

UPDATE: This old medical film on YouTube goes through the stages of rabies infection. Warning: it’s a bit gruesome and has a melodramatic soundtrack but it is quite informative.

 

Link to RadioLab episode ‘Rodney Versus Death’.

Why the other queue always seems to move faster than yours

Whether it is supermarkets or traffic, there are two possible explanations for why you feel the world is against you, explains Tom Stafford.

Sometimes I feel like the whole world is against me. The other lanes of traffic always move faster than mine. The same goes for the supermarket queues. While I’m at it, why does it always rain on those occasions I don’t carry an umbrella, and why do wasps always want to eat my sandwiches at a picnic and not other people’s?

It feels like there are only two reasonable explanations. Either the universe itself has a vendetta against me, or some kind of psychological bias is creating a powerful – but mistaken – impression that I get more bad luck than I should. I know this second option sounds crazy, but let’s just explore this for a moment before we get back to the universe-victim theory.

My impressions of victimisation are based on judgements of probability. Either I am making a judgement of causality (forgetting an umbrella makes it rain) or a judgement of association (wasps prefer the taste of my sandwiches to other people’s sandwiches). Fortunately, psychologists know a lot about how we form impressions of causality and association, and it isn’t all good news.

Our ability to think about causes and associations is fundamentally important, and always has been for our evolutionary ancestors – we needed to know if a particular berry makes us sick, or if a particular cloud pattern predicts bad weather. So it isn’t surprising that we automatically make judgements of this kind. We don’t have to mentally count events, tally correlations and systematically discount alternative explanations. We have strong intuitions about what things go together, intuitions that just spring to mind, often after very little experience. This is good for making decisions in a world where you often don’t have enough time to think before you act, but with the side-effect that these intuitions contain some predictable errors.

One such error is what’s called “illusory correlation”, a phenomenon whereby two things that are individually salient seem to be associated when they are not. In a classic experiment, volunteers were asked to look through psychiatrists’ fabricated case reports of patients who had responded to the Rorschach ink blot test. Some of the case reports noted that the patients were homosexual, and some noted that they saw things such as women’s clothes or buttocks in the ink blots. The case reports had been prepared so that there was no reliable association between the patient notes and the ink blot responses, but experiment participants – whether trained or untrained in psychiatry – reported strong (but incorrect) associations between some ink blot signs and patient homosexuality.

One explanation is that things that are relatively uncommon, such as homosexuality in this case, and the ink blot responses which contain mention of women’s clothes, are more vivid (because of their rarity). This, and an effect of existing stereotypes, creates a mistaken impression that the two things are associated when they are not. This is a side effect of an intuitive mental machinery for reasoning about the world. Most of the time it is quick and delivers reliable answers – but it seems to be susceptible to error when dealing with rare but vivid events, particularly where preconceived biases operate. Associating bad traffic behaviour with ethnic minority drivers, or cyclists, is another case where people report correlations that just aren’t there. Both the minority (either an ethnic minority, or the cyclists) and bad behaviour stand out. Our quick-but-dirty inferential machinery leaps to the conclusion that the events are commonly associated, when they aren’t.
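You can formalise that account in a few lines. In this toy model (my construction, not taken from the studies above) the two groups behave identically, so the true association is exactly zero; over-counting the doubly-distinctive rare-group-plus-rare-behaviour events is enough to manufacture a correlation:

```python
# Illusory correlation from over-weighted vivid events. The phi
# coefficient measures association in a 2x2 table (0 = none).
from math import sqrt

def phi(a, b, c, d):
    return (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))

# rows = group, cols = behaviour: both groups show 20% "bad" behaviour
a, b = 160, 40   # majority group: ordinary / bad
c, d = 16, 4     # minority group: ordinary / bad

print(phi(a, b, c, d))      # 0.0 -- no real association
print(phi(a, b, c, d * 3))  # ~0.18 -- rare+rare events over-counted
```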

So here we have a mechanism which might explain my queuing woes. The other lanes or queues moving faster is one salient event, and my intuition wrongly associates it with the most salient thing in my environment – me. What, after all, is more important to my world than me? Which brings me back to the universe-victim theory. When my lane is moving along I’m focusing on where I’m going, ignoring the traffic I’m overtaking. When my lane is stuck I’m thinking about me and my hard luck, looking at the other lane. No wonder the association between me and being overtaken sticks in memory more.

This distorting influence of memory on our judgements lies behind a good chunk of my feelings of victimisation. In some situations there is a real bias. You really do spend more time being overtaken in traffic than you do overtaking, for example, because the overtaking happens faster. And the smoke really does tend to follow you around the campfire, because wherever you sit creates a warm up-draught that the smoke fills. But on top of all of these is a mind that exaggerates our own importance, giving each of us the false impression that we are more important in how events work out than we really are.
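The traffic asymmetry is easy to check with a back-of-envelope calculation (my toy numbers, not from any study). Suppose both lanes cover the same route at the same average speed, alternating fast and slow legs out of phase, so whenever you are slow the other lane is fast:

```python
# Same distance in each state, but the slow state eats far more time --
# so most of the journey is spent watching the other lane overtake you.
fast, slow = 30.0, 10.0    # hypothetical speeds, metres per second
leg = 1000.0               # lanes swap speeds every 1000 metres

t_overtaking = leg / fast  # 33.3 s spent passing the other lane
t_overtaken = leg / slow   # 100.0 s spent being passed

share = t_overtaken / (t_overtaking + t_overtaken)
print(f"fraction of the journey spent being overtaken: {share:.0%}")  # 75%
```

Both lanes finish together, yet each driver spends three-quarters of the trip being overtaken.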

This is my BBC Future post from last Tuesday. The original is here.