National Institute of Mental Health abandoning the DSM

In a potentially seismic move, the National Institute of Mental Health – the world’s biggest mental health research funder – has announced, only two weeks before the launch of the DSM-5 diagnostic manual, that it will be “re-orienting its research away from DSM categories”.

In the announcement, NIMH Director Thomas Insel says the DSM lacks validity and that “patients with mental disorders deserve better”.

This is something that will make very uncomfortable reading for the American Psychiatric Association as it trumpets what it claims is the ‘future of psychiatric diagnosis’ just before it hits the shelves.

As a result, the NIMH will now be preferentially funding research that does not stick to DSM categories:

Going forward, we will be supporting research projects that look across current categories – or sub-divide current categories – to begin to develop a better system. What does this mean for applicants? Clinical trials might study all patients in a mood clinic rather than those meeting strict major depressive disorder criteria. Studies of biomarkers for “depression” might begin by looking across many disorders with anhedonia or emotional appraisal bias or psychomotor retardation to understand the circuitry underlying these symptoms. What does this mean for patients? We are committed to new and better treatments, but we feel this will only happen by developing a more precise diagnostic system.

As an alternative approach, Insel suggests the Research Domain Criteria (RDoC) project, which aims to uncover what it sees as the ‘component parts’ of psychological dysregulation by understanding difficulties in terms of cognitive, neural and genetic differences.

For example, difficulties with regulating the arousal system might be as involved in generating anxiety in PTSD as in generating manic states in bipolar disorder.

Of course, this ‘component part’ approach is already a large part of mental health research, but the RDoC project aims to combine these components into a system in which they can be mapped out and integrated.

It’s worth saying that this won’t be changing how psychiatrists treat their patients any time soon. DSM-style disorders will still be the order of the day, not least because a great deal of the evidence for the effectiveness of medication is based on giving people standard diagnoses.

It is also true to say that RDoC is currently little more than a plan – a bit like the Mars mission: you can see how it would be feasible but actually getting there seems a long way off. In fact, until now, the RDoC project has largely been considered an experimental project in thinking up alternative approaches.

The project was thought to be radical partly because it has many similarities to the approach taken by scientific critics of mainstream psychiatry, who have argued for a symptom-based approach to understanding mental health difficulties – an approach often rejected by the ‘diagnoses represent distinct diseases’ camp.

The NIMH has often been one of the staunchest supporters of the latter view, so the fact that it has put the RDoC front and centre is not only a slap in the face for the American Psychiatric Association and the DSM, it also heralds a massive change in how we might think of mental disorders in decades to come.
 

Link to NIMH announcement ‘Transforming Diagnosis’.

Race perception isn’t automatic

Last week’s column for BBC Future describes a neat social psychology experiment from an unlikely source. Three evolutionary psychologists reasoned that claims that we automatically categorise people by ethnicity must be wrong. Here’s how they set out to prove it. The original column is here.

For years, psychologists thought we instantly label each other by ethnicity. But one intriguing study proposes this is far from inevitable, with obvious implications for tackling racism.

When we meet someone we tend to label them in certain ways. “Tall guy” you might think, or “Ugly kid”. Lots of work in social psychology suggests that there are some categorisations that spring faster to mind. So fast, in fact, that they can be automatic. Sex is an example: we tend to notice if someone is a man or a woman, and remember that fact, without any deliberate effort. Age is another example. You can see this in the way people talk about others. If you said you went to a party and met someone, most people wouldn’t let you continue with your story until you said whether it was a man or a woman, and there’s a good chance they’d want to know how old they were too.

Unfortunately, a swathe of evidence from the 1980s and 1990s also seemed to suggest that race is an automatic categorisation, in that people effortlessly and rapidly identified and remembered which ethnic group an individual appeared to belong to. “Unfortunate”, because if perceiving race is automatic then it lays a foundation for racism, and appears to put a limit on efforts to educate people to be “colourblind”, or put aside prejudices in other ways.

Over a decade of research failed to uncover experimental conditions that could prevent people instinctively categorising by race, until a trio of evolutionary psychologists came along with a very different take on the subject. Now, it seems only fair to say that evolutionary psychologists have a mixed reputation among psychologists. As a flavour of psychology it has been associated with political opinions that tend towards the conservative. Often, scientific racists claim to base their views on some jumbled version of evolutionary psychology (scientific racism is racism dressed up as science, not racism based on science, in case you wondered). So it was a delightful surprise when researchers from one of the world centres for evolutionary psychology intervened in the debate on social categorisation, by conducting an experiment they claimed showed that labelling people by race was far less automatic and inevitable than all previous research seemed to show.

Powerful force

The research used something called a “memory confusion protocol”. This works by asking experiment participants to remember a series of pictures of individuals, who vary along various dimensions – for example, some have black hair and some blond, some are men, some women, etc. When participants’ memories are tested, the errors they make reveal something about how they judged the pictures of individuals – what sticks in their mind most and least. If a participant more often confuses a black-haired man with a blond-haired man, it suggests that the category of hair colour is less important than the category of gender (and similarly, if people rarely confuse a man for a woman, that also shows that gender is the stronger category).
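The logic of that error analysis can be sketched in a few lines of code. To be clear, everything below – the category names, the error pairs, the numbers – is invented purely for illustration and is not data from the study; it just shows how a pattern of misattributions reveals which category dominated encoding:

```python
def confusion_rate(errors, category):
    """Fraction of memory errors in which the confused pair shared `category`.

    A high rate means errors stay *within* that category, i.e. the
    category was a strong basis for how people were encoded.
    """
    within = sum(1 for target, recalled in errors
                 if target[category] == recalled[category])
    return within / len(errors)

# Fabricated error data: each pair is (person shown, person wrongly recalled).
errors = [
    ({"hair": "black", "gender": "m"}, {"hair": "blond", "gender": "m"}),
    ({"hair": "blond", "gender": "f"}, {"hair": "black", "gender": "f"}),
    ({"hair": "black", "gender": "f"}, {"hair": "blond", "gender": "f"}),
    ({"hair": "blond", "gender": "m"}, {"hair": "blond", "gender": "f"}),
]

# Errors mostly preserve gender but cross hair colour, so gender is
# the stronger category in this made-up data.
print(confusion_rate(errors, "gender"))  # 0.75
print(confusion_rate(errors, "hair"))    # 0.25
```

In the real experiment the categories of interest were race and team membership rather than hair colour, but the inference works the same way: whichever category the errors respect is the one doing the cognitive work.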

Using this protocol, the researchers tested the strength of categorisation by race, something all previous efforts had shown was automatic. The twist they added was to throw in another powerful psychological force – group membership. People had to remember individuals who wore either yellow or grey basketball shirts, and whose pictures were presented alongside statements indicating which team they were in. Without the shirts, the pattern of errors was clear: participants automatically categorised the individuals by their race (in this case: African American or Euro American). But with the coloured shirts, this automatic categorisation didn’t happen: people’s errors revealed that team membership had become the dominant category, not the race of the players.

It’s important to understand that the memory test was both a surprise – participants didn’t know it was coming up – and an unobtrusive measure of racial categorising. Participants couldn’t guess that the researchers were going to make inferences about how they categorised people in the pictures – so if they didn’t want to appear to perceive people on the basis of race, it wouldn’t be clear how they should change their behaviour to do this. Because of this we can assume we have a fairly direct measure of their real categorisation, unbiased by any desire to monitor how they appear.

So despite what dozens of experiments had appeared to show, this experiment created a situation where categorisation by race faded into the background. The explanation, according to the researchers, is that race is only important when it might indicate coalitional information – that is, whose team you are on. In situations where race isn’t correlated with coalition, it ceases to be important. This, they claim, makes sense from an evolutionary perspective. For most of our ancestors, age and gender would be important predictors of another person’s behaviour, but race wouldn’t – since most people lived in areas with no differences as large as the ones we associate with “race” today (a concept, incidentally, which has little currency among human biologists).

Since the experiment was published, the response from social psychologists has been muted. But supporting evidence is beginning to be reported, suggesting that the finding will hold. It’s an unfortunate fact of human psychology that we are quick to lump people into groups, even on the slimmest evidence. And once we’ve identified a group, it also seems automatic to jump to conclusions about what they are like. But this experiment suggests that although perceiving groups on the basis of race might be easy, it is far from inevitable.

Prescribe it again, Sam

We tend to think of Prozac as the first ‘fashionable’ psychiatric drug but it turns out popular memory is short because a tranquilizer called Miltown hit the big time thirty years before.

This is from a wonderful book called The Age of Anxiety: A History of America’s Turbulent Affair with Tranquilizers by Andrea Tone and it describes how the drug became a Hollywood favourite and even inspired its own cocktails.

Miltown was frequently handed out at parties and premieres, a kind of pharmaceutical appetizer for jittery celebrities. Frances Kaye, a publicity agent, described a movie party she attended at a Palm Springs resort. A live orchestra entertained a thousand-odd guests while a fountain spouted champagne against the backdrop of a desert sky. As partiers circulated, a doctor made rounds like a waiter, dispensing drugs to guests from a bulging sack. On offer were amphetamines and barbiturates, standard Hollywood party fare, but guests wanted Miltown. The little white pills “were passed around like peanuts,” Kaye remembered. What she observed about party pill popping was not unique. “They all used to go for ‘up pills’ or ‘down pills,'” one Hollywood regular noted. “But now it’s the ‘don’t-give-a-darn-pills.'”

The Hollywood entertainment culture transformed a pharmaceutical concoction into a celebrity fetish, a coveted commodity of the fad-prone glamour set. Female entertainers toted theirs in chic pill boxes designed especially for tranquilizers, which became, according to one celebrity, as ubiquitous at Hollywood parties as the climatically unnecessary mink coat…

Miltown even inspired a barrage of new alcoholic temptations, in which the pill was the new defining ingredient. The Miltown Cocktail was a Bloody Mary (vodka and tomato juice) spiked with a single pill, and a Guided Missile, popular among the late night crowd on the Sunset Strip, consisted of a double shot of vodka and two Miltowns. More popular still was the Miltini, a dry martini in which Miltown replaced the customary olive.

Andrea Tone’s book is full of surprising snippets about how tranquilisers and anti-anxiety drugs have affected our understanding of ourselves and our culture.

It’s very well researched and manages to hit that niche of being gripping for the non-specialist while being extensive enough that professionals will learn a lot.
 

Link to details for The Age of Anxiety book.

2013-04-27 Spike activity

Quick links from the past week in mind and brain news:

Psychiatry needs its Higgs boson moment, says an article in New Scientist which describes some interesting but disconnected findings suggesting it ain’t going to get one soon.

Wall Street Journal has an overenthusiastic article on how advances in genetics and neuroscience are ‘revolutionizing’ our understanding of violent behavior. Not quite but not a bad read in parts.

The new series of BBC Radio 4’s wonderful programme on key studies in psychology, Mind Changers, has just started. Streamed only, because the BBC think radio simulations are cute.

Reuters reports that fire kills dozens in Russian psychiatric hospital tragedy.

Author and psychologist Charles Fernyhough discusses how neuroscience is dealt with in literary fiction in a piece for The Guardian.

Nature profiles one of the few people doing gun violence research in the US – the wonderfully named emergency room doctor Garen Wintemute.

The Man With Uncrossed Eyes. Fascinating case study covered by Neuroskeptic.

Wired reports that scientists have built a baseball-playing robot with 100,000-neuron fake brain. To the bunkers!

“Let’s study Tamerlan Tsarnaev’s brain” – The now seemingly compulsory article that argues for some sort of pointless scientific investigation after some horrible tragedy appears in the Boston Globe. See also: Let’s study the Newtown shooter’s DNA.

Wired report from a recent conference on the medical potential of psychedelic drugs.

Adam Phillips, one of the most thoughtful and interesting of the new psychoanalyst writers, is profiled by Newsweek.

Deeper into genetic challenges to psychiatric diagnosis

For my recent Observer article I discussed how genetic findings are providing some of the best evidence that psychiatric diagnoses do not represent discrete disorders.

As part of that I spoke to Michael Owen, a psychiatrist and researcher based at Cardiff University, who has been leading lots of the rethink on the nature of psychiatric disorders.

As a young PhD student I sat in on lots of Prof Owen’s hospital ward rounds and learnt a great deal about how science bumps up against the real world of individuals’ lives.

One of the things that most interested me about Owen’s work is that, back in the day, he was working towards finding ‘the genetics of’ schizophrenia, bipolar and so on.

But since then he and his colleagues have gathered a great deal of evidence that certain genetic differences raise the chances of developing a whole range of difficulties – from epilepsy to schizophrenia to ADHD – rather than being associated with any one disorder.

As many of these genetic changes can affect brain development in subtle ways, it is looking increasingly likely that genetics determines how sensitive we are to life events as the brain grows and develops – suggesting a neurodevelopmental theory of these disorders that considers both neurobiology and life experience as equally important.

I asked Owen several questions for the Observer article but couldn’t include his answers in full, so I’ve reproduced them below, as they’re a fascinating insight into how genetics is challenging psychiatry.

I remember you looking for the ‘genes for schizophrenia’ – what changed your mind?

For most of our genetic studies we used conventional diagnostic criteria such as schizophrenia, bipolar disorder and ADHD. However, what we then did was look for overlap between the genetic signals across diagnostic categories and found that these were striking. This occurred not just for schizophrenia and bipolar disorder, which to me as an adult psychiatrist who treats these conditions was not surprising, but also between adult disorders like schizophrenia and childhood disorders like autism and ADHD.

What do the current categories of psychiatric diagnosis represent?

The current categories were based on the categories in general use by psychiatrists. They were formalized to make them more reliable and have been developed over the years to take into account developments in thinking and practice. They are broad groupings of patients based upon the clinical presentation especially the most prominent symptoms and other factors such as age at onset, and course of illness. In other words they describe syndromes (clinically recognizable features that tend to occur together) rather than distinct diseases. They are clinically useful in so far as they group patients in regard to potential treatments and likely outcome. The problem is that many doctors and scientists have come to assume that they do in fact represent distinct diseases with separate causes and distinct mechanisms. In fact the evidence, not just from molecular genetics, suggests that there is no clear demarcation between diagnostic categories in symptoms or causes (genetic or environmental).

There is an emerging belief which has been stimulated by recent genetic findings that it is perhaps best to view psychiatric disorders more in terms of constellations of symptoms and syndromes, which cross current diagnostic categories and view these in dimensional terms. This is reflected by the inclusion of dimensional measures in DSM5, which, it is hoped, will allow these new views to stimulate research and to be developed based on evidence.

In the meantime the current categories, slightly modified, remain the focus of DSM-5. But I think that there is a much greater awareness now that these are provisional and will be replaced when the weight of scientific evidence is sufficiently strong.

The implications of recent findings are probably more pressing for research where there is a need to be less constrained by current diagnostic categories and to refocus onto the mechanisms underlying symptom domains rather than diagnostic categories. This in turn might lead to new diagnostic systems and markers. The discovery of specific risk genes that cut across diagnostic groupings offers one approach to investigating this that we will take forward in Cardiff.

There is a lot of talk of endophenotypes and intermediate phenotypes that attempt to break down symptoms into simpler form of difference and dysfunction in the mind and brain. How will we know when we have found a valid one?

Research into potential endophenotypes has clear intuitive appeal but I think interpretation of the findings is hampered by a couple of important conceptual issues. First, as you would expect from what I have already said, I don’t think we can expect to find endophenotypes for a diagnostic group as such. Rather we might expect them to relate to specific subcomponents of the syndrome (symptoms, groups of symptoms etc).

Second, the assumption that a putative endophenotype lies on the disease pathway (ie is intermediate between say gene and clinical phenotype) has to be proved and cannot just be assumed. For example there has been a lot of work on cognitive dysfunction and brain imaging in psychiatry and widespread abnormalities have been reported. But it cannot be assumed that an individual cognitive or imaging phenotype lies on the pathway to a particular clinical disorder or component of the disorder. This has to be proven either through an intervention study in humans or model systems (both currently challenging), or statistically which requires much larger studies than are usually undertaken. I think that many of the findings from imaging and cognition studies will turn out to be part of the broad phenotype resulting from whatever brain dysfunction is present and not on the causal pathway to psychiatric disorder.

Using the tools of biological psychiatry you have come to a conclusion often associated with psychiatry’s critics (that the diagnostic categories do not represent specific disorders). What reactions have you encountered from mainstream psychiatry?

I have found that most psychiatrists working at the front line are sympathetic. In fact psychiatrists already treat symptoms rather than diagnoses. For example they will consider prescribing an antipsychotic if someone is psychotic regardless of whether the diagnosis is schizophrenia or bipolar disorder. They also recognize that many patients don’t fall neatly into current categories. For example many patients have symptoms of both schizophrenia and bipolar disorder sometimes at the same time and sometimes at different time points. Also patients who fulfill diagnostic criteria for schizophrenia in adulthood often have histories of childhood diagnoses such as ADHD or autistic spectrum.

The inertia comes in part from the way in which services are structured. In particular the distinction between child and adult services has many justifications but it leads to patients with long-term problems being transferred to a new team at a vulnerable age, receiving different care and sometimes a change in diagnosis. Many of us now feel that we should develop services that span late childhood and early adulthood to ensure continuity over this important period. There are also international differences. So in the US mood disorders (including bipolar) are often treated by different doctors in different clinics to schizophrenia.

There is also a justifiable unwillingness to discard the current system until there is strong evidence for a better approach. The inclusion of dimensional measures in DSM5 reflects the acceptance of the psychiatric establishment that change is needed and acknowledges the likely direction of travel. I think that psychiatry’s acknowledgment of its diagnostic shortcomings is a sign of its maturity. Psychiatric disorders are the most complex in medicine and some of the most disabling. We have treatments that help some of the people some of the time and we need to target these to the right people at the right time. By acknowledging the shortcomings of our current diagnostic categories we are recognizing the need to treat patients as individuals and the fact that the outcome of psychiatric disorders is highly variable.

Like a part of me is missing

Matter magazine has an amazing article about the world of underground surgery for healthy people who feel that their limb is not part of their body and needs to be removed.

The condition is diagnosed as body integrity identity disorder, or BIID, but it has a whole range of interests and behaviours associated with it, and people with the desire often do not feel it is a disorder in itself.

Needless to say, surgeons have not been lining up to amputate completely healthy limbs but there are clinics around the world that do the operations illegally.

The Matter article follows someone as they obtain one of these procedures and discusses the science of why someone might feel so uncomfortable about having a working limb they were born with.

But there is a particularly eye-opening bit where it mentions something fascinating about the first scientific article that discussed the condition, published in 1977.

One of the co-authors of the 1977 paper was Gregg Furth, who eventually became a practising psychologist in New York. Furth himself suffered from the condition and, over time, became a major figure in the BIID underground. He wanted to help people deal with their problem, but medical treatment was always controversial — often for good reason. In 1998, Furth introduced a friend to an unlicensed surgeon who agreed to amputate the friend’s leg in a Tijuana clinic. The patient died of gangrene and the surgeon was sent to prison. A Scottish surgeon named Robert Smith, who practised at the Falkirk and District Royal Infirmary, briefly held out legal hope for BIID sufferers by openly performing voluntary amputations, but a media frenzy in 2000 led British authorities to forbid such procedures. The Smith affair fuelled a series of articles about the condition — some suggesting that merely identifying and defining such a condition could cause it to spread, like a virus.

Undeterred, Furth found a surgeon in Asia who was willing to perform amputations for about $6,000. But instead of getting the surgery himself, he began acting as a go-between, putting sufferers in touch with the surgeon.

 

Link to Matter article on the desire to be an amputee.

What does it take to spark prejudice?

Short answer: surprisingly little. Continuing the theme of revisiting classic experiments in psychology, last week’s BBC Future column was on Tajfel’s Minimal Group Paradigm. The original is here. Next week we’re going to take this foundation and look at some evolutionary psychology of racism (hint: it won’t be what you’d expect).

How easy is it for the average fair-minded person to form biased, preconceived views about other groups? Surprisingly easy, according to psychology studies.

One of the least charming but most persistent aspects of human nature is our capacity to hate people who are different. Racism, sexism, ageism, it seems like all the major social categories come with their own “-ism”, each fuelled by regrettable prejudice and bigotry.

Our tendency for groupness appears to be so strong there seems little more for psychology to teach us. It’s not as if we need it proven that favouring our group over others is a common part of how people think – history provides all the examples we need. But one psychologist, Henri Tajfel, taught us something important. He showed exactly how little encouragement we need to treat people in a biased way because of the group they are in.

Any phenomenon like this in the real world comes entangled with a bunch of other, complicating phenomena. When we see prejudice in the everyday world it is hard to separate out psychological biases from the effects of history, culture and even pragmatism (sometimes people from other groups really are out to get you).

As a social psychologist, Tajfel was interested in the essential conditions of group prejudice. He wanted to know what it took to turn the average fair-minded human into their prejudiced cousin.

He wanted to create a microscope for looking at how we think when we’re part of a group, even when that group has none of the history, culture or practical importance that groups normally do. To look at this, he devised what has become known as the “minimal group paradigm”.

The minimal group paradigm works like this: participants in the experiment are divided into groups on some arbitrary basis. Maybe eye-colour, maybe what kind of paintings they like, or even by tossing a coin. It doesn’t matter what the basis for group membership is, as long as everyone gets a group and knows what it is. After being told they are in a group, participants are divided up so that they are alone when they make a series of choices about how rewards will be shared among other people in the groups. From this point on, group membership is entirely abstract. Nobody else can be seen, and other group members are referred to by an anonymous number. Participants make choices such as “Member Number 74 (group A) to get 10 points and Member 44 (group B) to get 8 points”, versus “Member Number 74 (group A) to get 2 points and Member 44 (group B) to get 6 points”, where the numbers are points which translate into real money.

You won’t be surprised to learn that participants show favouritism towards their own group when dividing the money. People in group A were more likely to choose the first option I gave above, rather than the second. What is more surprising is that people show some of this group favouritism even when it ends up costing them points – so people in group B sometimes choose the second option, or options like it, even though it provides fewer points than the first option. People tend to opt for the maximum total reward (as you’d expect from the fair-minded citizen), but they also show a tendency to maximise the difference between the groups (what you’d expect from the prejudiced cousin).
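The competing strategies described above can be sketched in code. The point values here are invented for illustration – they are not Tajfel’s actual allocation matrices – but they show how the same set of options separates a “fair-minded” chooser from a “prejudiced” one:

```python
# Each option allocates points as (in-group member, out-group member).
# Values are made up for illustration, not taken from Tajfel's matrices.
options = [(10, 8), (12, 12), (7, 1)]

# Fair-minded strategy: maximise the total reward handed out.
max_joint = max(options, key=lambda o: o[0] + o[1])

# Prejudiced strategy: maximise the gap between the groups,
# even at a cost to one's own group.
max_diff = max(options, key=lambda o: o[0] - o[1])

# Pure in-group interest: maximise one's own group's points.
max_ingroup = max(options, key=lambda o: o[0])

print(max_joint)    # (12, 12)
print(max_diff)     # (7, 1)
print(max_ingroup)  # (12, 12)
```

Note that the difference-maximising choice, (7, 1), gives the in-group member fewer points than the fair option does – which is exactly the self-costly favouritism the experiments detected.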

The effect may be small, but this is a situation where the groups have been plucked out of the air by the experimenters. Every participant knows which group he or she is in, but they also know that they weren’t in this group before they started the experiment, that their assignment was arbitrary or completely random, and that the groups aren’t going to exist in any meaningful way after the experiment. They also know that their choices won’t directly affect them (they are explicitly told that they won’t be given any choices to make about themselves). Even so, this situation is enough to evoke favouritism.

So, it seems we’ll take the most minimal of signs as a cue to treat people differently according to which group they are in. Tajfel’s work suggests that in-group bias is as fundamental to thinking as the act of categorisation itself. If we want to contribute to a fairer world we need to be perpetually on guard to avoid letting this instinct run away with itself.

A stiff moment in scientific history

In 1983 psychiatrist Giles Brindley demonstrated the first drug treatment for erectile dysfunction in a rather unique way. He took the drug and demonstrated his stiff wicket to the audience mid-way through his talk.

Scientific journal BJU International has a pant-wettingly hilarious account of the events of that day which made both scientific and presentation history.

Professor Brindley, still in his blue track suit, was introduced as a psychiatrist with broad research interests. He began his lecture without aplomb. He had, he indicated, hypothesized that injection with vasoactive agents into the corporal bodies of the penis might induce an erection. Lacking ready access to an appropriate animal model, and cognisant of the long medical tradition of using oneself as a research subject, he began a series of experiments on self-injection of his penis with various vasoactive agents, including papaverine, phentolamine, and several others. (While this is now commonplace, at the time it was unheard of). His slide-based talk consisted of a large series of photographs of his penis in various states of tumescence after injection with a variety of doses of phentolamine and papaverine. After viewing about 30 of these slides, there was no doubt in my mind that, at least in Professor Brindley’s case, the therapy was effective. Of course, one could not exclude the possibility that erotic stimulation had played a role in acquiring these erections, and Professor Brindley acknowledged this.

The Professor wanted to make his case in the most convincing style possible. He indicated that, in his view, no normal person would find the experience of giving a lecture to a large audience to be erotically stimulating or erection-inducing. He had, he said, therefore injected himself with papaverine in his hotel room before coming to give the lecture, and deliberately wore loose clothes (hence the track-suit) to make it possible to exhibit the results. He stepped around the podium, and pulled his loose pants tight up around his genitalia in an attempt to demonstrate his erection.

At this point, I, and I believe everyone else in the room, was agog. I could scarcely believe what was occurring on stage. But Prof. Brindley was not satisfied. He looked down sceptically at his pants and shook his head with dismay. ‘Unfortunately, this doesn’t display the results clearly enough’. He then summarily dropped his trousers and shorts, revealing a long, thin, clearly erect penis. There was not a sound in the room. Everyone had stopped breathing.

But the mere public showing of his erection from the podium was not sufficient. He paused, and seemed to ponder his next move. The sense of drama in the room was palpable. He then said, with gravity, ‘I’d like to give some of the audience the opportunity to confirm the degree of tumescence’. With his pants at his knees, he waddled down the stairs, approaching (to their horror) the urologists and their partners in the front row. As he approached them, erection waggling before him, four or five of the women in the front rows threw their arms up in the air, seemingly in unison, and screamed loudly. The scientific merits of the presentation had been overwhelmed, for them, by the novel and unusual mode of demonstrating the results.

The screams seemed to shock Professor Brindley, who rapidly pulled up his trousers, returned to the podium, and terminated the lecture. The crowd dispersed in a state of flabbergasted disarray. I imagine that the urologists who attended with their partners had a lot of explaining to do. The rest is history. Prof Brindley’s single-author paper reporting these results was published about 6 months later.

 

Link to full account of that fateful day (via @DrPetra)

Amid the borderlands

I’ve got an article in The Observer on how some of the best evidence against the idea that psychiatric diagnoses like ‘schizophrenia’ describe discrete ‘diseases’ comes not from the critics of psychiatry, but from medical genetics.

I found this a fascinating outcome because it puts both sides of the polarised ‘psychiatry divide’ in quite an uncomfortable position.

The “mental illness is a genetic brain disease” folks find that their evidence of choice – molecular genetics – has undermined the validity of individual diagnoses, while the “mental illness is socially constructed” folks find that the best evidence for their claims comes from neurobiology studies.

The evidence that underlies this uncomfortable position comes from recent findings that genetic risks originally thought to be specific to individual diagnoses turn out to be risks for a whole load of later difficulties – from epilepsy to schizophrenia to learning disability.

In other words, the genetic risk seems to be for neurodevelopmental difficulties, but whether and how they appear depends on lots of other factors that occur during your life.

The neurobiological evidence has not ‘reduced’ human experience to chemicals, but shown that individual life stories are just as important.
 

Link to Observer article.
Link to brief scientific review article on the topic.

Why money won’t buy you happiness

Here’s my column for BBC Future from last week. It was originally titled ‘Why money can’t buy you happiness‘, but I’ve just realised that it would be more appropriately titled if I used a “won’t” rather than a “can’t”. There’s a saying that people who think money can’t buy happiness don’t know where to shop. This column says, more or less, that knowing where to shop isn’t the problem, it’s shopping itself.

Hope a lottery win will make you happy forever? Think again, evidence suggests a big payout won’t make that much of a difference. Tom Stafford explains why.

 

Think a lottery win would make you happy forever? Many of us do, including a US shopkeeper who just scooped $338 million in the Powerball lottery – the fourth largest prize in the game’s history. Before the last Powerball jackpot in the United States, tickets were being snapped up at a rate of around 130,000 a minute. But before you place all your hopes and dreams on another ticket, here’s something you should know. All the evidence suggests a big payout won’t make that much of a difference in the end.

Winning the lottery isn’t a ticket to true happiness, however enticing it might be to imagine never working again and being able to afford anything you want. One study famously found that people who had big wins on the lottery ended up no happier than those who had bought tickets but didn’t win. It seems that as long as you can afford to avoid the basic miseries of life, having loads of spare cash doesn’t make you very much happier than having very little.

One way of accounting for this is to assume that lottery winners get used to their new level of wealth, and simply adjust back to a baseline level of happiness – something called the “hedonic treadmill”. Another explanation is that our happiness depends on how we feel relative to our peers. If you win the lottery you may feel richer than your neighbours, and think that moving to a mansion in a new neighbourhood would make you happy, but then you look out of the window and realise that all your new friends live in bigger mansions.

Both of these phenomena undoubtedly play a role, but the deeper mystery is why we’re so bad at knowing what will give us true satisfaction in the first place. You might think we should be able to predict this, even if it isn’t straightforward. Lottery winners could take account of the hedonic treadmill and social comparison effects when they spend their money. So why, in short, don’t they spend their winnings in ways that buy happiness?

Picking up points

Part of the problem is that happiness isn’t a quality like height, weight or income that can be easily measured and given a number (whatever psychologists try and pretend). Happiness is a complex, nebulous state that is fed by transient simple pleasures, as well as the more sustained rewards of activities that only make sense from a perspective of years or decades. So, perhaps it isn’t surprising that we sometimes have trouble acting in a way that will bring us the most happiness. Imperfect memories and imaginations mean that our moment-to-moment choices don’t always reflect our long-term interests.

It even seems that the very act of trying to measure happiness can distract us from what might make us most happy. An important study by Christopher Hsee of the Chicago School of Business and colleagues showed how this can happen.

Hsee’s study was based around a simple choice: participants were offered the option of working at a 6-minute task for a gallon of vanilla ice cream, or a 7-minute task for a gallon of pistachio ice cream. Under normal conditions, less than 30% of people chose the 7-minute task, mainly because few liked pistachio ice cream more than vanilla. For happiness scholars, this isn’t hard to interpret – those who preferred pistachio ice cream had enough motivation to choose the longer task. But the experiment had a vital extra comparison. Another group of participants were offered the same choice, but with an intervening points system: the choice was between working for 6 minutes to earn 60 points, or 7 minutes to earn 100 points. With 50-99 points, participants were told they could receive a gallon of vanilla ice cream. For 100 points they could receive a gallon of pistachio ice cream. Although the actions and the effects are the same, introducing the points system dramatically affected the choices people made. Now, the majority chose the longer task and earned the 100 points, which they could spend on the pistachio reward – even though the same proportion (about 70%) still said they preferred vanilla.

Based on this, and other experiments, Hsee concluded that participants were maximising their points at the expense of maximising their happiness. The points are just a medium – something that allows us to get the thing that will create enjoyment. But because the points are so easy to measure and compare – 100 is obviously much more than 60 – this overshadows our knowledge of what kind of ice cream we enjoy most.

So next time you are buying a lottery ticket because of the amount it is paying out, or choosing wine by looking at the price, or comparing jobs by looking at the salaries, you might do well to think hard about how much the bet, the wine, or the job will really promote your happiness, rather than simply relying on the numbers to do the comparison. Money doesn’t buy you happiness, and part of the reason for that might be that money itself distracts us from what we really enjoy.

 

A cuckoo’s nest museum

The New York Times reports that the psychiatric hospital used as the backdrop for the 1975 film One Flew Over the Cuckoo’s Nest has been turned into a museum of mental health.

In real life the institution was Oregon State Hospital and the article is accompanied by a slide show of images from the hospital and museum.

The piece also mentions some fascinating facts about the film – not least that some of the actors were actually genuine employees and patients in the hospital.

But the melding of real life and art went far beyond the film set. Take the character of John Spivey, a doctor who ministers to Jack Nicholson’s doomed insurrectionist character, Randle McMurphy. Dr. Spivey was played by Dr. Dean Brooks, the real hospital’s superintendent at the time.

Dr. Brooks read for the role, he said, and threw the script to the floor, calling it unrealistic — a tirade that apparently impressed the director, Milos Forman. Mr. Forman ultimately offered him the part, Dr. Brooks said, and told the doctor-turned-actor to rewrite his lines to make them medically correct. Other hospital staff members and patients had walk-on roles.

 

Link to NYT article ‘Once a ‘Cuckoo’s Nest,’ Now a Museum’.

Gotham psychologist

Andrea Letamendi is a clinical psychologist who specialises in the treatment and research of traumatic stress disorders but also has a passionate interest in how psychological issues are depicted in comics.

She puts her thoughts online in her blog Under the Mask, which also discusses social issues in fandom and geek culture.

Recently, she was paid a wonderful compliment when she appeared in Batgirl #16 as Barbara Gordon’s psychologist.
 

I’ve always been of the opinion that comics are far more psychologically complex than they’re given credit for. In fact, one of my first non-academic articles was about the depiction of madness in Batman.

It’s also interesting that comics are now starting to explicitly address psychological issues. It’s not always done entirely successfully, it has to be said.

Darwyn Cooke’s Ego storyline looked at Batman’s motivations through his traumatic past but shifted between subtle brilliance and clichés about mental illness in a slightly unsettling way.

Andrea Letamendi has a distinctly more nuanced take, however, and if you would like to know more about her work with superheroes do check the interview on Nerd Span.
 

Link to Letamendi’s Under the Mask (on Twitter as @ArkhamAsylumDoc)
Link to Nerd Span interview.

Hallucinating sheet music

Oliver Sacks has just published an article on ‘Hallucinations of musical notation’ in the neurology journal Brain that recounts eight cases of illusory sheet music escaping into the world.

The article makes the interesting point that the hallucinated musical notation is almost always nonsensical – either unreadable or not describing any listenable music – as described in this case study.

Arthur S., a surgeon and amateur pianist, was losing vision from macular degeneration. In 2007, he started ‘seeing’ musical notation for the first time. Its appearance was extremely realistic, the staves and clefs boldly printed on a white background ‘just like a sheet of real music’, and Dr. S. wondered for a moment whether some part of his brain was now generating his own original music. But when he looked more closely, he realized that the score was unreadable and unplayable. It was inordinately complicated, with four or six staves, impossibly complex chords with six or more notes on a single stem, and horizontal rows of multiple flats and sharps. It was, he said, ‘a potpourri of musical notation without any meaning’. He would see a page of this pseudo-music for a few seconds, and then it would suddenly disappear, replaced by another, equally nonsensical page. These hallucinations were sometimes intrusive and might cover a page he was trying to read or a letter he was trying to write.

Though Dr. S. has been unable to read real musical scores for some years, he wonders, as did Mrs. J., whether his lifelong immersion in music and musical scores might have determined the form of his hallucinations.

Sadly, the article is locked behind a paywall. However, you can always request it via the #icanhazpdf hashtag on Twitter.
 

Link to locked article on ‘Hallucinations of musical notation’.

The postmortem portraits of Phineas Gage

A new artform has emerged – the post-mortem neuroportrait. Its finest subject, Phineas Gage.

Gage was a worker extending the tracks of the great railways until he suffered the most spectacular injury. As he was setting a gunpowder charge in a rock with a large tamping iron, the powder was lit by an accidental spark. The iron was launched through his skull.

He became famous in neuroscience because he lived – rare for the time – and had psychological changes as a result of his neurological damage.

His story has been better told elsewhere but the interest has not died – studies on Gage’s injury have continued to the present day.

There is a scientific veneer, of course, but it’s clear that the fascination with the freak Phineas has its own morbid undercurrents.

Image from Wikipedia. Click for source.

The image is key.

The first such picture was constructed with nothing more than pen and ink. Gage’s doctor John Harlow sketched the skull, which he had acquired after the patient’s death.

This Gage is forever fleshless, the iron stuck mid-flight, the shattered skull frozen as it fragments.

Harlow’s sketch is the original and the originator. The first impression of Gage’s immortal soul.

Gage rested as this rough sketch for over 100 years but he would rise again.

In 1994, a team led by neuroscientist Hanna Damasio used measurements of Gage’s skull to trace the path of the tamping iron and reconstruct its probable effect on the brain.

Gage’s disembodied skull appears as a strobe lit danse macabre, the tamping iron turned into a bolt of pure digital red and Gage’s brain, a deep shadowy grey.

It made Gage a superstar but it sealed his fate.

Every outing needed a freakier Phineas. Like a low-rent celebrity, each new exposure demanded something more shocking.

A 2004 study by Peter Ratiu and Ion-Florin Talos depicted Gage alongside his actual cranium – his digital skull screaming as a perfect blue iron pushed through his brain and shattered his face – the disfigurement now a gory new twist to the portrait.

In contrast, his human remains are peaceful – unmoved by the horrors inflicted on their virtual twin.

But the most recent Gage is the most otherworldly. A study by John Darrell Van Horn and colleagues examined how the path of the tamping iron would have affected the strands of white matter – the “brain’s wiring” – that connect cortical areas.

Image from Van Horn et al. (2012) PLoS One 7(5):e37454.

A slack-jawed Gage is now pierced by a ghostly iron bar that passes almost silently through his skull.

Gage himself is equally supernatural.

Blank white eyes float lifelessly in his eye sockets – staring into the digital blackness.

His white matter tracts appear within his cranium but are digitally dyed and seem to resemble multi-coloured hair standing on end like the electrified mop of a fairground ghoul.

But as the immortal Gage has become more horrifying over time, living portraits of the railwayman have been discovered. They show an entirely different side to the shattered skull celebrity.

To date, two portraits have been identified. They both show a ruggedly handsome, well-dressed man.

He has gentle flesh. Rather than staring into blackness, he looks at us.

Like a 19th century auto-whaler holding his self-harpoon, he grips the tamping iron, proud and defiant.

I prefer this living Phineas.

He does not become more alien with every new image.

He is at peace with a brutal, chaotic world.

He knows what he has lived through.

Fuck the freak flag, he says.

I’m a survivor.

A new horizon of sex and gender

Image from Wikipedia. Click for source.

If you only listen to one radio programme this week, make it the latest edition of BBC Radio 4’s Analysis on the under-explored science of gender.

The usual line goes that ‘sex is biological while gender is social’ – meaning that while genetics determines our sex, how masculine or feminine we are is determined by specific cultural practices.

It turns out to be a little more complicated than this. It has long been known (although frequently forgotten) that typical sex markers like body shape and genitalia are actually quite diverse, to the point of being ambiguous in some people.

Similarly, while genetics is considered the ultimate arbiter of sex, with XX indicating female and XY indicating male, variations such as XYY, XXY and XXX are surprisingly common.

On the other hand, there is evidence that some gender-related behaviours may be related to the biology of development and not solely to cultural factors.

But even with these caveats considered, what gender we ‘feel’ also turns out to be subject to a great deal of variation, with some people saying they have the gender of another sex, or that their gender is fluid, or that they have no gender at all.

The latest edition of Analysis explores this in detail, looking at how we can understand ‘disorders’ of gender in this context, what it means to be transgender, and whether we should just dump the whole concept of one-or-the-other gender completely.

A genuinely challenging, horizon-pushing programme.
 

Link to programme page with streamed audio.
mp3 of programme.

When your actions contradict your beliefs

Last week’s BBC Future column. The original is here. Classic research, digested!

If at first you don’t succeed, lower your standards. And if you find yourself acting out of line with your beliefs, change them. This sounds like motivational advice from one of the more cynical self-help books, or perhaps a Groucho Marx line (“Those are my principles, and if you don’t like them… well, I have others…”), but in fact it is a caricature of one of the most famous theories in social psychology.

Leon Festinger’s Dissonance Theory is an account of how our beliefs rub up against each other, an attempt at a sort of ecology of mind. Dissonance Theory offers an explanation of topics as diverse as why oil company executives might not believe in climate change, why army units have brutal initiation ceremonies, and why famous books might actually be boring.

The classic study on dissonance theory was published by Festinger and James Carlsmith in 1959. You can find a copy thanks to the Classics in the History of Psychology archive. I really recommend reading the full thing. Not only is it short, but it is full of enjoyable asides. Back in the day psychology research was a lot more fun to write up.

Festinger and Carlsmith were interested in testing what happened when people acted out of line with their beliefs. To do this, they made their participants spend an hour doing two excruciatingly boring tasks. The first task was filling a tray with spools, emptying it, then filling it again (and so on). The second was turning 48 small pegs a quarter-turn clockwise; and then once that was finished, going back to the beginning and doing another quarter-turn for each peg (and so on). Only after this tedium, and at the point at which the participants believed the experiment was over, did the real study get going. The experimenter said that they needed someone to fill in at the last minute and explain the tasks to the next subject. Would they mind? And also, could they make the points that “It was very enjoyable”, “I had a lot of fun”, “I enjoyed myself”, “It was very interesting”, “It was intriguing”, and “It was exciting”?

Of course the “experiment” was none of these things. But, being good people, with some pleading if necessary, they all agreed to explain the experiment to the next participant and make these points. The next participant was, of course, a confederate of the experimenter. We’re not told much about her, except that she was an undergraduate specifically hired for the role. The fact that all 71 participants in the experiment were male, and, that one of the 71 had to be excluded from the final analysis because he demanded her phone number so he could explain things further, suggests that Festinger and Carlsmith weren’t above ensuring that there were some extra motivational factors in the mix.

Money talks

For their trouble, the participants were paid $1, $20, or nothing. After explaining the task the original participants answered some questions about how they really felt about the experiment. At the time, many psychologists would have predicted that the group paid the most would be affected the most – if our feelings are shaped by rewards, the people paid $20 should be the ones who said they enjoyed it the most.

In fact, people paid $20 tended to feel the same about the experiment as the people paid nothing. But something strange happened with the people paid $1. These participants were more likely to say they really did find the experiment enjoyable. They judged the experiment as more important scientifically, and had the highest desire to participate in future similar experiments. Which is weird, since nobody should really want to spend another hour doing mundane, repetitive tasks.

Festinger’s Dissonance theory explains the result. The “Dissonance” is between the actions of the participants and their beliefs about themselves. Here they are, nice guys, lying to an innocent woman. Admittedly there are lots of other social forces at work – obligation, authority, even attraction. Festinger’s interpretation is that these things may play a role in how the participants act, but they can’t be explicitly relied upon as reasons for acting. So there is a tension between their belief that they are a nice person and the knowledge of how they acted. This is where the cash payment comes in. People paid $20 have an easy rationalisation to hand. “Sure, I lied”, they can say to themselves, “but I did it for $20”. The men who got paid the smaller amount, $1, can’t do this. Giving the money as a reason would make them look cheap, as well as mean. Instead, the story goes, they adjust their beliefs to be in line with how they acted. “Sure, the experiment was kind of interesting, just like I told that girl”, “It was fun, I wouldn’t mind being in her position” and so on.

So this is cognitive dissonance at work. Normally it should be a totally healthy process – after all, who could object to people being motivated to reduce contradictions in their beliefs (philosophers even make a profession out of this)? But in circumstances where some of our actions or our beliefs exist for reasons which are too complex, too shameful, or too nebulous to articulate, it can lead to us changing perfectly valid beliefs, such as how boring and pointless a task was.

Fans of cognitive dissonance will tell you that this is why people forced to defend a particular position – say because it is their job – are likely to end up believing it. It can also suggest a reason for why military services, high school sports teams and college societies have bizarre and punishing initiation rituals. If you’ve been through the ritual, dissonance theory predicts, you’re much more likely to believe the group is a valuable one to be a part of (the initiation hurt, and you’re not a fool, so it must have been worth it, right?).

For me, I think dissonance theory explains why some really long books have such good reputations, despite the fact that they may be as repetitive and pointless as Festinger’s peg task. Get to the end of a three-volume, several-thousand-page conceptual novel and you’re faced with a choice: either you wasted your time and money, and you feel a bit of a fool; or the novel is brilliant and you are an insightful consumer of literature. Dissonance theory pushes you towards the latter interpretation, and so swells the crowd of people praising a novel that would be panned if it was 150 pages long.

Changing your beliefs to be in line with how you acted may not be the most principled approach. But it is certainly easier than changing how you acted.