A Shorter history of psychiatric diagnosis

The Wall Street Journal has an excellent article by historian of psychiatry, Edward Shorter, about the raft of new changes in the proposed revision of the DSM-V ‘psychiatric bible’ and how they reflect our changing ideas about mental illness.

For some reason the piece has been given the stupid title of ‘Why Psychiatry Needs Therapy’, which is a mystery as it doesn’t mention anything along these lines.

Shorter is one of the most highly-respected historians of the field, well known for his critical approach, and this article has his trademark no-holds-barred criticisms of psychiatry.

In the 1950s and ’60s, when psychiatry was still under the influence of the European scientific tradition, reasonably accurate diagnoses still sat at center stage. If you felt blue, uneasy and generally jumpy, “nerves” was a common diagnosis. For the psychotherapeutically oriented psychiatrists of the day, “psychoneurosis” was the equivalent of nerves. There was no point in breaking these terms down: clinicians and patients alike understood “a case of nerves,” or a “nervous breakdown.”

Our psychopathological lingo today offers little improvement on these sturdy terms. A patient with the same symptoms today might be told he has “social anxiety disorder” or “seasonal affective disorder.” The increased specificity is spurious. There is little risk of misdiagnosis, because the new disorders all respond to the same drugs, so in terms of treatment, the differentiation is meaningless and of benefit mainly to pharmaceutical companies that market drugs for these niches.

Link to WSJ article on Shorter on psychiatric diagnosis.

Area responsible for neuroscience errors located

I liked this funny and recursive brain diagram from tech journalist Quinn Norton that makes fun of our tendency to be wowed by brain scans.

The diagram has a good evidence base. A 2008 study found that adding a picture of a brain scan to a scientific argument about human nature made the general public more likely to believe it, even if the brain activity wasn’t relevant to the point being made.

Another study published in the same year found that simply adding an irrelevant sentence about the brain had a similar effect.

Thankfully, Norton has now located the brain area responsible for our problem with understanding bogus neuroscience explanations.

Link to recursive brain diagram.

Brain scan diagnoses misunderstanding of diagnosis

There have been a lot of media stories in the past week about a study from the US military supposedly showing that a new form of brain scan can diagnose post-traumatic stress disorder (PTSD) in army veterans. Although interesting, the study doesn’t show any such thing and this is an example of a common misconception that regularly appears as a form of ‘new biological test diagnoses mental disorder’ story.

The study used a form of brain scan called MEG, essentially a high-tech form of EEG that picks up magnetic fluctuations from the brain’s electrical activity rather than the electrical signals themselves, and found that the coherence of signals across the resting brain was reliably different in vets diagnosed with PTSD by interview, compared to healthy people without mental illness.

Crucially, the scan didn’t pick out cases of PTSD among people with a range of mental illnesses; it just found a difference between people with PTSD and healthy people. But this is not a diagnosis, it’s just a difference.

If you’re not clear on this distinction, imagine that I claimed I found a new way of diagnosing malaria in under 2 seconds – I just measure body temperature and if the person has a fever, I decide they have malaria.

I hope you would point out that this is ridiculous, because people with flu can have fever, as can people with typhoid, mumps, dengue and so on.

My test would genuinely distinguish between people with malaria and healthy people, but in no way is it a diagnosis.

And this is the same situation with this new PTSD study. The difference could be due to levels of anxiety, which are common in many mental disorders, or to the experience of life-threatening situations, regardless of whether they led to PTSD, or to any other factor I’ve not accounted for.

In other words, as with my fever example, the difference could be common to many different problems and not specific to the diagnosis I’m studying. To be a useful diagnostic tool, my method would need to make a differential diagnosis – i.e. specifically ‘pick out’ the disorder among many.
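To make the fever analogy concrete, here is a toy simulation (all the temperatures, conditions and numbers are invented for illustration) of a ‘malaria test’ that only ever compares patients with healthy people:

```python
import random

random.seed(42)

# Toy model: body temperature in °C. Healthy people sit around 36.8;
# malaria, flu and typhoid all push temperature up to around 38.5.
def sample_temp(condition):
    base = 36.8 if condition == "healthy" else 38.5
    return base + random.gauss(0, 0.4)

def fever_test(temp):
    """'Diagnose' malaria whenever temperature exceeds 38 °C."""
    return temp > 38.0

# Against healthy controls, the test looks impressive...
malaria = [sample_temp("malaria") for _ in range(1000)]
healthy = [sample_temp("healthy") for _ in range(1000)]
sensitivity = sum(map(fever_test, malaria)) / 1000
specificity = 1 - sum(map(fever_test, healthy)) / 1000
print(f"vs healthy: sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")

# ...but it cannot tell malaria apart from other causes of fever.
flu = [sample_temp("flu") for _ in range(1000)]
flu_misdiagnosed = sum(map(fever_test, flu)) / 1000
print(f"flu patients 'diagnosed' with malaria: {flu_misdiagnosed:.2f}")
```

Against healthy controls the threshold looks like an excellent test, but among other feverish illnesses it ‘diagnoses’ malaria just as often – distinguishing a disorder from health is not the same as picking it out from the alternatives.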

However, this latest PTSD story follows a common format in mental health news. I’ve lost count of how many reports I’ve read on how a ‘new test’ could diagnose schizophrenia based entirely on the fact that a study has found a difference between people with schizophrenia and people who don’t have it.

From reading these stories, I suspect it’s often the researchers who are at fault in describing their research.

When asked to publicly justify their work, I suspect researchers often go for the easy “it could help diagnose the disorder”, which sounds immediately useful, compared to the more truthful “it’s a small piece of knowledge in a very large area and we won’t know if it is reliable until it is replicated and if so, we may not fully understand its significance for many years to come. However, these small incremental advances are all useful even if they prove to be dead ends as they help us understand the problem from all angles”.

In this case, the researchers wrongly suggest in their scientific article that their findings “can be used for differential diagnosis” and so we can hardly blame the media for picking up on the hype.

So the next time you read a ‘new test diagnoses mental illness’ story, check to see whether it is genuinely picking out the problem among many others, or whether it’s just reporting a non-diagnostic difference.

Link to PubMed entry for new study.

American madness

The New York Times has a thought-provoking article on culture and mental illness, arguing that the American view of the disordered mind has been exported around the world and has influenced how other cultures actually experience mental distress.

It’s probably worth saying that none of the examples are solely ‘American’, although America has clearly had a huge influence on our ideas about mental illness, despite being reined in on several occasions. Indeed, if mental illness had been truly Americanised, we’d all be living in a Freudian world by now.

However, the main thrust of the article is to highlight the importance of culture in the shaping of mental illness:

In the end, what cross-cultural psychiatrists and anthropologists have to tell us is that all mental illnesses, including depression, P.T.S.D. and even schizophrenia, can be every bit as influenced by cultural beliefs and expectations today as hysterical-leg paralysis or the vapors or zar or any other mental illness ever experienced in the history of human madness. This does not mean that these illnesses and the pain associated with them are not real, or that sufferers deliberately shape their symptoms to fit a certain cultural niche. It means that a mental illness is an illness of the mind and cannot be understood without understanding the ideas, habits and predispositions — the idiosyncratic cultural trappings — of the mind that is its host.

The essay makes some important points (although with a few minor errors – for example, zar is not a Middle Eastern condition but the name for a group of spirits which are believed to possess people and can lead to both helpful and disordered states), but you can see it’s trying to walk a thin line between outlining the influence of culture on mental illness and avoiding the suggestion that mental illness is nothing but the product of culture.

With this in mind, some of the explanations are a little one-dimensional: ‘expressed emotion’ accounts for differences in how patients with schizophrenia manage across cultures, Western-style anorexia appeared in Hong Kong due to the popularisation of the American diagnostic criteria, and so on, when the actual explanations are likely to be more complex and involve a range of biological, medical and social factors (Neuroanthropology has a really good take on this and I recommend their commentary).

I am hoping that this is because the article is taken from a much larger book which explores this topic in more detail, but as a quick introduction to some ideas about how our beliefs about illness can shape how we experience the illness itself, it is a good read.

UPDATE: I’ve just noticed Somatosphere also have a good discussion of the article that’s well worth checking out.

Link to NYT on ‘The Americanization of Mental Illness’.
Link to excellent Neuroanthropology commentary.

A clarion call for a decade of disorder

This week’s Nature has an excellent editorial calling for a greater focus on the science of mental illness and summarising the challenges facing psychology and neuroscience in tackling these complex conditions.

It’s generally a very well-informed piece, but it does make one widely repeated blunder in the last sentence of this paragraph:

Frustratingly, the effectiveness of medications has stalled. Nobody understands the links between the symptoms of schizophrenia and the crude physiological pathologies that have so far been documented: a decrease in white brain matter, for example, and altered function of the neurotransmitter dopamine. The medications, which are often aimed at the dopamine systems associated with delusions, have advanced over the decades not in their efficacy but in a reduction of their debilitating side effects.

The idea that newer antipsychotic drugs have fewer side-effects is a myth, albeit one that was widely promoted by drug companies in the early days of the newer ‘atypical antipsychotics’.

The early antipsychotics were notorious for causing a syndrome of Parkinson’s disease-like abnormal movements owing to their long-term effect on the dopamine system.

The popular newer generation drugs do indeed produce fewer of these problems, although the difference is much smaller than was originally thought. But in addition, they tend to cause metabolic syndrome – weight gain, diabetes, heart problems – something which wasn’t such an issue with the older drugs.

In other words, the side-effects aren’t fewer, they’re just different. While the old drugs were more likely to produce movement problems, the newer ones are more likely to make you fat and give you diabetes.

Although antipsychotics were one of the most important medical advances of the 20th century, as the Nature editorial notes, there has been no improvement in the ability of these drugs to actually treat psychosis in the last few decades.

One of the main problems is that the most effective antipsychotics seem to have the highest levels of side-effects, so a huge advance would simply be the production of a drug of equal effectiveness that was less damaging to patients’ health.

Apart from this minor error, the Nature piece is an excellent brief summary of where psychiatric research is at, and where it needs to go to better tackle these episodes of mental turmoil, and comes highly recommended.

Link to Nature piece ‘A decade for psychiatric disorders’.

The addiction affliction

Slate has just published an article I’ve written on the over-selling of addiction. It discusses how difficulties with doing some things to excess – shopping, sex, internet use – are being increasingly described as addictions due to a perfect storm of pop medicine, pseudo-neuroscience, and misplaced sympathy for the miserable.

Like a compulsive crack user desperately sucking on a broken pipe, we can’t get enough of addiction. We got hooked on the concept a few centuries back, originally to describe the compulsive intake of alcohol and, later, the excessive use of drugs like heroin and cocaine. Now it seems like we’re using it every chance we can get – applying the concept to any behavior that seems troublesome or ill-advised…

This creeping medicalization of everyday life means that almost any problem of excess can now be portrayed as an individual falling foul of a major mental illness. While drug addiction is a serious concern and a well-researched condition, many of the new behavioral addictions lack even the most basic foundations of scientific reliability.

Link to Slate article ‘The Addiction Habit’.

Patricia Churchland on neuroscience

The BBC World Service recently hosted a discussion with philosopher Patricia Churchland, one of the pioneers of a type of philosophy of mind that directly engages with ongoing discoveries in cognitive science and neuroscience.

The discussion starts off with the inevitable recap of Cartesian dualism, where mind and brain were thought to be completely separate entities, before launching into an interesting debate on how we can integrate our sense of self and subjective experience with evidence from brain science.

The discussion took place at London’s Wellcome Collection who also have a brief interview with Churchland who discusses what she’s currently working on.

Churchland has become interested in oxytocin, which must rank alongside dopamine as one of the most misused bits of brain-behaviour evidence in popular discussion.

While she doesn’t entirely avoid the current over-excitement which portrays oxytocin as a form of ‘empathy potion’, she tackles the science far more completely than you’ll find in your average mainstream media discussion.

Link to BBC World Service discussion with Patricia Churchland.
Link to brief interview at Wellcome.

Taking the neurotrash out

Doctor and philosopher Raymond Tallis has a barnstorming and somewhat bad-tempered article in The New Humanist where he rails against the increasing tendency to explain everything from beauty to crime in terms of brain function.

He begins by criticising how neuroscience is now appearing as a handy ‘neuro-‘ prefix to more and more areas of human society, leading to the likes of “neuro-jurisprudence, neuro-economics, neuro-aesthetics, neuro-theology” and so on.

This is probably the bad-tempered bit. While he makes an excellent point about the over-enthusiastic interpretation of brain activity in relation to these concepts, I don’t have a problem with people researching these areas, even if they do it in a rather vague and cursory way.

This, after all, is the typical pattern of most new areas of scientific investigation. It’s the tried and tested ‘flailing around in the dark and wild theory making’ stage that we will all look back on and laugh at in a century’s time.

It’s quite a necessary stage though, and only 20 years ago, many mainstream scientists would have regarded the neuroscience of consciousness in the same way.

Tallis seems to criticise all attempts to reduce complex social and cultural interactions to biology, but not all are equal in their conceptual distance from the more fundamental functions of the brain.

Who would have guessed that recognising faces would be one of the more specialised brain functions and most closely tied to a specific area, whereas universal disorders like psychosis are not? It’s only through studying these things that we know how well we can relate them to specific patterns or circuits.

Because of this, the ‘patchy reductionism’ approach, where we assume some mental and social concepts will just be more easily tied to clear neurobiological functions than others, is becoming widely accepted in applied areas of medicine such as psychiatry.

Tallis’ subsequent point is right on the mark though: theory and speculation on these matters are being increasingly touted as a basis for legal and public decision making, and indeed, being increasingly offered as a commercial service.

We are not at a stage where even our most detailed of neuroscience theories could be used as a basis for general social rules and it is doubtful they will be in the majority of cases because they attempt to describe human behaviour at a different level of explanation.

It’s like someone trying to create employment laws for actors based on the plot of Romeo and Juliet, and the equivalent is becoming common in discussions of neuroscience.

Tallis is always worth listening to and this is one of the most critical pieces on neuroscience you’re likely to read in a while.

Link to ‘Neurotrash’ article.

Neuroanthropology, a rough guide

There’s a comprehensive and compelling introduction to neuroanthropology over at the blog of the same name that outlines why we can’t fully understand the brain or culture while thinking of them as separate entities.

The Neuroanthropology blog is run by two of the main researchers in the field and this recent article was written to launch their recent conference ‘The Encultured Brain’.

The article is in-depth but accessible and clearly lays out the main ideas in the field, looking at the benefits to both brain science and cultural studies in a combined approach and noting where narrow thinking has dimmed our view of human nature.

The potential gains are enormous: a robust account of brains in the wild, an understanding of how we come to possess our distinctive capacities and the degree to which these might be malleable across our entire species. The applications of this sort of research are myriad in diverse areas such as education, cross-cultural communication, developmental psychology, design, therapy, and information technology, to name just a few. But the first step is the one taken here – by coming together, we can achieve significant advances in understanding how our very humanity relies on the intricate interplay of brain and culture.

Link to ‘Why Neuroanthropology? Why Now?’

A shadow of your former self

Consciousness and the ‘myth of the self’ are tackled in an interesting discussion with philosopher Thomas Metzinger on this week’s edition of ABC Radio National All in the Mind.

Metzinger is one of a relatively new breed of philosopher who actually gets his hands dirty with the business of experimental cognitive science and has co-authored some of the recent widely discussed studies that induced ‘out of body experiences’ in the lab.

The interview focuses on the material from his new book, The Ego Tunnel, which seems to be getting quite a bit of attention recently.

I’ve not read it but it was reviewed very positively by Metapsychology, probably the best mind and brain book review site on the net. Nevertheless, I do have to agree with a point in the somewhat snarky New Scientist review that contrary to what the blurb says, this is neither a new nor radical approach and is accepted by most philosophers of mind.

The interview is fascinating though, not least because Metzinger is very articulate, but also because he gets wonderfully side-tracked into discussing his own experiences with altering his consciousness and how this relates to his work in understanding the mind.

I also recommend the extended discussions on the All in the Mind blog where he explains his original take on an ethics of consciousness and discusses alien or anarchic hand syndrome.

Link to AITM discussion with Metzinger.
Link to AITM blog post with mp3s of extra discussions.

The insanity epidemic, 1907

I’ve happened upon an interesting snippet from the regular Nature “100 years ago” feature concerning a 1907 debate on whether insanity was really increasing or whether it just seemed that way due to changes in diagnosis and treatment methods.

It made me smile because it is almost exactly the same argument that is being had now about whether cases of autism are genuinely increasing or whether this just reflects changes in diagnosis and treatment methods:

Notwithstanding the much improved statistics recently issued by the Lunacy Commissioners, thoroughly satisfactory materials are still wanting for solving the question whether the prevalence of insanity is or is not increasing. The importance of the problem… imparts special interest to a paper by Mr. Noel A. Humphreys on the alleged increase of insanity… This paper shows in a striking manner the value of scientific statistics in checking crude figures.

The author expresses a decided opinion that there is no absolute proof of actual increase of occurring insanity in England and Wales, and that the continued increase in the number and proportion of the registered and certified insane is due to changes in the degree and nature of mental unsoundness for which asylum treatment is considered necessary, and to the marked decline in the rate of discharge (including deaths) from asylums.

From Nature 18 July 1907.

Link to Nature “100 years ago” snippet.
Link to Wikipedia page on epidemiology of autism.

Human, All Too Human

I’ve just discovered that probably one of the best series ever produced on philosophy is available on Google Video. The BBC series Human All Too Human includes three fantastic programmes on Friedrich Nietzsche, Jean Paul Sartre and Martin Heidegger – a trio of controversial thinkers who massively influenced 20th century philosophy.

It’s an interesting choice as all had fascinating and turbulent lives – Nietzsche ending his life in insanity, Heidegger an unrepentant Nazi defended by a Jewish ex-lover, and Sartre walking the line between free love and womanising.

All had a huge influence on psychology at various stages, and you can clearly see how each struggled with concepts of mind and society.

The programmes tackle both the characters and their theories and are some of the most engaging and gripping programmes I’ve ever seen on philosophy, an essential subject that usually gets little more than satire or lip service from mainstream media.

They’re an hour each and worth every minute. Put some time aside, find a comfy chair and enjoy.

Link to programme on Jean Paul Sartre.
Link to programme on Martin Heidegger.
Link to programme on Friedrich Nietzsche.

It always seems worse than you think

There is a cliché in media stories where figures for a disease or condition are quoted, followed by a statement that “the true figures may be higher”. Sampling errors mean that initial figures are as likely to be under-estimates as over-estimates, but we only ever seem to be told that the condition is under-detected.

For example, this is from a recent (actually pretty good) New Scientist article about gender identity disorder (GID) in children, a condition where children who are biologically male feel female and vice versa:

It is unclear how common GID is among children, but many transsexual adults say they felt they were “in the wrong body” from an early age. The incidence of adult transsexualism has been estimated at about 1 in 12,000 for male-to-females, and around 1 in 30,000 for female-to-males, although transsexual lobby groups say the true figures may be far higher.

These estimates are usually drawn from prevalence studies where maybe a few hundred or thousand people are tested. The researchers extrapolate from the number of cases to make an estimate of how many people in the population as a whole will have the condition.

These estimates are made with statistical tests which give a margin of error, meaning the real figure is likely to lie within a range, typically described by confidence intervals, that equally includes values both higher and lower than the quoted amount.
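As a toy sketch (the survey numbers are invented, and the crude normal approximation is used purely to show the shape of the interval), the margin of error around a prevalence estimate is symmetric – it extends below the point estimate just as far as it extends above it:

```python
import math

# Hypothetical survey: 5 cases found in a sample of 60,000 people,
# giving a point estimate of roughly 1 in 12,000.
cases, n = 5, 60_000
p_hat = cases / n

# Normal-approximation 95% confidence interval for a proportion.
# (Crude for rare conditions, but it shows the symmetry clearly.)
se = math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"estimate: 1 in {1 / p_hat:,.0f}")
print(f"95% CI: roughly 1 in {1 / high:,.0f} to 1 in {1 / low:,.0f}")
```

Nothing in the arithmetic privileges the “true figure may be higher” direction; that claim has to come from an argument about the sample, not from the statistics.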

For any individual study you can validly say that you think the estimate is too low, or indeed, too high, and give reasons for that. For instance, you might say that your sample was mainly young people who tend to be healthier than the general public, or maybe that the diagnostic tools are known to miss some true cases.

But when we look at reporting as a whole, it almost always says the condition is likely to be much more common than the estimate.

For example, have a look at the results of this Google search:

“the true number may be higher” 20,300 hits

“the true number may be lower” 3 hits

You can try variations on the phrasing and see the same sort of pattern emerges. I’m curious as to why this bias occurs, or whether there’s another explanation for it.

Transhuman nature

ABC Radio National’s All in the Mind has just had an excellent programme on ‘the singularity’, the idea that at some point in the future computer power will outstrip the ability of the human brain and then humanity will be better off in some sort of vague and unspecified way.

The idea is, of course, ludicrous and is based on a naive notion that intelligence can be measured as a type of unitary ‘power’ which we can adequately compare between computers and humans. The discussion on All in the Mind is a solid critical exploration of this wildly left-field notion as well as the community from whence it comes.

It’s a popular theme among transhumanists who, despite seeming to have a mortal fear of human limitations, I quite like.

Transhumanists are like the eccentric uncle of the cognitive science community. Not the sort of eccentric uncle who gets drunk at family parties and makes inappropriate comments about your kid sister (that would be drug reps), but the sort that your disapproving parents think is a bit peculiar but who is full of fascinating stories and interesting ideas.

They occasionally take themselves too seriously and it’s the sort of sci-fi philosophy that has few practical implications but it’s enormously good fun and is great for making you re-evaluate your assumptions.

By the way, there’s loads of extras on the AITM blog, so do check it out.

Link to All in the Mind on ‘the singularity’.
Link to extras on AITM blog.

Seeing the mind amidst the numbers

I’ve just read a fantastic New York Times article from last year on the ongoing $1,000,000 Netflix challenge to create an algorithm that will predict which unseen films customers will like based on their past preferences.

As well as an interesting insight into how companies are trying to guess our shopping preferences it is also a great guide to one of the central problems in scientific psychology: how we can reconcile numerical data with human thought and behaviour.

The Netflix prize teams have a bunch of data from customers who have rated films they’ve already seen and they have been challenged to write software that predicts future ratings.

Part of this process is hypothesis testing, essentially an experimental approach to find out what might be important in the decision process. For example, a team might guess that women will rate musicals higher than men. They can then test this prediction out on the data, making further predictions based on past conclusions, theories or even just hunches.

The other approach is to use mathematical techniques that look for patterns in the data. To use the jargon, these procedures look for ‘higher order properties’ – in other words, patterns in the patterns of data.

Think of it like looking at the relationship between different forests rather than thinking of everything as individual trees.

The trouble is that these mathematical procedures can sometimes find reliable high-level patterns when it isn’t obvious to us what they represent. For example, the article discusses the use of a technique called singular value decomposition (SVD) to categorise movies based on their ratings:

There’s a sort of unsettling, alien quality to their computers’ results. When the teams examine the ways that singular value decomposition is slotting movies into categories, sometimes it makes sense to them — as when the computer highlights what appears to be some essence of nerdiness in a bunch of sci-fi movies. But many categorizations are now so obscure that they cannot see the reasoning behind them. Possibly the algorithms are finding connections so deep and subconscious that customers themselves wouldn’t even recognize them.

At one point, Chabbert showed me a list of movies that his algorithm had discovered share some ineffable similarity; it includes a historical movie, “Joan of Arc,” a wrestling video, “W.W.E.: SummerSlam 2004,” the comedy “It Had to Be You” and a version of Charles Dickens’s “Bleak House.” For the life of me, I can’t figure out what possible connection they have, but Chabbert assures me that this singular value decomposition scored 4 percent higher than Cinematch — so it must be doing something right. As Volinsky surmised, “They’re able to tease out all of these things that we would never, ever think of ourselves.” The machine may be understanding something about us that we do not understand ourselves.

In these cases, it’s tempting to think there’s some deep psychological property of the films that’s been captured by the analysis. Maybe they all trigger a wistful nostalgia, or perhaps each represents the same unconscious fantasy.

It could also be that each is under 90 minutes, or comes with free popcorn. It could even be that the grouping is entirely spurious and represents nothing significant. Importantly, the answer to these questions is not in the data to be discovered, we have to make the interpretation ourselves.

Experimental methods go from meaning to data, while exploratory methods go from data to meaning. Somewhere in the middle is our mind.
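The latent-factor idea can be sketched in a few lines (the 4×4 ratings matrix and its ‘taste’ groupings are entirely invented for illustration): SVD factorises the ratings into a small number of underlying dimensions, but the dimensions arrive with no labels, and interpreting them is up to us.

```python
import numpy as np

# Invented ratings: 4 viewers (rows) x 4 films (columns).
# Two viewers seem to like the first pair of films, two the second.
ratings = np.array([
    [5, 4, 1, 1],
    [4, 5, 2, 1],
    [1, 1, 5, 4],
    [2, 1, 4, 5],
], dtype=float)

# SVD decomposes the matrix into viewer factors (U), factor
# strengths (s) and film factors (Vt).
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)

# Keep only the two strongest latent dimensions and reconstruct.
k = 2
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The rank-2 reconstruction stays close to the original: most of
# the structure lives in two unlabeled 'taste' dimensions.
print(np.round(approx, 1))
print("film loadings on factor 1:", np.round(Vt[0], 2))
```

The factor loadings are just numbers; whether a dimension means ‘nerdiness’, running time or nothing at all is an interpretation we bring to the output, not something the algorithm supplies.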

The Netflix challenge is this problem on steroids and the NYT piece brilliantly explores the practical problems in making sense of it all.

Link to NYT piece ‘If You Liked This, You’re Sure to Love That’.

Scientists find area responsible for emotion in dead fish

Neuroskeptic covers a hilarious new study that involved brain scanning a dead salmon and finding activation in the brain as it ‘looked’ at photos of human faces.

The authors are not genuinely arguing that dead fish have brain activity but have run the experiment to show that some common statistical methods used in fMRI research will give false positives if they’re not adequately controlled for.

The research, led by neuroscientist Craig Bennett, was presented as a poster at a recent conference and has the brilliant title of “Neural correlates of interspecies perspective taking in the post-mortem Atlantic Salmon: An argument for multiple comparisons correction” and is available online as a jpg.

I’d say that this research was justified on comedic grounds alone, but they were also making an important scientific point. The (fish-)bone of contention here is multiple comparisons correction. The “multiple comparisons problem” is simply the fact that if you do a lot of different statistical tests, some of them will, just by chance, give interesting results.

Most statistics used in psychology, and indeed brain imaging, are based on calculating a p value.

Usually, a p value of less than 0.05 is considered significant and this means that if there was genuinely no difference in the things you were comparing, you would get a false positive less than 5% of the time.

But your average fMRI brain scan analysis can involve 40,000 comparisons, so even if there’s nothing going on, some bits of the brain are going to seem active simply because noise and measurement error are falsely detected as a real effect.

To help prevent this, you can correct for multiple comparisons by reducing the 5% cut-off to a smaller amount. Unfortunately, some of the standard methods of doing this can be so strict as to create false negatives, when genuine differences are dismissed as statistical noise.
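A toy simulation (pure random noise, not real fMRI data) makes the trade-off concrete: under the null hypothesis p-values are uniformly distributed, so an uncorrected 5% threshold flags thousands of noise ‘voxels’, while a Bonferroni-corrected threshold flags almost none:

```python
import random

random.seed(1)

# Simulate p-values for 40,000 'voxels' with no real effect:
# under the null hypothesis, p-values are uniform on [0, 1].
n_voxels = 40_000
alpha = 0.05
pvals = [random.random() for _ in range(n_voxels)]

# Uncorrected: roughly 5% of pure-noise voxels come out 'significant',
# i.e. around 2,000 entirely spurious activations.
uncorrected_hits = sum(p < alpha for p in pvals)

# Bonferroni correction: divide the threshold by the number of tests,
# so the chance of even one false positive across the scan stays ~5%.
corrected_hits = sum(p < alpha / n_voxels for p in pvals)

print(f"uncorrected 'active' voxels: {uncorrected_hits}")
print(f"Bonferroni-corrected 'active' voxels: {corrected_hits}")
```

Of course, a correction this strict is exactly the flip side mentioned above: a genuine but modest effect would also have to survive the tiny corrected threshold, which is how false negatives arise.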

There is no hard and fast rule about which methods to use, but our salmon neuroscientists have graphically illustrated how misleading results can occur if we naively assume that ignoring the ‘multiple comparisons problem’ will still give us an accurate picture of brain function.

Kudos to the Neuroskeptic blog for picking up on this and for some excellent coverage of this study.

Link to Neuroskeptic on dead salmon study.
jpg of conference poster.