The Crux of PTSD under threat of terrorism

I’ve got a piece over at Discover Magazine’s new group blog, The Crux, which looks at whether post-traumatic stress disorder makes sense if it’s applied to people who remain at high risk of terrorist attack.

The Crux is a blog written by a crowd of science folks that aims to take a deeper look at some of the ‘big ideas in science’ currently being thrown around, and I’ll be writing occasional pieces as mind and brain issues surface.

Researchers have noted that “PTSD is classified as an anxiety disorder. Within cognitive models, anxiety is a result of appraisals relating to impending threat. However, PTSD is a disorder in which the problem is a memory for an event that has already happened.” After all, if you feel threatened with good reason, almost by definition, this isn’t a mental illness.

So if someone remains in danger after a life-threatening incident, does the concept of “post-traumatic stress disorder” even make sense?

As the diagnosis relies largely on totalling up symptoms in a checklist-like fashion, it is possible to diagnose someone with the condition in almost any circumstance. But no one knew whether treating it in people who are still in grave danger would be any use.

Until now that is.

You can check out the full article at the link below and pieces by the other fine folks of The Crux here.

Link to ‘Life During Wartime: Can Mental Illness Be a Rational Response?’

The free will rebellion

A popular mantra of modern neuroscience tells us that free will is an illusion. An article in the New York Times makes a lucid challenge to the ‘death of free will’ idea and a prominent neuroscientist has come out to fight the same corner.

Neuroscientists began making preparations for the funeral of free will shortly after Benjamin Libet began publishing his experiments in the 1980s showing a consistent build-up of electrical activity from the brain’s motor cortex before participants were consciously aware of their desire to move.

Since then, many more neuroscience studies have shown that brain activity can precede conscious awareness of specific choices or actions – with the implication that our conscious experience of decision-making is nothing but a secondary effect that plays little role in our actions and reactions.

The idea that ‘free will is an illusion’ is now consistently touted by neuroscientists as an example of how brain science is revealing ‘what really drives us’ and how it explains ‘how we really work’. But philosophers, the conceptual engineers of new ideas, have started to find holes in this popular meme.

Probably the most lucid mainstream analysis of why neuroscience isn’t killing free will has just been published at The New York Times, where philosopher of mind Eddy Nahmias takes the mourners to task for using a narrow and largely irrelevant definition of free will.

So, does neuroscience mean the death of free will? Well, it could if it somehow demonstrated that conscious deliberation and rational self-control did not really exist or that they worked in a sheltered corner of the brain that has no influence on our actions. But neither of these possibilities is likely. True, the mind sciences will continue to show that consciousness does not work in just the ways we thought, and they already suggest significant limitations on the extent of our rationality, self-knowledge, and self-control. Such discoveries suggest that most of us possess less free will than we tend to think, and they may inform debates about our degrees of responsibility. But they do not show that free will is an illusion.

Nahmias makes the point that the ‘death of free will’ idea commits a fallacy he calls ‘bypassing’: it reduces our decisions to chemical reactions, implying that our conscious thinking is bypassed, and so we must lack free will.

He notes that this is like saying life doesn’t exist because every living thing is made up of non-living molecules, when, in reality, it’s impossible to understand life or free will without considering the system at the macro level – that is, the actions and interactions of the whole organism.

Interestingly, a similar point is made by legendary neuroscientist Michael Gazzaniga in an interview for Salon where he discusses his new book on free will. He also suggests it’s not possible to understand free will at the level of neurons without making the concept nonsensical.

These contrasting views of free will may yet be reconciled, however: Nature recently reported on a new $4 million ‘Big Questions in Free Will’ project which brings philosophers and cognitive scientists together to understand how we act in the world.

Link to NYT piece ‘Is Neuroscience the Death of Free Will?’
Link to Salon interview with Michael Gazzaniga.
Link to Nature piece ‘Taking Aim at Free Will’.

The appliance of psychological science

The BPS Research Digest is celebrating its 200th issue with a series of articles from well-known psychologists that describe how psychology has helped them out in everyday life.

There’s a whole stack of people involved who have written on everything from love to scientific thinking to child rearing.

Tom and I have both contributed pieces, but my favourite is from Ellen Langer, who has spent many years studying the effect of stereotypes about old age on older people:

At age 89 my father’s memory was fragile – he was showing his years. One day we were playing cards and I began to think that I should let him win. I soon realized that, if I saw someone else behaving that way, I’d tell her to stop being so condescending. I might even explain how negative prophecies come to be fulfilled, and I’d go on to explain that much of what we take to be memory loss has other explanations.

For instance, as our values change with age, we often don’t care about certain things to the degree we used to, and we therefore don’t pay much attention to them anymore. The “memory problems” of the elderly are often simply due to the fact that they haven’t noted something that they find rather uninteresting. And then, while I was weighing whether to treat him as a child because part of me still felt that he would enjoy winning, he put his cards down and declared that he had gin.

There are many more great pieces at the link below.

Link to BPSRD ‘Psychology to the Rescue’ series.

The hot hand smacks back

The idea of the ‘hot hand’, where a player who makes several successful shots has a higher chance of making some more, is popular with sports fans and team coaches, but has long been considered a classic example of a cognitive fallacy – an illusion of a ‘streak’ caused by our misinterpretation of naturally varying scoring patterns.

But a new study has hard data to show the hot hand really exists and may turn one of the most widely cited ‘cognitive illusions’ on its head.

A famous 1985 study by psychologist Thomas Gilovich and his colleagues looked at the ‘hot hand’ belief in basketball, finding that there was no evidence of any ‘scoring streak’ in thousands of basketball games beyond what you would expect from natural variation in play.

Think of it like tossing a weighted coin. Although the weighting, equivalent to the player’s skill, makes landing a ‘head’ more likely overall, every toss of the coin is independent. The last result doesn’t affect the next one.

Despite this, sometimes heads or tails will bunch together, and this is what people erroneously interpret as the ‘hot hand’ or being on a roll, at least according to the theory. Thanks to the basketball research, which seemed to show the same effect, the ‘hot hand fallacy’ was born and ‘scoring streaks’ were dismissed as sports myth.
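The streakiness of independent trials is easy to demonstrate with a quick simulation. This is my own minimal sketch of the weighted-coin model, not code from any of the studies discussed:

```python
import random

def longest_run(outcomes):
    """Length of the longest run of identical consecutive outcomes."""
    best = cur = 1
    for prev, nxt in zip(outcomes, outcomes[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

random.seed(1)
# 100 independent 'shots' from a weighted coin: 55% success rate, no memory
shots = [random.random() < 0.55 for _ in range(100)]
print(longest_run(shots))  # runs of five or more are entirely typical
```

Even though each shot is generated independently, long runs of hits appear regularly, which is exactly the pattern observers are tempted to read as a ‘streak’.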

Some have suggested that while the ‘hot hand’ may be an illusion, in practical terms it might be useful on the field.

Better players are more likely to have a higher overall scoring rate and so are more likely to have what seem like streaks. Passing to that guy works out, because the better players have the ball for longer.

But a new study led by Markus Raab suggests that the hot hand does indeed exist. Each shot is not independent and players that hit the mark may raise their chances of scoring the next time. They seem to draw inspiration from their successes.

Crucially, the researchers chose their sport carefully because one of the difficulties with basketball – from a numbers point of view – is that players on the opposing team react to success.

If someone scores, they may find themselves the subject of more defensive attention on the court, damping down any ‘hot hand’ effect if it did exist.

Because of this, the new study looked at volleyball, where the players are separated by a net and play from different sides of the court. Additionally, players rotate position after every rally, meaning it’s more difficult to ‘clamp down’ on players from the opposing team if they seem to be doing well.

The researchers first established that belief in the ‘hot hand’ was common among volleyball players, coaches and fans, and then looked to see if scoring patterns supported it – that is, whether scoring a point made a player more likely to score another.

It turns out that over half the players in Germany’s first-division volleyball league show the ‘hot hand’ effect – streaks of inspiration were common and points were not scored in an independent ‘coin toss’ manner.

What’s more, players were sensitive to who was on a roll and used the effect to the team’s advantage – more commonly passing to those on a scoring streak.

So it seems the ‘hot hand’ effect exists. But this opens up another, perhaps more interesting, question.

How does it work? Because if teams can understand the essence of on court inspiration, they’ve got a recipe for success.

Link to locked study. Clearly a losing strategy.
Link to full text which has mysteriously appeared online.

The cutting edge of the easy high

Perhaps the most complete scientific review of what we know about synthetic cannabis or ‘spice’ products has just appeared in Frontiers in Behavioral Neuroscience.

These ‘legal highs’ are typically sold as nudge-nudge wink-wink ‘incense’ but contain synthetic cannabinoids which have a similar effect to smoking dope but are legal in many countries.

We covered the history of these compounds recently and we also discussed the market approach of the neuroscientist-packing ‘legal high industry’ back in 2009.

Essentially, the industry is based on the fact that their psychopharmacologists can churn out new substances faster than governments can regulate against them, with the web providing a distributed marketplace that opens up the customer base.

This new article takes a scientific look at what compounds are actually appearing in ‘synthetic marijuana’ (of which there are many and various) as well as examining the known effects, good and bad.

If you’re not into phrases like “well-characterized aminoalkylindole class of ligands” you may want to skip the neurochemistry and just focus on the availability and effects.

It’s probably the most complete review of these compounds available to date, so definitely worth a look if you’re tracking the ‘synthetic blow’ story.

Link to ‘Beyond THC’ on cannabinoid designer drugs (via @sarcastic_f)

The father of Randle P. McMurphy

An article in the Journal of Medical Humanities has a fascinating look at one of playwright Samuel Beckett’s early novels – an exploration of madness and mental health care that foreshadowed One Flew Over the Cuckoo’s Nest.

Beckett is best known for Waiting for Godot, but his novel Murphy was one of the best-known literary treatments of mental ill health until Ken Kesey’s famous work.

It turns out that Kesey gives a knowing nod to Beckett’s earlier work through his character Randle McMurphy.

As far as twentieth-century accounts of mental health nursing and psychiatry go, Beckett’s (1937) tale of Murphy has been much over-shadowed by Ken Kesey’s One Flew Over the Cuckoo Nest. For better or for worse, Kesey’s nurse Ratchet became the epitome of the 20th century asylum attendant. But it was a notable act of approbation by Kesey to name his main protagonist, Randle P. MacMurphy, with due deference to Beckett; ‘MacMurphy’ literally meaning ‘son of Murphy.’

The comparison between the two novels is interesting, because Kesey drew his inspiration from his time working as a staff member on a psychiatric ward while Beckett drew his inspiration from being a patient.

Link to locked article (the humanities are deadly in the wrong hands).

Entertainingly mislead me

A beautifully recursive study has shown that viewing an episode of the psychology of deception TV series Lie To Me makes people worse at distinguishing truth from lies.

The TV series is loosely based on the work of psychologist Paul Ekman who pioneered the study of emotions and developed the Facial Action Coding System or FACS that codes even the slightest of changes in facial expression.

Although in popular culture Ekman and the FACS are often associated with the detection of lies through changes in ‘micro expressions’, there is actually no good research to show this can help detect falsehoods.

However, the TV series relies heavily on this premise, suggesting that lie detection has more of a scientific basis than it actually does and that deception can be detected through careful observation of specific behaviours.

This, however, is not accurate, and the authors of the study don’t mince their words:

Lie to Me is based on the premise that highly accurate deception detection is possible based on real-time observation of specific behaviors indicative of lying. The preponderance of research demonstrates that the exact opposite is true.

Lie to Me also suggests that certain people are naturally gifted lie detectors. This is also inconsistent with the preponderance of research. Thus, when looking at the evidence generated across several hundred individual studies, the idea of Lie to Me is highly implausible and almost certainly misleading.

Rather shrewdly, this new study, led by psychologist Timothy Levine, decided to test whether this misleading view of lie detection might actually influence the viewer’s ability to detect lies.

They split participants into three groups: one watched an episode of Lie to Me, another an episode of Numb3rs – in which crimes are solved by a genius maths professor – and a final group watched nothing.

Afterwards, everyone saw a series of 12 interviews – half of which were honest and half of which involved lies – and was asked to rate the truthfulness of the interviewee.

Normally, when we do tasks like this, where honesty and deception are present in equal numbers, we tend to over-rate how truthful people are – probably because in everyday life most people are genuine with us, so we tend to assume people are telling the truth even when we know there’s some falsehood to be found.

In the study, those who had just watched Lie To Me didn’t show this truth-accepting bias – they were more skeptical – but, crucially, they were actually worse at distinguishing deception than the others.

They applied their skepticism in a blanket fashion and became less accurate as a result.
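A toy calculation shows why blanket skepticism doesn’t help: with honest and deceptive interviews in equal numbers, a judge with no genuine ability to discriminate scores at chance whatever their bias. The numbers below are my own illustration, not data from the paper:

```python
def expected_accuracy(p_say_honest, base_rate_honest=0.5):
    """Expected accuracy for a judge with zero discrimination ability,
    who says 'honest' with a fixed probability regardless of the interview.
    Correct when an honest interview is called honest, or a lie called a lie."""
    return (base_rate_honest * p_say_honest
            + (1 - base_rate_honest) * (1 - p_say_honest))

print(expected_accuracy(0.8))  # truth-biased judge:  0.5
print(expected_accuracy(0.3))  # blanket sceptic:     0.5
```

Shifting the bias only trades one kind of error for the other; accuracy above chance requires actual discrimination, which is why viewers who merely became more skeptical ended up no better, and in the study slightly worse, than everyone else.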

In other words, not only does the programme misrepresent the psychology of lie detection, but this has an effect on the psychology of the viewers themselves.

Which, by the way, would make a great plot device for Lie To Me.

Link to locked study (via @velascop)

The death of atypical antipsychotics

The British Journal of Psychiatry has just published the latest in a long line of studies to find that the newer ‘atypical’ or ‘second generation’ antipsychotic drugs are barely better than the old-style medications, accompanied by a stinging editorial calling out years of drug company marketing spun as an illusory advance in medical science.

Unfortunately both are locked (after all, you’d just worry yourself with all those facts) but here is the last paragraph of the editorial. It leaves no ass unkicked.

In creating successive new classes of antipsychotics over the years, the industry has helped develop a broader range of different drugs with different side-effect profiles and potencies, and possibly an increased chance of finding a drug to suit each of our patients. But the price of doing this has been considerable – in 2003 the cost of antipsychotics in the USA equalled the cost of paying all their psychiatrists.

The story of the atypicals and the SGAs [‘second-generation antipsychotics’] is not the story of clinical discovery and progress; it is the story of fabricated classes, money and marketing. The study published today is a small but important piece of the jigsaw completing a picture that undermines any clinical or scientific confidence in these classes.

With the industry reputation damaged by evidence of selective publishing and its deleterious effects, and the recent claims that trials of at least one of the new atypicals have been knowingly ‘buried’, it will take a great deal for psychiatrists to be persuaded that the next new discovery of a drug or a class will be anything more than a cynical tactic to generate profit. In the meantime, perhaps we can drop the atypical, second-generation, brand new and very expensive labels: they are all just plain antipsychotics.


Link to locked editorial ‘The rise and fall of the atypical antipsychotics’.

The New York Times wees itself in public

The New York Times has just pissed its neuroscientific pants in public and is now running round the streets announcing the fact in an op-ed that could as easily have been titled ‘Smell my wee!’

The piece is written by Martin Lindström, famous for writing the ‘neuromarketing’ best-seller Buyology, but infamous for not making any of his data or studies public.

In fact, despite constantly mentioning the astounding conclusions of the numerous brain imaging studies he has run, not one has appeared in the scientific literature.

But even without knowing about the reliability of the data or the quality of the analysis, it’s easy to see that he’s talking through his hat, because the interpretations are so over-the-top that they go beyond what is actually possible with brain imaging science.

The piece is full of nonsense of various sorts.

I carried out an fMRI experiment to find out whether iPhones were really, truly addictive…

In each instance, the results showed activation in both the audio and visual cortices of the subjects’ brains. In other words, when they were exposed to the video, our subjects’ brains didn’t just see the vibrating iPhone, they “heard” it, too; and when they were exposed to the audio, they also “saw” it. This powerful cross-sensory phenomenon is known as synesthesia.

Actually, this is known as bullshit because synesthesia is where a conscious sensory experience in one sensory domain produces a conscious experience in another.

In other words, synesthesia is defined by the experiences that someone has, not where brain activity shows up.

The fact that brain activity occurs in an area previously linked to a different function does not mean it is being used for that function or that the person is having a related conscious experience.

If this is not entirely clear, think of it like this. Imagine, for the first time in your life, you just heard the sound of a guitar being played as part of a pop song. You’d be a bit daft if every time you heard guitar chords you told people that the music must be a pop song. After all, there’s a guitar in it, right?

Clearly, this is ridiculous because the guitar is an instrument that appears in lots of musical styles but Lindström is doing the neuroscience equivalent of over-interpreting guitar sounds throughout his terrible article.

He goes on about how activation in the insula, detected in his privately conducted and otherwise unpublished study, means the person is experiencing love for their iPhone, because insula activity has previously been linked to love.

The trouble is, as neuroimaging researcher Russ Poldrack just pointed out, the insula is one of the most commonly activated brain areas, turning up in about a third of all fMRI studies no matter what is being studied. In other words, it’s linked to just about every experience and behaviour you can think of.

In fact, it is probably most famous not for its association with love but for its association with disgust, though Lindstrom apparently decided to avoid this particular interpretation.
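Poldrack’s point is essentially about reverse inference, and a back-of-envelope Bayes calculation shows why a commonly activated region is weak evidence for any particular mental state. All numbers below are made-up illustrations, not figures from Lindstrom’s study or Poldrack’s analysis:

```python
# Reverse inference: how likely is 'love' given insula activation?
# Bayes' rule: P(love | insula) = P(insula | love) * P(love) / P(insula)
p_insula_given_love = 0.9  # generous assumption: insula almost always active during 'love'
p_love = 0.05              # assumed prior that the mental state in question is 'love'
p_insula = 1 / 3           # the insula activates in roughly a third of all studies

p_love_given_insula = p_insula_given_love * p_love / p_insula
print(round(p_love_given_insula, 3))  # 0.135 – far from a reliable 'love detector'
```

Even granting the insula a near-perfect hit rate for love, the fact that it lights up for a third of everything drags the posterior down to around one in seven.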

This is just one example among many and if you want a breakdown of why the article really is full of crap, I recommend neuroscientist Tal Yarkoni’s point-by-point analysis and facepalm jamboree.

In fact, the op-ed has annoyed so many people there is now a letter to the editor signed by just about every big name in fMRI research on its way to the New York Times in an attempt to open the windows and get rid of that uncomfortable smell.

Link to NYT pissing itself in public.
Link to Tal Yarkoni’s excellent mopping up exercise.

Swimming in the tides of war

My recent Beyond Boundaries column for The Psychologist explores how the micro-culture of Colombian paramilitary organisations may have shaped the expression of post-traumatic stress disorder in demobilised fighters.

Dr Ricardo de la Espriella’s office is surprisingly quiet. Buried deep within San Ignacio University Hospital, the growl of the chaotic Bogotá traffic is perceptibly absent. Despite the street-level pandemonium, the capital city of Colombia remains an oasis of relative calm in a troubled country. The five-decade-old conflict has been pushed back from the urban fringes and persists, unabated, in the rural areas where it continues to devastate the country’s diverse cultural landscape. Dr de la Espriella has long promoted an understanding of how psychological distress is filtered through cultural norms. ‘There are difficulties in recognising post-traumatic stress in certain populations, which is why cultural psychiatry is so important’ he stresses, highlighting the surprising variation in response to suffering. In this case, however, he is not talking about the culture of ethnic or racial groups, but the micro-culture of illegal paramilitary organisations.

While working on a project to rehabilitate ex-members of illegal armed groups, he noticed a striking absence of post-traumatic stress disorder in his patients, despite them having experienced extreme violence both as combatants and civilians. Many had taken part in massacres and selective assassinations, and many had lost companions to equally brutal treatment. There were high levels of substance abuse, aggression and social problems, but virtually none showed signs of anxiety. Intrigued, de la Espriella decided to investigate more closely and carefully interviewed the ex-paramilitary patients again, using the Clinician Administered PTSD Scale, which asks specific and detailed questions about post-trauma symptoms. After this more detailed examination, more than half could be diagnosed with the disorder.

The reason none of these symptoms presented in day-to-day life seemed to lie in paramilitary subculture. While aggression and drug abuse are tolerated, anxiety is taboo to the point where members showing signs of anxiety can be killed by their compatriots for being ‘weak’. This brutal emotional environment shapes the men to neither show nor spontaneously report any form of fear or nervousness. De la Espriella reported his findings in the Colombian Journal of Psychiatry where he discusses the difficulties in treating people who have been involved in violence and killing. His work also raises the uncomfortable question of who we consider to be a victim of conflict. Can we extend compassion to those who commit the atrocities or do we allow those who swim in the tides of war to drown in its powerful currents?

Thanks to Jon Sutton, editor of The Psychologist who has kindly agreed for me to publish my column on Mind Hacks as long as I include the following text:

“The Psychologist is sent free to all members of the British Psychological Society (you can join here), or you can subscribe as a non-member by going here.”

Link to column from The Psychologist (bottom of page).

Game not over

The Guardian covers a new study on how video games can persist in our perception as fleeting hallucinations in an effect labelled ‘game transfer phenomena’.

Unfortunately, the study has been published in an obscure journal which means I’ve not been able to read it in full, although the write-up quotes the lead researcher, Mark Griffiths:

“The academic literature goes back to 1993,” says Griffiths. “There was a case of a woman who had auditory hallucinations; she just couldn’t get the tune of the game she was playing out of her head – it was very intrusive. But what came out of our pilot research were lots of different experiences, some that were auditory, some visual and some were tactile. We had the example of a teacher who dropped his pen and immediately reached for a joypad button to retrieve it, as though he were in a game.

“Most of the experiences were neutral and often quite positive. We distinguished between what we call automatic GTP, which are almost like reflexes or classically conditioned responses, and those where players deliberately take elements out of the game and work them into their day-to-day routines.”

Needless to say, the tabloids got carried away and ran with ‘gamers losing touch with reality’-type stories although it sounds like the authors of the study were probably a little over-enthusiastic with their own descriptions.

Despite this, it sounds like an interesting study describing how conditioned responses and perceptual expectations learnt in video games might get triggered in other situations.

I knew someone would get round to studying those weird thoughts about Tomb Raider at some point.

Link to Guardian article on ‘game transfer phenomena’

The birth of ‘synthetic marijuana’

Addiction Inbox has an interview with pharmacologist David Kroll where he discusses the origin of the countless synthetic cannabinoids that have recently flooded the market as ‘legal highs’ and ‘incense’.

You may know Kroll better as the author of the long-running top-notch pharmacology blog Terra Sigillata where he has been tracking the ‘synthetic marijuana’ story since its early days.

In this recent interview he gives a fantastic brief description of how these compounds were born and became big business as ‘legal highs’.

Every area of CNS pharmacology has chemists who try to figure out the smallest possible chemical structure that can have a biological effect. In fact, this is a longstanding practice of any area of pharmacology. Huffman was an excellent chemist who in the 1990s was trying to figure out the most important part of the active component of marijuana that might have psychotropic effects. These compounds made by him and his students, surprisingly simple ones, I prefer to call cannabimimetics since they mimic the effect of the more complex cannabinoids in marijuana. These basic chemistry and pharmacology studies are what ultimately lead to new drugs in every field – a facet of chemistry called “structure-activity relationships” or SAR.

But since they are simple, they are relatively easy to make – some of Huffman’s work at Clemson was actually done by undergraduate chemistry majors. So, it was no surprise that they would be picked up by clandestine drug marketers, even though cannabis (UK) and marijuana (US) are freely available. The attraction to users was, until recently, that Huffman compounds (prefixed with “JWH-” for his initials) could not be detected in urine by routine drug testing. Hence, incense products containing these compounds have been called “probationer’s weed.”

In the interview he also discusses drug legality, drug development and prescription. Well worth checking out.

Link to David Kroll interview at Addiction Inbox.

Teenage kicks

National Geographic has an excellent article on teenage risk-taking and adolescent brain development.

It goes some way to explaining both the dangerous mistakes that typically peak in the late teens and, I like to think, the bad fashion sense which seems to follow a similar pattern.

Importantly, the piece goes beyond the usual ‘well, the frontal lobes are still developing, aren’t they?’ explanation that gets wheeled out whenever teen neuroscience is discussed and gets into some of the gritty details.

Are these kids just being stupid? That’s the conventional explanation: They’re not thinking, or by the work-in-progress model, their puny developing brains fail them.

Yet these explanations don’t hold up. As Laurence Steinberg, a developmental psychologist specializing in adolescence at Temple University, points out, even 14- to 17-year-olds—the biggest risk takers—use the same basic cognitive strategies that adults do, and they usually reason their way through problems just as well as adults. Contrary to popular belief, they also fully recognize they’re mortal. And, like adults, says Steinberg, “teens actually overestimate risk.”

So if teens think as well as adults do and recognize risk just as well, why do they take more chances? Here, as elsewhere, the problem lies less in what teens lack compared with adults than in what they have more of. Teens take more risks not because they don’t understand the dangers but because they weigh risk versus reward differently: In situations where risk can get them something they want, they value the reward more heavily than adults do.

Probably one of the most comprehensive introductions to teen risk you’ll read in a good while.

Link to National Geographic on Teenage Brains.

Outside the criminal mind

ABC Radio National’s All in the Mind recently had a fascinating programme on the science behind offender profiling and whether it lives up to its ‘inside the criminal mind’ image.

If you’re not familiar with the debates about criminal profiling you may be surprised to hear that a fair few forensic psychologists think it’s a waste of time.

Even while studies can show a statistical link between certain psychological characteristics and crime features, it’s not clear whether applying this to individual criminals gives us reliable enough results to guide police investigations.

This edition of All in the Mind explores the various types of criminal profiling and the evidence behind their accuracy.

Although it is somewhat annoyingly cut with scenes from Silence of the Lambs (which has about as much to say about criminal profiling as One Flew Over the Cuckoo’s Nest has to say about psychiatric nursing) it is still a fascinating and insightful look into a little understood practice.

And if this isn’t enough criminology for you, a recent Radio 4 documentary (podcast here) discussed the evidence behind ‘miracle’ crime and violence reduction schemes.

Link to All in the Mind on ‘Profiling the Criminal Profilers’.
Link to streamed BBC doco on ‘crime cutting miracles’.
Podcast for same because putting the mp3 on the same page is hard.

A whiff of madness

For a short time, the scientific community was excited about the smell of schizophrenia.

In 1960, a curious article appeared in the Archives of General Psychiatry suggesting not only that people with schizophrenia had a distinctive smell, but that the odour could be experimentally verified.

The paper by psychiatrists Kathleen Smith and Jacob Sines noted that “Many have commented upon the strange odour that pervades the back wards of mental hospitals” and went on to recount numerous anecdotes of the supposedly curious scent associated with the diagnosis.

Having worked on a fair few ‘back wards of mental hospitals’ in my time, my first reaction would be to point out that the ‘strange odour’ is more likely to be the staff than the patients but Smith and Sines were clearly committed to their observations.

They collected the sweat from 14 white male patients with schizophrenia and 14 comparable patients with ‘organic brain syndromes’ and found they could train rats to reliably distinguish the odours while a human panel of sweat sniffers seemed to be able to do the same.

Seemingly backed up by the nasal ninja skills of two different species, science attempted to determine the source of the ‘schizophrenic odour’.

Two years later researchers from Washington suggested the smell might be triggered by the bacterium Pseudomonas aeruginosa but an investigation found it was no more common in people with schizophrenia than in those without the diagnosis.

But just before the end of the 60s, the original research team dropped a scientific bombshell. They claimed to have identified the schizophrenia specific scent and got their results published in glittery headline journal Science.

Using gas chromatography they identified the ‘odorous substance’ as trans-3-methyl-2-hexenoic acid, now known as TMHA.

At this point, you may be staring blankly at the screen, batting your eyelids in disinterest at the mention of a seemingly minor chemical associated with the mental illness, but to understand why it got splashed across the scientific equivalent of Vogue magazine you need to understand something about the history, hopes and dreams of psychiatry research.

For a great part of the early 20th century, psychiatry was on the hunt for what was called an ‘endogenous schizotoxin’ – a theorised internal toxin that supposedly triggered the disorder.

A great part of the early scientific interest in psychedelics drew on the same idea as psychiatrists wondered whether reality-bending drugs like LSD and mescaline were affecting the same chemicals, or, in some cases, might actually be the ‘schizotoxins’ themselves.

So a chemical uniquely identified in the sweat of people with schizophrenia was big news. Dreams of Nobel Prizes undoubtedly flashed through the minds of the investigators as they briefly allowed themselves to think about the possibility of finally cracking the ‘mystery of madness’.

As the wave of excitement hit, other scientists quickly hit the labs but just couldn’t confirm the link – the results kept coming in negative. In 1973 the original research team added their own study to the disappointment and concluded that the ‘schizophrenic odour’ was dead.

Looking back, we now know that TMHA is genuinely an important component in sweat odour. Curiously, it turns out it is largely restricted to Caucasian populations but no link to mental illness or psychiatric disorder has ever been confirmed.

The theory seems like a curious anomaly in the history of psychiatry but it occasionally makes a reappearance. In 2005 a study claimed that the odour exists but is “complex and cannot be limited to a single compound, but rather to a global variation of the body odor”, but no replications or further investigations followed.

I, on the other hand, am still convinced it was the staff that were the source of the ‘strange odour’, but have yet to get research funding to confirm my pioneering theories.

Now available in Italian: L’odore della schizofrenia
(thanks Giuliana!)

God, death, cannabis

An amazing picture by artist George Harding takes pride of place on the cover of this month’s British Journal of Psychiatry.

The piece has the wonderful name of Everything is Real Except God and Death.

From the description in the BJP:

The artist writes: ‘I used to get art lessons when I was a small boy from a next-door neighbour who inspired me, and art has been my passion ever since.

‘I did my foundation at Camberwell and graduated from Chelsea College of Art and Design in fine art in 2007. I now have a studio in Southwark, London, where I work and live.

‘Art to me is emotive and is a way of coping with life and my mental illness. It is also something that guides me, something that I can rely upon and excel in.

‘The painting is explained by its title. The image speaks of the delusions, paranoia and imaginary beliefs that envelop a person when smoking cannabis. The image came together by making a collage of specific symbols and signs, which was then altered through painting. The figure featured in the work is after Henry Wallis’s 1856 painting The Death of Chatterton, a poet.’

You can see more of artist George Harding’s amazing work at his website.