A multitude of phantoms

A fascinating paper in the neuroscience journal Brain looks at artistic depictions of phantom limbs – the feeling of the physical presence of a limb after it has been damaged or removed – and gives a wonderful insight into how the brain perceives non-functioning or non-existent body parts.

In fact, most people who have a limb amputated will experience a phantom limb, although these often fade over time.

However, the feeling is usually not an exact representation of how the actual limb felt before it was removed, but can involve curious and sometimes painful ‘distortions’ in its perceived physical size, shape or location.

The Brain article looks at the diversity of phantom limb ‘shapes’ through their visual depictions.

The image on the left is from a 1952 case report where an amputation involved a ‘Krukenberg procedure‘.

This operation is rarely performed in the modern world but it involves the surgeon splitting the stump to allow pincer movements – and in this case it left the patient with the feeling of a divided phantom hand.

In other cases, without any out-of-the-ordinary surgical procedure, patients can be left with a phantom that feels like the middle parts of their limb are missing while they still experience sensations in phantom extremities.

The drawing on the right was completed by a patient in a medical case study to illustrate their experience of a post-arm-amputation phantom limb.

In this case, the person experienced the feeling of a phantom hand on their shoulder stump, but had no experience of an intervening phantom arm.

While phantom limbs are usually associated with amputations, the phenomenon is actually caused by the mismatch between the lack of sensory input from the limb and the fact that the brain’s somatosensory map of the body is still intact and trying to generate sensations.

This means that any sensory disconnection, perhaps through nerve or spinal damage, can cause the experience of a phantom limb, even if the actual limbs are still there.

In the drawing on the left, a patient who suffered spinal damage that caused a loss of sensation in their limbs illustrated how their phantom legs felt.

Although their own legs were completely ‘numb’ the phantom legs felt like they were bent at the knee, regardless of where their actual legs were positioned.

Normally, feedback from real world actions and sensations keeps the somatosensory map tied to the genuine size and shape of the body, but the map can begin to generate distorted sensations when this connection is broken through damage.

However, as the rubber hand illusion shows, our experience of body size, shape and position is remarkably flexible in everyone.
 

Link to locked Brain article on depictions of phantom limbs.

The most accurate psychopaths in cinema

The most accurately depicted psychopaths in cinema have been identified by a study that has just been published in the Journal of Forensic Sciences.

The study specifically excluded films that weren’t intended to be realistic (involving magical powers, fantasy settings and so on) but still examined 126 characters from 20th and 21st century movies.

It’s worth noting that the clinical definition of psychopathy is not what most people think – it’s not necessarily someone who is a knife-wielding maniac – but describes someone who has poor empathy, little remorse, and is impulsive and manipulative.

Needless to say, psychopathy is more common in people who are persistently violent, but you don’t need to be violent to be a psychopath.

After conducting the analysis the authors note which films they feel have most accurately captured the characteristics of the psychopath.

Among the most interesting recent and most realistic idiopathic psychopathic characters is Anton Chigurh in the 2007 Coen brothers’ film, No Country for Old Men. Anton Chigurh is a well-designed prototypical idiopathic / primary psychopath. We lack information concerning his childhood, but there are sufficient arguments and detailed information about his behavior in the film to obtain a diagnosis of active, primary, idiopathic psychopathy, incapacity for love, absence of shame or remorse, lack of psychological insight, inability to learn from past experience, cold-blooded attitude, ruthlessness, total determination, and lack of empathy. He seems to be affectively invulnerable and resistant to any form of emotion or humanity. Having read and studied [serial killer] Richard Kuklinski’s case, Chigurh and Kuklinski have several traits in common. In the case of Chigurh, the description is extreme, but we could realistically almost talk about “an anti-human personality disorder”.

Another realistic interesting example is Henry (inspired from [real life serial killer] Henry Lee Lucas) (Henry-Portrait of a Serial Killer, 1991). In this film, the main, interesting theme is the chaos and instability in the life of the psychopath, Henry’s lack of insight, a powerful lack of empathy, emotional poverty, and a well-illustrated failure to plan ahead. George Harvey is another different and interesting character found in The Lovely Bones, 2009. Harvey is more ‘adapted’ than Chigurh and Henry. He has a house, is socially competent and seems like ‘the average man on the street’. Through the film, we learn that he is in fact an organized paraphilic SVP [sexually violent predator]. Here, the false self is well illustrated.

In terms of a ‘successful psychopath’, Gordon Gekko from Wall Street (1987) is probably one of the most interesting, manipulative, psychopathic fictional characters to date. Manipulative psychopathic characters are increasingly appearing in films and series. Again, we observe the same process, as observed and explained before, with antisocial psychopaths. For the past few years, with the world economic crises and some high-profile trials (such as the Bernard Madoff trial), the attention of the clinicians is more focused on ‘successful psychopaths’, also called corporate psychopaths by Babiak et al. Films and series presenting characters such as brokers, dishonest traders, vicious lawyers, and those engaged in corporate espionage are emerging (e.g., Mad Men, The Wire) and are generally related to the global economy and international business. Again, we see a strong parallelism between what happens in our society and what happens in film.

The paper also has a short section on how the movies portray psychopathic mental health professionals, who were apparently more common in cinema from the 1980s.

It describes how psychiatrist Hannibal Lecter is an ‘extraordinarily astute clinician’ who can diagnose Agent Starling’s psychological conflicts by identifying her perfume and assessing her shoes and clothing, while also being invulnerable.

The authors dryly note that these seem to be abilities “that are not generally found in everyday clinical practice”.
 

Link to locked study in Journal of Forensic Sciences.

2013-12-27 Spike activity

Quick links from the past week in mind and brain news:

Mother Jones reports on a new study finding that political beliefs interfere with logical reasoning.

Space in the brain: how the hippocampal formation supports spatial cognition. Excellent video introduction to Royal Society special edition.

The New York Times discusses the science of depression and what we still need to achieve.

The Inaccuracy of National Character Stereotypes. Neuroskeptic covers a wonderful study. British people apparently not universally humorous, dashingly good looking and modest.

The Brain Bank blog collects the best neuroscience images of 2013 for your viewing pleasure.

Remember that neurosurgeon who had a near death experience and claimed he had proved the existence of heaven? OK, Mind Hacks readers probably don’t but lots of other folks will. Esquire has an in-depth expose on him. That heaven thing was apparently unlikely.

Cracking the Enigma discusses the latest research on autism and asks whether ‘autistic brains’ are under- or over-connected.

The mysterious nodding syndrome – a crack of light

Two years ago we discussed a puzzling, sometimes fatal, ‘nodding syndrome‘ that has been affecting children in Uganda and South Sudan. We now know a little more, with epilepsy being confirmed as part of the disorder, although the cause still remains a mystery.

The condition affects children between 5 and 15 years old, who have episodes where they begin nodding or lolling their heads, often in response to cold. A typical but not exclusive pattern is that over time they become cognitively impaired to the point of needing help with simple tasks like feeding. Stunted growth is common.

Global Health News have a video on the condition if you want to see how it affects people.

In terms of our medical understanding, a review article just published in Emerging Infectious Diseases collates what we now know about the condition.

Firstly, it is now clear that epilepsy is part of the picture and the nodding is caused by recurrent seizures in the brain. This is a bit curious because this type of very specific ‘nodding’ behaviour has not been seen as a common effect of epilepsy before.

Also, knowing that it is caused by a seizure just pushes the need for explanation further down the causal chain. Seizures can occur through many different forms of brain disruption, so the question becomes – what is causing this epidemic form of seizure that seems to have a very selective effect?

With this in mind, a lot of the most obvious candidates have been ruled out. The following is from the review article, but if you’re not up on your medical terms, essentially, tests for a lot of poisons or infections have come up negative:

Testing has failed to demonstrate associations with trypanosomiasis, cysticercosis, loiasis, lymphatic filariasis, cerebral malaria, measles, prion disease, or novel pathogens; or deficiencies of folate, cobalamin, pyridoxine, retinol, or zinc; or toxicity from mercury, copper, or homocysteine.

Brain scans have been inconclusive with some showing minor abnormalities while others seem to show no detectable damage.

There have been some curious but not conclusive associations, however. Children with the nodding syndrome are more likely to have signs of infection by the river blindness parasite. But huge swathes of Africa have endemic river blindness and no nodding syndrome, and some children with nodding syndrome have no signs of infection.

Furthermore, the parasite is not thought to invade the nervous system and no trace of it has been found in the cerebrospinal fluid from any of the people with the syndrome. The authors of the review speculate that a new or similar parasite could be involved but hard data is still lacking and the typical signs of infection are missing.

A form of vitamin B6 deficiency is known to cause neural problems and has been found in affected people but it has also been found in just as many people untouched by the mystery illness. One possibility is this could be a risk factor, making people more vulnerable to the condition, rather than a sole cause.

One idea as to why it is so specific relates to the increasing recognition that some neurological conditions are caused by the body’s immune system erroneously attacking very specific parts of the brain.

For example, in Sydenham’s chorea antibodies for the common sore throat bacteria end up attacking the basal ganglia, while in limbic encephalitis the immune system attacks the limbic system.

This sort of autoimmune problem is a reasonable suggestion given the symptoms, but in the end, it is another hypothesis that is awaiting hard data.

Perhaps most mysterious, however, is its most marked feature – the fact that it only seems to affect children. At the current time, we seem no closer to understanding why. Similarly, the fact that it is epidemic and seems to spread also remains unexplained.

If you’re used to scientific articles, do check out the Emerging Infectious Diseases paper because it reads like an as-yet-unsolved detective story.

Either way, keep tabs on the story as it is something that needs to be cracked, not least because the number of cases seems to be slowly increasing.
 

Link to update paper in Emerging Infectious Diseases.

Why Christmas rituals make tasty food

All of us carry out rituals in our daily lives, whether it is shaking hands or clinking glasses before we drink. At this time of year, the performance of customs and traditions is widespread – from sharing crackers, to pulling the wishbone on the turkey and lighting the Christmas pudding.

These rituals might seem like light-hearted traditions, but I’m going to try and persuade you that they are echoes of our evolutionary history, something which can tell us about how humans came to relate to each other before we had language. And the story starts by exploring how rituals can make our food much tastier.

In recent years, studies have suggested that performing small rituals can influence people’s enjoyment of what they eat. In one experiment, Kathleen Vohs from the University of Minnesota and colleagues explored how ritual affected people’s experience of eating a chocolate bar. Half of the people in the study were instructed to relax for a moment and then eat the chocolate bar as they normally would. The other half were given a simple ritual to perform, which involved breaking the chocolate bar in half while it was still inside its wrapper, and then unwrapping each half and eating it in turn.

Something about carefully following these instructions before eating the chocolate bar had a dramatic effect. People who had focused on the ritual said they enjoyed eating the chocolate more, rating the experience 15% higher than the control group. They also spent longer eating the chocolate, savouring the flavour for 50% longer than the control group. Perhaps most persuasively, they also said they would pay almost twice as much for such a chocolate.

This experiment shows that a small act can significantly increase the value we get from a simple food experience. Vohs and colleagues went on to test the next obvious question – how exactly do rituals work this magic? Repeating the experiment, they asked participants to describe and rate the act of eating the chocolate bar. Was it fun? Boring? Interesting? This seemed to be a critical variable – those participants who were made to perform the ritual rated the experience as more fun, less boring and more interesting. Statistical analysis showed that this was the reason they enjoyed the chocolate more, and were more willing to pay extra.

So, rituals appear to make people pay attention to what they are doing, allowing them to concentrate their minds on the positives of a simple pleasure. But could there be more to rituals? Given that they appear in many realms of life that have nothing to do with food –from religious services to presidential inaugurations – could their performance have deeper roots in our evolutionary history? Attempting to answer the question takes us beyond the research I’ve been discussing so far and into the complex and controversial debate about the evolution of human nature.

In his book, The Symbolic Species, Terrence Deacon claims that ritual played a special role in human evolution, in particular, at the transition point where we began to acquire the building blocks of language. Deacon’s argument is that the very first “symbols” we used to communicate, the things that became the roots of human language, can’t have been anything like the words we use so easily and thoughtlessly today. He argues that these first symbols would have been made up of extended, effortful and complex sequences of behaviours performed in a group – in other words, rituals. These symbols were needed because of the way early humans arranged their family groups and, in particular, shared the products of hunting. Early humans needed a way to tell each other who had what responsibilities and which privileges; who was part of the family, and who could share the food, for instance. These ideas are particularly hard to refer to by pointing. Rituals, says Deacon, were the evolutionary answer to the conundrum of connecting human groups and checking they had a shared understanding of how the group worked.

If you buy this evolutionary story – and plenty don’t – it gives you a way to understand why exactly our minds might have a weakness for ritual. A small ritual makes food more enjoyable, but why does it have that effect? Deacon’s answer is that our love of rituals evolved with our need to share food. Early humans who enjoyed rituals had more offspring. I speculate that an easy shortcut for evolution to make us enjoy rituals was to wire our minds so that rituals make the food itself more enjoyable.

So, for those sitting down with family this holiday, don’t skip the traditional rituals – sing the songs, pull the crackers, clink the glasses and listen to Uncle Vinnie repeat his funny anecdotes for the hundredth time. The rituals will help you enjoy the food more, and carry with them an echo of our long history as a species, and all the feasts the tribe shared before there even was Christmas.

This is my latest column for BBC Future. You can see the original here. Merry Christmas y’all!

Whatever happened to Hans Eysenck?

Psychologist Hans Eysenck was once one of the most cited and controversial scientists on the planet and a major force in the development of psychology but he now barely merits a mention. Whatever happened to Hans Eysenck?

To start off, it’s probably worth noting that Eysenck did a lot to ensure his legacy would be difficult to maintain. He specifically discouraged an ‘Eysenck school’ of psychology and encouraged people to question all his ideas – an important and humble move considering that history favours the arrogant.

But he also argued for a lot of rubbish and that is now what he is most remembered for.

He did a lot of work on IQ but took a hard line on its significance. Rather than thinking of it as simply a broad-based psychological test that is useful as a clinical measure of outcome, he persistently championed it as a measure of ‘intelligence’ – a fuzzy social idea that implies someone’s value.

Without any insight into the cultural specificity of these tests Eysenck argued for racial differences in IQ as likely based in genetics, and signed the notorious ‘Mainstream Science on Intelligence’ statement which reads like your drunk grandpa trying to justify why there are no black Nobel science winners.

Eysenck was apparently not racist himself, but believing that science was ‘value free’ he was also incredibly politically naive and took money from clearly racist organisations or published in their journals, thinking that the data would speak for itself.

He also doubted that smoking caused lung cancer and took money from tobacco giant Philip Morris to try and show that the link was mediated by personality, and at one point claimed that there was some statistical basis behind astrology.

Some of his other main interests have not been rejected, but have just become less popular – not least the psychology of personality and personality tests.

This area is still important but has become a minority sport in contemporary psychology, whereas previously it was central to a field that was still battling fairytale Freudian theories as a way of understanding personal tendencies.

But perhaps his most important contributions to psychology are now so widely accepted that no-one really thinks about their origin.

When he was asked to create the UK’s first training course for clinical psychology he created a scientifically informed approach to understanding which treatments work but extended this philosophy to focus on a hypothesis-testing approach to work with individuals. This is now a core aspect of practice across the world.

His belief that psychologists should consistently look to make links between thoughts, experience, behaviour and biology is something that has been widely taken up by researchers, even if clinical psychologists remain a little neurophobic as a profession.

Because Eysenck loved an academic dust-up, he is most remembered for the IQ debate, on which he took a rigid position which history has, justifiably, not looked kindly on. But as someone who influenced the practice of psychology, his legacy remains important, if largely unappreciated.

A sticking plaster for a shattered world

The last paragraph of this article from the American Journal of Psychiatry on people displaced by the Syrian conflict essentially sums up the entire practice of conflict-related mental health.

Looking at this endless list of horrible stories from a psychiatrist’s perspective, I see only patients suffering from what my profession calls posttraumatic stress disorder. It is a disorder with well-described symptoms and therapeutic options. Looking at this same list from a human being’s perspective, I only see in the looks and attitudes of those patients—as I empathically explain to them their disorder, prescribe a few pills, and orient them to psychotherapy—something that is beyond what contemporary evidence-based medicine can describe scientifically.

In every one of these patients, I see intense, irreversible mistrust and a lack of belief in every principle or rule that is supposed to control our relationships with each other. I see question marks regarding the meaning of their whole existence as well as the meanings behind the most important concepts that seemed unquestionable to them in the past, such as religion, politics, work, family, and finally, last but not least, health. These patients manifest symptoms that justify the wide array of treatment modalities I offer to them, but I am left with a terrible feeling that this management is somehow wanting. All that has been shattered in these patients’ lives cannot be mended by the small treatment that we can offer.

It’s written by Lebanese psychiatrist Rami Bou Khalil, who mentions this little told but often learnt lesson about the effects of war.
 

Link to ‘Where All and Nothing Is About Mental Health’

2013-12-20 Spike activity

Quick links from the past week in mind and brain news:

The New York Times reports that information overlords Google acquire creature-inspired military robot outfit Boston Dynamics. Honestly. It’s like humanity is attached to a big angry dog and someone keeps yanking the chain.

There’s an excellent and extensive MIT Tech Review piece on the development of neuromorphic chips.

Over 60% of people diagnosed with depression do not actually meet the diagnostic criteria, according to an American study covered in The Atlantic.

The New York Times has an interesting piece on how violence peaks in the first few years of life and how persistent adult violence may be a ‘missing dropoff’ from this period.

Long neglected, severe cases of autism get some attention. Excellent piece from the Simons Foundation.

Nature reports that narcolepsy is all but confirmed as an auto-immune disease. Big news.

Hi Kids, I’m Neuro The Clown! Genius comic strip from Saturday Morning Breakfast Cereal.

Sifting The Evidence takes a careful look at dubious claims that aspirin could treat aggression.

There’s an interesting piece over at Nautilus on how your brain twists together emotion and place.

Neurocritic covers a curious case study: When Waking Up Becomes the Nightmare: Hypnopompic Hallucinatory Pain.

Year Four of the Blue Brain documentary

Film-maker Noah Hutton has just released the ‘Year Four’ film of the decade-long series of films about Henry Markram’s massive Blue Brain neuroscience project.

It’s been an interesting year for Markram’s project with additional billion euro funding won to extend and expand on earlier efforts and the USA’s BRAIN Initiative having also made its well-funded but currently directionless debut.

Hutton also tackles Markram on the ‘we’re going to simulate the brain in 10 years’ nonsense he relied on earlier in the project’s PR push, although his answer, it must be said, is somewhat evasive.

Although more of an update on the politics of Big Neuroscience than a piece about new developments in the science of the brain, the latest instalment of the Blue Brain documentary series captures how 2013 will define how we make sense of the brain for years to come.
 

Link to ‘Bluebrain: Year Four’ on Vimeo.
Link to the Bluebrain Film website.

Is school performance less heritable in the USA?

A recent twin study looked at educational achievement in the UK and found that genetic factors contribute more than half to the difference in how students perform in their age 16 exams. But this may not apply to other countries.

Twin studies look at the balance between environmental and genetic factors for a given population and a given environment.

They are based on comparing identical and non-identical twins. Identical twins share 100% of their DNA, non-identical twins 50% on average. They also share a common environment (for example, the family home) and some unique experiences.

Because identical twins share twice as much of their DNA as non-identical twins, any genetic influence should make identical twins roughly twice as similar on the trait being measured. Comparing the similarity of the two types of twin therefore lets you work out the likely balance of genetic and environmental effects, using something called the ACE model.

This relies on various assumptions – for example, that identical twins and non-identical twins will not systematically attract different sorts of experiences – which are not watertight. But as a broad estimate, twin studies hold up.
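The study itself uses full ACE model fitting, but the logic can be sketched with Falconer’s back-of-envelope formula, which splits trait variance into genetic (A), shared environment (C) and unique environment (E) components from the two twin correlations. The correlations below are illustrative values chosen to be consistent with the figures reported for overall GCSE performance (58% genetic, 36% shared environment); they are not taken from the paper.

```python
def falconer_ace(r_mz, r_dz):
    """Rough ACE estimates from identical (MZ) and non-identical (DZ)
    twin correlations, using Falconer's formula."""
    a = 2 * (r_mz - r_dz)  # genetic: MZ twins share twice the DNA of DZ twins
    c = 2 * r_dz - r_mz    # shared environment: similarity genes can't explain
    e = 1 - r_mz           # unique environment (plus measurement error)
    return a, c, e

# Illustrative twin correlations consistent with the reported GCSE figures
a, c, e = falconer_ace(r_mz=0.94, r_dz=0.65)
print(f"A = {a:.2f}, C = {c:.2f}, E = {e:.2f}")  # A = 0.58, C = 0.36, E = 0.06
```

Note how the estimates depend entirely on the gap between the two correlations – which is why, as discussed below, the same trait can come out more or less ‘heritable’ in different environments.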

Here’s what the latest study concluded:

In a national twin sample of 11,117 16-year-olds, heritability was substantial for overall GCSE performance for compulsory core subjects (58%) as well as for each of them individually: English (52%), mathematics (55%) and science (58%). In contrast, the overall effects of shared environment, which includes all family and school influences shared by members of twin pairs growing up in the same family and attending the same school, accounts for about 36% of the variance of mean GCSE scores. The significance of these findings is that individual differences in educational achievement at the end of compulsory education are not primarily an index of the quality of teachers or schools: much more of the variance of GCSE scores can be attributed to genetics than to school or family environment.

In other words, the study concluded that over half of the difference in exam results was down to genetic factors.

The most important thing to consider, however, is how well the conclusions apply outside the population and environment being tested.

Because the results give an estimate of the balance between environment and genetic heritability that contribute to the final outcome, the more fixed the environment, the more any differences will be due to genetics and vice versa.

If that’s a bit difficult to get your head round try this example: ask yourself – is difference in height mostly due to genetics or the environment? Most people say genetics – tall parents tend to have tall offspring – but that only applies where everybody has adequate nutrition (i.e. the environmental contribution is fixed to maximum benefit).

In situations where malnutrition is a problem, difference in height is mostly explained by the environment. People who have adequate nutrition during childhood are taller than people who suffered malnutrition. In this situation, genetic factors are a minor player in terms of explaining height differences.

So let’s go back to our education example and think about how genetic and environmental factors balance out.

One of the interesting things about the UK is that it has a National Curriculum where schools have to teach set subjects in a set way.

In other words, the government has fixed part of the environment meaning that differences in exam performance in the UK are that bit more likely to be due to genetic heritability than places where there is no set education programme.

In fact, in a similar analysis in a 2007 research monograph (pdf, p116), the same research group speculated that school performance would be less genetically heritable in the USA, because the school environment is more variable.

The U.K. National Curriculum provides similar curricula to all students, thus diminishing a potentially important source of environmental variation across schools, to the extent that the curriculum actually provides a potent source of environmental variation.

In contrast, the educational system in the United States is one of the most decentralized national systems in the world. To the extent that these differences in educational policy affect children’s academic performance, we would expect greater heritability and lower shared environment in the United Kingdom than in the United States.

In other words, all other things being equal, greater equality in educational opportunity should lead to greater heritability.

School performance may be less influenced by genetic heritability in the USA because the educational environment is more variable and therefore accounts for more difference.

Whereas in the UK, the educational environment is more fixed so a greater proportion of the difference in performance is down to genetic heritability.

It’s worth noting that this hasn’t, to my knowledge, been confirmed yet, but it’s a reasonable assumption and demonstrates exactly the question we need to bear in mind when considering studies that estimate heritability – for whom and in what environment?
 

Link to twin study on school performance in PLOS One.
pdf of research monograph on learning and genetics.

The best graphic and gratuitous displays

Forget your end of year run-downs and best of 2013 photo specials, it doesn’t get much better than this: ‘The 15 Best Behavioural Science Graphs of 2010-13’ from the Stirling Behavioural Science Blog.

As to be expected, some are a little better than others (well, Rolling Stone chose a Miley Cyrus video as one of their best of 2013, so, you know, no-one’s perfect) but there are still plenty of classics.

This one, from a study on parole rulings by judges based on the order of cases and when food breaks occur, is particularly eye-opening.

This paper examined 1,112 judicial rulings over a 10 month period by eight judges in Israel. These judges presided over 2 parole boards for four major prisons, processing around 40% of all parole requests in the country. They considered 14-35 cases per day for an average of six minutes and they took two daily food breaks (a late morning snack and lunch), dividing the day into three sessions.

The graph looks at the proportion of rulings in favor of parole by ordinal position (so 1st case of the day, then 2nd, then 3rd, etc). The circled points are the first decision in each of the three decision sessions, the tick marks on the x-axis denote every third case and the dotted line denotes a food break. The probability of the judges granting parole falls steadily from around 65% to nearly zero just before the break, before jumping back up again after they return to work.
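The core calculation behind the graph is straightforward: group the rulings by their ordinal position in the session and take the proportion granted at each position. A minimal sketch, using invented data purely for illustration (the real study analysed 1,112 rulings):

```python
from collections import defaultdict

# Hypothetical rulings: (ordinal position within a session, parole granted?)
# Values are made up to illustrate the declining pattern, not real data.
rulings = [
    (1, True), (1, True), (1, False),
    (2, True), (2, False), (2, False),
    (3, False), (3, False), (3, False),
]

counts = defaultdict(lambda: [0, 0])  # position -> [granted, total]
for position, granted in rulings:
    counts[position][0] += int(granted)
    counts[position][1] += 1

# Proportion of favourable rulings at each ordinal position
proportions = {pos: granted / n for pos, (granted, n) in sorted(counts.items())}
print(proportions)
```

Plotting these proportions against ordinal position, with session boundaries marked, reproduces the shape of the published graph: a steady decline that resets after each break.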

Moral of the story: don’t get banged up, make sure your judge has been recently fed, or bring snacks to court.

Anyway, plenty more fascinating behavioural science graphs to check out and no Miley Cyrus. At least, until she jumps on that bandwagon.
 

Link to ‘The 15 Best Behavioural Science Graphs of 2010-13’

A disorder of marketing

The New York Times has an important article on how Attention Deficit Disorder, often known as ADHD, has been ‘marketed’ alongside sales of stimulant medication to the point where leading ADHD researchers are becoming alarmed at the scale of diagnosis and drug treatment.

It’s worth noting that although the article focuses on ADHD, it is really a case study in how psychiatric drug marketing often works.

This is the typical pattern: a disorder is defined and a reliable diagnosis is created. A medication is tested and found to be effective – although studies which show negative effects might never be published.

It is worth highlighting that the ‘gold standard’ diagnosis usually describes a set of symptoms that are genuinely linked to significant distress or disability.

Then, marketing money aims to ‘raise awareness’ of the condition to both doctors and the public. This may be through explicit drug company adverts, by sponsoring medical training that promotes a particular drug, or by heavily funding select patient advocacy groups that campaign for wider diagnosis and drug treatment.

This implicitly encourages diagnosis to be made away from the ‘gold standard’ assessment – which often involves an expensive and time-consuming structured assessment by specialists.

This means that much of the diagnosis and prescribing is done by local doctors, often prompted by patients turning up with newspaper articles, adverts, or the results of supposedly non-diagnostic diagnostic quizzes in their hands. There are many more marketing tricks of the trade, which the article goes through in detail.

As the initial market begins to become saturated, drug companies will often aim to ‘expand’ into other areas by sponsoring studies into the same condition in another age group and treating the condition as an ‘add on’ to another disorder – each of which allows them to officially market the drug for these conditions.

However, fines for illegally marketing drugs for non-approved conditions are now commonplace, and many think these are simply treated as calculated financial risks by pharmaceutical companies.

The New York Times piece is particularly important because it tracks the entire web of marketing activity which – aside from the traditional medical slant – also includes pop stars, material for kids, TV presenters, websites and bloggers.

It is an eye-opening guide to the burgeoning world of ADHD promotion, but is really just a case study of how psychiatric drug marketing works. By the way, don’t miss the video that analyses the marketing material.

Essential stuff.
 

Link to NYT article ‘The Selling of Attention Deficit Disorder’

2013-12-13 Spike activity

Quick links from the past week in mind and brain news:

Beware the enthusiasm for ‘neuroeducation’ says Steven Rose in Times Higher Education.

Lots of studies use oxytocin nasal sprays. You can buy it from websites. Neuroskeptic asks does it even reach the brain?

Time magazine finds a fascinating AI telemarketer bot that denies it’s a robot when questioned – with some great audio samples of the conversations.

The Tragedy of Common-Sense Morality. Interesting interview with psychologist of moral thinking Joshua Greene in Slate.

Brain Watch takes a calm look at the most hyped concept in neuroscience: mirror neurons.

As is traditional, the Christmas edition of the British Medical Journal has some wonderfully light-hearted science – including a medical review on the beneficial and harmful effects of laughter.

How much do we really know about sleep? asks The Telegraph.

Chemical adventurers: a potent laboratory neurotoxin is being sold as a legal high online. The Dose Makes The Poison has the news.

Not really into kickstarters but this looks cool: open-source Arduino-compatible 8-channel EEG platform.

Did Brain Scans Just Save a Convicted Murderer From the Death Penalty? Wired on a curious neurolaw development.

How the US military used lobotomies on World War II veterans – an excellent multimedia exposé from the Wall Street Journal.

New Scientist takes a critical look at the ‘genetics more important than experience in school exam performance’ study that’s been making the headlines.

The Manifestation of Migraine in Wagner’s Ring Cycle. Neurocritic on migraine and opera.

How sleep makes your mind more creative

It’s a tried and tested technique used by writers and poets, but can psychology explain why the first moments after waking can be among our most imaginative?

It is 6.06am and I’m typing this in my pyjamas. I awoke at 6.04am, walked from the bedroom to the study, switched on my computer and got to work immediately. This is unusual behaviour for me. However, it’s a tried and tested technique for enhancing creativity, long used by writers, poets and others, including the inventor Benjamin Franklin. And psychology research appears to back this up, providing an explanation for why we might be at our most creative when our minds are still emerging from the realm of sleep.

The best evidence we have of our mental state when we’re asleep is that strange phenomenon called dreaming. Much remains unknown about dreams, but one thing that is certain is that they are weird. Also listening to other people’s dreams can be deadly boring. They go on and on about how they were on a train, but it wasn’t a train, it was a dinner party, and their brother was there, as well as a girl they haven’t spoken to since they were nine, and… yawn. To the dreamer this all seems very important and somehow connected. To the rest of us it sounds like nonsense, and tedious nonsense at that.

Yet these bizarre monologues do highlight an interesting aspect of the dream world: the creation of connections between things that didn’t seem connected before. When you think about it, this isn’t too unlike a description of what creative people do in their work – connecting ideas and concepts that nobody thought to connect before in a way that appears to make sense.

No wonder some people value the immediate, post-sleep, dreamlike mental state – known as sleep inertia or the hypnopompic state – so highly. It allows them to infuse their waking, directed thoughts with a dusting of dreamworld magic. Later in the day, waking consciousness assumes complete control, which is a good thing as it allows us to go about our day evaluating situations, making plans, pursuing goals and dealing rationally with the world. Life would be challenging indeed if we were constantly hallucinating, believing the impossible or losing sense of what we were doing like we do when we’re dreaming. But perhaps the rational grip of daytime consciousness can at times be too strong, especially if your work could benefit from the feckless, distractible, inconsistent, manic, but sometimes inspired nature of its rebellious sleepy twin.

Scientific methods – by necessity methodical and precise – might not seem the best of tools for investigating sleep consciousness. Yet in 2007 Matthew Walker, now of the University of California at Berkeley, and colleagues carried out a study that helps illustrate the power of sleep to foster unusual connections, or “remote associates” as psychologists call them.

Under the inference

Subjects were presented with pairs drawn from six abstract patterns A, B, C, D, E and F. Through trial and error they were taught the basics of a hierarchy, which dictated that they should select A over B, B over C, C over D, D over E, and E over F. The researchers called these the “premise pairs”. While participants learnt these during their training period, they were not explicitly taught that because A was better than B, and B better than C, they should infer A to be better than C, for example. These hidden-order implied relationships, described by Walker as “inference pairs”, were designed to mimic the remote associates that drive creativity.

Participants who were tested 20 minutes after training got 90% of premise pairs but only around 50% of inference pairs right – the same fraction you or I would get if we went into the task without any training and just guessed.

Those tested 12 hours after training again got 90% for the premise pairs, but 75% of inference pairs, showing the extra time had allowed the nature of the connections and hidden order to become clearer in their minds.

But the real success of the experiment was a contrast in the performances of one group trained in the morning and then re-tested 12 hours later in the evening, and another group trained in the evening and brought back for testing the following morning after having slept. Both did equally well in tests of the premise pairs. The researchers defined inferences that required understanding of two premise relationships as easy, and those that required three or more as hard. So, for example, A being better than C, was labelled as easy because it required participants to remember that A was better than B and B was better than C. However understanding that A was better than D meant recalling A was better than B, B better than C, and C better than D, and so was defined as hard.
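The structure of the design can be sketched in a few lines of code – a hypothetical illustration of the premise/inference pairs, not the study’s actual materials. The easy/hard labels follow the distance definitions just described: an inference spanning two premise relationships is easy, three or more is hard.

```python
# Sketch of the transitive-inference design: six items in a hidden
# hierarchy, with directly taught "premise pairs" and untaught
# "inference pairs" classified by how many premises link them.
items = ["A", "B", "C", "D", "E", "F"]

# Premise pairs: adjacent items, taught directly (A>B, B>C, ...).
premise_pairs = [(items[i], items[i + 1]) for i in range(len(items) - 1)]

# Inference pairs: non-adjacent items, never taught explicitly.
# The distance between items equals the number of premises needed;
# distance 2 was labelled "easy", distance 3 or more "hard".
inference_pairs = {}
for i in range(len(items)):
    for j in range(i + 2, len(items)):
        distance = j - i
        inference_pairs[(items[i], items[j])] = (
            "easy" if distance == 2 else "hard"
        )

print(premise_pairs)                 # the 5 taught pairs
print(inference_pairs[("A", "C")])   # easy (via A>B, B>C)
print(inference_pairs[("A", "D")])   # hard (via A>B, B>C, C>D)
```

With six items this yields five premise pairs and ten inference pairs, of which the four distance-2 pairs are easy and the rest hard.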

When it came to the harder inferences, people who had a night’s sleep between training and testing got a startling 93% correct, whereas those who’d been busy all day only got 70%.

The experiment illustrates that combining what we know to generate new insights requires time, something that many might have guessed. Perhaps more revealingly it also shows the power of sleep in building remote associations. Making the links between pieces of information that our daytime rational minds see as separate seems to be easiest when we’re offline, drifting through the dreamworld.

It is this function of sleep that might also explain why those first moments upon waking can be among our most creative. Dreams may seem weird, but just because they don’t make sense to your rational waking consciousness doesn’t make them purposeless. I was at my keyboard two minutes after waking up in an effort to harness some dreamworld creativity and help me write this column – memories of dreams involving trying to rob a bank with my old chemistry teacher, and playing tennis with a racket made of spaghetti, still tinging the edges of my consciousness.

This is my BBC Future column from last week. The original is here. I had the idea for the column while drinking coffee with Helen Mort. Caffeine consumption being, of course, another favourite way to encourage creativity!

Where data meets the people

Ben Goldacre might be quite surprised to hear he’s written a sociology book, but for the second in our series on books about how the science of mind, brain and mental health meets society, Bad Pharma is a prime example.

The book could essentially be read as a compelling textbook on clinical trial methodology with better jokes, but its crux is not really the methods of testing medical interventions, but how these methods are used and abused for financial ends, and what impact this has on professional medicine and, ultimately, our health.

In other words, the book looks at how clinical science is used socially and how social influences affect clinical science.

For example, this is a question I often put to students: if a trial is badly designed, are the results more likely to suggest the treatment is effective or more likely to suggest the treatment is ineffective?

Most students, naive to the ways of the scientific world, tend to say that badly designed trials would be less likely to show the treatment works, but in the real world badly designed trials are much more likely to give positive results.

There is nothing in the science that makes this happen. This is an entirely social effect. It’s worth saying that this is rarely due to outright fraud; rather, it’s those little decisions that add up over time, each of which seems completely justifiable to the researcher, that sway the results.

It’s like if your dad was the school football coach. You’d probably get picked for the team more often, not because your father was making a conscious decision to include you no matter what, but because he would genuinely believe he had recognised talent where others probably wouldn’t.

For scientists, the treatment they are testing is often their ‘baby’, and the same sort of soft biases creep in between the cracks. And the more cracks there are, the more creep occurs.

On the other hand, pharmaceutical companies are often deliberately trying to promote their product by distorting the evidence for its effectiveness. This often happens within the accepted regulations – the unethical but legal realm – but happens surprisingly often outside the law.

Bad Pharma is not specifically about psychiatry, but as psychiatry is one of the medical specialities most corrupted by the influence of large pharmaceutical companies, it turns up a lot.

It is both an essential guide to understanding how treatments for mental health conditions are tested and has plenty of examples of how psychiatric drugs have been the subject of spin, over-selling and fraud.

Perhaps the only point where I think Goldacre is too harsh is in his criticism of ‘me too’ drugs – new drugs which are often molecularly similar to, but no more effective for the target symptoms than, the old ones.

At least in psychiatry, one of the big problems is not so much the effectiveness of the drugs but their side-effects. Having other compounds which, although no more effective, may be more agreeable or less risky is a genuine benefit.

Goldacre is clear about this being a benefit, but I think he under-values it at times, especially since a lot of mind and brain medicine involves iterating through medications until the patient is happy with the balance between effectiveness and side-effects.

But this is a small point in an excellent book. It is essential reading if you want to know how medicine works, and doubly essential if you have an interest in the mind, brain and mental health, where these issues are both a significant battleground and often under-appreciated.

I suspect Goldacre would prefer to call the book political rather than sociological, but if you are studying psychology, neuroscience or mental health it is a must read to understand how clinical science meets society.

Next and finally in this three-part Mind Hacks series on science and society – Didier Fassin and Richard Rechtman’s The Empire of Trauma.
 

Link to more details of Bad Pharma.

2013-12-06 Spike activity

Quick links from the past week in mind and brain news:

C-list celebrity is photographed with a psychology book in her hand and New York Magazine is all over it like Glenn Greenwald with an encrypted hard drive.

The New York Times covers a Dutch scheme to get alcoholics working by paying them in beer. Scheme to get stoners working by paying them in weed probably not as effective.

The British Medical Journal has an entertaining interview with psychiatrist Simon Wessely.

Soaring dementia rates prompt call for global action, reports New Scientist.

Bloomberg reports that the rate of US teens on psychiatric drugs remains steady at 6%. Hey, it could be worse.

Research on illicit drugs is being hampered by daft drug laws, says David Nutt in Scientific American. Clearly not the worst scientific censorship “since the banning of the telescope”, but the point remains.

Brain Watch has a good round up of discussions surrounding the ‘men and women’s brains are wired differently therefore stereotypes’ study that has been getting everyone’s unisex underwear in a twist.

Electric brain stimulation triggers eye-of-the-tiger effect. Not Exactly Rocket Science has the power-chords (and maybe the power cords – hard to see from this angle).

NPR has one of the few left-brain / right-brain articles you’ll ever want to read. Neuroscientist Tania Lombrozo takes a detailed look at the science behind the concept.

The science of hatred. The Chronicle of Higher Education has an excellent piece on the psychology of genocide and racism.