The labels change, the game remains the same

Today’s New York Times has a huge feature on the illicit use of stimulant drugs like Ritalin and pharmaceutical amphetamines in colleges and schools by kids ‘seeking an academic edge’.

The piece is written like an exposé but if you know a little about the history of amphetamines, it is also incredibly ironic.

The ‘illicit stimulants for study’ situation is a complete replay of what happened with the branded amphetamine benzedrine in the 1930s, as recounted in Nicolas Rasmussen’s brilliant book On Speed: The Many Lives of Amphetamine.

Benzedrine had a legitimate medical use. It acts as a bronchodilator, opening up the airways to the lungs, so it was prescribed for people with asthma.

In the mid-1930s, it was also being tested as a way of increasing intelligence test scores with promising results, both in British adults and in American children.

But, unsurprisingly (it is speed after all) it became popular for party people wanting a recreational high, and students wanting increased focus and energy, who concluded through their own informal tests that it could help with study.

In 1937, none other than The New York Times ran a story about benzedrine calling it a ‘high octane brain fuel’ and noting that without it the brain ‘does not run on all cylinders’. It was clearly pitched as a cognitive enhancer.

Shortly after, Time magazine ran a story specifically on how it was being used by college students to cram for final exams.

Suddenly, there was a boom in students using benzedrine, leading the prestigious Journal of the American Medical Association to condemn the press coverage for promoting widespread use of the drug, which had previously been a niche activity.

The warnings did little good, however, and speed has remained a massively popular study drug ever since.

Here’s an article from the 1948 Harvard Crimson, a full decade later, warning of ‘Benzedrine-Soaked Crammers’. And here’s another from a 1965 edition of the same publication, almost two decades later, warning of studying with benzedrine ‘pep pills’. Here’s the 2004 version: ‘Students Turn To Drugs To Study’.

So the story isn’t really new but it’s ironic that the New York Times has inadvertently promoted the activity. Again.
 

Link to NYT article ‘Risky Rise of the Good-Grade Pill’

A shot to the head

A couple of online articles have discussed whether you would be conscious of being shot in the head with the general conclusion that it is unlikely because the damage happens faster than the brain can register a conscious sensation.

While this may be true in some instances, it ignores the fact that there are many ways of taking a bullet to the head.

This is studied by a field called wound ballistics and, unsurprisingly when you think about it, the wound ballistics of the head are somewhat special.

Firstly, if you get shot in the head, in this day and age, you have, on average, about a 50/50 chance of surviving. In other words, it’s important to note that not everyone dies from their injuries.

But it’s also important to note that not every bullet wound will necessarily damage brain areas essential for consciousness.

The image on the top left of this post charts the position of fatal gunshot wounds recorded in soldiers and was published in a recent study on combat fatalities.

For many reasons, including body armour and confrontation type, head wounds to soldiers are not necessarily a good guide to how these will pan out in civilians, but you can see that there are many possibilities with regard to which brain areas could be affected.

In fact, you can see differences in the effect of gunshots to the head more directly in data from Glasgow Coma Scale (GCS) ratings. A sizeable minority of patients are conscious when they are first seen by the trauma team.

It’s also worth noting that deaths are not necessarily due to brain damage per se; blood loss is also a key factor.

An average male has about 6 litres of blood and his internal carotid artery carries about a quarter of a litre per minute at rest to supply the brain. In a stressful situation, like, for example, being shot, that output can double.

If we need to lose about 20% of our blood to lose consciousness, our notional male could black out in just over two minutes just through having damage to his carotid. However, that’s two minutes of waiting if he’s not been knocked unconscious by the impact.
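For the curious, the two-minute figure is just back-of-the-envelope arithmetic. Here’s the sum as a minimal sketch, using the rough averages quoted above, so treat the output as illustrative rather than clinical:

```python
# Rough estimate of time to blackout from carotid damage alone,
# using the approximate average figures quoted above.
total_blood_l = 6.0            # average male blood volume (litres)
carotid_flow_l_min = 0.25      # resting internal carotid flow (litres/min)
stress_multiplier = 2          # output can roughly double under stress
blackout_fraction = 0.20       # ~20% blood loss for loss of consciousness

blood_to_lose = total_blood_l * blackout_fraction      # 1.2 litres
flow = carotid_flow_l_min * stress_multiplier          # 0.5 litres/min
print(f"Time to blackout: {blood_to_lose / flow:.1f} minutes")  # 2.4 minutes
```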

But if we’re thinking about brain damage, the extent depends on a whole range of ballistic factors – the velocity, shape, size and make-up of the bullet being key.

As it turns out, the brain needs special consideration, not least because it is encased in the skull.

One of the first things to consider is that the skull can fracture and how the fragments themselves can become missiles. In 42 cases of civilian gunshot wounds to the brain two neurosurgeons were able to find bone chips in 16 patients’ brains simply by “digital palpation” – which is a complicated medical term for sticking your fingers in and wiggling them about.

In other words, a shot to one part of the head may have knock-on effects purely due to skull shattering.

However, the skull also sets up a unique target due to its enclosed nature. If someone gets shot in the leg the pressure of the impact can be released into the surroundings. If a bullet gets into the brain the options are fewer because the pressure waves and, indeed, the brain, are largely trapped inside a solid box of bone.

If you want to get an idea of the sorts of pressures involved, just catch a video or two of bullets being fired into ballistic gel and think what would happen if the gel was trapped inside a personally important life-sustaining box.

In fact, if the shot is powerful enough, from high velocity rifles for example, there is a combination of the initial impact and an ‘explosive’ effect which can do substantial damage by forcing the brain against the side of the skull and fracturing it from the inside out.

There is one rare effect, called the Krönlein shot, where a high powered shot messily opens the skull but neatly ejects the whole brain onto the ground. You can find pictures on the web from pathology articles but, I warn you, they’re neither child friendly nor particularly good tea-time viewing.

Small low-velocity rounds can do quite local damage, however, and despite the tragedy of being shot, we have learnt a surprising amount from people who have survived such wounds.

As we’ve discussed previously, the use of small bore low-velocity bullets during World War I meant that, more than ever before and perhaps since, soldiers survived with small localised brain injuries.

This meant doctors could do some of the first systematic studies into how specific brain areas related to specific functions, based on tests of what brain-injured soldiers could no longer do.

But while it’s true to say that many people will lose consciousness before they even know they’ve been shot, it’s not guaranteed. Although it will mean that some people will be unfortunately aware of their death, it also means that others are able to save themselves.

A delusional life on film

A curiously recursive case of psychosis, reported in the latest issue of Cognitive Neuropsychiatry: a person who worked on a reality TV show developed the delusion that they were on a reality TV show.

Mr D. was working on a reality television show when he was hospitalised after causing a public disturbance. While working on the production of the show, he came to believe that he was the one who was actually being broadcast: ‘‘I thought I was a secret contestant on a reality show. I thought I was being filmed. I was convinced I was a contestant and later the TV show would reveal me.’’ He believed his thoughts were being controlled by a film crew paid for by his family. During the 2 weeks prior to admission, he experienced decreased sleep, pressured speech, irritability, paranoia, and hyperreligiosity. The patient carried a diagnosis of bipolar disorder and had had two previous hospitalisations for manic episodes.

The case is from a paper that reports several cases of what the authors call the ‘Truman Show delusion’ where a person believes that they are being featured on a TV show about their life, as in the film of the same name.

Sadly, the article is locked behind a paywall, which is a shame as it contains a fantastic discussion of how culture and psychosis interact.
 

Link to locked academic paper.

An unplanned post-mortem

My latest Beyond Boundaries column for The Psychologist explores the space between how we study suicide and the experience of families affected by it:

Suicide is often considered a silencing, but for many it is only the beginning of the conversation. A common approach to understand those who have ended their own lives is the ‘psychological autopsy’ – a method that seeks to reconstruct the mental state of the deceased individual shortly before the final act. The testimony of friends and family is filtered through standardised assessments and psychiatric diagnoses. The narrative is ‘stripped down’ to the essential facts. A life is reduced to risk factors.

Psychologists Christabel Owens and Helen Lambert were struck by the contrast between the goal of the professionals in the interviews and how the friends and family of the deceased used the opportunity to tell their story and to make sense of their loss. ‘The flow of narrative’, they note in their recent study, ‘can often be unstoppable’. The researchers returned to the transcripts of a 2003 psychological autopsy study, but instead of using the interview to construct variables, they looked at how the friends and families portrayed their lost companion.

As suicide is both stigmatised and stigmatising, the personal accounts often contained portrayals of events that presupposed possible moral conclusions about the deceased. For example, by tradition, those who have cancer are discussed as heroic fighters, facing down death with courage and resolution. The default stories about people who commit suicide are not nearly so generous, however, and to navigate this treacherous moral territory bereaved friends and family often called on other, more acceptable, social stereotypes to make sense of the situation.

The suicides of women were largely portrayed in medical terms, as being so weakened by negative experiences that they were unable to prevent a decline into mental illness. The suicides of men, on the other hand, were barely ever described in terms of mental disorder. Male suicide was typically described either as the end result of having ‘gone off the rails’, a self-directed descent into antisocial behaviour, or as a ‘heroic’ action, demonstrating a final defiant act against an unjust world.

Deaths were filtered through gender stereotypes of agency and accountability, perhaps to make them more acceptable to an unkind world. Owens and Lambert’s study highlights the stark contrast between how researchers and family members interpret the same tragic events. As professionals, we often do surprisingly little to mesh together the bounded worlds of science and subjectivity, but the study demonstrates the power of the personal narrative. It affects us even after death.

Thanks to Jon Sutton, editor of The Psychologist, who has kindly agreed for me to publish my column on Mind Hacks as long as I include the following text:

“The Psychologist is sent free to all members of the British Psychological Society (you can join here), or you can subscribe as a non-member by going here.”
 

Link to original behind paywall.

The bathroom of the mind

The latest issue of The Psychologist has hit the shelves and it has a freely available and surprisingly thought-provoking article about bathroom psychology.

If you’re thinking it’s an excuse for cheap jokes you’d be mistaken, as it takes a genuine and inquisitive look at why so little psychology, Freud excepted, has been concerned with one of our most important bodily functions.

This part, on the history of theories regarding graffiti found in toilets, is as curious as it is bizarre.

Toilet graffiti, dubbed ‘latrinalia’ by one scholar, has drawn attention from many researchers and theorists over the years. Many of them have focused on gender, using public lavatories as laboratories for studying sex differences in the content and form of these scribblings. Alfred Kinsey was one of the first researchers to enter the field, surveying the walls of more than 300 public toilets in the early 1950s and finding more erotic content in men’s and more romantic content in women’s. Later research has found that men’s graffiti also tend to be more scatological, insulting, prejudiced, and image-based, and less likely to offer advice or otherwise respond to previous remarks.

Theorists have struggled to explain differences such as these. True to his time, Kinsey ascribed them to women’s supposedly greater regard for social conventions and lesser sexual responsiveness. Psychoanalytic writers proposed that graffiti writing was a form of ‘phallic expression’ or that men pursued it out of an unconscious envy of women’s capacity for childbirth. Semioticians argued that men’s toilet graffiti signify and express political dominance, whereas women’s respond to their subordination. Social identity theorists proposed that gender differences in latrinalia reflect the salience of gender in segregated public bathrooms: rather than merely revealing their real, underlying differences, women and men polarise their behaviour in these gender-marked settings so as to exaggerate their femaleness or maleness.

The article looks at many other curious episodes in the bashful psychology of the bathroom.
 

Link to The Psychologist on ‘toilet psychology’

A bridge over troubled waters for fMRI?

Yesterday’s ‘troubles with fMRI’ article has caused lots of debate so I thought I’d post the original answers given to me by neuroimagers Russ Poldrack and Tal Yarkoni from which I quoted.

Poldrack and Yarkoni have been at the forefront of finding, fixing and fine-tuning fMRI and its difficulties. I asked them about current challenges but could only include small quotes in The Observer article. Their full answers, included below with their permission, are important and revealing, so well worth checking out.

First, however, a quick note about the reactions the piece has received from the neuroimaging community. They tend to be split into “well said” and “why are you saying fMRI is flawed?”

Because of this, it’s worth saying that I don’t think fMRI or other imaging methods are flawed in themselves. However, it is true that we have discovered that a significant proportion of the past research has been based on potentially misleading methods.

Although it is true that these methods have largely been abandoned there still remain some important and ongoing uncertainties around how we should interpret neuroimaging data.

As a result of these issues, and, genuinely, because brain scans are often enchantingly beautiful, I think neuroimaging results are currently given too much weight as we try to understand the brain. That doesn’t mean we should undervalue neuroimaging as a science, however.

Despite having our confidence shaken in past studies, neuroimaging will clearly come out better and stronger as a result of current debates about problems with analysis and interpretation.

At the moment, the science is at a fascinating point of transition, so it’s a great time to be interested in cognitive neuroscience and I think this is made crystal clear from Russ and Tal’s answers below.

Russ Poldrack from the University of Texas at Austin

What’s the most pressing problem fMRI research needs to address at the moment?

I think that the biggest fundamental problem is the great flexibility of analytic methods that one can bring to bear on any particular dataset; the ironic thing is that this is also one of fMRI’s greatest strengths, i.e., that it allows us to ask so many different questions in many different ways. The problem comes about when researchers search across many different analysis approaches for a result, without the realization that this induces an increase in the ultimate likelihood of finding a false positive. I think that another problem that interacts with this is the prevalence of relatively underpowered studies, which are often analyzed using methods that are not stringent enough to control the level of false positives. The flexibility that I mentioned above also includes methods that are known by experts to be invalid, but unfortunately these still get into top journals, which only helps perpetuate them further.
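To make the flexibility problem concrete, here’s a minimal simulation sketch (my illustration, not Poldrack’s). Each ‘pipeline’ stands in for a different analysis approach, crudely modelled as an independent test on pure noise, and reporting the first one that ‘works’ inflates the false positive rate well beyond the nominal 5%:

```python
import numpy as np
from scipy import stats

# Trying many analysis pipelines on null data and keeping whichever
# one "works" inflates the false positive rate far above 5%.
rng = np.random.default_rng(42)
n_experiments, n_pipelines, n_subjects = 1000, 10, 20

false_positives = 0
for _ in range(n_experiments):
    for _ in range(n_pipelines):
        sample = rng.normal(0, 1, n_subjects)   # no true effect anywhere
        _, p = stats.ttest_1samp(sample, 0)
        if p < 0.05:
            false_positives += 1
            break                               # report the first "hit"

print(f"Experiments with a 'significant' result: "
      f"{false_positives / n_experiments:.0%}")  # ~40%, not 5%
```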

Someone online asked the question “How Much of the Neuroimaging Literature Should We Discard?” How do you think we should consider past fMRI studies that used problematic methodology?

I think that replication is the ultimate answer. For example, the methods that we used in our 1999 Neuroimage paper that examined semantic versus phonological processing seem pretty abominable by today’s standards, but the general finding of that paper has been replicated many times since then. There are many other findings from the early days that have stood the test of time, while others have failed to replicate. So I would say that if a published study used problematic methods, then one really wants to see some kind of replication before buying the result.

Tal Yarkoni from the University of Colorado at Boulder

What’s the most pressing problem fMRI research needs to address at the moment?

My own feeling (which I’m sure many people would disagree with) is that the biggest problem isn’t methodological laxness so much as skewed incentives. As in most areas of science, researchers have a big incentive to come up with exciting new findings that make a splash. What’s particularly problematic about fMRI research–as opposed to, say, cognitive psychology–is the amount of flexibility researchers have when performing their analyses. There simply isn’t any single standard way of analyzing fMRI data (and it’s not clear there should be); as a result, it’s virtually impossible to assess the plausibility of many if not most fMRI findings simply because you have no idea how many things the researchers tried before they got something to work.

The other very serious and closely related problem is what I’ve talked about in my critique of Friston’s paper [on methods in fMRI analysis] as well as other papers (e.g., I wrote a commentary on the Vul et al “voodoo correlations” paper to the same effect): in the real world, most effects are weak and diffuse. In other words, we expect complicated psychological states or processes–e.g., decoding speech, experiencing love, or maintaining multiple pieces of information in mind–to depend on neural circuitry widely distributed throughout the brain, most of which are probably going to play a relatively minor role. The problem is that when we conduct fMRI studies with small samples at very stringent statistical thresholds, we’re strongly biased to detect only a small fraction of the ‘true’ effects, and because of the bias, the effects we do detect will seem much stronger than they actually are in the real world. The result is that fMRI studies will paradoxically tend to produce *less* interesting results as the sample size gets bigger. Which means your odds of getting a paper into a journal like Science or Nature are, in many cases, much higher if you only collect data from 20 subjects than if you collect data from 200.

The net result is that we have hundreds of very small studies in the literature that report very exciting results but are unlikely to ever be directly replicated, because researchers don’t have much of an incentive to collect the large samples needed to get a really good picture of what’s going on.
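Yarkoni’s point about small samples at stringent thresholds is easy to demonstrate. Here’s a minimal sketch of the selection bias he describes (my illustration, assuming a modest true effect): only the lucky overestimates clear the threshold, so the ‘published’ effects come out dramatically inflated:

```python
import numpy as np
from scipy import stats

# A modest true effect, estimated in many small studies. Only results
# passing a stringent threshold get "published", and those survivors
# systematically overestimate the real effect.
rng = np.random.default_rng(0)
true_effect, n_subjects, n_studies = 0.2, 20, 10000

published = []
for _ in range(n_studies):
    sample = rng.normal(true_effect, 1, n_subjects)
    t, p = stats.ttest_1samp(sample, 0)
    if p < 0.001 and t > 0:            # stringent, fMRI-style threshold
        published.append(sample.mean())

print(f"True effect:           {true_effect}")
print(f"Mean published effect: {np.mean(published):.2f}")  # ~0.9, vastly inflated
```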

Someone online asked the question “How Much of the Neuroimaging Literature Should We Discard?” How do you think we should consider past fMRI studies that used problematic methodology?

This is a very difficult question to answer in a paragraph or two. I guess my most general feeling is that our default attitude to any new and interesting fMRI finding should be skepticism–instead of accepting findings at face value until we discover a good reason to discount them, we should incline toward disbelief until a finding has been replicated and extended. Personally I’d say I don’t really believe about 95% of what gets published. That’s not to say I think 95% of the literature is flat-out wrong; I think there’s probably a kernel of truth to most findings that get published. But the real problem in my view is a disconnect between what we should really conclude from any given finding and what researchers take license to say in their papers. To take just one example, I think claims of “selective” activation are almost without exception completely baseless (because very few studies really have the statistical power to confidently claim that absence of evidence is evidence of absence).

For example, suppose someone publishes a paper reporting that romantic love selectively activates region X, and that activation in that region explains a very large proportion of the variance in some behavior (this kind of thing happens all the time). My view is that the appropriate response is to say, “well, look, there probably is a real effect in region X, but if you had had a much larger sample, you would realize that the effect in region X is much smaller than you think it is, and moreover, there are literally dozens of other regions that show similarly-sized effects.” The argument is basically that much of the novelty of fMRI findings stems directly from the fact that most studies are grossly underpowered. So really I think the root problem is not that researchers aren’t careful to guard against methodological problems X, Y, and Z when doing their analyses; it’s that our mental model of what most fMRI studies can tell us is fundamentally wrong in most cases. A statistical map of brain activity is *not* in any sense an accurate window into how the brain supports cognition; it’s more like a funhouse mirror that heavily distorts the true image, and to understand the underlying reality, you also have to take into account the distortion introduced by the measurement. The latter part is where I think we have a systemic problem in fMRI research.

The trouble with fMRI

I’ve written a piece for The Observer about ‘the trouble with brain scans’ that discusses how past fMRI studies may have been based on problematic assumptions.

For years the media has misrepresented brain scan studies (“Brain centre for liking cheese discovered!”) but we are now at an interesting point where neuroscientists are starting to seriously look for problems in their own methods of analysis.

In fact, many of these problems have now been corrected, but we still have hundreds or thousands of previous studies that were based on methods that have now been abandoned.

In part, the piece was inspired by a post on the Neurocritic blog entitled “How Much of the Neuroimaging Literature Should We Discard?” that was prompted by growing concerns among neuroscientists.

The fact is, fMRI is a relatively new science – it just celebrated its 20th birthday – and it is still evolving.

I suspect it will be revised and reconsidered many times yet.

 
Link to Observer article ‘The Trouble With Brain Scans’

What is the DSM supposed to do?

I’ve written an article for Discover Magazine’s blog The Crux on what the DSM diagnostic manual is supposed to do.

This is quite an interesting question when you think about it. In other words, it asks: how do we define mental illness, both in theory and in practice?

The article tackles how you decide what a mental illness is in the first place and then how you go about classifying mental states that, by definition, can only be experienced by one person. It turns out, classifying mental illness is a lot like classifying literature.

It also discusses the old and possibly futile quest for ‘biological tests for mental illness’ as if there is a perfect mapping between how we classify mental states and how the brain actually works at the neurobiological level.

So if you want to know the thinking and, indeed, problems behind one of the central and often unquestioned assumptions of psychiatry, this should be a good place to start.
 

Link to ‘What Is the “Bible of Psychiatry” Supposed to Do?’

Sigman and the skewed screen of death

The media is buzzing this morning with the shocking news that children spend ‘more than six hours in front of screens’. The news is shocking, however, because it’s wrong.

The sound bite stems from an upcoming talk on ‘Alcohol and electronic media: units of consumption’ by evidence-ambivalent psychologist Aric Sigman, who is giving a guest lecture at a special interest group meeting at the Royal College of Paediatrics and Child Health annual conference.

Sigman has a track record of being economical with evidence for the purpose of promoting his ‘traditional family values’ and this is another classic example.

The ‘six hour a day in front of the screen’ figure comes from a commercial research organisation called Childwise. It was the headline finding that made all the papers, which is quite convenient if you’re selling the report for £1800 a copy.

But why would you rely on a commercial report when you have so many non-commercial scientific studies to choose from?

A 2006 meta-analysis looked at 90, yes 90, studies on media use in young people from Europe and North America and here’s what it found.

Youth watch an average of 1.8–2.8 h TV a day. This has not changed for 50 years. Boys and girls spend approx 60 and 23 min/day on computer games. Computers account for an additional 30 min/day. TV viewing tends to decrease during adolescence.

Now, that’s not to say that there aren’t risks to children if they spend large amounts of their time sat on their arse. Time spent watching television has genuinely been linked to poor health. However, it’s better to inform people of the details rather than the panic-inducing headlines.

For example, talking about ‘screen time’ is probably not helpful: TV viewing seems to increase the risk of obesity more than video games do.

It’s also worth noting that researchers are now making a distinction between ‘passive screen time’ (i.e. being sat on your arse) and ‘active screen time’ (i.e. body movement-based video games), with the latter showing promise as an intervention for obesity.

The devil is in the details, rather than behind the screen.

Legal highs making the drug war obsolete

If you want any evidence that drugs have won the drug war, you just need to read the scientific studies on legal highs.

If you’re not keeping track of the ‘legal high’ scene it’s important to remember that the first examples, synthetic cannabinoids sold as ‘Spice’ and ‘K2’ incense, were only detected in 2009.

Shortly after, amphetamine-like stimulant drugs, largely based on variations of pipradrol and the cathinones, appeared, and now ketamine-like drugs such as methoxetamine have become widespread.

Since 1997, 150 new psychoactive substances have been reported. Almost a third of those appeared in 2010.

Last year, the US government banned several of these drugs although the effect has been minimal as the legal high laboratories have over-run the trenches of the drug warriors.

A new study just published in the Journal of Analytical Toxicology tracked the chemical composition of legal highs as the bans were introduced.

A key question was whether the legal high firms would just try and use the same banned chemicals and sell them under a different name.

The research team found that since the ban only 4.9% of the products contained any trace of the recently banned drugs. The remaining 95.1% of products contained drugs not covered by the law.

The chemicals in legal highs have fundamentally changed since the 2011 ban and the labs have outrun the authorities in less than a year.

Another new study has looked at legal highs derived from pipradrol – a drug developed in the 1940s for treating obesity, depression, ADHD and narcolepsy.

It was made illegal in many countries during the 70s due to its potential for abuse because it gives an amphetamine-like high.

The study found that legal high labs have just been running through variations of the banned drug using simple modifications of the original molecule to make new unregulated versions.

The following paragraph is from this study and even if you’re not a chemist, you can get an impression of how the drug has been tweaked in the most minor ways to create new legal versions.

Modifications include: addition of halogen, alkyl or alkoxy groups on one or both of the phenyl rings or addition of alkyl, alkenyl, haloalkyl and hydroxyalkyl groups on the nitrogen atom. Other modifications that have been reported include the substitution of a piperidine ring with an azepane ring (7-membered ring), a morpholine ring or a pyridine ring or the fusion of a piperidine ring with a benzene ring. These molecules, producing amphetamine-like effects, increase the choice of new stimulants to be used as legal highs in the coming years.

New, unknown and poorly understood psychoactive chemicals are appearing faster than they can be regulated.

The market is being driven by a demand for drugs that have the same effects as existing legal highs but won’t get you thrown in prison.

The drug war isn’t only being lost, it’s being made obsolete.

Uploaded to the Life network

A fantastic short film about what you might see when your mind is uploaded to an online storage cloud in 2052. It’s subtitled “the Singularity, ruined by lawyers”.

The piece is by futurist Tom Scott who obviously sees the consciousness uploading business far more pessimistically than me.

Personally, I’m going to get uploaded to a linux server. It’ll be completely free but won’t support all my mental states.

Yes, I’ll be doing software jokes in the afterlife. No, you won’t have to humour me.
 

Link to fantastic video ‘Welcome to Life’ (via @SebastianSeung)

A history of human sacrifice

A video on the history of human sacrifice is available from Science magazine as part of their special issue on human conflict.

Sadly, all the articles are locked behind a paywall but the video is free to view and has science writer Ann Gibbons discussing how the practice evolved through the ages and how archaeologists have been uncovering the evidence.

If you can’t stump up the cash for what looks like a genuinely fascinating issue, there’s more discussion from the latest edition on the podcast, where the science of racism and prejudice is explored.
 

Link to locked special issue.
Link to video.
Link to podcast

Psychology and the one-hit wonder

Don’t miss an important article in this week’s Nature about how psychologists are facing up to problems with unreplicated studies in the wake of several high-profile controversies.

Positive results in psychology can behave like rumours: easy to release but hard to dispel. They dominate most journals, which strive to present new, exciting research. Meanwhile, attempts to replicate those studies, especially when the findings are negative, go unpublished, languishing in personal file drawers or circulating in conversations around the water cooler…

One reason for the excess in positive results for psychology is an emphasis on “slightly freak-show-ish” results, says Chris Chambers, an experimental psychologist at Cardiff University, UK. “High-impact journals often regard psychology as a sort of parlour-trick area,” he says. Results need to be exciting, eye-catching, even implausible. Simmons says that the blame lies partly in the review process. “When we review papers, we’re often making authors prove that their findings are novel or interesting,” he says. “We’re not often making them prove that their findings are true.”

It’s perhaps worth noting that clinical psychology suffers somewhat less from this problem, as treatment studies tend to get replicated by competing groups and negative studies are valued just as highly.

However, it would be interesting to see whether the “freak-show-ish” performing pony studies are less likely to replicate than specialist and not very catchy cognitive science (dual-process theory of recognition, I’m looking at you).

As a great complement to the Nature article, this month’s The Psychologist has an extended look at the problem of replication [pdf] and talks to a whole range of people affected by the problem, from journalists to research experts.

But I honestly don’t know where this ‘conceptual replication’ thing came from – where you test the general conclusion of a study in another form – as this just seems to be a test of the theory with another study.

It’s like saying your kebab is a ‘conceptual replication’ of the pizza you made last night. Close, but no napoletana.
 

Link to Nature article on psychology and replication.
pdf of Psychologist article ‘Replication, replication, replication’

She’s lost control

An article in Slate claims to have detected a ‘logic hole’ in how much sympathy we feel for people with mental illness: both psychopathy and autism are ‘biological disorders’ that people ‘can’t help’, yet we feel quite differently about people affected by them.

The ‘logic hole’, however, doesn’t exist because it is based on a misunderstanding of the role of neuroscience in understanding behaviour and a caricature of what it means to have ‘no control’ over a condition.

Here’s what the article claims:

In the piece [recently published in The New York Times], Kahn compares psychopathy to autism, not because the two disorders are similar in their manifestation, but because psychologists believe they’re both neurological disorders, i.e. based in the brain and really something that the sufferer can’t help.

This caused me to note on Twitter that even though the conditions are similar in this way, autism garners sympathy and psychopathy doesn’t. In fact, most social discourse around psychopathy is still demonizing and utterly unsympathetic to the parents, who are often blamed for the condition. It struck me as an interesting logic hole in our cultural narrative around mental illness, since the usual assumption is that sympathy for mental illness is directly correlated with inability to control your problems.

Clearly the author has good intentions and aims to reduce the stigma associated with mental illness but in terms of behavioural problems, everything is a ‘biological disorder’ because all your behaviour originates in the brain.

The idea that because a disorder is ‘based in the brain’ it is therefore ‘really something that the sufferer can’t help’ is a complete fallacy.

Psychopathy, autism, depression, over-eating, persistently losing your keys and constantly getting annoyed at X Factor are all ‘based in the brain’ and this fact has nothing to do with how much control you have over the behaviour.

Putting this misunderstanding aside, however, there is also the unhelpful implication that someone ‘has’ or ‘has not’ control over their thoughts, behaviour, emotions and propensities, especially if they have a psychiatric diagnosis.

Conscious control varies between individuals, is affected by genetics, is amenable to change and training, and depends on the specific task, situation or action.

This does not mean that everyone with autism, psychopathy or any other diagnosis can just decide not to react in a certain way, but it would be equally stigmatising and simply wrong to assume that current difficulties are forever ‘fixed’.

The article finishes “I was just interested in the fact that there’s no relationship between how much we care about those with a mental disorder and how much those with it can help having it.”

In reality, sympathy for people with disorders is a complex phenomenon and the perception of ‘how much control the person has’ over the condition is only one of the factors. The (often equally bogus) moral associations also play a part as does the seriousness of the condition and the medical speciality that treats it.

Nevertheless, we need to get away from the idea that ‘biology means poor control’ because it is both a fallacy, and, ironically, known to be particularly stigmatising in itself.
 

Link to somewhat confused Slate article (via @ejwillingham)

A look inside digital humanity

BBC Radio 4 has just started an excellent series called The Digital Human that looks at how we use technology and how it affects our relationship to the social world.

It’s written and presented by psychologist Aleks Krotoski and the first two episodes are already online.

The first discusses the tendency to capture and display personal media through sites like Flickr and YouTube but, so far, the stand-out episode has been the second, which discusses the presentation of self online and how much control we have over it.

I think it’s going to be a six-part series so there should be plenty more great stuff on the way.
 

Link to podcasts of Digital Human series.