Do we suffer ‘behavioural fatigue’ for pandemic prevention measures?

The Guardian recently published an article saying “People won’t get ‘tired’ of social distancing – and it’s unscientific to suggest otherwise”. “Behavioural fatigue” the piece said, “has no basis in science”.

‘Behavioural fatigue’ became a hot topic because it was part of the UK Government’s justification for delaying the introduction of stricter public health measures. They quickly reversed this position and we’re now in the “empty streets” stage of infection control.

But it’s an important topic and is relevant to all of us as we try to maintain important behavioural changes that benefit others.

For me, one key point is that, actually, there are many relevant scientific studies that tackle this. And I have to say, I’m a little disappointed that there were some public pronouncements that ‘there is no evidence’ in the mainstream media without anyone making the effort to seek it out.

The reaction to epidemics has actually been quite well studied although it’s not clear that ‘fatigue’ is the right way of understanding any potential decline in people’s compliance. This phrase doesn’t seem to be used in the medical literature in this context and it may well have been simply a convenient, albeit confusing, metaphor for ‘decline’ used in interviews.

In fact, most studies of changes in compliance focus on the effect of changing risk perception, and it turns out that this often poorly tracks the actual risk. Below is a graph from a recent paper illustrating a widely used model of how risk perception tracks epidemics.

Notably, this model was first published in the 1990s based on data available even then. It suggests that increases in risk tend to make us over-estimate the danger, particularly for surprising events, but then as the risk objectively increases we start to get used to living in the ‘new normal’ and our perception of risk decreases, sometimes unhelpfully so.

What this doesn’t tell us is whether people’s behaviour changes over time. However, lots of studies have been done since then, including on the 2009 H1N1 flu pandemic – where a lot of this research was conducted.

To cut a long story short, many, but not all, of these studies find that people tend to reduce their use of at least some preventative measures (like hand washing, social distancing) as the epidemic increases, and this has been looked at in various ways.

When asking people to report their own behaviours, several studies found evidence for a reduction in at least some preventative measures (usually alongside evidence for good compliance with others).

This was found in one study in Italy, two studies in Hong Kong, and one study in Malaysia.

In Holland during the 2006 bird flu outbreak, one study did seven follow-ups and found a fluctuating pattern of compliance with prevention measures. People ramped up their prevention efforts, then there was a dip, then they increased again.

Some studies have looked for objective evidence of behaviour change and one of the most interesting looked at changes in social distancing during the 2009 outbreak in Mexico by measuring television viewing as a proxy for time spent in the home. This study found that, consistent with an increase in social distancing at the beginning of the outbreak, television viewing greatly increased, but as time went on, and the outbreak grew, television viewing dropped. To try and double-check their conclusions, they showed that television viewing predicted infection rates.

One study looked at airline passengers’ missed flights during the 2009 outbreak – given that flying with a bunch of people in an enclosed space is likely to spread flu. There was a massive spike of missed flights at the beginning of the pandemic but this quickly dropped off as the infection rate climbed, although later, missed flights did begin to track infection rates more closely.

There are also some relevant qualitative studies. These are where people are free-form interviewed and the themes of what they say are reported. These studies reported that people resist some behavioural measures during outbreaks as they increasingly start to conflict with family demands, economic pressures, and so on.

Rather than measuring people’s compliance with health behaviours, several studies looked at how epidemics change and used mathematical models to test out ideas about what could account for their course.

One well recognised finding is that epidemics often come in waves. A surge, a quieter period, a surge, a quieter period, and so on.

Several mathematical modelling studies have suggested that people’s declining compliance with preventative measures could account for this. This has been found with simulated epidemics but also when looking at real data, such as that from the 1918 flu pandemic. The 1918 epidemic was an interesting example because there was no vaccine and so behavioural changes were pretty much the only preventative measure.
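To give a flavour of how these models work, here is a toy simulation. It's my own sketch, not any specific published model, and every parameter value is invented for illustration: a standard SIR epidemic in which compliance with distancing rises with perceived risk but decays on its own, and that decay term is enough to produce a second wave.

```python
# Toy SIR model with waning compliance. All parameters are invented
# for illustration; this is a sketch, not any published model.

def simulate(days=300):
    s, i, r = 0.999, 0.001, 0.0   # susceptible, infected, recovered
    beta0, gamma = 0.35, 0.1      # base transmission and recovery rates
    compliance = 0.0
    history = []
    for _ in range(days):
        # Compliance rises with perceived risk (current infections)
        # but decays on its own - the 'fatigue' term.
        compliance += 25 * i * (1 - compliance) - 0.02 * compliance
        compliance = min(max(compliance, 0.0), 0.9)
        beta = beta0 * (1 - compliance)  # distancing cuts transmission
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append(i)
    return history

h = simulate()
first_peak = h.index(max(h))
# After the peak, compliance decays and infections start climbing again.
rebounds = any(h[t] > h[t - 1] for t in range(first_peak + 2, len(h)))
```

The point of the sketch is the feedback loop: behaviour suppresses the epidemic, the quieter period erodes the behaviour, and the epidemic returns.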

And some studies showed no evidence of ‘behavioural fatigue’ at all.

One study in the Netherlands showed a stable increase in people taking preventative measures with no evidence of decline at any point.

Another study conducted in Beijing found that people tended to maintain compliance with low effort measures (ventilating rooms, catching coughs and sneezes, washing hands) and tended to increase the level of high effort measures (stockpiling, buying face masks).

This improved compliance was also seen in a study that looked at an outbreak of the mosquito-borne disease chikungunya.

This is not meant to be a complete review of these studies (do add any others below) but I’m presenting them here to show that actually, there is lots of relevant evidence about ‘behavioural fatigue’ despite the fact that mainstream articles can get published by people declaring it ‘has no basis in science’.

In fact, this topic is almost a sub-field in some disciplines. Epidemiologists have been trying to incorporate behavioural dynamics into their models. Economists have been trying to model the ‘prevalence elasticity’ of preventative behaviours as epidemics progress. Game theorists have been creating models of behaviour change in terms of individuals’ strategic decision-making.

The lessons here are twofold, I think.

The first is for scientists to be cautious when taking public positions. This is particularly important in times of crisis. Most scientific fields are complex and can be opaque even to other scientists in closely related fields. Your voice has influence so please consider (and indeed research) what you say.

The second is for all of us. We are currently in the middle of a pandemic and we have been asked to take essential measures.

In past pandemics, people started to drop their life-saving behavioural changes as the risk seemed to become routine, even as the actual danger increased.

This is not inevitable, because in some places, and in some outbreaks, people managed to stick with them.

We can be like the folks who stuck with these strange new rituals, who didn’t let their guard down, and who saved the lives of countless people they never met.

Why we need to get better at critiquing psychiatric diagnosis

This piece is based on my talk to the UCL conference ‘The Role of Diagnosis in Clinical Psychology’. It was aimed at an audience of clinical psychologists but should be of interest more widely.

I’ve been a longterm critic of psychiatric diagnoses but I’ve become increasingly frustrated by the myths and over-generalisations that get repeated and recycled in the diagnosis debate.

So, in this post, I want to tackle some of these before going on to suggest how we can critique diagnosis more effectively. I’m going to be referencing the DSM-5 but the examples I mention apply more widely.

“There are no biological tests for psychiatric diagnoses”

“The failure of decades of basic science research to reveal any specific biological or psychological marker that identifies a psychiatric diagnosis is well recognised” wrote Sami Timimi in the International Journal of Clinical and Health Psychology. “Scientists have not identified a biological cause of, or even a reliable biomarker for, any mental disorder” claimed Brett Deacon in Clinical Psychology Review. “Indeed”, he continued “not one biological test appears as a diagnostic criterion in the current DSM-IV-TR or in the proposed criteria sets for the forthcoming DSM-5”. Jay Watts, writing in The Guardian, states that “These categories cannot be verified with objective tests”.

Actually there are very few DSM diagnoses for which biological tests are entirely irrelevant. Most use medical tests for differential diagnosis (excluding other causes), some DSM diagnoses require them as one of a number of criteria, and a handful are entirely based on biological tests. You can see this for yourself if you take the radical scientific step of opening the DSM-5 and reading what it actually says.

There are some DSM diagnoses (the minority) for which biological tests are entirely irrelevant. Body dysmorphic disorder (p242), for example, a diagnosis that describes where people become overwhelmed with the idea that a part of their body is misshapen or unattractive, is purely based on reported experiences and behaviour. No other criteria are required or relevant.

For most common DSM diagnoses, biological tests are relevant but for the purpose of excluding other causes. For example, in many DSM diagnoses there is a general exclusion that the symptoms must not be attributable to the physiological effects of a substance or another medical condition (this appears in schizophrenia, OCD, generalized anxiety disorder and many, many others). On occasion, very specific biological tests are mentioned. For example, to make a confident diagnosis of panic disorder (p208), the DSM-5 recommends testing serum calcium levels to exclude hyperparathyroidism – which can produce similar symptoms.

Additionally, there are a range of DSM diagnoses for which biomedical tests make up one or more of the formally listed criteria but aren’t essential to make the diagnosis. The DSM diagnosis of narcolepsy (p372) is one example, which has two such criteria: “Hypocretin deficiency, as measured by cerebrospinal fluid (CSF) hypocretin-1 immunoreactivity values of one-third or less of those obtained in healthy subjects using the same assay, or 110 pg/mL or less” and polysomnography showing REM sleep latency of 15 minutes or less. Several other diagnoses work along these lines – where biomedical test results are listed but are not necessary to make the diagnosis: the substance/medication-induced mental disorders, delirium, neuroleptic malignant syndrome, neurocognitive disorders, and so on.

There are also a range of DSM diagnoses that are not solely based on biomedical tests but for which positive test results are necessary for the diagnosis. Anorexia nervosa (p338) is the most obvious, which requires the person to have a BMI of less than 17, but this applies to various sleep disorders (e.g. REM sleep disorder which requires a positive polysomnography or actigraphy finding) and some disorders due to other medical conditions. For example, neurocognitive disorder due to prion disease (p634) requires a brain scan or blood test.

There are some DSM diagnoses which are based exclusively on biological test results. These are a number of sleep disorders (obstructive sleep apnea hypopnea, central sleep apnea and sleep-related hypoventilation, all diagnosed with polysomnography).

“Psychiatric diagnoses ‘label distress'”

The DSM, wrote Peter Kinderman and colleagues in Evidence-Based Mental Health, is a “franchise for the classification and diagnosis of human distress”. The “ICD is based on exactly the same principles as the DSM” argued Lucy Johnstone, “Both systems are about describing people’s distress in terms of medical diagnosis”.

In reality, some psychiatric diagnoses do classify distress, some don’t.

Here is a common criterion in many DSM diagnoses: “The symptoms cause clinically significant distress or impairment in social, occupational or other important areas of functioning”

The theory behind this is that some experiences or behaviours are not considered of medical interest unless they cause you problems, which is defined as distress or impairment. Note, however, that it is one or the other: it is still possible to be diagnosed if you’re not distressed but still find these experiences or behaviours get in the way of everyday life.

However, there are a whole range of DSM diagnoses for which distress plays no part in making the diagnosis.

Here is a non-exhaustive list: Schizophrenia, Tic Disorders, Delusional Disorder, Developmental Coordination Disorder, Brief Psychotic Disorder, Schizophreniform Disorder, Manic Episode, Hypomanic Episode, Schizoid Personality Disorder, Antisocial Personality Disorder, and so on. There are many more.

Does the DSM ‘label distress’? Sometimes. Do all psychiatric diagnoses? No they don’t.

“Psychiatric diagnoses are not reliable”

The graph below shows the inter-rater reliability results from the DSM-5 field trial study. They use a statistical test called Cohen’s kappa to measure how well two independent psychiatrists, assessing the same individual through an open interview, agree on a particular diagnosis. A score above 0.8 is usually considered gold standard; the field trials rated anything above 0.6 as within the acceptable range.

The results are atrocious. This graph is often touted as evidence that psychiatric diagnoses can’t be made reliably.

However, here are the results from a study that tested diagnostic agreement on a range of DSM-5 diagnoses when psychiatrists used a structured interview assessment. Look down the ‘κ’ column for the reliability results. Suddenly they are much better and are all within the acceptable to excellent range.

This is well-known in mental health and medicine as a whole. If you want consistency, you have to use a structured assessment method.
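For readers unfamiliar with the statistic, Cohen’s kappa corrects raw agreement for the agreement two raters would reach by chance alone. Here’s a minimal sketch in Python – the diagnoses are invented purely for illustration:

```python
# A minimal sketch of Cohen's kappa: chance-corrected agreement between
# two raters. The diagnosis lists below are hypothetical examples.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of cases where the raters match.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

a = ["MDD", "MDD", "GAD", "MDD", "GAD", "PTSD", "MDD", "GAD"]
b = ["MDD", "GAD", "GAD", "MDD", "GAD", "PTSD", "MDD", "MDD"]
print(round(cohens_kappa(a, b), 2))  # → 0.58
```

Note that the two raters here agree on 6 of 8 cases (75%), but kappa is only 0.58 once chance agreement is stripped out – which is why it is a tougher, and more honest, measure than raw percentage agreement.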

While we’re here, let’s tackle an implicit assumption that underlies many of these critiques: supposedly, psychiatric diagnoses are fuzzy and unreliable, whereas the rest of medicine makes cut-and-dry diagnoses based on unequivocal medical test results.

This is a myth based on ignorance about how medical diagnoses are made – almost all involve human judgement. Just look at the between-doctor agreement results for some diagnoses in the rest of medicine (which include the use of biomedical tests):

Diagnosis of infection at the site of surgery (0.44), features of spinal tumours (0.19 – 0.59), bone fractures in children (0.71), rectal bleeding (0.42), paediatric stroke (0.61), osteoarthritis in the hand (0.60 – 0.82). There are many more examples in the medical literature which you can see for yourself.

The reliability of DSM-5 diagnoses is typically poor for ‘off the top of the head’ diagnosis but this can be markedly improved by using a formal diagnostic assessment. This doesn’t seem to be any different from the rest of medicine.

“Psychiatric diagnoses are not valid because they are decided by a committee”

I’m sorry to break it to you, but all medical diagnoses are decided by committee.

These committees shift the boundaries, revise, reject and resurrect diagnoses across medicine. The European Society of Cardiology revise the diagnostic criteria for heart failure and related problems on a yearly basis. The International League Against Epilepsy revise their diagnoses of different epilepsies frequently – they just published their revised manual earlier this year. In 2014 they broadened the diagnostic criteria for epilepsy meaning more people are now classified as having epilepsy. Nothing changed in people’s brains, they just made a group decision.

In fact, if you look at the medical literature, it’s abuzz with committees deciding, revising and rejecting diagnostic criteria for medical problems across the board.

Humans are not cut-and-dry. Neither are most illnesses, diseases and injuries, and decisions about what a particular diagnosis should include is always a trade-off between measurement accuracy, suffering, outcome, and the potential benefits of intervention. This gets revised by a committee who examine the best evidence and come to a consensus on what should count as a medically-relevant problem.

These committees aren’t perfect. They sometimes suffer from fads and groupthink, and pharmaceutical industry conflicts of interest are a constant concern. I would argue that psychiatry is more prone to fads and pressure from pharmaceutical company interests than some other areas of medicine, although it’s probably not the worst (surgery is notoriously bad in this regard). However, having a diagnosis decided by committee doesn’t make it invalid. Actually, on balance, it’s probably the least worst way of doing it.

“Psychiatric diagnoses are not valid because they’re based on experience, behaviour or value judgements”

We’ve discussed above how DSM diagnoses rely on medical tests to varying degrees. But the flip side of this is that there are many non-psychiatric diagnoses which are also only based on classifying experience and/or behaviour. If you think this makes a diagnosis invalid or ‘not a real illness’, I look forward to your forthcoming campaign to remove the diagnoses of tinnitus, sensory loss, many pain syndromes, headache, vertigo and the primary dystonias, for example.

To complicate things further, we know some diseases have a clear basis in terms of tissue damage but the diagnosis is purely based on experience and/or behaviour. The diagnosis of Parkinson’s disease, for example, is made this way and there are no biomedical tests that confirm the condition, despite the fact that studies have shown it occurs due to a breakdown of dopamine neurons in the nigrostriatal pathway of the brain.

At this point, someone usually says “but no one doubts that HIV or tuberculosis are diseases, whereas psychiatric diagnosis involves arbitrary decisions about what is considered pathological”. Cranks aside, the first part is true. It’s widely accepted – rightly so – that HIV and tuberculosis are diseases. However, it’s interesting how many critics of psychiatric diagnosis seem to have infectious diseases as their comparison for what constitutes a ‘genuine medical condition’ when infectious diseases are only a small minority of the diagnoses in medicine.

Even here though, subjectivity still plays a part. Rather than focusing on a single viral or bacterial infection, think of all viruses and bacteria. Now ask, which should be classified as diseases? This is not as cut-and-dry as you might think because humans are awash with viruses and bacteria, some helpful, some unhelpful, some irrelevant to our well-being. Ed Yong’s book I Contain Multitudes is brilliant on this if you want to know more about the massive complexity of our microbiome and how it relates to our well-being.

So the question for infectious disease experts is at what point does an unhelpful virus or bacteria become a disease? This involves making judgements about what should be considered a ‘negative effect’. Some are easy calls to make – mortality statistics are a fairly good yardstick. No one’s argued over the status of Ebola as a disease. But some cases are not so clear. In fact, the criteria for what constitutes a disease, formally discussed as how to classify the pathogenicity of microorganisms, can be found as a lively debate in the medical literature.

So all diagnoses in medicine involve a consensus judgement about what counts as ‘bad for us’. There is no biological test that can answer this question in all cases. Value judgements are certainly more common in psychiatry than in infectious diseases, though probably less so than in plastic surgery, but no diagnosis is value-free.

“Psychiatric diagnosis isn’t valid because of the following reasons…”

Debating the validity of diagnoses is a good thing. In fact, it’s essential we do it. Lots of DSM diagnoses, as I’ve argued before, poorly predict outcome, and sometimes barely hang together conceptually. But there is no general criticism that applies to all psychiatric diagnoses. Rather than going through all the diagnoses in detail, look at the following list of DSM-5 diagnoses and ask yourself whether the same commonly made criticisms about ‘psychiatric diagnosis’ could be applied to them all:

Tourette’s syndrome, Insomnia, Erectile Disorder, Schizophrenia, Bipolar, Autism, Dyslexia, Stuttering, Enuresis, Catatonia, PTSD, Pica, Sleep Apnea, Pyromania, Medication-Induced Acute Dystonia, Intermittent Explosive Disorder

Does psychiatric diagnosis medicalise distress arising from social hardship? Hard to see how this applies to stuttering and Tourette’s syndrome. Is psychiatric diagnosis used to oppress people who behave differently? If this applies to sleep apnea, I must have missed the protests. Does psychiatric diagnosis privilege biomedical explanations? I’m not sure this applies to PTSD.

There are many good critiques of the validity of specific psychiatric diagnoses, but it’s impossible to see how they apply to all diagnoses.

How can we criticise psychiatric diagnosis better?

I want to make clear here that I’m not a ‘defender’ of psychiatric diagnosis. On a personal basis, I’m happy for people to use whatever framework they find useful to understand their own experiences. On a scientific basis, some diagnoses seem reasonable but many are a really poor guide to human nature and its challenges. For example, I would agree with other psychosis researchers that the days of schizophrenia being a useful diagnosis are numbered. By the way, this is not a particularly radical position – it has been one of the major pillars of the science of cognitive neuropsychiatry since it was founded.

However, I would like to think I am a defender of actually engaging with what you’re criticising. So here’s how I think we could move the diagnosis debate on.

Firstly, RTFM. Read the fucking manual. I’m sorry, but I’ve got no time for criticisms that can be refuted simply by looking at the thing you’re criticising. Saying there are no biological tests for DSM diagnoses is embarrassing when some are listed in the manual. Saying the DSM is about ‘labelling distress’ when many DSM diagnoses do not will get nothing more than an eye roll from me.

Secondly, we need to be explicit about what we’re criticising. If someone is criticising ‘psychiatric diagnosis’ as a whole, they’re almost certainly talking nonsense because it’s a massively diverse field. Our criticisms about medicalisation, poor predictive validity and biomedical privilege may apply very well to schizophrenia, but they make little sense when we’re talking about sleep apnea or stuttering. Diagnosis can really only be coherently criticised on a case by case basis, or where you have demonstrated that a particular group of diagnoses share particular characteristics – but you have to establish this first.

As an aside, restricting our criticisms to ‘functional psychiatric diagnosis’ will not suddenly make these arguments coherent. ‘Functional psychiatric diagnoses’ include Tourette’s syndrome, stuttering, dyslexia, erectile disorder, enuresis, pica and insomnia to name but a few. Throwing them in front of the same critical cross-hairs as borderline personality disorder makes no sense. I did a whole talk on this if you want to check it out.

Thirdly, let’s stop pretending this isn’t about power and inter-professional rivalries. Many people have written very lucidly about how diagnosis is one of the supporting pillars in the power structure of psychiatry. This is true. The whole point of structural analysis is that concept, practice and power are intertwined. We criticise diagnosis, we are attacking the social power of psychiatry. This is not a reason to avoid it, and doesn’t mean this is the primary motivation, but we need to be aware of what we’re doing. Pretending we’re criticising diagnosis but not taking a swing at psychiatry is like calling someone ugly but saying it’s nothing against them personally. We should be working for a better and more equitable approach to mental health – and that includes respectful and conscious awareness of the wider implications of our actions.

Also, let’s not pretend psychology isn’t full of classifications. Just because they’re not published by the APA, doesn’t mean they’re any more valid or have the potential to be any more damaging (or indeed, the potential to be any more liberating). And if you are really against classifying experience and behaviour in any way, I recommend you stop using language, because it relies on exactly this.

Most importantly though, this really isn’t about us as professionals. The people most affected by these debates are ultimately people with mental health problems, often with the least power to make a difference to what’s happening. This needs to change and we need to respect and include a diversity of opinion and lived experience concerning the value of diagnosis. Some people say that having a psychiatric diagnosis is like someone holding their head below water, others say it’s the only thing that keeps their head above water. We need a system that supports everyone.

Finally, I think we’d be better off if we treated diagnoses more like tools, and less like ideologies. They may be more or less helpful in different situations, and at different times, and for different people, and we should strive to ensure a range of options are available to people who need them, both diagnostic and non-diagnostic. Each tested and refined with science, meaning, lived experience, and ethics.

Should we stop saying ‘commit’ suicide?

There is a movement in mental health to avoid the phrase ‘commit suicide’. It is claimed that the word ‘commit’ refers to a crime and this increases the stigma for what’s often an act of desperation that deserves compassion, rather than condemnation.

The Samaritans’ media guidelines discourage using the phrase, advising: “Avoid labelling a death as someone having ‘committed suicide’. The word ‘commit’ in the context of suicide is factually incorrect because it is no longer illegal”. An article in the Australian Psychological Society’s InPsych magazine recommended against it because the word ‘commit’ signifies not only a crime but a religious sin. There are many more such claims.

However, on the surface level, claims that the word ‘commit’ necessarily indicates a crime are clearly wrong. We can ‘commit money’ or ‘commit errors’, for instance, where no crime is implied. The dictionary entry for ‘commit’ (e.g. see the definition at the OED) has entries related to ‘committing a crime’ as only a few of its many meanings.

But we can probably do a little better when considering the potentially stigmatising effects of language than simply comparing examples.

One approach is to see how the word is actually used by examining a corpus of the English language – a database of written and transcribed spoken language – and using a technique called collocation analysis that looks at which words appear together.

I’ve used the Corpus of Contemporary American English collocation analysis for the results below and you can do the analysis yourself if you want to see what it looks like.
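At heart, collocation analysis is just counting which words co-occur. Here’s a toy version – the mini-corpus is invented, as a stand-in for something the size of COCA:

```python
# A toy collocation analysis: count which words immediately follow
# 'commit'. The corpus here is invented for illustration; a real
# analysis would run over a large corpus such as COCA.
import re
from collections import Counter

corpus = (
    "He did not commit suicide. They commit crimes every day. "
    "We commit ourselves to the plan. Did he commit fraud? "
    "People who commit murder... She tried to commit suicide."
)

tokens = re.findall(r"[a-z']+", corpus.lower())
followers = Counter(
    tokens[i + 1] for i, t in enumerate(tokens[:-1]) if t == "commit"
)
print(followers.most_common(3))
```

Real collocation tools do more than this (windowed co-occurrence, significance statistics like mutual information), but the principle is the same: frequency of words in each other’s company.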

So here are the top 30 words that follow the word ‘commit’, in order of frequency in the corpus.

Some of the words are clearly parts of phrases (‘commit ourselves…’) rather than directly referring to actions, but you can see that the most common two-word phrase is ‘commit suicide’ by a very large margin.

If we take this example, the argument for not using ‘commit suicide’ gets a bit circular, but if we look at the other named actions as a whole, they’re all crimes or potential crimes. Essentially, they’re all fairly nasty.

If you do the analysis yourself (and you’ll have to go to the website and type in the details, you can’t link directly) you’ll see that non-criminal actions don’t appear until fairly low down the list, way past the 30 listed here.

So ‘commit’ typically refers to antisocial and criminal acts. Saying ‘commit suicide’ probably brings some of that baggage with it and we’re likely to be better off moving away from it.

It’s worth saying, I’m not a fan of prohibitions on words or phrases, as it tends to silence people who have only colloquial language at their disposal to advocate for themselves.

As this probably includes most people with mental health problems, only a minority of whom will be plugged into debates around language, perhaps we are better off thinking about moving language forward rather than punishing the non-conforming.

Is psychosis an ‘immune disorder’?

A fascinating new study has just been published which found evidence for the immune system attacking a neuroreceptor in the brain in a small proportion of people with psychosis. It’s an interesting study that probably reflects what’s going to be a cultural tipping point for the idea of ‘immune system mental health problems’ or ‘madness as inflammation disorder’ but it’s worth being a little wary of the coming hype.

This new study, published in The Lancet Psychiatry, did blood tests on people who presented with their first episode of psychosis and looked for antibodies that attack specific receptors in the brain. Receptors are what receive neurotransmitters – the brain’s chemical signals – and allow information to be transferred around the nervous system, so disruption to these can cause brain disturbances.

The most scientifically interesting finding is that the research team found a type of antibody that attacks NMDA receptors in 7 patients (3%) out of 228, but zero controls.

The study found markers for other neuroreceptors that the immune system was attacking, but the reason the NMDA finding is so crucial is because it shows evidence of a condition called anti-NMDA receptor encephalitis which is known to cause episodes of psychosis that can be indistinguishable from ‘regular’ psychosis but for which the best treatment is dealing with the autoimmune problem.

It was only discovered in 2007 but there has been a long-running suspicion that it may be the best explanation for a small minority of cases of psychosis which can be easily misdiagnosed as schizophrenia.

Importantly, the findings from this research have been supported by another independent study that has just been published online. The two studies used different ranges for the concentration of NMDA antibodies they measured, but they came up with roughly the same figures.

It also chimes with a growing debate about the role of the immune system in mental health. A lot of this evidence is circumstantial but suggestive. For example, many of the genes associated (albeit weakly) with the diagnosis of schizophrenia are involved in the immune system – particularly in coding proteins for the major histocompatibility complex.

However, it’s worth being a little circumspect about this new enthusiasm for thinking of psychosis as an ‘immune disorder’.

Importantly, these new studies did blood tests, rather than checking cerebrospinal fluid – the fluid your brain floats in, which lies on the other side of the blood-brain barrier – so we can’t be sure that these antibodies were actually affecting the brain in everyone found to have them. It’s likely, but not certain.

Also, we’re not sure to what extent anti-NMDA antibodies contribute to the chance of developing psychosis in everyone. Certainly there are some cases where it seems to be the main cause, but we’re not sure how that holds for all.

It’s also worth bearing in mind that the science over the role of the genes associated with the schizophrenia diagnosis in the immune system is certainly not settled. A recent large study compared the role of these genes in schizophrenia to known autoimmune disorders and concluded that the genes just don’t look like they’re actually impacting on the immune system.

There’s also a constant background of cultural enthusiasm in psychiatry for identifying ‘biomarkers’, and anything that looks like a clear common biological pathway, even for a small number of cases of a ‘psychiatric’ problem, gets a lot of airtime.

Curiously, in this case, Hollywood may also play a part.

A film called Brain On Fire has just been shown at film festivals and is being tested for a possible big release. It’s based on the (excellent) book of the same name by journalist Susannah Cahalan and describes her experience of developing psychosis, only for it later to be discovered that she had anti-NMDA receptor encephalitis.

Hollywood has historically had a big effect on discussions about mental health and you can be sure that if the movie becomes a hit, popular media will be alive with discussions on ‘whether your mental health problems are really an immune problem’.

Taking a less glitzy view, these new studies probably reflect the fact that a small percentage of people with psychosis, maybe 1-2%, have NMDA receptor-related immune problems that play an important role in generating their mental health difficulties.

It’s important not to underestimate the importance of these findings. It could potentially translate into more effective treatment for millions of people a year globally.

But in terms of psychosis as a whole, for which we know social adversity in its many forms plays a massive role, it’s just a small piece of the puzzle.
 

Link to locked Lancet Psychiatry study.

Making the personal, geospatial

CC licensed photo by Flickr user Paul Townsend. Click for origin. There is an old story in London, and it goes like this. Following extensive rioting, there is an impassioned debate about the state of society, with some saying it shows moral decay while others claim it demonstrates the desperation of poverty.

In 1886, London hosted one of its regular retellings when thousands of unemployed people trashed London’s West End during two days of violent disturbances.

In the weeks of consternation that followed, the press stumbled on the work of wealthy ship owner Charles Booth who had begun an unprecedented project – mapping poverty across the entire city.

He started the project because he thought Henry Hyndman was bullshitting.

Hyndman, a rather too earnest social campaigner, claimed that 1 in 4 Londoners lived in poverty, a figure Booth scoffed at as a gross exaggeration.

So Booth paid for an impressive team of researchers and sent them out to interview the officials who assessed families for compulsory schooling. From this he created a map, initially of the East End and eventually as far west as Hammersmith, of every house and the social state of the families within it.

Each dwelling was classified into seven gradations – from “Wealthy; upper middle and upper classes” to “Lowest class; vicious, semi-criminal”. For the first time, deprivation could be seen etched into London’s social landscape.

I suspect that the term ‘vicious’ referred to its older meaning – ‘given to vice’ – rather than cruel. But what Booth created, for the first time and in exceptional detail, was a map of social environments.

The map is amazingly detailed: literally a house-by-house mapping of the whole of London.

The results showed that Hyndman was indeed wrong, but not in the direction Booth assumed. He found 1 in 3 Londoners lived below the poverty line.

If you know a bit about the capital today, you can see how many of the deprived areas from 1886 are still some of the most deprived in 2016.

So I was fascinated when I read about a new study that allows poverty to be mapped from the air, using machine learning to analyse satellite images of Nigeria, Tanzania, Uganda, Malawi, and Rwanda.

But rather than pre-defining what counts as an image of a wealthy area (swimming pools perhaps?) compared to an impoverished one (unpaved roads maybe), they trained a neural network to learn its own associations between image properties and income on an initial set of training data, before trying it out on new data sets.

The neural network could explain up to 75% of the variation in the local economy.
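To make the ‘variance explained’ idea concrete, here is a minimal toy sketch of the approach, using synthetic data and hypothetical feature names in place of the study’s actual neural network features: fit a model mapping image-derived features to survey-measured wealth, then ask how much of the variance it explains on villages it has never seen.

```python
# Illustrative sketch only -- synthetic data, not the study's method in detail.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each of 500 villages is summarised by 10 image-derived features
# (hypothetical examples: night-light intensity, roof material, road density)
X = rng.normal(size=(500, 10))
true_w = rng.normal(size=10)
wealth = X @ true_w + rng.normal(scale=1.0, size=500)  # signal + survey noise

# Train on 375 villages, hold out 125 for evaluation
X_tr, X_te = X[:375], X[375:]
y_tr, y_te = wealth[:375], wealth[375:]

w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)  # ordinary least squares fit
pred = X_te @ w

# R^2: fraction of variance in held-out wealth explained by the model
r2 = 1 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print(f"Variance explained on held-out villages: R^2 = {r2:.2f}")
```

The key point is that the figure is computed on held-out data, so it measures how well the learned associations generalise rather than how well they memorise.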

Knowing both the extent and geography of poverty is massively important. It allows a macro view of something that manifests in very local ways, mapping it to street corners, housing blocks and small settlements.

It makes the vast forces of the economy, personal.
 

Link to Booth’s poverty map.
Link to Science reporting of satellite mapping study.

The science of urban paranoia

CC Licensed Image by Flickr user 01steven. Click for source. I’ve got an article in The Atlantic on how paranoia and psychosis are more common in cities and why the quest to explain the ‘urban psychosis effect’ is reshaping psychiatry.

The more urban your neighbourhood, the higher the rate of diagnosed schizophrenia and the more likely you are to experience what are broadly known as ‘non-affective psychoses’ – that is, delusions, hallucinations, and paranoia not primarily caused by mood problems.

This has led to a long and ongoing debate about why this is, with some arguing that it is an effect of city living on the mind, and others that the association is better explained by a complex interaction between genetic risk factors and limited life chances.

The article discusses the science behind exactly this debate – which is partly a judgement on the value of the city itself – and notes how it’s pushing psychiatry to re-examine how it deals with what is often euphemistically called ‘the environment’.
 

Link to ‘The Mystery of Urban Psychosis’ in The Atlantic.

Cultures of mental distress

BBC Radio 4 is currently running a fascinating four-part series called The Borders of Sanity on the interaction between culture and mental illness.

It’s been put together by cultural historian Christopher Harding and takes an in-depth look at four particular instances where culture and mental health interact, perhaps in seemingly curious ways if you weren’t familiar with the culture.

It includes episodes on Depression in Japan, Sweden’s Adolescents, Hearing Voices in the UK, and, to be broadcast next week, Healing in Ghana.

The only downside is it’s one of BBC Radio’s occasional programmes that they only make available as streamed audio from their website – presumably to give it an early 2000s internet feel.

However, it’s well worth a listen. Genuinely fascinating stuff so far.
 

Link to BBC Radio 4’s The Borders of Sanity.

A new wave of interrogation

Wired has an excellent article that tracks the development of police interrogation techniques from the dark days of physical violence, to the largely hand-me-down techniques depicted in classic cop shows, to a new era of interrogation developed and researched in secret.

It’s probably one of the best pieces you’ll read on interrogation psychology for, well, a very long time, because they don’t come around very often. This one is brilliantly written.

One key part tracks the influence of still-secret interrogation techniques from the US Government’s High-Value Detainee Interrogation Group, or HIG, as they have filtered through from the ‘war on terror’ to civilian law enforcement.

In 2010, to make good on a campaign promise that he would end the use of torture in US terror investigations, President Obama announced the formation of the High-Value Detainee Interrogation Group, a joint effort of the FBI, the CIA, and the Pentagon. In place of the waterboarding and coercion that took place at facilities like Abu Ghraib during the Bush years, the HIG was created to conduct noncoercive interrogations. Much of that work is top secret. HIG-trained interrogators, for instance, are said to have questioned would-be Times Square bomber Faisal Shahzad and convicted Boston Marathon bomber Dzhokhar Tsarnaev. The public knows nothing about how those interrogations, or the dozen or so others the HIG is said to have conducted, unfolded. Even the specific training methods the HIG employs—and that it has introduced to investigators in the Air Force, Navy, and elsewhere—have never been divulged.

At the same time, however, the HIG has become one of the most powerful funders of public research on interrogations in America.

A fascinating and compelling read.

 
Link to Wired article on the new wave of interrogation.

Reconstructing through altered states

Yesterday, I had the pleasure of doing a post-screening Q&A with the film-makers of an amazing documentary called My Beautiful Broken Brain.

One of the many remarkable things about the documentary is that one of the film-makers is also the subject, as she began making the film a few days after her life-threatening brain injury.

The documentary follows Lotje Sodderland who experienced a major brain haemorrhage at the age of 34.

She started filming herself a few days afterwards on her iPhone, initially to make sense of her suddenly fragmented life, but soon contacted film-maker Sophie Robinson to get an external perspective.

It’s interesting both as a record of an emotional journey through recovery, but also because Lotje spent a lot of time working with a special effects designer to capture her altered experience of the world and make it available to the audience.

I also really recommend a long-form article Lotje wrote about her experience of brain injury for The Guardian.

It’s notable because it’s written so beautifully. But Lotje told me that while she had regained the ability to write and type after her injury, she has been left unable to read. So the whole article was written through a process of typing text and getting Siri on her iPhone to read it back to her.

The documentary is available on Netflix.
 

Link to My Beautiful Broken Brain on Wikipedia.
Link to full documentary on Netflix.
Link to long-form article in The Guardian.

Is there a child mental health crisis?

CC Licensed Image from Wikimedia Commons. Click for source. It is now common for media reports to mention a ‘child mental health crisis’ with claims that anxiety and depression in children are rising to catastrophic levels. The evidence behind these claims can be a little hard to track down, and when you do find it there seems little evidence for a ‘crisis’ – but there are still reasons for us to be concerned.

The commonest claim is something to the effect that ‘current children show a 70% increase in rates of mental illness’ and this is usually sourced to the website of the UK child mental health charity Young Minds, which states that “Among teenagers, rates of depression and anxiety have increased by 70% in the past 25 years, particularly since the mid 1980’s”.

This is referenced to a pdf report by the Mental Health Foundation which references a “paper presented by Dr Lynne Friedli”, which probably means this pdf report which finally references this 2004 study by epidemiologist Stephan Collishaw.

Does this study show convincing evidence for a 70% increase in teenage mental health problems in the last 25 years? In short, no, for two important reasons.

The first is that the data is quite mixed – with both flatlines and increases at different times and in different groups – and the few statistically significant results may well be false positives because the study doesn’t control for running lots of analyses.
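The multiple-comparisons worry can be made concrete with a little arithmetic. This is a toy illustration with my own numbers, not the study’s actual analysis: if you run many significance tests at the conventional threshold and all the null hypotheses are true, the chance of at least one false positive becomes large.

```python
# Toy illustration (hypothetical numbers, not the Collishaw study's data):
# why running many uncorrected significance tests invites false positives.
alpha = 0.05   # conventional significance threshold for a single test
n_tests = 20   # e.g. comparisons across many subgroups and time points

# Probability of at least one false positive if every null hypothesis is true
p_any_false_positive = 1 - (1 - alpha) ** n_tests
print(f"P(at least one false positive) = {p_any_false_positive:.2f}")  # ~0.64

# One simple (conservative) remedy: the Bonferroni-corrected threshold
bonferroni_alpha = alpha / n_tests
print(f"Bonferroni-corrected alpha = {bonferroni_alpha:.4f}")  # 0.0025
```

So with twenty uncorrected tests, you would expect to stumble on a ‘significant’ result roughly two times in three even if nothing real is going on – which is why mixed results from many subgroup analyses deserve caution.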

The second reason is because it looked at a 25-year period but only up to 1999 – so it is now 17 years out-of-date.

Lots of studies have been published since then, which we’ll look at in a minute, but these findings prompted the Nuffield Foundation to collect another phase of data in 2008 in exactly the same way as this original study, and they found that “the overall level of teenage mental health problems is no longer on the increase and may even be in decline.”

Putting both these studies together, this is typical of the sort of mixed picture that is common in these studies, making it hard to say whether there genuinely is an increase in child mental health problems or not.

This is reflected in data reported by three recent review papers on the area. Two articles focused on data from rating scales – questionnaires given to parents, teachers and occasionally children, and one paper focused on population studies that use diagnosis.

The first thing to say is that there is no stand-out clear finding that child mental health problems are increasing in general, because the results are so mixed. It’s also worth saying that even where there is evidence of an increase, the effects are small to moderate. And because there is not a lot of data, the conclusions are quite provisional.

So is there evidence for a ‘child mental health crisis’? Probably not. Are there things to be concerned about – yes, there are.

Here’s perhaps what we can make out in terms of rough trends from the data.

It doesn’t seem there is an increase in child mental health problems for young children, that is, those below about 12. If anything, their mental health has been improving since the early 2000s. Here, however, the data is scarcest.

Globally, and lumping all children together, there is no convincing evidence for an increase in child mental health problems. One review of rating scale data suggests there is an increase, the other paper using the more rigorous systematic review approach suggests not – in line with the data from the review of diagnostic studies.

However, there does seem to be a trend for an increase in anxiety and depression in teenage girls. And data from the UK particularly does seem to show a mild-moderate upward trend for mental health problems in adolescents in general, in comparison to other countries where the data is much more mixed. Again, though, the data isn’t as solid as it needs to be.

This leaves open some important questions though. If we’re talking about a crisis – maybe the levels were already too high so even a drop means we’re still at ‘crisis level’. So one of the most important questions is – what would be an acceptable level of mental health problems in children?

The first answer that comes to mind is ‘zero’ and not unreasonably – but considering that some mental health problems arise from largely unavoidable life stresses, bereavements, natural disasters and accidents, it would be unrealistic to expect that no child suffered periods of disabling anxiety or depression.

This also raises the question of where we decide to make the cut-off for ‘emotional problems’ or ‘emotional disorders’ in comparison to ‘healthy emotions’. We need anxiety, sadness and anger but they can also become disabling. Deciding where we draw the line is key in answering questions about child mental health.

So there is no way of answering the question about ‘acceptable levels of mental health problems’ without raising the question of the appropriateness of how we define problems.

Similarly, a very common finding is huge variation between countries and cultures. Concepts, reporting, and the experience of emotions can vary greatly between different cultural groups, making it difficult to make direct comparisons across the globe.

For example, the broadly Western understanding of anxiety as a distinct psychological and emotional experience which can be understood separately from its bodily effects is not one shared by many cultures.

It’s worth saying that cultural changes occur not only between peoples but also over time. Are children more likely to report emotional distress in 2016 compared to 1974 even if they feel the same? Really, we don’t know.

All of which brings us to the question: why is there so much talk about a ‘mental health crisis’ in young people if there is no strong data that there is one?

Partly this is because the mental health of children is often a way of expressing concerns about societal changes. It’s “won’t someone think of the children” given a clinical sheen. But it is also important to realise that consultations and treatment for child mental health problems have genuinely rocketed, probably because of greater awareness and better treatment.

In the UK at least, it’s also clear that talk of a ‘child mental health crisis’ can refer to two things: concerns about rising levels of mental problems, but also concerns about the ragged state of child mental health services in Britain. There is a crisis in that more children are being referred for treatment and the underfunded services are barely keeping their head above water.

So talk of a ‘crisis in rising levels of child mental health problems’ is, on balance, an exaggeration, but we shouldn’t dismiss the trends that the data do suggest.

One of the strongest is the rise in anxiety and depression in teenage girls. We clearly have a long way to go, but the world has never been safer, more equal and more full of opportunities for our soon-to-be-women. Yet there seems to be a growing minority of girls affected by anxiety and depression.

At the very least, it should make us think about whether the society we are building is appropriately supporting the future 50% of the adult population.

An echo of your former self

CC Licensed Image by Flickr user Karen Axelrad. Click for source. The journal Neurology has a brief case study reporting an intriguing form of auditory hallucination – hearing someone speaking in the voice of the last person you spoke to.

The phenomenon is called palinacousis and it usually takes the form of hallucinating an echo or repetition of the voice you’re listening to and it’s particularly associated with problems with the temporal lobes.

This case is a little different, however.

A 70-year-old right-handed white man was brought by his wife to the emergency room due to odd behavior for 2 days… According to the patient, he could not explain why people talking to him sounded strange, speaking in different voices which he heard before. For example, he would talk to a man and would hear him as talking with the voice of the woman he previously talked to. He thought it was funny and he could not concentrate on what the other person was saying because he would be laughing…

On occasion, he complained of hearing a very low-pitched intonation in people’s voices, including his own. At other times, he would hear a cyclical pattern of sounds that transitioned from noisy to silent. His most disturbing auditory symptoms persisted for several days and presented in 2 distinct forms. At first, he described hearing his deceased mother’s voice speaking to him through other people’s speech. Later on, he mentioned that after talking to one person, he would hear a second person speaking to him in the first person’s voice. He would also sometimes hear his voice as if it was the voice of the person he just spoke to. During physical therapy, the patient reported that therapist voices would suddenly change to those of people he had heard on television, which provoked uncontrollable fits of laughter.

In this case, the gentleman didn’t have damage to his temporal lobes, but a bleed that affected his right parietal lobe, which may have led to the atypical form of this hallucination.

In a recent paper, Sam Wilkinson and I noted that palinacousis is one example of an auditory hallucination that typically isn’t experienced as if you’re being communicated to by an external, illusory agent. Such hallucinations are perhaps the least common, as most people hear hallucinated voices that appear to have some social characteristics.

However, it seems as if there’s even a social version of palinacousis, where the echo of someone’s voice form is transposed onto the current speaker.
 

Link to PubMed entry for case study.

Critical mental health has a brain problem

A common critical refrain in mental health is that explaining mental health problems in terms of a ‘brain disorder’ strips meaning from the experience, humanity from the individual, and is potentially demeaning.

But this only holds true if you actually believe that having a brain disorder is somehow dehumanising and this constant attempt to distance people with ‘mental health problems’ from those with ‘brain disorders’ reveals an implicit and disquieting prejudice.

It’s perhaps worth noting that there are soft and hard versions of this argument.

The soft version just highlights a correlation and says that neurobiological explanations of mental health problems are associated with seeing people in less humane ways. In fact, there is good evidence for this in that biomedical explanations of mental health problems have been reliably associated with slightly to moderately more stigmatising attitudes.

This doesn’t imply that neurobiological explanations are necessarily wrong, nor suggests that they should be avoided, because fighting stigma, regardless of the source, is central to mental health. This just means we have work to do.

This work is necessary because all experience, thought and behaviour must involve the biology of the body and brain, and mental health problems are no different. Contrary to how it is sometimes portrayed, this approach doesn’t exclude social, interpersonal, life history or behavioural explanations. In fact, we can think of every type of explanation as a tool for understanding ourselves, rather than a mutually exclusive explanation of which only one must be true.

On the other hand, the strong version of this critical argument says that there is ‘no evidence’ that mental health problems are biological and that saying that someone has ‘something wrong with their brain’ is demeaning or dehumanising in some way.

For example:

“such approaches, by introducing the language of ‘disorder’, undermine a humane response by implying that these experiences indicate an underlying defect.”

“The idea of schizophrenia as a brain disorder might offer further comfort by distancing ‘normal’ from disturbing people. It may do this by placing disturbing people in a separate category and by suggesting uncommon process to account for their behaviour…”

“The fifth category… consists people suffering from conditions of definitely physical origin… where psychiatric symptoms turn out to be indications of an underlying organic disease… medical science has very little to offer most victims of head injury or dementia, since there is no known cure…”

“To be sure, these brain diseases significantly affect mental status, causing depression, psychosis, and dementia, particularly in the latter stages of the illness. But Andreasen asks us to believe that these neurological disorders are “mental illnesses” in the same way that anxiety, depression, bipolar disorder, and schizophrenia­ are mental illnesses. This kind of thinking starts us sliding down a slippery slope, blurring distinctions that must be maintained if we are to learn more about why people are anxious, depressed, have severe mood swings, and lose contact with reality.”

There are many more examples but they almost all involve, as above, making a sharp distinction between mental health difficulties and ‘biological’ disorders, presumably based on the belief that being associated with the latter would be dehumanising in some way. But who is doing the dehumanising here?

These critical approaches suggest that common mental health problems are best understood in terms of life history and meaning, but treat those that occur alongside neurological disorders as irrelevant to these concerns.

Ironically, this line of reasoning implies that people without clearly diagnosable neurological problems can’t be reduced to their biology, but people with these difficulties clearly can be, to the point where they are excluded from any arguments about the nature of mental health.

Another common critical claim is that there is ‘no evidence’ for the causal role of biology in mental health problems but this relies on a conceptual sleight of hand.

There is indeed no evidence for consistent causal factors – conceptualised in either social, psychological or biological terms – that would explain all mental health problems of a certain type, or more narrowly, all cases of people diagnosed with say, schizophrenia or bipolar disorder.

But this does not mean that if you take any particular change conceptualised at the neurobiological level that it won’t reliably lead to mental health problems, and this is true whether you have faith in the psychiatric diagnostic categories or not.

For example, Huntington’s disease, dementia, 22q11.2 deletion syndrome, Parkinson’s disease, brain injury, high and chronic doses of certain drugs, certain types of epilepsy, thyroid problems, stroke and many others will all either reliably lead to mental health problems or massively raise the risk of developing them.

Critical mental health advocates typically deal with these examples by excluding them from what they consider under their umbrella of relevant concerns.

The British Psychological Society’s report Understanding Psychosis simply doesn’t discuss anyone who might have psychosis associated with brain injury, epilepsy, dementia or any other alteration to the brain as if they don’t exist – despite the fact we know these neurological changes can be a clear causal factor in developing psychotic experiences. In fact, dementia is likely to be the single biggest cause of psychosis.

In a recent critical mental health manifesto, the first statement is “Mental health problems are fundamentally social and psychological issues”.

This must ring hollow to someone who has developed, for example, psychosis in the context of 22q11.2 deletion syndrome (25% of people affected) or depression after brain injury (40% of people affected).

It’s important to note that these problems are also clearly social and psychological, but to say mental health problems are ‘fundamentally’ social and psychological, immediately excludes people who either clearly have changes to the brain that even critical mental health advocates would accept as causal, or who feel that neurobiology is also a useful way of understanding their difficulties.

All mental health problems are important. Why segregate people on the basis of their brain state?

The ‘not interested in mental health problems associated with brain changes’ approach tells us who critical mental health advocates exclude from their zone of concern: people with acquired neurological problems, people with intellectual disabilities, older adults with dementia, children with neurodevelopmental problems, and people with genetic disorders, among many others.

I’ve spent a lot of time working with people with brain injury, epilepsy, degenerative brain disorders, and related conditions.

Humanity is not defined by a normal brain scan or EEG.

Mental health problems in people with neurological diagnoses are just as personally meaningful.

Social and psychological approaches can be just as valuable.

If your approach to ‘destigmatising’ mental health problems involves an attempt to distance one set of people from another, I want no part of it.

What a more inclusive approach shows, is that there are many causal pathways to mental health problems. In some people, the causal pathway may be more weighted to problems understood in social and emotional terms – trauma, disadvantage, unhelpful coping – in others, the best understanding may more strongly involve neurobiological changes – brain pathology, drug use, rare genetic changes. For many, both are important and intertwine.

Unfortunately, much of this debate has been sidetracked by years of pharmaceutical-funded attempts to convince people with mental health difficulties that they have a ‘brain disease’ – which often feels like adding insult to injury to people who may have suffered years of abuse and exclusion.

But what’s under-appreciated is the over-simplified ‘brain disease’ framework also rarely helps people with recognisable brain changes. Their mental health difficulties reflect and incorporate their life history, hopes and emotional response to the world – as it would with any of us.

So let’s work for a more inclusive approach to mental health that accepts and supports everyone regardless of their measurable brain state, and that aims for a scientific understanding that recognises there are many pathways to mental health difficulties, and many pathways to a better future.

Psychotherapies and the space between us

Public domain image from pixabay. Click for source. There’s an in-depth article at The Guardian revisiting an old debate about cognitive behavioural therapy (CBT) versus psychoanalysis that falls into the trap of asking some rather clichéd questions.

For those not familiar with the world of psychotherapy, CBT is a time-limited treatment based on understanding how interpretations, behaviour and emotions become unhelpfully connected to maintain psychological problems while psychoanalysis is a Freudian psychotherapy based on the exploration and interpretation of unhelpful processes in the unconscious mind that remain from unresolved conflicts in earlier life.

I won’t go into the comparisons the article makes about the evidence for CBT vs psychoanalysis except to say that in comparing the impact of treatments, both the amount and quality of evidence are key. Like when comparing teams using football matches, pointing to individual ‘wins’ will tell us little. In terms of randomised controlled trials or RCTs, psychoanalysis has simply played far fewer matches at the highest level of competition.

But the treatments are often compared due to them aiming to treat some of the same problems. However, the comparison is usually unhelpfully shallow.

Here’s how the cliché goes: CBT is evidence-based but superficial, the scientific method applied for a quick fix that promises happiness but brings only light relief. The flip-side of this cliché says that psychoanalysis is based on apprenticeship and practice, handed down through generations. It lacks a scientific seal of approval but examines the root of life’s struggles through a form of deep artisanal self-examination.

Pitching these two clichés against each other, and suggesting the ‘old style craftsmanship is now being recognised as superior’ is one of the great tropes in mental health – and, as it happens, 21st Century consumerism – and there is more than a touch of marketing about this debate.

Which do you think is portrayed as commercial, mass produced, and popular, and which is expensive, individually tailored, and only available to an exclusive clientèle? Even mental health has its luxury goods.

But more widely discussed (or perhaps, admitted to) are the differing models of the mind that each therapy is based on. But even here simple comparisons fall flat because many of the concepts don’t easily translate.

One of the central tropes is that psychoanalysis deals with the ‘root’ of the psychological problem while CBT only deals with its surface effects. The problem with this contrast is that psychoanalysis can only be seen to deal with the ‘root of the problem’ if you buy into the psychoanalytic view of where problems are rooted.

Is your social anxiety caused by the projection of unacceptable feelings of hatred based in unresolved conflicts from your earliest childhood relationships – as psychoanalysis might claim? Or is your social anxiety caused by the continuation of a normal fear response to a difficult situation that has been maintained due to maladaptive coping – as CBT might posit?

These views of the internal world, are, in many ways, the non-overlapping magisteria of psychology.

Another common claim is that psychoanalysis assumes an unconscious whereas CBT does not. This assertion collapses on simple examination – CBT also assumes unconscious processes – but the two models of the unconscious are so radically different that it is hard to see how they easily translate.

Psychoanalysis suggests that the unconscious can be understood in terms of objects, drives, conflicts and defence mechanisms that, despite being masked in symbolism, can ultimately be understood at the level of personal meaning. In contrast, CBT draws on its endowment from cognitive psychology and claims that the unconscious can often only be understood at the sub-personal level because meaning as we would understand it consciously is unevenly distributed across actions, reactions and interpretations rather than being embedded within them.

But despite this, there are also some areas of common ground that most critics miss. CBT equally cites deep structures of meaning, acquired through early experience, that lie below the surface and influence conscious experience – but calls them core beliefs or schemas rather than complexes.

Perhaps the most annoying aspect of the CBT vs psychoanalysis debate is that it tends to ask ‘which is best?’ in a general and over-vague manner rather than examining the strengths and weaknesses of each approach for specific problems.

For example, one of the central areas that psychoanalysis excels at is in conceptualising the therapeutic relationship as being a dynamic interplay between the perception and emotions of therapist and patient – something that can be a source of insight and change in itself.

Notably, this is the core aspect that’s maintained in its less purist and, quite frankly, more sensible version, psychodynamic psychotherapy.

CBT’s approach to the therapeutic relationship is essentially ‘be friendly and aim for cooperation’ – the civil service model of psychotherapy if you will – which works wonderfully except for people whose central problem is itself cooperation and the management of personal interactions.

It’s no accident that most extensions of CBT (schema therapy, DBT and so on) add value by paying additional attention to the therapeutic relationship as a tool for change for people with complex interpersonal difficulties.

Because each therapy assumes a slightly different model of the mind, it’s easy to think that they are somehow battling over ‘what it means to be human’, and this is where the dramatic tension in most of these debates comes from.

Mostly though, models of the mind are just maps that help us get places. All are necessarily stylised in some way to accentuate different aspects of human nature. As long as they sufficiently reflect the territory, this highlighting helps us focus on what we most need to change.

Alzheimer’s from the inside

There’s an excellent short film featuring journalist Greg O’Brien, who describes the experience of Alzheimer’s disease as it affects him.

It’s both moving and brilliantly made, skilfully combining the neuroscience of Alzheimer’s with the raw, first-person experience of dementia.

I found it in this Nautilus article, also by O’Brien, who has taken the rare step of writing a book about the experience of Alzheimer’s disease before it affected his ability to write.

Link to short film Inside Alzheimer’s on vimeo.
Link to Nautilus article.

The real history of the ‘safe space’

There’s much debate in the media about a culture of demanding ‘safe spaces’ at university campuses in the US, a culture which has been accused of restricting free speech by defining contrary opinions as harmful.

The history of safe spaces is an interesting one and a recent article in Fusion cited the concept as originating in the feminist and gay liberation movements of the 1960s.

But the concept of the ‘safe space’ didn’t start with these movements. It started in a far more unlikely place – corporate America – largely thanks to the work of psychologist Kurt Lewin.

Like so many great psychologists of the early 20th Century, Lewin was a Jewish academic who left Europe after the rise of Nazism and moved to the United States.

Although originally a behaviourist, he became deeply involved in social psychology at the level of small group interactions and eventually became director of the Center for Group Dynamics at MIT.

Lewin’s work was massively influential and lots of our everyday phrases come from his ideas. The fact we talk about ‘social dynamics’ at all, is due to him, and the fact we give ‘feedback’ to our colleagues is because Lewin took the term from engineering and applied it to social situations.

In the late 1940s, Lewin was asked to help develop leadership training for corporate bosses. Out of this work came the foundation of the National Training Laboratories and the invention of sensitivity training: a form of group discussion in which members could give each other honest feedback, allowing people to become aware of the unhelpful assumptions, implicit biases, and behaviours that were holding them back as effective leaders.

Lewin drew on ideas from group psychotherapy that had been around for years but formalised them into a specific and brief focused group activity.

One of the ideas behind sensitivity training was that honesty and change would only occur if people could be frank and challenge others in an environment of psychological safety. In other words, without judgement.

Practically, this means there is an explicit rule that everyone agrees to at the start of the group. A ‘safe space’ is created, confidential and free of judgement, precisely so that people can mention concerns without fear of being condemned for them, on the understanding that they’re hoping to change.

It could be anything related to being an effective leader. If we’re thinking about race, for example, participants might discuss how, even though they try to be non-racist, they tend to feel fearful when they see a group of black youths, or how they often think white people are stuck up – and other group members, perhaps those affected by these fears, could give alternative angles.

The use of sensitivity groups began to gain currency in corporate America and the idea was taken up by psychologists such as the humanistic therapist Carl Rogers who, by the 1960s, developed the idea into encounter groups which were more aimed at self-actualisation and social change, in line with the spirit of the times, but based on the same ‘safe space’ environment. As you can imagine, they were popular in California.

It’s worth saying that although the ideal was non-judgement, the reality could be a fairly rocky emotional experience, as described by a famous 1971 study on ‘encounter group casualties’.

From here, the idea of safe space was taken up by feminist and gay liberation groups, but with a slightly different slant, in that sexist or homophobic behaviour was banned by mutual agreement but individuals could be pulled up if it occurred, with the understanding that people would make an honest attempt to recognise it and change.

And finally we get to the recent campus movements, where the safe space has become a public political act. Rather than individuals opting in, it is championed or imposed (depending on which side you take) as something that should define acceptable public behaviour.

In other words, creating a safe space is considered to be a social responsibility and you can opt out, but only by leaving.

What do children know of their own mortality?

CC Licensed Image by Flickr user DAVID MELCHOR DIAZ.

We are born immortal, as far as we know at the time, and slowly we learn that we are going to die. For most children, death is not fully understood until after the first decade of life – a remarkable amount of time to comprehend the most basic truth of our existence.

There are poetic ways of making sense of this difficulty: perhaps an understanding of our limited time on Earth is too much for the fragile infant mind to handle; maybe it’s evolution’s way of instilling us with hope. But these seductive theories tend to forget that death is more complex than we often assume.

To completely understand the significance of death, researchers – mortality psychologists if you will – have identified four primary concepts we need to grasp: universality (all living things die), irreversibility (once dead, dead forever), nonfunctionality (all functions of the body stop) and causality (what causes death).

In a recent review of studies on children’s understanding of death, medics Alan Bates and Julia Kearney describe how:

Partial understanding of universality, irreversibility, and nonfunctionality usually develops between the ages of 5 and 7 years, but a more complete understanding of death concepts, including causality, is not generally seen until around age 10. Prior to understanding nonfunctionality, children may have concrete questions such as how a dead person is going to breathe underground. Less frequently studied is the concept of personal mortality, which most children have some understanding of by age 6 with more complete understanding around age 8–11.

But this is a general guide, rather than a life plan. We know that children vary a great deal in their understanding of death and they tend to acquire these concepts at different times.

Although interesting from a developmental perspective, these studies also have clear practical implications.

Most children will know someone who dies and helping children deal with these situations often involves explaining death and dying in a way they can understand while addressing any frightening misconceptions they might have. No, your grandparent hasn’t abandoned you. Don’t worry, they won’t get lonely.

But there is a starker situation which brings the emerging ability to understand mortality into very sharp relief. Children who are themselves dying.

The understanding of death by terminally ill children has been studied by a small but dedicated research community, largely motivated by the needs of child cancer services.

One of the most remarkable studies, perhaps one of the most remarkable in the whole of palliative care, was completed by the anthropologist Myra Bluebond-Langner and published as the book The Private Worlds of Dying Children.

Bluebond-Langner spent the mid-1970s in an American child cancer ward and began to look at what the children knew about their own terminal prognosis, how this knowledge affected social interactions, and how social interactions were conducted to manage public awareness of this knowledge.

Her findings were nothing short of stunning: although the adults, parents and medical professionals alike, regularly talked in a way that deliberately obscured knowledge of the child’s forthcoming death, children often knew they were dying. Yet despite this, children often talked in a way that avoided revealing their awareness of the fact to the adults around them.

Bluebond-Langner describes how this mutual pretence allowed everyone to support each other through their typical roles and interactions despite knowing that they were redundant. Adults could ask children what they wanted for Christmas, knowing that they would never see it. Children could discuss what they wanted to be when they grew up, knowing that they would never get the chance. Those same conversations, through which compassion flows in everyday life, could continue.

This form of emotional support was built on fragile foundations, however, as it depended on actively ignoring the inevitable. When cracks sometimes appeared during social situations they had to be quickly and painfully papered over.

When children’s hospices first began to appear, one of their innovations was to provide a space where emotional support did not depend on mutual pretence.

Instead, dying can be discussed with children, alongside their families, in a way that makes sense to them. Studying what children understand about death is a way of helping this take place. It is knowledge in the service of compassion.