What do children know of their own mortality?

We are born immortal, as far as we know at the time, and slowly we learn that we are going to die. For most children, death is not fully understood until after the first decade of life – a remarkable amount of time to comprehend the most basic truth of our existence.

There are poetic ways of making sense of this delay: perhaps an understanding of our limited time on Earth is too much for the fragile infant mind to handle, or maybe it’s evolution’s way of instilling us with hope. But these seductive theories tend to forget that death is a more complex concept than we often assume.

Researchers – mortality psychologists, if you will – have identified four primary concepts we need to grasp to fully understand the significance of death: universality (all living things die), irreversibility (once dead, dead forever), nonfunctionality (all functions of the body stop) and causality (what causes death).

In a recent review of studies on children’s understanding of death, medics Alan Bates and Julia Kearney describe how:

Partial understanding of universality, irreversibility, and nonfunctionality usually develops between the ages of 5 and 7 years, but a more complete understanding of death concepts, including causality, is not generally seen until around age 10. Prior to understanding nonfunctionality, children may have concrete questions such as how a dead person is going to breathe underground. Less frequently studied is the concept of personal mortality, which most children have some understanding of by age 6 with more complete understanding around age 8–11.

But this is a general guide, rather than a life plan. We know that children vary a great deal in their understanding of death and they tend to acquire these concepts at different times.

Although interesting from a developmental perspective, these studies also have clear practical implications.

Most children will know someone who dies, and helping children deal with these situations often involves explaining death and dying in a way they can understand, while addressing any frightening misconceptions they might have. No, your grandparent hasn’t abandoned you. Don’t worry, they won’t get lonely.

But there is a starker situation which brings the emerging ability to understand mortality into very sharp relief: children who are themselves dying.

The understanding of death by terminally ill children has been studied by a small but dedicated research community, largely motivated by the needs of child cancer services.

One of the most remarkable studies in this area – perhaps one of the most remarkable studies in the whole of palliative care – was completed by the anthropologist Myra Bluebond-Langner and published as the book The Private Worlds of Dying Children.

Bluebond-Langner spent the mid-1970s in an American child cancer ward and began to look at what the children knew about their own terminal prognosis, how this knowledge affected social interactions, and how social interactions were conducted to manage public awareness of this knowledge.

Her findings were nothing short of stunning: although the adults – parents and medical professionals alike – regularly talked in ways that deliberately obscured knowledge of the child’s forthcoming death, the children often knew they were dying. And despite knowing they were dying, the children often talked in ways that avoided revealing this awareness to the adults around them.

Bluebond-Langner describes how this mutual pretence allowed everyone to support each other through their typical roles and interactions despite knowing that they were redundant. Adults could ask children what they wanted for Christmas, knowing that they would never see it. Children could discuss what they wanted to be when they grew up, knowing that they would never get the chance. Those same conversations, through which compassion flows in everyday life, could continue.

This form of emotional support was built on fragile foundations, however, as it depended on actively ignoring the inevitable. When cracks sometimes appeared during social situations they had to be quickly and painfully papered over.

When children’s hospices first began to appear, one of their innovations was to provide a space where emotional support did not depend on mutual pretence.

Instead, dying can be discussed with children, alongside their families, in a way that makes sense to them. Studying what children understand about death is a way of helping this take place. It is knowledge in the service of compassion.

Jeb Bush has misthought

According to the Washington Examiner, Republican presidential candidate Jeb Bush has said that doing a psychology major will mean “you’re going to be working a Chick-fil-A” and has encouraged students to choose college degrees with better employment prospects.

If you’re not American, Chick-fil-A turns out to be a fast food restaurant, presumably of dubious quality.

Bush continued:

“The number one degree program for students in this country … is psychology,” Bush said. “I don’t think we should dictate majors. But I just don’t think people are getting jobs as psych majors.”

Firstly, he’s wrong about psychology being the most popular degree in the US. The official statistics show that it’s actually business-related subjects that are the most studied, with psychology coming in fifth.

He’s also wrong about the employment prospects of psych majors. I initially mused on Twitter as to why US psych majors have such poor employment prospects when, in the UK, psychology graduates are typically the most likely to be employed.

But I was wrong about US job prospects for psych majors, because I was misled by lots of US media articles suggesting exactly this.

There is actually decent research on this, and it says something quite different. Georgetown University’s Center on Education and the Workforce published reports in 2010 and 2013 called ‘Hard Times: College Majors, Unemployment and Earnings’, which looked at exactly this issue.

They found on both occasions that doing a psych major gives you employment prospects that are about mid-table in comparison to other degrees.

Below is the graph from the 2013 report.

Essentially, psychology is slightly below average in terms of employability: tenth out of sixteen, but still a college major where more than 9 out of 10 recent graduates (91.2%) find jobs.

If you look at median income, the picture is much the same: somewhat below average but clearly not in the Chick-fil-A range.

What’s not factored into these reports, however, is gender. According to the statistics, almost 80% of psychology degrees in the US are earned by women.

Women earn less than men on average, are more likely to take voluntary career breaks, are more likely to suspend work to have children, and so on. So it’s worth remembering that these figures don’t control for gender effects.

So when Bush says “I just don’t think people are getting jobs as psych majors” it seems he misthought.

Specifically, it looks like his thinking was biased by the availability heuristic which, if you know about it, can help you avoid embarrassing errors when making factual claims.

I’ll leave that irony for Jeb Bush to ponder, along with Allie Brandenburger, Kaitlin Zurdowsky and Josh Venable – three psychology majors he employed as senior members of his campaign team.

Spike activity 23-10-2015

Quick links from the past week in mind and brain news:

MP tricked into condemning a fake drug called ‘Cake’ on Brass Eye has been put in charge of scrutinising drugs policy in the UK Parliament, reports The Independent. What starts as satire is so often reborn as policy.

Narratively takes a look at the human stories behind the alarming rates of prescription opioid addiction in Appalachia.

Mental health research makes good economic sense, argues The Economist.

American Civil Liberties Union are suing the psychologists who developed the CIA torture programme.

Before 6 months, babies don’t relate touch to an event outside of themselves. We’re calling this “tactile solipsism”. Interesting Brain Decoder piece.

Mashable reports that Sesame Street debuts its first autistic Muppet. And try watching that What My Family Wants You to Know About Autism video without welling up.

‘Mental patient’ Halloween costumes: a scientific guide to dressing accurately. Important evidence-based Halloween advice on Brain Flapping.

The Scientist looks back at Camillo Golgi’s first drawings of neurons from the 1870s.

A social vanishing

A fantastic eight-part podcast series called Missing has just concluded and it’s a brilliant look at the psychology and forensic science of missing people.

It’s been put together by the novelist Tim Weaver, who is renowned for his crime thrillers featuring missing persons investigator David Raker.

He uses the series to investigate the phenomenon of missing people, and the result is a wonderfully engrossing, diverse documentary series that talks to everyone from forensic psychiatrists to homicide investigators to commercial companies that help you disappear without trace.

Missing people, by their absence, turn out to reveal a lot about the tension between social structures and individual behaviour in modern society. Highly recommended.
 

Link to Missing podcast series with iTunes / direct download links.

Web of illusion: how the internet affects our confidence in what we know

The internet can give us the illusion of knowledge, making us think we are smarter than we really are. Fortunately, there may be a cure for our arrogance, writes psychologist Tom Stafford.

The internet has a reputation for harbouring know-it-alls. Commenters on articles, bloggers, even your old school friends on Facebook all seem to swell with confidence in their understanding of exactly how the world works (and they are eager to share that understanding with everyone and anyone who will listen). Now, new research reveals that just having access to the world’s information can induce an illusion of overconfidence in our own wisdom. Fortunately the research also shares clues as to how that overconfidence can be corrected.

Specifically, we are looking at how the internet affects our thinking about what we know, a topic psychologists call metacognition. There is a difference between showing off and sincerely overestimating yourself: when you know you are boasting, you are being dishonest, but you haven’t made any actual error in estimating your ability; if you sincerely believe you know more than you do, then you have. The research suggests that this illusion of understanding may actually be incredibly common, and that this metacognitive error emerges in new ways in the age of the internet.

In a new paper, Matt Fisher of Yale University considers a particular type of thinking known as transactive memory, which is the idea that we rely on other people and other parts of the world – books, objects – to remember things for us. If you’ve ever left something you needed for work by the door the night before, then you’ve been using transactive memory.

Part of this phenomenon is the tendency to confuse what we really know in our personal memories with what we merely have easy access to: knowledge that is readily available in the world, or with which we are familiar without actually understanding it in depth. It can feel like we understand how a car works, the argument goes, when in fact we are merely familiar with making it work. I press the accelerator and the car goes forward, while neglecting to realise that I don’t really know how it goes forward.

Fisher and colleagues were interested in how this tendency interacts with the internet age. They asked people to provide answers to factual questions, such as “Why are there time zones?”. Half of the participants were instructed to look up the answers on the internet before answering; half were told not to look up the answers on the internet. Next, all participants were asked how confidently they could explain the answers to a second series of questions (separate, but also factual, questions such as “Why are cloudy nights warmer?” or “How is vinegar made?”).

Sure enough, people who had just been searching the internet for information were significantly more confident about their understanding of the second set of questions. Follow up studies confirmed that these people really did think the knowledge was theirs: they were still more confident if asked to indicate their response on a scale representing different levels of understanding with pictures of brain-scan activity (a ploy that was meant to emphasise that the information was there, in their heads). The confidence effect even persisted when the control group were provided answer material and the internet-search group were instructed to search for a site containing the exact same answer material. Something about actively searching for information on the internet specifically generated an illusion that the  knowledge was in the participants’ own heads.

If the feeling of controlling information generates overconfidence in our own wisdom, it might seem that the internet is an engine for turning us all into bores. Fortunately another study, also published this year, suggests a partial cure.

Amanda Ferguson of the University of Toronto and colleagues ran a similar study, except the set-up was in reverse: they asked participants to provide answers first and, if they didn’t know them, to search the internet afterwards for the correct information (in the control condition, participants who said “I don’t know” were let off the hook and just moved on to the next question). In this set-up, people with access to the internet were actually less willing to give answers in the first place than people in the no-internet condition. For these guys, access to the internet shut them up, rather than encouraging them to claim that they knew it all. Looking more closely at their judgements, it seems the effect wasn’t simply that the fact-checking had undermined their confidence. Those who knew they could fall back on the web to check the correct answer didn’t report feeling less confident in themselves, yet they were still less likely to share the information and show off their knowledge.

So, putting people in a position where they could be fact-checked made them more cautious in their initial claims. The implication I draw from this is that one way of fighting a know-it-all, if you have the energy, is to let them know that they are going to be thoroughly checked on whether they are right or wrong. It might not stop them researching a long answer with the internet, but it should slow them down, and diminish the feeling that just because the internet knows some information, they do too.

It is frequently asked whether the internet is changing how we think. The answer, this research shows, is that the internet is giving new fuel to the way we’ve always thought. It can be a cause of overconfidence, when we mistake the boundary between what we know and what is available to us over the web, and a cause of uncertainty, when we anticipate that we’ll be fact-checked using the web on the claims we make. Our tendencies to overestimate what we know, to use readily available information as a substitute for our own knowledge, and to worry about being caught out are all constants in how we think. The internet slots into this tangled cognitive ecosystem, from which endless new forms evolve.

This is my BBC Future column from earlier this week. The original is here.

From school shootings to everyday counter-terrorism

Mother Jones has a fascinating article on how America is attempting to stop school shootings by using community detection and behavioural intervention programmes for people identified as potential killers – before a crime has ever been committed.

It is a gripping read in itself, but it is also interesting because it describes an approach that is now being rolled out to millions as part of community counter-terrorism strategies across the world – an approach that puts a psychological model of mass-violence perpetration at its core.

The Mother Jones article describes a threat assessment model for school shootings that sits at an evolutionary mid-point: an approach first developed to protect the US President, then applied to preventing school shootings, and now deployed at scale in domestic counter-terrorism programmes.

You can see exactly this in the UK Government’s Prevent programme (part of the wider CONTEST counter-terrorism strategy). Many people in the UK will recognise it because if you work for a public body, like a school or the health service, you will have been trained in it.

The idea behind Prevent is that workers are trained to be alert to signs of radicalisation and extremism and can pass on potential cases to a multi-disciplinary panel, made up of social workers, mental health specialists, staff members and the police, who analyse the case in more detail and get more information as it’s needed.

If they decide the person is vulnerable to becoming dangerously radicalised or violent, they refer the case on to the Channel programme, which aims to manage the risk through a combination of support from social services and heightened monitoring by security services.

A central concept is that a person may be made vulnerable to extremism by unmet needs (poor mental health, housing problems, lack of opportunity, poor social support, spiritual emptiness, social conflict), which may convert into real-world violence when mixed with certain ideologies or beliefs about the world that they are recruited into or persuaded by. Violence prevention therefore includes both a needs-based and a threat-based approach.

This approach came from work by the US Secret Service in the 1990s, who were mainly concerned with protecting key government officials, and it was a radical departure from the idea that threat management was about physical security.

They began to try to understand why people might want to kill important officials and worked on figuring out how to identify risks and intervene before violence was ever used.

The Mother Jones article also mentions the LAPD Threat Management Unit (LAPDTMU), which was formed to deal with cases of violent stalking of celebrities, while the FBI had been developing a data-driven approach since the National Center for the Analysis of Violent Crime (NCAVC) launched in 1985.

By the time the Secret Service founded the National Threat Assessment Center in 1998, the approach was well established. When the Columbine massacre occurred the following year, the same thinking was applied to school shootings.

After Columbine, reports were produced by both the FBI (pdf) and the Secret Service (pdf), which outline some of the evolution of this approach and how it applies to preventing school shootings. The Mother Jones article illustrates what this looks like more than 15 years later, as shootings are now more common and often directly inspired by Columbine or other more recent attacks.

It’s harder to find anything written on the formal design of the UK Government’s Prevent and Channel programmes, but the approach is clearly taken from the work in the United States.

The difference is that it has been deployed on a mass scale. Literally, millions of public workers have been trained in Prevent, and Channel programmes exist all over the country to receive and evaluate referrals.

It may be one of the largest psychological interventions ever deployed.
 

Link to Mother Jones article on preventing the next mass shooting.

The echoes of the Prozac revolution

The Lancet Psychiatry has a fantastic article giving a much-needed cultural retrospective on the wave of antidepressants like Prozac – from the first worries that ‘cosmetic pharmacology’ would mean we were no longer our true selves, to the dawning realisation that they are unreliably useful but side-effect-ridden tools that can help manage difficult moods.

From their first appearance in the late 1980s until recently, SSRIs were an A-list topic of debate in the culture wars, and the rhetoric, whether pro or con, was red hot. Antidepressants were going to heal, or destroy, the world as we knew it.

Those discussions now feel dated. While antidepressants themselves are here to stay, they just don’t pulse with meaning the way they once did. Like the automobile or the telephone before them, SSRIs are a one-time miracle technology that have since become a familiar—even frumpy—part of the furniture of modern life.

At some point recently, they’ve slid into the final act of Mickey Smith’s wonder-drug drama. And in the aftermath of that change, many of the things that people used to say about them have come to sound completely absurd.

It’s a wonderful piece that perfectly captures the current place of antidepressants in modern society.

It’s by author Katherine Sharpe who wrote the highly acclaimed book Coming of Age on Zoloft which I haven’t read but have just ordered.
 

Link to ‘The silence of Prozac’ in The Lancet Psychiatry.

Spike activity 09-10-2015

Quick links from the past week in mind and brain news:

How much can you really learn while you’re asleep? Interesting piece that looks at what the research genuinely tells us in The Guardian.

Comedian John Oliver takes on mental health in America with a segment which is both funny and sharp.

Neuroecology has an excellent post looking at the latest mega-paper from the Blue Brain Project.

There’s a good piece on how cognitive biases affect the practice of doing scientific research in Nature. Thankfully, my training has made me immune to these effects, unlike my colleagues.

Braindecoder has some striking artistic renditions of neuroanatomy from artist Greg Dunn.

Is a Liberal Bias Hurting Social Psychology? Excellent piece in Pacific Standard.

BBC News has a good piece on the evidence behind the school shooting ‘contagion’ effect.

“A tumor stole every memory I had. This is what happened when it all came back.” Great piece in Quartz. Don’t get distracted by the inaccurate use of the term dementia. Recommended.

Statistical fallacy impairs post-publication mood

No scientific paper is perfect, but a recent result on the effect of mood on colour perception is getting a particularly rough ride post-publication. Thorstenson and colleagues published their paper this summer in Psychological Science, claiming that people who were sad had impaired colour perception along the blue-yellow colour axis but not along the red-green colour axis. Pubpeer – a site where scholars can anonymously discuss papers after publication – has a critique of the paper, which observes that the paper’s analysis contains a known flaw.

The flaw, anonymous comments suggest, is that a difference between the two types of colour perception is claimed but never actually tested: the paper shows that mood significantly affects blue-yellow perception and does not significantly affect red-green perception. But having enough evidence that one effect is significant, and not enough evidence that the second is, doesn’t mean that the two effects are different from each other. Analogously, if you can prove that one suspect was present at a crime scene, but can’t prove the other was, that doesn’t mean you have proved that the two suspects were in different places.
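To make the logic concrete, here is a minimal sketch in Python using made-up numbers (not the study’s data), comparing the flawed approach – two separate tests of the mood effect, one per colour axis – with a direct test of whether the two effects differ (the interaction).

```python
# A minimal sketch (hypothetical numbers) of the
# "difference between significant and not significant" fallacy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 60  # participants per mood group (hypothetical)

# Hypothetical colour-discrimination accuracy: sadness slightly lowers both axes.
sad_by, neutral_by = rng.normal(0.55, 0.15, n), rng.normal(0.62, 0.15, n)
sad_rg, neutral_rg = rng.normal(0.57, 0.15, n), rng.normal(0.62, 0.15, n)

# Flawed approach: two separate tests of the mood effect, one per colour axis.
p_by = stats.ttest_ind(sad_by, neutral_by).pvalue   # may fall below .05
p_rg = stats.ttest_ind(sad_rg, neutral_rg).pvalue   # may not

# Correct question: does the mood effect *differ* between the two axes?
# Test the interaction by comparing per-participant difference scores.
p_interaction = stats.ttest_ind(sad_by - sad_rg, neutral_by - neutral_rg).pvalue

print(f"mood effect on blue-yellow: p = {p_by:.3f}")
print(f"mood effect on red-green:   p = {p_rg:.3f}")
print(f"interaction (the claim that matters): p = {p_interaction:.3f}")
```

With numbers like these, one of the separate tests will often cross the p < .05 threshold while the other does not, even though the interaction test – the only one that bears on the paper’s actual claim – typically comes out non-significant.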

This mistake in analysis – which is far from unique to this paper – is discussed in a classic 2011 paper by Nieuwenhuis and colleagues: ‘Erroneous analyses of interactions in neuroscience: a problem of significance’. At the time of writing, the sentiment on Pubpeer is that the paper should be retracted – in effect striking it from the scientific record.

With commentary like this, you can see why Pubpeer has previously been the target of legal action by aggrieved researchers who feel the site unfairly maligns their work.

(h/t to Daniël Lakens and jjodx on Twitter)

UPDATE 5/11/15: It’s been retracted.

Spike activity 02-10-2015

Quick links from the past week in mind and brain news:

The madness of Charlie Brown. The Lancet has a wonderful article on Lucy, Charlie Brown’s local psychiatrist.

The Atlantic has an excellent piece on new research showing neurons have different genomes.

Mexico’s 13-year-old psychologist is amazing, reports USA Today. Sí, es.

PLOS Neuro has an excellent in-depth piece about the neuroscience of sleep deprivation.

Boring cityscapes increase sadness, addiction and disease-related stress. Is urban design a matter of public health? asks Aeon.

The Wall Street Journal on a new paper suggesting that the ‘hot hand’ effect in basketball may be real after all.

Pioneering dubstep DJ and producer Benga was diagnosed with bipolar disorder and schizophrenia last year. He speaks to The Guardian on mental health and his comeback.

The Psychologist has an excellent piece on whether the media should be restricted in their reporting of mass shootings to prevent copycat killings.

There’s a good piece in Nature about the state of connectome research in neuroscience.

The Quiet Room

This month’s British Journal of Psychiatry has a brief but fascinating article about a 1979 Marvel comic featuring – and written by – rock legend Alice Cooper, which depicts his real-life admission to a psychiatric ward.

The comic was timed to coincide with the release of his concept album From The Inside, which describes his experiences as a psychiatric patient being treated for severe alcoholism and depression.

He was there for three months, and in the comic he depicts the patients, doctors and nurses he met during his admission. Alice has often commented in interviews that treatment in hospital and recovery from his substance misuse saved his life, when many similar artists of that era, such as Jim Morrison and Janis Joplin, were not as fortunate and succumbed to their addictions.

The lead single from the album was ‘How You Gonna See Me Now’, a song describing the anxiety the singer felt coming back home to his wife after his stay in hospital and facing the stigma of being treated for his mental illness. It went on to become a well-known and successful ballad.

The comic can still be found in comic shops or through online auction sites.

 

Link to brief British Journal of Psychiatry article.