Why do we forget names?

A reader, Dan, asks “Why do we forget people’s names when we first meet them? I can remember all kinds of other details about a person but completely forget their name. Even after a lengthy, in-depth conversation. It’s really embarrassing.”

Fortunately the answer involves learning something fundamental about the nature of memory. It also provides a solution that can help you to avoid the embarrassing social situation of having spoken to someone for an hour, only to have forgotten their name.

To know why this happens you have to recognise that our memories aren’t a simple filing system, with separate folders for each kind of information and a really brightly coloured folder labelled “Names”.

Rather, our minds are associative. They are built out of patterns of interconnected information. This is why we daydream: you notice that the book you’re reading was printed in Paris, and that Paris is home to the Eiffel Tower, which your cousin Mary visited last summer, and that Mary loves pistachio ice cream. Say, I wonder if she ate a pistachio ice cream while up the Tower? It goes on and on like that, each item connected to every other, not by logic but by coincidence of time, place, how you learnt the information and what it means.

The same associative network means you can guess a question from the answer. Answer: “the Eiffel Tower”. Question: “What is Paris’s most famous landmark?” This makes memory useful, because you can often go as easily from the content to the label as vice versa: “What is in the top drawer?” isn’t a very interesting question in itself, but it becomes interesting when its contents are the answer to “Where are my keys?”.

So memory is built like this on purpose, and now we can see the reason why we forget names. Our memories are amazing, but they respond to how many associations we make with new information, not with how badly we want to remember it.

When you meet someone for the first time you learn their name, but for your memory it is probably an arbitrary piece of information unconnected to anything else you know, and unconnected to all the other things you later learn about them. After your conversation, in which you probably learn about their job, and their hobbies, and their family or whatever, all this information becomes linked in your memory. Imagine you are talking to a guy with a blue shirt who likes fishing and works selling cars, but would rather give it up to sell fishing gear. Now if you can remember one bit of information (“sell cars”) you can follow the chain to the others (“sells cars but wants to give it up”, “wants to give it up to sell fishing gear”, “loves fishing” and so on). The trouble is that your new friend’s name doesn’t get a look in because it is simply a piece of arbitrary information you didn’t connect to anything else about the conversation.

Fortunately, there are ways to strengthen those links so that the name becomes entrenched alongside the other memories. Here’s how to remember the name, using some basic principles of memory.

First, you should repeat any name said to you. Practice is one of the golden rules of learning: more practice makes stronger memories. In addition, when you use someone’s name you are linking it to yourself, in the physical act of saying it, but also to the current topic of the conversation in your memory (“So, James, just what is it about fishing that makes you love it so much?”).

Second, you should try to link the name you have just learnt to something you already know. It doesn’t matter if the link is completely silly, it is just important that you find some connection to help the name stick in memory. For example, maybe the guy is called James, and your high school buddy was called James, and although this guy is wearing a blue shirt, high school James only ever wore black, so he’d never wear blue. It’s a silly made up association, but it can help you remember.

Finally, you need to try to link their name to something else about them. If it was me I’d grab the first thing to come to mind to bridge between the name and something I’ve learnt about them. For example, James is a sort of biblical name, you get the King James bible after all, and James begins with J, just like Jonah in the bible who was swallowed by the whale, and this James likes fishing, but I bet he prefers catching them to being caught by them.

It doesn’t matter if the links you make are outlandish or weird. You don’t have to tell anyone. In fact, probably it is best if you don’t tell anyone, especially your new friend! But the links will help create a web of association in your memory, and that web will stop their name falling out of your mind when it is time to introduce them to someone else.

And if you’re sceptical, try this quick test. I’ve mentioned three names during this article. I bet you can remember James, who isn’t Jonah. And probably you can remember cousin Mary (or at least what kind of ice cream she likes). But can you remember the name of the reader who asked the question? That’s the only one I introduced without elaborating some connections around the name, and that’s why I’ll bet it is the only one you’ve forgotten.

This is my BBC Future column from last week. The original is here

Spike activity 20-11-2015

Quick links from the past week in mind and brain news:

Wired has a good brief piece on the history of biodigital brain implants.

Why are conspiracy theories so attractive? Good discussion on the Science Weekly podcast.

The Wilson Quarterly has a piece on the mystery behind Japan’s high child suicide rate.

The Dream Life of Driverless Cars. Wonderful piece in The New York Times. Don’t miss the video.

The New Yorker has an extended profile on the people who run the legendary Erowid website on psychedelic drugs.

Allen Institute scientists identify human brain’s most common genetic patterns. Story in Geekwire.

BoingBoing covers a fascinating game where you play a blind girl and the game world is dynamically constructed through other senses and memory and shifts with new sensory information.

Excellent article on the real science behind the hype of neuroplasticity in Mosaic Science. Not to be missed.

No more Type I/II error confusion

Type I and Type II errors are, respectively, when you allow a statistical test to convince you of a false effect, and when you allow a statistical test to convince you to dismiss a true effect. Despite being fundamentally important concepts, they are terribly named. Who can ever remember which way around the two errors go? Well, now I can, thanks to a comment from a friend that I thought so useful I made it into a picture:
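If a picture isn’t enough, the same distinction drops out of a couple of lines of simulation. This is a sketch of my own in Python (not part of the original post, and it assumes NumPy and SciPy are available): the false-alarm rate hovers around whatever alpha level you pick, while the miss rate depends on sample size and effect size.

```python
# Hypothetical sketch (not from the original post): simulate many two-sample
# t-tests to see the two error types in action.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group, alpha = 5000, 30, 0.05

# Type I error: no real effect, but the test convinces you there is one.
false_alarms = 0
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)   # same distribution as a
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_alarms += 1

# Type II error: a real effect (0.5 SD), but the test convinces you to dismiss it.
misses = 0
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(0.5, 1.0, n_per_group)   # genuinely shifted distribution
    if stats.ttest_ind(a, b).pvalue >= alpha:
        misses += 1

print(f"Type I rate (false alarm): {false_alarms / n_sims:.3f}")  # roughly alpha
print(f"Type II rate (miss):       {misses / n_sims:.3f}")        # depends on power
```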


Spike activity 13-11-2015

Quick links from the past week in mind and brain news:

The Weak Science Behind the Wrongly Named Moral Molecule. The Atlantic has some home truths about oxytocin.

Neurophilosophy reports on some half-billion-year-old brains found preserved in fool’s gold.

An Illuminated, 5,000-Pound Neuron Sculpture Is Coming to Boston. Boston magazine has some pictures.

Guardian Science Weekly podcast has neuroscientist David Eagleman discussing his new book.

A neurologist frustrated by the obstacles to his work on brain-machine interfaces paid a surgeon in Central America $25,000 to implant electrodes into his brain. MIT Tech Review reports.

Business Insider reports on Google’s troubled robotics division. It’s called Replicant, so I’m guessing incept dates may be a point of contention.

The real history of the ‘safe space’

There’s much debate in the media about a culture of demanding ‘safe spaces’ at university campuses in the US, a culture which has been accused of restricting free speech by defining contrary opinions as harmful.

The history of safe spaces is an interesting one and a recent article in Fusion cited the concept as originating in the feminist and gay liberation movements of the 1960s.

But the concept of the ‘safe space’ didn’t start with these movements, it started in a much more unlikely place – corporate America – largely thanks to the work of psychologist Kurt Lewin.

Like so many great psychologists of the early 20th Century, Lewin was a Jewish academic who left Europe after the rise of Nazism and moved to the United States.

Although originally a behaviourist, he became deeply involved in social psychology at the level of small group interactions and eventually became director of the Center for Group Dynamics at MIT.

Lewin’s work was massively influential and lots of our everyday phrases come from his ideas. The fact that we talk about ‘social dynamics’ at all is due to him, and the fact that we give ‘feedback’ to our colleagues is because Lewin took the term from engineering and applied it to social situations.

In the late 1940s, Lewin was asked to help develop leadership training for corporate bosses. Out of this work came the foundation of the National Training Laboratories and the invention of sensitivity training: a form of group discussion in which members could give honest feedback to each other, helping people become aware of the unhelpful assumptions, implicit biases and behaviours that were holding them back as effective leaders.

Lewin drew on ideas from group psychotherapy that had been around for years but formalised them into a specific and brief focused group activity.

One of the ideas behind sensitivity training was that honesty and change would only occur if people could be frank and challenge others in an environment of psychological safety. In other words, without judgement.

Practically, this means that there is an explicit rule that everyone agrees to at the start of the group. A ‘safe space’ is created – confidential and free of judgement – precisely to allow people to mention concerns without fear of being condemned for them, on the understanding that they’re hoping to change.

It could be anything related to being an effective leader, but if we’re thinking about race, participants might discuss how, even though they try to be non-racist, they tend to feel fearful when they see a group of black youths, or that they often think white people are stuck up, and other group members, perhaps those affected by these fears, could give alternative angles.

The use of sensitivity groups began to gain currency in corporate America and the idea was taken up by psychologists such as the humanistic therapist Carl Rogers who, by the 1960s, developed the idea into encounter groups which were more aimed at self-actualisation and social change, in line with the spirit of the times, but based on the same ‘safe space’ environment. As you can imagine, they were popular in California.

It’s worth saying that although the ideal was non-judgement, the reality could be a fairly rocky emotional experience, as described by a famous 1971 study on ‘encounter group casualties’.

From here, the idea of safe space was taken up by feminist and gay liberation groups, but with a slightly different slant, in that sexist or homophobic behaviour was banned by mutual agreement but individuals could be pulled up if it occurred, with the understanding that people would make an honest attempt to recognise it and change.

And finally we get to the recent campus movements, where the safe space has become a public political act. Rather than individuals opting in, it is championed or imposed (depending on which side you take) as something that should define acceptable public behaviour.

In other words, creating a safe space is considered to be a social responsibility and you can opt out, but only by leaving.

Extremes of self-experimentation with brain electrodes

MIT Technology Review has a jaw-dropping article about brain-computer interface researcher Phil Kennedy. In the face of diminishing funding and increasing regulation he “paid a surgeon in Central America $25,000 to implant electrodes into his brain in order to establish a connection between his motor cortex and a computer”.

Ethically dubious as well as fascinating, the article discusses what led Kennedy to this rather drastic decision:

Kennedy’s scientific aim has been to build a speech decoder—software that can translate the neuronal signals produced by imagined speech into words coming out of a speech synthesizer. But this work, carried out by his small Georgia company Neural Signals, had stalled, Kennedy says. He could no longer find research subjects, had little funding, and had lost the support of the U.S. Food and Drug Administration.

That is why in June 2014, he found himself sitting in a distant hospital contemplating the image of his own shaved scalp in a mirror. “This whole research effort of 29 years so far was going to die if I didn’t do something,” he says. “I didn’t want it to die on the vine. That is why I took the risk.”


Link to MIT Tech Review article.

A medieval attitude to suicide

I had always thought that suicide was made illegal in medieval times owing to religious disapproval, and that suicidal people were only freed from the risk of prosecution by the 1961 Suicide Act.

It turns out the history is a little more nuanced, as noted in this 1904 article from the Columbia Law Review entitled “Is Suicide Murder?” that explores the rather convoluted legal approach to suicide in centuries past.

In the UK, the legal status of suicide was first mentioned in a landmark 13th Century legal document attributed to Henry de Bracton.

But contrary to popular belief about medieval attitudes, suicide by ‘insane’ people was not considered a crime and was entirely blame free. Suicide by people who were motivated by “weariness of life or impatience of pain” received only a light punishment (their goods were forfeited but their family could still inherit their lands).

The most serious punishment of forfeiting everything to the Crown was restricted to those who were thought to have killed themselves “without any cause, through anger or ill will, as when he wished to hurt another”.

There are some examples of exactly these sorts of considerations in a British Journal of Psychiatry article that looks at these cases in the Middle Ages. This is a 1292 case from Hereford:

William la Emeyse of this vill, suffering from an acute fever which took away his senses, got up at night, entered the water of Kentford and drowned himself. The jury was asked if he did this feloniously and said no, he did it through his illness. The verdict was an accident.

We tend to think that the medieval world had a very simplistic view of the experiences and behaviour that we might now classify as mental illness but this often wasn’t the case.

Even the common assumption that all these experiences were put down to ‘demonic possession’ turns out to be a myth, as possession was considered to be a possible but rare explanation and was only accepted after psychological and physical disturbances were ruled out.

Spike activity 06-11-2015

Quick links from the past week in mind and brain news:

If you only read one thing this week, make it the excellent critical piece on the concept of an ‘autism spectrum’ in The Atlantic.

Nature reports that the controversial big bucks Human Brain Project has secured another three years’ funding. Giant all-knowing neurotron brain simulation coming “any day now”.

The psychological power of narrative. Good piece in Nautilus.

There’s an excellent in-depth piece on London’s BabyLab – a research centre for baby cognitive neuroscience – in Nature.

New Scientist has a fascinating piece on how a leading theory of consciousness has been rocked by an oddball study.

Human language may be shaped by climate and terrain. Fascinating study covered in the newsy bit of Science.

Brain Flapping has a great piece on Robin Williams and Lewy-body dementia.

When it comes to the brain, blood also seems to be more than a travelling storyteller. In some cases, the blood may be writing the script. Interesting piece in Science News.

The Atlantic has a wonderful piece on why most languages have so few words for smells but these two hunter-gatherer groups have lots.

What is your mind doing during resting state fMRI scans? Interesting study covered by Neuroskeptic.

A gold-standard study on brain training

The headlines

The Telegraph: Alzheimer’s disease: Online brain training “improves daily lives of over-60s”

Daily Mail: The quiz that makes over-60s better cooks: Computer brain games ‘stave off mental decline’

Yorkshire Post: Brain training study is “truly significant”

The story

A new trial shows the benefits of online ‘brain training’ exercises including improvements in everyday tasks, such as shopping, cooking and managing home finances.

What they actually did

A team led by Clive Ballard of King’s College London recruited people to a trial of online “brain training” exercises. Nearly 7,000 people over the age of 50 took part, and they were randomly assigned to one of three groups. One group did reasoning and problem solving tasks. A second group practised cognitive skills tasks, such as memory and attention training, and a third control group did a task which involved looking for information on the internet.

After six months, the reasoning and cognitive skills groups showed benefits compared with the control group. The main measure of the study was participants’ own reports of their ability to cope with daily activities. This was measured using something called the instrumental activities of daily living scale. (To give an example, you get a point if you are able to prepare your meals without assistance, and no points if you need help). The participants also showed benefits in short-term memory, judgements of grammatical accuracy and ability to learn new words.
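To make that scoring concrete, here’s a minimal sketch of how a scale like this might turn self-reports into a number. It’s a simplified, hypothetical version of my own, not the actual instrument used in the trial:

```python
# Hypothetical, simplified scoring of an "activities of daily living" style
# questionnaire: one point for each activity managed without assistance.
# (The real scale has more items and graded response options.)
responses = {
    "prepare meals": "without assistance",
    "do the shopping": "with help",
    "manage home finances": "without assistance",
    "use the telephone": "without assistance",
}

score = sum(answer == "without assistance" for answer in responses.values())
print(f"Daily-living score: {score} out of {len(responses)}")  # higher = more independent
```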

Many of these benefits looked as if they accrued after just three months of regular practice, at an average of five sessions a week. The benefits also seemed to extend to those who went into the trial with the lowest performance, suggesting that such exercises may benefit those who are at risk of mild cognitive impairment (a precursor to dementia).

How plausible is this?

This is gold-standard research. The study was designed to the highest standards, as would be required if you were testing a new drug: a double-blind randomised controlled trial in which participants were assigned at random to the different treatment groups, and weren’t told which group they were in (nor what the researchers’ theory was). Large numbers of people took part, meaning that the study had a reasonable chance of detecting an effect of the treatment if it was there. The study design was also pre-registered on a database of clinical trials, meaning that the results couldn’t be buried if they turned out to be different from what the researchers (or funders) wanted, and the researchers declared in advance what their analysis would focus on.

So, overall, this is serious evidence that cognitive training exercises may bring some benefits, not just on similar cognitive tasks, but also on the everyday activities that are important for independent living among the older population.

Tom’s take

This kind of research is what “brain training” needs. Too many people – including those who just want to make some money – have leapt on the idea without the evidence that this kind of task can benefit anything other than performance on similar tasks. Because the evidence for broad benefits of cognitive training exercises is sparse, this study makes an important contribution to the supporters’ camp, although it far from settles the matter.

Why might you still be sceptical? Well there are some potential flaws in this study. It is useful to speculate on the effect these flaws might have had, even if only as an exercise to draw out the general lessons for interpreting this kind of research.

First up is the choice of control task. The benefits of the exercises tested in this research are only relative benefits compared with the scores of those who carried out the control task. If a different control task had been chosen, maybe the benefits wouldn’t look so large. For example, we know that physical exercise has long-term and profound benefits for cognitive function. If the control group had been going for a brisk walk every day, maybe the relative benefits of these computerised exercises would have vanished.

Or just go for a walk

Another possible distortion of the figures could have arisen as a result of people dropping out during the course of the trial. If people who were likely to score well were more likely to drop out of the control group (perhaps because it wasn’t challenging enough), then this would leave poor performers in the control group and so artificially inflate the relative benefits of being in the cognitive exercises group. More people did drop out of the control group, but it isn’t clear from reading the paper if the researchers’ analysis took steps to account for the effect this might have had on the results.
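To see why differential dropout matters, here’s a toy simulation of my own (made-up numbers, not the trial’s data): both groups have identical true outcomes, but once the stronger performers drift out of the control arm, the completers who remain make the training group look better than it is.

```python
# Hypothetical illustration of attrition bias: there is no true treatment
# effect, but selective dropout from the control group creates an apparent one.
import numpy as np

rng = np.random.default_rng(42)
n = 2000

treatment = rng.normal(100, 15, n)  # true outcomes: identical distributions
control = rng.normal(100, 15, n)

# Suppose the better performers in the control group are more likely to drop
# out (e.g. because the control task isn't challenging enough for them).
dropout_prob = 0.6 * (control - control.min()) / (control.max() - control.min())
completed = rng.random(n) > dropout_prob
control_completers = control[completed]

print(f"Treatment group mean:    {treatment.mean():.1f}")
print(f"Control completers mean: {control_completers.mean():.1f}")
print(f"Apparent 'benefit' of training: "
      f"{treatment.mean() - control_completers.mean():.1f} points")
```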

And finally, the really impressive result from this study is the benefit for the activities of daily living scale (the benefit for other cognitive abilities perhaps isn’t too surprising). This suggests a broad benefit of the cognitive exercises, something which other studies have had difficulty showing. However, it is important to note that this outcome was based on a self-report by the participants. There wasn’t any independent or objective verification, meaning that something as simple as people feeling more confident about themselves after having completed the study could skew the results.

None of these three possible flaws means we should ignore this result, but questions like these mean that we will need follow-up research before we can be certain that cognitive training brings benefits for mental function in older adults.

For now, the implications of the current state of brain training research are:

Don’t pay money for any “brain training” programme. There isn’t any evidence that commercially available exercises have any benefit over the kinds of tasks and problems you can access for free.

Do exercise. Your brain is a machine that runs on blood, and it is never too late to improve the blood supply to the brain through increased physical activity. How long have you been on the computer? Could it be time for a brisk walk round the garden or to the shops? (Younger people, take note: exercise in youth benefits mental function in older age.)

A key feature of this study was that the exercises in the treatment group got progressively more difficult as the participants practised. The real benefit may not be from these exercises as such, but from continually facing new mental challenges. So, whatever your hobbies, perhaps – just perhaps – make sure you are learning something new as well as enjoying whatever you already know.

Read more

The original study: The Effect of an Online Cognitive Training Package in Healthy Older Adults: An Online Randomized Controlled Trial

Oliver Burkeman writes: http://www.theguardian.com/science/2014/jan/04/can-i-increase-my-brain-power

The New Yorker (2013): http://www.newyorker.com/tech/elements/brain-games-are-bogus

The Conversation

This article was originally published on The Conversation. Read the original article.

What do children know of their own mortality?

CC Licensed Image by Flickr user DAVID MELCHOR DIAZ. Click for source.

We are born immortal, as far as we know at the time, and slowly we learn that we are going to die. For most children, death is not fully understood until after the first decade of life – a remarkable amount of time to comprehend the most basic truth of our existence.

There are poetic ways of making sense of this difficulty: perhaps an understanding of our limited time on Earth is too difficult for the fragile infant mind to handle, maybe it’s evolution’s way of instilling us with hope; but these seductive theories tend to forget that death is more complex than we often assume.

To completely understand the significance of death, researchers – mortality psychologists if you will – have identified four primary concepts we need to grasp: universality (all living things die), irreversibility (once dead, dead forever), nonfunctionality (all functions of the body stop) and causality (what causes death).

In a recent review of studies on children’s understanding of death, medics Alan Bates and Julia Kearney describe how:

Partial understanding of universality, irreversibility, and nonfunctionality usually develops between the ages of 5 and 7 years, but a more complete understanding of death concepts, including causality, is not generally seen until around age 10. Prior to understanding nonfunctionality, children may have concrete questions such as how a dead person is going to breathe underground. Less frequently studied is the concept of personal mortality, which most children have some understanding of by age 6 with more complete understanding around age 8–11.

But this is a general guide, rather than a life plan. We know that children vary a great deal in their understanding of death and they tend to acquire these concepts at different times.

Although interesting from a developmental perspective, these studies also have clear practical implications.

Most children will know someone who dies and helping children deal with these situations often involves explaining death and dying in a way they can understand while addressing any frightening misconceptions they might have. No, your grandparent hasn’t abandoned you. Don’t worry, they won’t get lonely.

But there is a starker situation which brings the emerging ability to understand mortality into very sharp relief. Children who are themselves dying.

The understanding of death by terminally ill children has been studied by a small but dedicated research community, largely motivated by the needs of child cancer services.

One of the most remarkable studies – perhaps one of the most remarkable studies in the whole of palliative care – was completed by the anthropologist Myra Bluebond-Langner and was published as the book The Private Worlds of Dying Children.

Bluebond-Langner spent the mid-1970s in an American child cancer ward and began to look at what the children knew about their own terminal prognosis, how this knowledge affected social interactions, and how social interactions were conducted to manage public awareness of this knowledge.

Her findings were nothing short of stunning: although the adults – parents and medical professionals – regularly talked in a way that deliberately obscured knowledge of the child’s forthcoming death, children often knew they were dying. But despite knowing they were dying, children often talked in a way that avoided revealing their awareness of this fact to the adults around them.

Bluebond-Langner describes how this mutual pretence allowed everyone to support each other through their typical roles and interactions despite knowing that they were redundant. Adults could ask children what they wanted for Christmas, knowing that they would never see it. Children could discuss what they wanted to be when they grew up, knowing that they would never get the chance. Those same conversations, through which compassion flows in everyday life, could continue.

This form of emotional support was built on fragile foundations, however, as it depended on actively ignoring the inevitable. When cracks sometimes appeared during social situations they had to be quickly and painfully papered over.

When children’s hospices first began to appear, one of their innovations was to provide a space where emotional support did not depend on mutual pretence.

Instead, dying can be discussed with children, alongside their families, in a way that makes sense to them. Studying what children understand about death is a way of helping this take place. It is knowledge in the service of compassion.

Jeb Bush has misthought

According to the Washington Examiner, Republican presidential candidate Jeb Bush has said that doing a psychology major will mean “you’re going to be working a Chick-fil-A” and has encouraged students to choose college degrees with better employment prospects.

If you’re not American, Chick-fil-A turns out to be a fast food restaurant, presumably of dubious quality.

Bush continued:

“The number one degree program for students in this country … is psychology,” Bush said. “I don’t think we should dictate majors. But I just don’t think people are getting jobs as psych majors.”

Firstly, he’s wrong about psychology being the most popular degree in the US. The official statistics show it’s actually business-related subjects that are the most studied, with psychology coming in fifth.

He’s also wrong about the employment prospects of psych majors. I initially mused on Twitter as to why US psych majors have such poor employment prospects when, in the UK, psychology graduates are typically the most likely to be employed.

But I was wrong about US job prospects for psych majors, because I was misled by lots of US media articles suggesting exactly this.

There is actually decent research on this, and it says something quite different. Georgetown University’s Center on Education and the Workforce published reports in 2010 and 2013, called ‘Hard Times: College Majors, Unemployment and Earnings’, which looked at exactly this issue.

They found on both occasions that doing a psych major gives you employment prospects that are about mid-table in comparison to other degrees.

Below is the graph from the 2013 report. Click for a bigger version.

Essentially, psychology is slightly below average in terms of employability: tenth out of sixteen, but still a college major where more than 9 out of 10 recent graduates (91.2%) find jobs.

If you look at median income, the picture is much the same: somewhat below average but clearly not in the Chick-fil-A range.

What’s not factored into these reports, however, is gender difference. According to the statistics, almost 80% of psychology degrees in the US are earned by women.

Women earn less than men on average, are more likely to take voluntary career breaks, are more likely to suspend work to have children, and so on. So it’s worth remembering that these figures don’t control for gender effects.

So when Bush says “I just don’t think people are getting jobs as psych majors” it seems he misthought.

Specifically, it looks like his thinking was biased by the availability heuristic which, if you know about it, can help you avoid embarrassing errors when making factual claims.

I’ll leave that irony for Jeb Bush to ponder, along with Allie Brandenburger, Kaitlin Zurdowsky and Josh Venable – three psychology majors he employed as senior members of his campaign team.

Spike activity 23-10-2015

Quick links from the past week in mind and brain news:

MP tricked into condemning a fake drug called ‘Cake’ on Brass Eye has been put in charge of scrutinising drugs policy in the UK Parliament, reports The Independent. What starts as satire is so often reborn as policy.

Narratively takes a look at the human stories behind the alarming rates of prescription opioid addiction in Appalachia.

Mental health research makes good economic sense, argues The Economist.

American Civil Liberties Union are suing the psychologists who developed the CIA torture programme.

Before 6 months, babies don’t relate touch to an event outside of themselves. We’re calling this “tactile solipsism”. Interesting Brain Decoder piece.

Mashable reports that Sesame Street debuts its first autistic Muppet. And try watching that What My Family Wants You to Know About Autism video without welling up.

‘Mental patient’ Halloween costumes: a scientific guide to dressing accurately. Important evidence-based Halloween advice on Brain Flapping.

The Scientist looks back at Camillo Golgi’s first drawings of neurons from the 1870s.

A social vanishing

CC Licensed Photo by Flickr user Jonathan Jordan. Click for source.

A fantastic eight-part podcast series called Missing has just concluded and it’s a brilliant look at the psychology and forensic science of missing people.

It’s been put together by the novelist Tim Weaver who is renowned for his crime thrillers that feature missing persons investigator David Raker.

He uses the series to investigate the phenomenon of missing people and the result is a wonderfully engrossing, diverse documentary series that talks to everyone from forensic psychiatrists, to homicide investigators, to commercial companies that help you disappear without trace.

Missing people, by their absence, turn out to reveal a lot about the tension between social structures and individual behaviour in modern society. Highly recommended.

Link to Missing podcast series with iTunes / direct download links.

Web of illusion: how the internet affects our confidence in what we know

The internet can give us the illusion of knowledge, making us think we are smarter than we really are. Fortunately, there may be a cure for our arrogance, writes psychologist Tom Stafford.

The internet has a reputation for harbouring know-it-alls. Commenters on articles, bloggers, even your old school friends on Facebook all seem to swell with confidence in their understanding of exactly how the world works (and they are eager to share that understanding with everyone and anyone who will listen). Now, new research reveals that just having access to the world’s information can induce an illusion of overconfidence in our own wisdom. Fortunately the research also shares clues as to how that overconfidence can be corrected.

Specifically, we are looking at how the internet affects our thinking about what we know – a topic psychologists call metacognition. The distinction matters: when you know you are boasting, you are being dishonest, but you haven’t made any actual error in estimating your ability; if you sincerely believe you know more than you do, then you have. The research suggests that this kind of illusion of understanding may actually be incredibly common, and that the metacognitive error emerges in new ways in the age of the internet.

In a new paper, Matt Fisher of Yale University considers a particular type of thinking known as transactive memory, which is the idea that we rely on other people and other parts of the world – books, objects – to remember things for us. If you’ve ever left something you needed for work by the door the night before, then you’ve been using transactive memory.

Part of this phenomenon is the tendency to confuse what we really know in our personal memories with what we merely have easy access to – knowledge that is readily available in the world, or that we are familiar with without actually understanding in depth. It can feel like we understand how a car works, the argument goes, when in fact we are merely familiar with making it work: I press the accelerator and the car goes forward, neglecting to realise that I don’t really know how it goes forward.

Fisher and colleagues were interested in how this tendency interacts with the internet age. They asked people to provide answers to factual questions, such as “Why are there time zones?”. Half of the participants were instructed to look up the answers on the internet before answering; half were told not to look up the answers on the internet. Next, all participants were asked how confidently they could explain the answers to a second series of questions (separate, but also factual, questions such as “Why are cloudy nights warmer?” or “How is vinegar made?”).

Sure enough, people who had just been searching the internet for information were significantly more confident about their understanding of the second set of questions. Follow-up studies confirmed that these people really did think the knowledge was theirs: they were still more confident if asked to indicate their response on a scale representing different levels of understanding with pictures of brain-scan activity (a ploy that was meant to emphasise that the information was there, in their heads). The confidence effect even persisted when the control group were provided with answer material and the internet-search group were instructed to search for a site containing the exact same answer material. Something about actively searching for information on the internet specifically generated an illusion that the knowledge was in the participants’ own heads.

If having the world’s information at our fingertips generates overconfidence in our own wisdom, it might seem that the internet is an engine for turning us all into bores. Fortunately another study, also published this year, suggests a partial cure.

Amanda Ferguson of the University of Toronto and colleagues ran a similar study, except the set-up was in reverse: they asked participants to provide answers first and, if they didn’t know them, to search the internet afterwards for the correct information (in the control condition, participants who said “I don’t know” were let off the hook and simply moved on to the next question). In this set-up, people with access to the internet were actually less willing to give answers in the first place than people in the no-internet condition. For these guys, access to the internet shut them up, rather than encouraging them to claim that they knew it all. Looking more closely at their judgements, it seems the effect wasn’t simply that the fact-checking had undermined their confidence. Those who knew they could fall back on the web to check the correct answer didn’t report feeling less confident in themselves, yet they were still less likely to share the information and show off their knowledge.

So, putting people in a position where they could be fact-checked made them more cautious in their initial claims. The implication I draw from this is that one way of fighting a know-it-all, if you have the energy, is to let them know that they are going to be thoroughly checked on whether they are right or wrong. It might not stop them researching a long answer with the internet, but it should slow them down, and diminish the feeling that just because the internet knows some information, they do too.

It is frequently asked whether the internet is changing how we think. The answer, this research shows, is that the internet is giving new fuel to the way we’ve always thought. It can be a cause of overconfidence, when we mistake the boundary between what we know and what is available to us over the web, and it can be a cause of uncertainty, when we anticipate that the claims we make will be fact-checked against the web. Our tendencies to overestimate what we know, to use readily available information as a substitute for our own knowledge, and to worry about being caught out are all constants of how we think. The internet slots into this tangled cognitive ecosystem, from which endless new forms evolve.

This is my BBC Future column from earlier this week. The original is here

From school shootings to everyday counter-terrorism

CC Licensed Image from Secretive Ireland. Click for source.

Mother Jones has a fascinating article on how America is attempting to stop school shootings by using community detection and behavioural intervention programmes for people identified as potential killers – before a crime has ever been committed.

It is a gripping read in itself but it is also interesting because it describes an approach that has now been rolled out to millions as part of community counter-terrorism strategies across the world, an approach which puts a psychological model of mass-violence perpetration at its core.

The Mother Jones article describes a threat assessment model for school shootings that sits at an evolutionary mid-point: the approach was first developed to protect the US President, then applied to preventing school shootings, and is now being deployed at mass scale in domestic counter-terrorism programmes.

You can see exactly this in the UK Government’s Prevent programme (part of the wider CONTEST counter-terrorism strategy). Many people will recognise this in the UK because if you work for a public body, like a school or the health service, you will have been trained in it.

The idea behind Prevent is that workers are trained to be alert to signs of radicalisation and extremism and can pass on potential cases to a multi-disciplinary panel, made up of social workers, mental health specialists, staff members and the police, who analyse the case in more detail and get more information as it’s needed.

If they decide the person is vulnerable to becoming dangerously radicalised or violent, they refer the case on to the Channel programme, which aims to manage the risk through a combination of support from social services and heightened monitoring by security services.

A central concept is that the person may be made vulnerable to extremism due to unmet needs (poor mental health, housing, lack of opportunity, poor social support, spiritual emptiness, social conflict) which may convert into real world violence when mixed with certain ideologies or beliefs about the world that they are recruited into, or persuaded by, and so violence prevention includes both a needs-based and a threat-based approach.

This approach came from work by the US Secret Service in the 1990s, who were mainly concerned with protecting key government officials, and it was a radical departure from the idea that threat management was about physical security.

They began to try and understand why people might want to attempt to kill important officials and worked on figuring out how to identify risks and intervene before violence was ever used.

The Mother Jones article also mentions the LAPD Threat Management Unit (LAPDTMU), which was formed to deal with cases of violent stalking of celebrities; the FBI, meanwhile, had been developing a data-driven approach since the National Center for the Analysis of Violent Crime (NCAVC) launched in 1985.

By the time the Secret Service founded the National Threat Assessment Center in 1998, the approach was well established. When the Columbine massacre occurred the following year, the same thinking was applied to school shootings.

After Columbine, reports were produced by both the FBI (pdf) and the Secret Service (pdf) which outline some of the evolution of this approach and how it applies to preventing school shootings. The Mother Jones article illustrates what this looks like, more than 15 years later, as shootings are now more common and often directly inspired by Columbine or other more recent attacks.

It’s harder to find anything written on the formal design of the UK Government’s Prevent and Channel programmes but the approach is clearly taken from the work in the United States.

The difference is that it has been deployed on a mass scale. Literally, millions of public workers have been trained in Prevent, and Channel programmes exist all over the country to receive and evaluate referrals.

It may be one of the largest psychological interventions ever deployed.

Link to Mother Jones article on preventing the next mass shooting.

The echoes of the Prozac revolution

The Lancet Psychiatry has a fantastic article giving a much-needed cultural retrospective on the wave of antidepressants like Prozac – tracing the arc from the first worries that ‘cosmetic pharmacology’ would mean we were no longer our true selves, to the dawning realisation that they are unreliably useful but side-effect-ridden tools that can help manage difficult moods.

From their first appearance in the late 1980s until recently, SSRIs were an A-list topic of debate in the culture wars, and the rhetoric, whether pro or con, was red hot. Antidepressants were going to heal, or destroy, the world as we knew it.

Those discussions now feel dated. While antidepressants themselves are here to stay, they just don’t pulse with meaning the way they once did. Like the automobile or the telephone before them, SSRIs are a one-time miracle technology that have since become a familiar—even frumpy—part of the furniture of modern life.

At some point recently, they’ve slid into the final act of Mickey Smith’s wonder-drug drama. And in the aftermath of that change, many of the things that people used to say about them have come to sound completely absurd.

It’s a wonderful piece that perfectly captures the current place of antidepressants in modern society.

It’s by author Katherine Sharpe who wrote the highly acclaimed book Coming of Age on Zoloft which I haven’t read but have just ordered.

Link to ‘The silence of Prozac’ in The Lancet Psychiatry.

