Why do we forget names?

A reader, Dan, asks “Why do we forget people’s names when we first meet them? I can remember all kinds of other details about a person but completely forget their name. Even after a lengthy, in-depth conversation. It’s really embarrassing.”

Fortunately the answer involves learning something fundamental about the nature of memory. It also provides a solution that can help you to avoid the embarrassing social situation of having spoken to someone for an hour, only to have forgotten their name.

To know why this happens you have to recognise that our memories aren’t a simple filing system, with separate folders for each kind of information and a really brightly coloured folder labelled “Names”.

Rather, our minds are associative. They are built out of patterns of interconnected information. This is why we daydream: you notice that the book you’re reading was printed in Paris, and that Paris is home to the Eiffel Tower, that your cousin Mary visited last summer, and Mary loves pistachio ice-cream. Say, I wonder if she ate a pistachio ice cream while up the Tower? It goes on and on like that, each item connected to every other, not by logic but by coincidence of time, place, how you learnt the information and what it means.

The same associative network means you can guess a question from the answer. Answer: “Eiffel Tower?” Question: “Paris’s most famous landmark.” This makes memory useful, because you can often go as easily from the content to the label as vice versa: “what is in the top drawer?” isn’t a very interesting question, but it becomes so when you want the answer “where are my keys?”.

So memory is built like this on purpose, and now we can see the reason why we forget names. Our memories are amazing, but they respond to how many associations we make with new information, not to how badly we want to remember it.

When you meet someone for the first time you learn their name, but for your memory it is probably an arbitrary piece of information unconnected to anything else you know, and unconnected to all the other things you later learn about them. After your conversation, in which you probably learn about their job, and their hobbies, and their family or whatever, all this information becomes linked in your memory. Imagine you are talking to a guy with a blue shirt who likes fishing and works selling cars, but would rather give it up to sell fishing gear. Now if you can remember one bit of information (“sell cars”) you can follow the chain to the others (“sells cars but wants to give it up”, “wants to give it up to sell fishing gear”, “loves fishing” and so on). The trouble is that your new friend’s name doesn’t get a look in because it is simply a piece of arbitrary information you didn’t connect to anything else about the conversation.

Fortunately, there are ways to strengthen those links so the name becomes entrenched with the other memories. Here’s how to remember the name, using some basic principles of memory.

First, you should repeat any name said to you. Practice is one of the golden rules of learning: more practice makes stronger memories. In addition, when you use someone’s name you are linking it to yourself, in the physical act of saying it, but also to the current topic of the conversation in your memory (“So, James, just what is it about fishing that makes you love it so much?”).

Second, you should try to link the name you have just learnt to something you already know. It doesn’t matter if the link is completely silly, it is just important that you find some connection to help the name stick in memory. For example, maybe the guy is called James, and your high school buddy was called James, and although this guy is wearing a blue shirt, high school James only ever wore black, so he’d never wear blue. It’s a silly made up association, but it can help you remember.

Finally, you need to try to link their name to something else about them. If it was me I’d grab the first thing to come to mind to bridge between the name and something I’ve learnt about them. For example, James is a sort of biblical name, you get the King James bible after all, and James begins with J, just like Jonah in the bible who was swallowed by the whale, and this James likes fishing, but I bet he prefers catching them to being caught by them.

It doesn’t matter if the links you make are outlandish or weird. You don’t have to tell anyone. In fact, probably it is best if you don’t tell anyone, especially your new friend! But the links will help create a web of association in your memory, and that web will stop their name falling out of your mind when it is time to introduce them to someone else.

And if you’re sceptical, try this quick test. I’ve mentioned three names during this article. I bet you can remember James, who isn’t Jonah. And probably you can remember cousin Mary (or at least what kind of ice cream she likes). But can you remember the name of the reader who asked the question? That’s the only one I introduced without elaborating some connections around the name, and that’s why I’ll bet it is the only one you’ve forgotten.

This is my BBC Future column from last week. The original is here

Spike activity 20-11-2015

Quick links from the past week in mind and brain news:

Wired has a good brief piece on the history of biodigital brain implants.

Why are conspiracy theories so attractive? Good discussion on the Science Weekly podcast.

The Wilson Quarterly has a piece on the mystery behind Japan’s high child suicide rate.

The Dream Life of Driverless Cars. Wonderful piece in The New York Times. Don’t miss the video.

The New Yorker has an extended profile on the people who run the legendary Erowid website on psychedelic drugs.

Allen Institute scientists identify human brain’s most common genetic patterns. Story in Geekwire.

BoingBoing covers a fascinating game where you play a blind girl and the game world is dynamically constructed through other senses and memory and shifts with new sensory information.

Excellent article on the real science behind the hype of neuroplasticity in Mosaic Science. Not to be missed.

No more Type I/II error confusion

Type I and Type II errors are, respectively, when you allow a statistical test to convince you of a false effect, and when you allow a statistical test to convince you to dismiss a true effect. Despite being fundamentally important concepts, they are terribly named. Who can ever remember which way around the two errors go? Well now I can, thanks to a comment from a friend that I thought so useful I made it into a picture:


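The distinction is easy to see in a quick simulation (a sketch of my own, not from the picture): run a standard significance test many times in a world where the null hypothesis is true, and again in a world where a real effect exists, then count how often the test cries "effect!" when there is none (Type I) and how often it misses a real one (Type II). The sample size and effect size below are arbitrary choices for illustration.

```python
import random
import statistics

random.seed(0)

def effect_found(sample, mu0=0.0):
    """Crude two-sided z-test: reject H0 (mean == mu0) if |z| > 1.96."""
    n = len(sample)
    z = (statistics.mean(sample) - mu0) / (statistics.stdev(sample) / n ** 0.5)
    return abs(z) > 1.96

trials, n = 2000, 30

# Type I: H0 is true (mean really is 0), but the test claims an effect
false_alarms = sum(
    effect_found([random.gauss(0.0, 1) for _ in range(n)])
    for _ in range(trials)
)

# Type II: a real effect exists (mean is 0.3), but the test misses it
misses = sum(
    not effect_found([random.gauss(0.3, 1) for _ in range(n)])
    for _ in range(trials)
)

print(f"Type I rate  (false positive): {false_alarms / trials:.2f}")
print(f"Type II rate (false negative): {misses / trials:.2f}")
```

The Type I rate hovers near the 5% significance threshold by construction; the Type II rate depends on how big the true effect is relative to the noise and the sample size.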
Spike activity 13-11-2015

Quick links from the past week in mind and brain news:

The Weak Science Behind the Wrongly Named Moral Molecule. The Atlantic has some home truths about oxytocin.

Neurophilosophy reports on some half a billion year old brains found preserved in fool’s gold.

An Illuminated, 5,000-Pound Neuron Sculpture Is Coming to Boston. Boston magazine has some pictures.

Guardian Science Weekly podcast has neuroscientist David Eagleman discussing his new book.

A neurologist frustrated by the obstacles to his work on brain-machine interfaces paid a surgeon in Central America $25,000 to implant electrodes into his brain. MIT Tech Review reports.

Business Insider reports on Google’s troubled robotics division. It’s called Replicant, so I’m guessing incept dates may be a point of contention.

The real history of the ‘safe space’

There’s much debate in the media about a culture of demanding ‘safe spaces’ at university campuses in the US, a culture which has been accused of restricting free speech by defining contrary opinions as harmful.

The history of safe spaces is an interesting one and a recent article in Fusion cited the concept as originating in the feminist and gay liberation movements of the 1960s.

But the concept of the ‘safe space’ didn’t start with these movements, it started in a much more unlikely place – corporate America – largely thanks to the work of psychologist Kurt Lewin.

Like so many great psychologists of the early 20th Century, Lewin was a Jewish academic who left Europe after the rise of Nazism and moved to the United States.

Although originally a behaviourist, he became deeply involved in social psychology at the level of small group interactions and eventually became director of the Center for Group Dynamics at MIT.

Lewin’s work was massively influential and lots of our everyday phrases come from his ideas. The fact we talk about ‘social dynamics’ at all is due to him, and the fact we give ‘feedback’ to our colleagues is because Lewin took the term from engineering and applied it to social situations.

In the late 1940s, Lewin was asked to help develop leadership training for corporate bosses. Out of this work came the founding of the National Training Laboratories and the invention of sensitivity training: a form of group discussion where members could give honest feedback to each other, allowing people to become aware of the unhelpful assumptions, implicit biases, and behaviours that were holding them back as effective leaders.

Lewin drew on ideas from group psychotherapy that had been around for years but formalised them into a specific and brief focused group activity.

One of the ideas behind sensitivity training was that honesty and change would only occur if people could be frank and challenge others in an environment of psychological safety. In other words, without judgement.

Practically, this means there is an explicit rule that everyone agrees to at the start of the group. A ‘safe space’ is created: confidential and free of judgement, precisely so that people can mention concerns without fear of being condemned for them, on the understanding that they’re hoping to change.

It could be anything related to being an effective leader, but if we’re thinking about race, participants might discuss how, even though they try to be non-racist, they tend to feel fearful when they see a group of black youths, or how they often think white people are stuck up. Other group members, perhaps those affected by these fears, could then offer alternative angles.

The use of sensitivity groups began to gain currency in corporate America and the idea was taken up by psychologists such as the humanistic therapist Carl Rogers, who by the 1960s had developed it into encounter groups. These were aimed more at self-actualisation and social change, in line with the spirit of the times, but based on the same ‘safe space’ environment. As you can imagine, they were popular in California.

It’s worth saying that although the ideal was non-judgement, the reality could be a fairly rocky emotional experience, as described by a famous 1971 study on ‘encounter group casualties’.

From here, the idea of the safe space was taken up by feminist and gay liberation groups, but with a slightly different slant: sexist or homophobic behaviour was banned by mutual agreement, and individuals could be pulled up if it occurred, with the understanding that people would make an honest attempt to recognise it and change.

And finally we get to the recent campus movements, where the safe space has become a public political act. Rather than individuals opting in, it is championed or imposed (depending on which side you take) as something that should define acceptable public behaviour.

In other words, creating a safe space is considered to be a social responsibility and you can opt out, but only by leaving.

Extremes of self-experimentation with brain electrodes

MIT Technology Review has a jaw-dropping article about brain-computer interface researcher Phil Kennedy. In the face of diminishing funding and increasing regulation he “paid a surgeon in Central America $25,000 to implant electrodes into his brain in order to establish a connection between his motor cortex and a computer”.

Ethically dubious but fascinating, the article discusses what led Kennedy to this rather drastic decision:

Kennedy’s scientific aim has been to build a speech decoder—software that can translate the neuronal signals produced by imagined speech into words coming out of a speech synthesizer. But this work, carried out by his small Georgia company Neural Signals, had stalled, Kennedy says. He could no longer find research subjects, had little funding, and had lost the support of the U.S. Food and Drug Administration.

That is why in June 2014, he found himself sitting in a distant hospital contemplating the image of his own shaved scalp in a mirror. “This whole research effort of 29 years so far was going to die if I didn’t do something,” he says. “I didn’t want it to die on the vine. That is why I took the risk.”


Link to MIT Tech Review article.

A medieval attitude to suicide

I had always thought that suicide was made illegal in medieval times due to religious disapproval, and that suicidal people were only finally freed from the risk of prosecution by the 1961 Suicide Act.

It turns out the history is a little more nuanced, as noted in this 1904 article from the Columbia Law Review entitled “Is Suicide Murder?” that explores the rather convoluted legal approach to suicide in centuries past.

In the UK, the legal status of suicide was first mentioned in a landmark 13th Century legal document attributed to Henry de Bracton.

But contrary to popular belief about medieval attitudes, suicide by ‘insane’ people was not considered a crime and was entirely blame free. Suicide by people who were motivated by “weariness of life or impatience of pain” received only a light punishment (their goods were forfeited but their family could still inherit their lands).

The most serious punishment of forfeiting everything to the Crown was restricted to those who were thought to have killed themselves “without any cause, through anger or ill will, as when he wished to hurt another”.

There are some examples of exactly these sorts of considerations in a British Journal of Psychiatry article that looks at these cases in the Middle Ages. This is a 1292 case from Hereford:

William la Emeyse of this vill, suffering from an acute fever which took away his senses, got up at night, entered the water of Kentford and drowned himself. The jury was asked if he did this feloniously and said no, he did it through his illness. The verdict was an accident.

We tend to think that the medieval world had a very simplistic view of the experiences and behaviour that we might now classify as mental illness but this often wasn’t the case.

Even the common assumption that all these experiences were put down to ‘demonic possession’ turns out to be a myth, as possession was considered to be a possible but rare explanation and was only accepted after psychological and physical disturbances were ruled out.