Why do we forget names?

A reader, Dan, asks “Why do we forget people’s names when we first meet them? I can remember all kinds of other details about a person but completely forget their name. Even after a lengthy, in-depth conversation. It’s really embarrassing.”

Fortunately the answer involves learning something fundamental about the nature of memory. It also provides a solution that can help you to avoid the embarrassing social situation of having spoken to someone for an hour, only to have forgotten their name.

To know why this happens you have to recognise that our memories aren’t a simple filing system, with separate folders for each kind of information and a really brightly coloured folder labelled “Names”.

Rather, our minds are associative. They are built out of patterns of interconnected information. This is why we daydream: you notice that the book you’re reading was printed in Paris, and that Paris is home to the Eiffel Tower, that your cousin Mary visited last summer, and Mary loves pistachio ice-cream. Say, I wonder if she ate a pistachio ice cream while up the Tower? It goes on and on like that, each item connected to every other, not by logic but by coincidence of time, place, how you learnt the information and what it means.

The same associative network means you can guess a question from the answer. Answer: “The Eiffel Tower.” Question: “What is Paris’s most famous landmark?” This makes memory useful, because you can often go as easily from the content to the label as vice versa: “what is in the top drawer?” isn’t a very interesting question, but it becomes so when you want the answer to “where are my keys?”.

So memory is built like this on purpose, and now we can see why we forget names. Our memories are amazing, but they respond to how many associations we make with new information, not to how badly we want to remember it.

When you meet someone for the first time you learn their name, but for your memory it is probably an arbitrary piece of information unconnected to anything else you know, and unconnected to all the other things you later learn about them. After your conversation, in which you probably learn about their job, and their hobbies, and their family or whatever, all this information becomes linked in your memory. Imagine you are talking to a guy with a blue shirt who likes fishing and works selling cars, but would rather give it up to sell fishing gear. Now if you can remember one bit of information (“sell cars”) you can follow the chain to the others (“sells cars but wants to give it up”, “wants to give it up to sell fishing gear”, “loves fishing” and so on). The trouble is that your new friend’s name doesn’t get a look in because it is simply a piece of arbitrary information you didn’t connect to anything else about the conversation.

Fortunately, there are ways to strengthen those links so it does become entrenched with the other memories. Here’s how to remember the name, using some basic principles of memory.

First, you should repeat any name said to you. Practice is one of the golden rules of learning: more practice makes stronger memories. In addition, when you use someone’s name you are linking it to yourself, in the physical act of saying it, but also to the current topic of the conversation in your memory (“So, James, just what is it about fishing that makes you love it so much?”).

Second, you should try to link the name you have just learnt to something you already know. It doesn’t matter if the link is completely silly, it is just important that you find some connection to help the name stick in memory. For example, maybe the guy is called James, and your high school buddy was called James, and although this guy is wearing a blue shirt, high school James only ever wore black, so he’d never wear blue. It’s a silly made up association, but it can help you remember.

Finally, you need to try to link their name to something else about them. If it was me I’d grab the first thing to come to mind to bridge between the name and something I’ve learnt about them. For example, James is a sort of biblical name, you get the King James bible after all, and James begins with J, just like Jonah in the bible who was swallowed by the whale, and this James likes fishing, but I bet he prefers catching them to being caught by them.

It doesn’t matter if the links you make are outlandish or weird. You don’t have to tell anyone. In fact, probably it is best if you don’t tell anyone, especially your new friend! But the links will help create a web of association in your memory, and that web will stop their name falling out of your mind when it is time to introduce them to someone else.

And if you’re sceptical, try this quick test. I’ve mentioned three names during this article. I bet you can remember James, who isn’t Jonah. And probably you can remember cousin Mary (or at least what kind of ice cream she likes). But can you remember the name of the reader who asked the question? That’s the only one I introduced without elaborating some connections around the name, and that’s why I’ll bet it is the only one you’ve forgotten.

This is my BBC Future column from last week. The original is here

No more Type I/II error confusion

Type I and Type II errors are, respectively, when you allow a statistical test to convince you of a false effect, and when you allow a statistical test to convince you to dismiss a true effect. Despite being fundamentally important concepts, they are terribly named. Who can ever remember which way around the two errors go? Well, now I can, thanks to a comment from a friend that I thought so useful I made it into a picture:
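If a picture isn’t your thing, the two errors also fall out of a toy simulation. This is my own sketch, not part of the original post, and the crude “two standard errors” threshold is just a stand-in for a proper t-test:

```python
import random
import statistics

def looks_significant(sample_a, sample_b, threshold=2.0):
    """Crude two-sample test: call the difference 'significant' when the
    gap in means exceeds `threshold` standard errors. A rough stand-in
    for a t-test, good enough to illustrate the two error types."""
    diff = statistics.mean(sample_a) - statistics.mean(sample_b)
    se = ((statistics.variance(sample_a) + statistics.variance(sample_b))
          / len(sample_a)) ** 0.5
    return abs(diff) / se > threshold

random.seed(1)
n, trials = 30, 2000

# Type I error: the null is true (no real difference between groups),
# but the test convinces us there is one -- a false positive.
type_i = sum(
    looks_significant([random.gauss(0, 1) for _ in range(n)],
                      [random.gauss(0, 1) for _ in range(n)])
    for _ in range(trials)
) / trials

# Type II error: a real effect exists (means differ by 0.4), but the
# test fails to detect it, so we dismiss it -- a false negative.
type_ii = sum(
    not looks_significant([random.gauss(0.4, 1) for _ in range(n)],
                          [random.gauss(0, 1) for _ in range(n)])
    for _ in range(trials)
) / trials

print(f"Type I rate (false positives): {type_i:.2f}")
print(f"Type II rate (false negatives): {type_ii:.2f}")
```

With a threshold of roughly two standard errors, the false-positive rate hovers near the conventional 5%, while the false-negative rate depends on how big the true effect is relative to the noise.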


A gold-standard study on brain training

The headlines

The Telegraph: Alzheimer’s disease: Online brain training “improves daily lives of over-60s”

Daily Mail: The quiz that makes over-60s better cooks: Computer brain games ‘stave off mental decline’

Yorkshire Post: Brain training study is “truly significant”

The story

A new trial shows the benefits of online ‘brain training’ exercises including improvements in everyday tasks, such as shopping, cooking and managing home finances.

What they actually did

A team led by Clive Ballard of King’s College London recruited people to a trial of online “brain training” exercises. Nearly 7,000 people over the age of 50 took part, and they were randomly assigned to one of three groups. One group did reasoning and problem solving tasks. A second group practised cognitive skills tasks, such as memory and attention training, and a third control group did a task which involved looking for information on the internet.

After six months, the reasoning and cognitive skills groups showed benefits compared with the control group. The main measure of the study was participants’ own reports of their ability to cope with daily activities. This was measured using something called the instrumental activities of daily living scale. (To give an example, you get a point if you are able to prepare your meals without assistance, and no points if you need help). The participants also showed benefits in short-term memory, judgements of grammatical accuracy and ability to learn new words.

Many of these benefits looked as if they accrued after just three months of regular practice, with participants completing an average of five sessions a week. The benefits also seemed to extend to those who went into the trial with the lowest performance, suggesting that such exercises may benefit those who are at risk of mild cognitive impairment (a precursor to dementia).

How plausible is this?

This is gold-standard research. The study was designed to the highest standards, as would be required if you were testing a new drug: a double-blind randomised controlled trial in which participants were assigned at random to the different treatment groups, and weren’t told which group they were in (nor what the researchers’ theory was). Large numbers of people took part, meaning that the study had a reasonable chance of detecting an effect of the treatment if it was there. The study design was also pre-registered on a database of clinical trials, meaning that the results couldn’t be buried if they turned out to be different from what the researchers (or funders) wanted, and the researchers declared in advance what their analysis would focus on.

So, overall, this is serious evidence that cognitive training exercises may bring some benefits, not just on similar cognitive tasks, but also on the everyday activities that are important for independent living among the older population.

Tom’s take

This kind of research is what “brain training” needs. Too many people – including those who just want to make some money – have leapt on the idea without the evidence that these kinds of tasks can benefit anything other than performance on similar tasks. Because the evidence for broad benefits of cognitive training exercises is sparse, this study makes an important contribution to the supporters’ camp, although it is far from settling the matter.

Why might you still be sceptical? Well there are some potential flaws in this study. It is useful to speculate on the effect these flaws might have had, even if only as an exercise to draw out the general lessons for interpreting this kind of research.

First up is the choice of control task. The benefits of the exercises tested in this research are only relative benefits compared with the scores of those who carried out the control task. If a different control task had been chosen, maybe the benefits wouldn’t look so large. For example, we know that physical exercise has long-term and profound benefits for cognitive function. If the control group had been going for a brisk walk every day, maybe the relative benefits of these computerised exercises would have vanished.

Or just go for a walk

Another possible distortion of the figures could have arisen as a result of people dropping out during the course of the trial. If people who were likely to score well were more likely to drop out of the control group (perhaps because it wasn’t challenging enough), then this would leave poor performers in the control group and so artificially inflate the relative benefits of being in the cognitive exercises group. More people did drop out of the control group, but it isn’t clear from reading the paper if the researchers’ analysis took steps to account for the effect this might have had on the results.
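To see how much damage differential dropout can do, here is a toy simulation. The numbers are entirely my own invention, nothing to do with the actual trial: both groups are drawn from the same ability distribution, so an honest comparison should show no difference, yet selectively losing the strongest control participants manufactures a spurious “training benefit”.

```python
import random

random.seed(0)

def mean(xs):
    return sum(xs) / len(xs)

# Toy model: both groups have the SAME true ability distribution
# (IQ-like scores, mean 100, sd 15), so there is no real effect.
training = [random.gauss(100, 15) for _ in range(1000)]
control = [random.gauss(100, 15) for _ in range(1000)]

# Differential dropout: suppose the strongest performers in the control
# group get bored with the unchallenging task and leave before the
# final test (here, 60% of those scoring above 110 drop out).
control_completers = [score for score in control
                      if not (score > 110 and random.random() < 0.6)]

# Comparing completers only now shows an apparent group difference,
# even though the groups started out identical.
gap = mean(training) - mean(control_completers)
print(f"Apparent 'benefit' of training: {gap:.1f} points")
```

Intention-to-treat analyses exist precisely to guard against this: you analyse everyone who was randomised, not just those who stayed the course.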

And finally, the really impressive result from this study is the benefit for the activities of daily living scale (the benefit for other cognitive abilities perhaps isn’t too surprising). This suggests a broad benefit of the cognitive exercises, something which other studies have had difficulty showing. However, it is important to note that this outcome was based on a self-report by the participants. There wasn’t any independent or objective verification, meaning that something as simple as people feeling more confident about themselves after having completed the study could skew the results.

None of these three possible flaws means we should ignore this result, but questions like these mean that we will need follow-up research before we can be certain that cognitive training brings benefits for mental function in older adults.

For now, the implications of the current state of brain training research are:

Don’t pay money for any “brain training” programme. There isn’t any evidence that commercially available exercises have any benefit over the kinds of tasks and problems you can access for free.

Do exercise. Your brain is a machine that runs on blood, and it is never too late to improve the blood supply to the brain through increased physical activity. How long have you been on the computer? Could it be time for a brisk walk round the garden or to the shops? (Younger people, take note: exercise in youth benefits mental function in older age.)

A key feature of this study was that the exercises in the treatment group got progressively more difficult as the participants practised. The real benefit may not be from these exercises as such, but from continually facing new mental challenges. So, whatever your hobbies, perhaps – just perhaps – make sure you are learning something new as well as enjoying whatever you already know.

Read more

The original study: The Effect of an Online Cognitive Training Package in Healthy Older Adults: An Online Randomized Controlled Trial

Oliver Burkeman writes: http://www.theguardian.com/science/2014/jan/04/can-i-increase-my-brain-power

The New Yorker (2013): http://www.newyorker.com/tech/elements/brain-games-are-bogus

The Conversation

This article was originally published on The Conversation. Read the original article.

Web of illusion: how the internet affects our confidence in what we know

The internet can give us the illusion of knowledge, making us think we are smarter than we really are. Fortunately, there may be a cure for our arrogance, writes psychologist Tom Stafford.

The internet has a reputation for harbouring know-it-alls. Commenters on articles, bloggers, even your old school friends on Facebook all seem to swell with confidence in their understanding of exactly how the world works (and they are eager to share that understanding with everyone and anyone who will listen). Now, new research reveals that just having access to the world’s information can induce an illusion of overconfidence in our own wisdom. Fortunately the research also shares clues as to how that overconfidence can be corrected.

Specifically, we are looking at how the internet affects our thinking about what we know, a topic psychologists call metacognition. When you know you are boasting, you are being dishonest, but you haven’t made any actual error in estimating your ability. If you sincerely believe you know more than you do then you have made an error. The research suggests that an illusion of understanding may actually be incredibly common, and that this metacognitive error emerges in new ways in the age of the internet.

In a new paper, Matt Fisher of Yale University considers a particular type of thinking known as transactive memory, which is the idea that we rely on other people and other parts of the world – books, objects – to remember things for us. If you’ve ever left something you needed for work by the door the night before, then you’ve been using transactive memory.

Part of this phenomenon is the tendency to confuse what we really know in our personal memories with what we merely have easy access to: the knowledge that is readily available in the world, or with which we are familiar without actually understanding it in depth. It can feel like we understand how a car works, the argument goes, when in fact we are merely familiar with making it work. I press the accelerator and the car goes forward, and I neglect to realise that I don’t really know how it goes forward.

Fisher and colleagues were interested in how this tendency interacts with the internet age. They asked people to provide answers to factual questions, such as “Why are there time zones?”. Half of the participants were instructed to look up the answers on the internet before answering; half were told not to look up the answers on the internet. Next, all participants were asked how confidently they could explain the answers to a second series of questions (separate, but also factual, questions such as “Why are cloudy nights warmer?” or “How is vinegar made?”).

Sure enough, people who had just been searching the internet for information were significantly more confident about their understanding of the second set of questions. Follow-up studies confirmed that these people really did think the knowledge was theirs: they were still more confident if asked to indicate their response on a scale representing different levels of understanding with pictures of brain-scan activity (a ploy that was meant to emphasise that the information was there, in their heads). The confidence effect even persisted when the control group were provided answer material and the internet-search group were instructed to search for a site containing the exact same answer material. Something about actively searching for information on the internet specifically generated an illusion that the knowledge was in the participants’ own heads.

If the feeling of controlling information generates overconfidence in our own wisdom, it might seem that the internet is an engine for turning us all into bores. Fortunately another study, also published this year, suggests a partial cure.

Amanda Ferguson of the University of Toronto and colleagues ran a similar study, except the set-up was in reverse: they asked participants to provide answers first and, if they didn’t know them, to search the internet afterwards for the correct information (in the control condition, participants who said “I don’t know” were let off the hook and just moved on to the next question). In this set-up, people with access to the internet were actually less willing to give answers in the first place than people in the no-internet condition. For these guys, access to the internet shut them up, rather than encouraging them to claim that they knew it all. Looking more closely at their judgements, it seems the effect wasn’t simply that the fact-checking had undermined their confidence. Those who knew they could fall back on the web to check the correct answer didn’t report feeling less confident within themselves, yet they were still less likely to share the information and show off their knowledge.

So, putting people in a position where they could be fact-checked made them more cautious in their initial claims. The implication I draw from this is that one way of fighting a know-it-all, if you have the energy, is to let them know that they are going to be thoroughly checked on whether they are right or wrong. It might not stop them researching a long answer with the internet, but it should slow them down, and diminish the feeling that just because the internet knows some information, they do too.

It is frequently asked whether the internet is changing how we think. The answer, this research shows, is that the internet is giving new fuel to the way we’ve always thought. It can be a cause of overconfidence, when we mistake the boundary between what we know and what is available to us over the web, and it can be a cause of uncertainty, when we anticipate that we’ll be fact-checked using the web on the claims we make. Our tendencies to overestimate what we know, to use information that is readily available as a substitute for our own knowledge, and to worry about being caught out are all constants in how we think. The internet slots into this tangled cognitive ecosystem, from which endless new forms evolve.

This is my BBC Future column from earlier this week. The original is here

Statistical fallacy impairs post-publication mood

No scientific paper is perfect, but a recent result on the effect of mood on colour perception is getting a particularly rough ride post-publication. Thorstenson and colleagues published their paper this summer in Psychological Science, claiming that people who were sad had impaired colour perception along the blue-yellow colour axis but not along the red-green colour axis. Pubpeer – a site where scholars can anonymously discuss papers after publication – has a critique of the paper, which observes that the analysis contains a known flaw.

The flaw, anonymous comments suggest, is that a difference between the two types of colour perception is claimed, but this isn’t actually tested by the paper – instead it shows that mood significantly affects blue-yellow perception, but does not significantly affect red-green perception. If there is enough evidence that one effect is significant, but not enough evidence for the second being significant, that doesn’t mean that the two effects are different from each other. Analogously, if you can prove that one suspect was present at a crime scene, but can’t prove the other was, that doesn’t mean that you have proved that the two suspects were in different places.
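The arithmetic of the fallacy is easy to sketch. The numbers below are hypothetical (they are not taken from the Thorstenson paper), but they show how one effect can clear the p < .05 bar while a second misses it, even though the difference between the two effects is nowhere near significant:

```python
import math

def p_value(estimate, se):
    """Two-sided p-value for an estimate/standard-error pair, under a
    normal approximation to the sampling distribution."""
    z = abs(estimate / se)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Hypothetical effect sizes for mood's impact on each colour axis,
# each with its standard error.
blue_yellow, se_by = 0.20, 0.10   # clears p < .05
red_green, se_rg = 0.12, 0.10     # misses p < .05

print(p_value(blue_yellow, se_by))   # below 0.05: "significant"
print(p_value(red_green, se_rg))     # above 0.05: "non-significant"

# The comparison the claim actually requires: is the blue-yellow
# impairment BIGGER than the red-green impairment? Test the
# difference between the two effects directly.
diff = blue_yellow - red_green
se_diff = math.sqrt(se_by ** 2 + se_rg ** 2)
print(p_value(diff, se_diff))        # not remotely significant
```

One effect being significant and the other not tells you nothing about whether the two effects differ; only the direct test of the difference (the interaction) can license that claim.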

This mistake in analysis – which is far from unique to this paper – is discussed in a classic 2011 paper by Nieuwenhuis and colleagues: Erroneous analyses of interactions in neuroscience: a problem of significance. At the time of writing the sentiment on Pubpeer is that the paper should be retracted – in effect striking it from the scientific record.

With commentary like this, you can see why Pubpeer has previously been the target of legal action by aggrieved researchers who feel the site unfairly maligns their work.

(h/t to Daniël Lakens and jjodx on twitter)

UPDATE 5/11/15: It’s been retracted

How the magic of cinema unlocked one man’s coma-bound world

An Alfred Hitchcock film helped to prove one patient had been conscious while in a coma-like state for 16 years. The discovery shows that neuroscience may still have lots to learn from the ancient art of storytelling, says Tom Stafford.

If brain injury steals your consciousness then you are in a coma: we all know that. What is less well known is that there exist neighbouring states to the coma, in which victims keep their eyes open but show no signs of consciousness. The vegetative state, or ‘unresponsive wakefulness syndrome’, is one in which the patient may appear to be awake, and even goes to sleep at times, but otherwise shows no reaction to the world. Patients who do respond, but only inconsistently – such as by flinching when their name is called, or following a bright object with their eyes – are classified as being in a ‘minimally conscious state’. Both categories of patients show no signs of deliberate action, or sustained reaction to the environment, and until recently there was no way for anyone to discern their true, inner level of consciousness.

The fear is that, like the ‘locked-in syndrome’ that can occur after strokes, these patients may be conscious, but are just unable to show it. The opposite possibility is that these patients are as unconscious as someone in the deepest coma, with only circuitry peripheral to consciousness keeping their eyes open and producing minimal responses automatically.

In the last 10 years, research spearheaded by cognitive neuroscientist Adrian Owen has transformed our understanding of these shadowlands of consciousness. There is now evidence, obtained using brain scans, that some patients (around one in five) in these ‘wakeful coma’ states have conscious awareness. If asked to imagine playing tennis, the brain areas specifically controlling movement become active. If asked to imagine finding their way around their house, the brain regions involved in navigation become active. Using these signals a small minority of patients have even communicated with the outside world, with the brain scanner helping observers to mind-read their answers to questions.

The practical and ethical implications of these findings are huge, not least for the treatment of the hundreds of thousands of people who are in hospitals around the world in these conditions right now.

But the meaning of the research is still hotly debated. One issue is that the mind reading uses neural responses to questions or commands, and careful controls are needed to ensure that their patients’ brains aren’t just responding automatically without any actual conscious involvement. A second issue, and one that cannot be controlled away, is that the method used may tell us that these patients are capable of responding, but it doesn’t tell us much about the quality of conscious experience they are having. How alert, aware and focused they are is hard to discern.

In a relatively new study, Lorina Naci, a post-doctoral fellow in Owen’s lab, used cinema to show just how sophisticated conscious awareness can be in a ‘minimally conscious’ patient.

The trick they used involved an 8-minute edit of “Bang! You’re Dead”, a 1961 episode of “Alfred Hitchcock Presents”. In the film, a young boy with a toy-gun obsession wanders around aiming and firing at people. Unbeknownst to him, and the adults he aims at, on this day he has found a real gun and it has a live bullet in the chamber.

The film works because of this hidden knowledge we, the viewers, have. Knowing about the bullet, a small boy’s mundane antics become high drama, as he unwittingly involves unsuspecting people in round after deadly round of Russian roulette.

Naci showed the film to healthy participants. To a separate group she showed a scrambled version involving rearranged one-second segments. This ‘control’ version was important because it contained many of the same features as the original; the same visual patterns, the same objects, the same actions. But it lacked the crucial narrative coherence – the knowledge of the bullet – which generated the suspense.

Using brain scanning, and the comparison of the two versions of the film, Naci and colleagues were able to show that the unscrambled, suspenseful version activated nearly every part of the cortex. Everything from primary sensory areas, to motor areas, to areas involved in memory and anticipation were engaged (as you might hope from a film from one of the masters of storytelling). The researchers were particularly interested in a network of activity that rose and fell in synchrony across ‘executive’ areas of the brain – those known to be involved in planning, anticipation, and integrating information from different sources. This network, they found, responded to the moments of highest suspense in the film; the moments when the boy was about to fire, for example. These were the moments you could only find so dramatic if you were following the plot.

Next the researchers showed the film to two patients in wakeful comas. In one, the auditory cortex became activated, but nothing beyond this primary sensory region. Their brain was responding to sounds, perhaps automatically, but there was no evidence of more complex processing. But in a second patient, who had been hospitalised and non-responsive for 16 years, his brain response matched those of the healthy controls who’d seen the film. Like them, activity across the cortex rose and fell with the action of the film, indicating an inner consciousness rich enough to follow the plot.

The astounding result should make us think carefully about how we treat such patients, and it adds to the arsenal of techniques we can use to connect to the inner lives of non-responsive patients. It also shows how cognitive neuroscience can benefit from the use of more complex stimuli, such as movies, rather than the typically boring visual patterns and simple button-press responses that scientists usually use to probe the mysteries of the brain.

The genius of this research is that to test for the rich consciousness of a patient who appears unresponsive you need to use rich stimuli. The Hitchcock film was perfect because of its ability to create drama out of what we believe and expect, not out of what we merely see.

My BBC Future column from last week. The original is here. The original paper is: Naci, L., Cusack, R., Anello, M., Owen, A. M. A common neural code for similar conscious experiences in different individuals. PNAS. 2014;111(39):14277–82.

Images of ultra-thin models need your attention to make you feel bad

I have a guest post over at the BPS Research Digest, covering research on the psychological effects of pictures of ultra-thin fashion models.

A crucial question is whether the effect of these thin-ideal images is automatic. Does the comparison to the models, which is thought to be the key driver in their negative effects, happen without our intention, attention or both? Knowing the answer will tell us just how much power these images have, and also how best we might protect ourselves from them.

It’s a great study from the lab of Stephen Want (Ryerson University). For the full details of the research, head over: Images of ultra-thin models need your attention to make you feel bad

Update: Download the preprint of the paper, and the original data here