I had a piece in the Guardian on Saturday, ‘The way you’re revising may let you down in exams – and here’s why’. In it I talk about a pervasive feature of our memories: that we tend to overestimate how much of a memory is ‘ours’, and underestimate how much is actually shared with other people, or the environment (see also the illusion of explanatory depth). This memory trap can combine with our instinct to make things easy for ourselves, and result in us thinking we are learning when really we’re just flattering our feeling of familiarity with a topic.
Here’s the start of the piece:
Even the most dedicated study plan can be undone by a failure to understand how human memory works. Only when you’re aware of the trap set for us by overconfidence can you most effectively deploy the study skills you already know about.
… even the best [study] advice can be useless if you don’t realise why it works. Understanding one fundamental principle of human memory can help you avoid wasting time studying the wrong way.
I go on to give four evidence-based pieces of revision advice, all of which – I hope – use psychology to show that some of our intuitions about how to study can’t be trusted.
A few classic studies help to define the way we think about the science of learning. A classic study isn’t classic just because it uncovered a new fact, but because it neatly demonstrates a profound truth about how we learn – often at the same time showing up our unjustified assumptions about how our minds work.
My picks for five classics of learning were:
Bartlett’s “War of the Ghosts”
Skinner’s operant conditioning
work on dissociable memory systems by Larry Squire and colleagues
de Groot’s studies of expertise in chess grandmasters, and
Anders Ericsson’s work on deliberate practice (of ‘ten thousand hours’ fame)
Obviously, that’s just my choice (and you can read my reasons in the article). Did I choose right? Or is there a classic study of learning I missed? Answers in the comments.
A new trial shows the benefits of online ‘brain training’ exercises including improvements in everyday tasks, such as shopping, cooking and managing home finances.
What they actually did
A team led by Clive Ballard of King’s College London recruited people to a trial of online “brain training” exercises. Nearly 7,000 people over the age of 50 took part, and they were randomly assigned to one of three groups. One group did reasoning and problem solving tasks. A second group practised cognitive skills tasks, such as memory and attention training, and a third control group did a task which involved looking for information on the internet.
After six months, the reasoning and cognitive skills groups showed benefits compared with the control group. The main measure of the study was participants’ own reports of their ability to cope with daily activities. This was measured using something called the instrumental activities of daily living scale. (To give an example, you get a point if you are able to prepare your meals without assistance, and no points if you need help). The participants also showed benefits in short-term memory, judgements of grammatical accuracy and ability to learn new words.
Many of these benefits appeared to accrue after just three months of regular practice, completing an average of five sessions a week. The benefits also extended to those who went into the trial with the lowest performance, suggesting that such exercises may help those who are at risk of mild cognitive impairment (a precursor to dementia).
How plausible is this?
This is gold-standard research. The study was designed to the highest standards, as would be required if you were testing a new drug: a double-blind randomised controlled trial in which participants were assigned at random to the different treatment groups, and weren’t told which group they were in (nor what the researchers’ theory was). Large numbers of people took part, meaning that the study had a reasonable chance of detecting an effect of the treatment if it was there. The study design was also pre-registered on a database of clinical trials, meaning that the results couldn’t be buried if they turned out to be different from what the researchers (or funders) wanted, and the researchers declared in advance what their analysis would focus on.
So, overall, this is serious evidence that cognitive training exercises may bring some benefits, not just on similar cognitive tasks, but also on the everyday activities that are important for independent living among the older population.
This kind of research is what “brain training” needs. Too many people – including those who just want to make some money – have leapt on the idea without evidence that these kinds of tasks can benefit anything other than performance on similar tasks. Because the evidence for broad benefits of cognitive training exercises is sparse, this study makes an important contribution to the supporters’ camp, although it is far from settling the matter.
Why might you still be sceptical? Well, there are some potential flaws in this study. It is useful to speculate on the effect these flaws might have had, if only as an exercise in drawing out general lessons for interpreting this kind of research.
First up is the choice of control task. The benefits of the exercises tested in this research are only relative benefits, compared with the scores of those who carried out the control task. If a different control task had been chosen, maybe the benefits wouldn’t look so large. For example, we know that physical exercise has long-term and profound benefits for cognitive function. If the control group had been going for a brisk walk every day, maybe the relative benefits of these computerised exercises would have vanished.
Another possible distortion of the figures could have arisen as a result of people dropping out during the course of the trial. If people who were likely to score well were more likely to drop out of the control group (perhaps because it wasn’t challenging enough), then this would leave poor performers in the control group and so artificially inflate the relative benefits of being in the cognitive exercises group. More people did drop out of the control group, but it isn’t clear from reading the paper if the researchers’ analysis took steps to account for the effect this might have had on the results.
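To see how differential dropout could manufacture an apparent benefit, here is a toy simulation (all numbers invented for illustration; no connection to the actual trial data) in which the treatment has zero true effect, but above-average scorers preferentially leave the control group:

```python
import random

random.seed(0)

def apparent_benefit(n=10000, dropout_bias=0.5):
    # No true treatment effect: both groups are drawn from the
    # same score distribution (mean 100, SD 15 -- arbitrary units).
    treatment = [random.gauss(100, 15) for _ in range(n)]
    control = [random.gauss(100, 15) for _ in range(n)]
    # Above-average control participants drop out with extra probability
    # (say, because the control task wasn't challenging enough).
    completers = [s for s in control
                  if not (s > 100 and random.random() < dropout_bias)]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment) - mean(completers)

print(f"apparent benefit with biased dropout: {apparent_benefit():.1f} points")
print(f"apparent benefit with no dropout:     {apparent_benefit(dropout_bias=0):.1f} points")
```

With biased dropout the “treatment group” comes out several points ahead despite there being no effect at all, which is why an intention-to-treat style analysis of who dropped out matters.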
And finally, the really impressive result from this study is the benefit for the activities of daily living scale (the benefit for other cognitive abilities perhaps isn’t too surprising). This suggests a broad benefit of the cognitive exercises, something which other studies have had difficulty showing. However, it is important to note that this outcome was based on self-report by the participants. There wasn’t any independent or objective verification, meaning that something as simple as people feeling more confident about themselves after having completed the study could skew the results.
None of these three possible flaws means we should ignore this result, but questions like these mean that we will need follow-up research before we can be certain that cognitive training benefits mental function in older adults.
For now, the implications of the current state of brain training research are:
Don’t pay money for any “brain training” programme. There isn’t any evidence that commercially available exercises have any benefit over the kinds of tasks and problems you can access for free.
Do exercise. Your brain is a machine that runs on blood, and it is never too late to improve the blood supply to the brain through increased physical activity. How long have you been on the computer? Could it be time for a brisk walk round the garden or to the shops? (Younger people, take note: exercise in youth benefits mental function in older age.)
A key feature of this study was that the exercises in the treatment group got progressively more difficult as the participants practised. The real benefit may not be from these exercises as such, but from continually facing new mental challenges. So, whatever your hobbies, perhaps – just perhaps – make sure you are learning something new as well as enjoying whatever you already know.
Want to enhance your memory for facts? Tom Stafford explains a counterintuitive method for retaining information.
If I asked you to sit down and remember a list of phone numbers or a series of facts, how would you go about it? There’s a fair chance that you’d be doing it wrong.
One of the interesting things about the mind is that even though we all have one, we don’t have perfect insight into how to get the best from it. This is in part because of flaws in our ability to think about our own thinking, which is called metacognition. Studying this self-reflective thought process reveals that the human species has mental blind spots.
One area where these blind spots are particularly large is learning. We’re actually surprisingly bad at having insight into how we learn best.
Researchers Jeffrey Karpicke and Henry Roediger III set out to look at one aspect: how testing can consolidate our memory of facts. In their experiment they asked college students to learn pairs of Swahili and English words. So, for example, they had to learn that if they were given the Swahili word ‘mashua’ the correct response was ‘boat’. They could have used the sort of facts you might get on a high-school quiz (e.g. “Who wrote the first computer programs?”/”Ada Lovelace”), but the use of Swahili meant that there was little chance their participants could use any background knowledge to help them learn. After the pairs had all been learnt, there would be a final test a week later.
Now if many of us were revising this list we might study the list, test ourselves and then repeat this cycle, dropping items we got right. This makes studying (and testing) quicker and allows us to focus our effort on the things we haven’t yet learnt. It’s a plan that seems to make perfect sense, but it’s a plan that is disastrous if we really want to learn properly.
Karpicke and Roediger asked students to prepare for a test in various ways, and compared their success – for example, one group kept testing themselves on all items without dropping what they were getting right, while another group stopped testing themselves on their correct answers.
On the final exam differences between the groups were dramatic. While dropping items from study didn’t have much of an effect, the people who dropped items from testing performed relatively poorly: they could only remember about 35% of the word pairs, compared to 80% for people who kept testing items after they had learnt them.
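The 35% vs 80% figures come from the study itself, but the underlying idea – that successful retrieval strengthens a memory far more than restudying it – can be sketched as a toy simulation. Every parameter below is invented for illustration; this is not a model fitted to Karpicke and Roediger’s data:

```python
import math
import random

random.seed(1)

# Invented toy parameters: retrieval gives a big boost to memory
# strength, restudy a small one, and memories decay over a week.
STUDY_BOOST, TEST_BOOST, RETENTION = 0.2, 1.0, 0.4

def recall_prob(strength):
    # Map memory strength onto a 0-1 probability of recall.
    return 1 - math.exp(-strength)

def week_later_recall(keep_testing, items=1000, sessions=4):
    total = 0.0
    for _ in range(items):
        strength = 1.0  # every word pair has been learnt once
        for _ in range(sessions):
            if keep_testing and random.random() < recall_prob(strength):
                strength += TEST_BOOST   # successful retrieval: big boost
            else:
                strength += STUDY_BOOST  # restudy only: small boost
        total += recall_prob(strength * RETENTION)  # a week's decay
    return total / items

print(round(week_later_recall(True), 2), round(week_later_recall(False), 2))
```

Even in this crude model, the group that keeps testing learnt items ends up remembering substantially more a week later than the group that drops them from testing.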
It seems the effective way to learn is to practise retrieving items from memory, not to try to cement them in there by further study. Moreover, dropping items entirely from your revision – the advice given by many study guides – is wrong. You can stop studying items once you’ve learnt them, but you should keep testing what you’ve learnt if you want to remember it come the final exam.
Finally, the researchers had the neat idea of asking their participants how well they would remember what they had learnt. All groups guessed at about 50%. This was a large overestimate for those who dropped items from testing (and an underestimate for those who kept testing learnt items).
So it seems that we have a metacognitive blind spot for which revision strategies will work best, making this a situation where we need to be guided by the evidence, not our instincts. But the evidence has a moral for teachers as well: there’s more to testing than finding out what students know – tests can also help us remember.
Released on 6th of June 1984, Tetris is 30 years old today. Here’s a video where I try and explain something of the psychology of Tetris:
All credit for the graphics to Andrew Twist. What I say in the video is based on an article I wrote a while back for BBC Future.
As well as hijacking the minds and twitchy fingers of puzzle-gamers for 30 years, Tetris has also been involved in some important psychological research.
My favourite is Kirsh and Maglio’s work on “epistemic action”, which showed how Tetris players prefer to rotate the blocks in the game world rather than mentally. This use of the world in synchrony with your mental representations is, I argue, part of what makes the game so immersive.
The BBC is reporting that a UK teachers union “is calling for urgent action over the impact of modern technology on children’s ability to learn” and that “some pupils were unable to concentrate or socialise properly” due to what they perceive as ‘over-use’ of digital technology.
Thanks to evidence reviewed by neuroscientist Kathryn Mills in a recent paper (pdf), we know that we’ve really got no reason to worry about technology having adverse effects on kids’ brains.
It may not be that the teachers’ union is completely mistaken, however. They may be onto something – just not what they think they’re onto.
To make sense of the confusion, you need to check out an elegant study by psychologists Robert Weis and Brittany Cerankosky, who decided to test the psychological effects of giving young boys video game consoles.
They asked for families to take part who did not already have a video-game system in their home, had a parent interested in purchasing one, and whose child had no history of developmental, behavioural, medical, or learning problems.
They ran a randomised controlled trial, or RCT, in which 6- to 9-year-old boys were first given neuropsychological tests to measure their cognitive abilities (memory, concentration and problem-solving) and then randomly assigned either to get a video games console straight away or to a control group.
The families in the control group were promised a console at the end of the study, by the way, so they didn’t think ‘oh sod it’ and go and buy one anyway.
So, we had half the kids with a brand spanking new console. As part of the trial, the amount of time the kids spent gaming and doing their school work was measured throughout, as was any reporting of behavioural problems. At the end of the study their academic progress was assessed and their cognitive abilities were tested again.
The results were clear: kids who got video game consoles were worse off academically compared to their non-console-owning peers – their progress in reading and writing had suffered.
But this wasn’t due to an impact on their concentration, memory, problem-solving or behaviour – their neuropsychological and social performance was completely unaffected.
By looking at how much time the kids spent on the consoles, the researchers found that the reduced academic performance was down to kids in the console-owning families spending less time doing their homework.
In other words, if your kids play a lot of computer games instead of doing homework they may well appear worse off, and from the teachers’ point-of-view, might seem a little slowed-down compared to their peers, but this is not due to cognitive changes.
Interestingly, teachers may not be in the best position to see this distinction, because they tend, like the rest of us, to measure ability by performance on the tasks they set rather than by neuropsychological test performance.
The solution is not to panic about technology, as the same conclusion probably applies to anything that displaces homework (too many piano lessons would have the same effect), but good parental management of out-of-school time is clearly important.
Link to locked study on the effects of video games.
In the present study, we analyzed data from a very large sample (N = 854,064) of players of an online game involving rapid perception, decision making, and motor responding. Use of game data allowed us to connect, for the first time, rich details of training history with measures of performance from participants engaged for a sustained amount of time in effortful practice. We showed that lawful relations exist between practice amount and subsequent performance, and between practice spacing and subsequent performance. Our methodology allowed an in situ confirmation of results long established in the experimental literature on skill acquisition. Additionally, we showed that greater initial variation in performance is linked to higher subsequent performance, a result we link to the exploration/exploitation trade-off from the computational framework of reinforcement learning.
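The “lawful relation” between practice amount and performance referred to here is classically a power law: completion time falls as a power function of the number of trials. As a sketch, here is how such a curve can be fitted by least squares in log-log space – the data points below are hypothetical numbers for illustration, not the game data from the study:

```python
import math

# Hypothetical practice data: trial counts and completion times (seconds),
# invented to roughly follow time = a * trials**(-b).
trials = [1, 10, 100, 1000, 10000]
times = [5.0, 3.1, 2.0, 1.25, 0.8]

# A power law is a straight line in log-log space: log(time) =
# log(a) - b * log(trials), so fit a line by ordinary least squares.
xs = [math.log(t) for t in trials]
ys = [math.log(s) for s in times]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = -slope                       # learning-rate exponent
a = math.exp(mean_y - slope * mean_x)  # initial performance level

print(f"power law fit: time = {a:.2f} * trials^(-{b:.3f})")
```

On the toy data the fit recovers an exponent of roughly 0.2; the point of the in situ game study is that such lawful curves also emerge from naturalistic play at massive scale.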