Spaced repetition is a memory hack. We know that spacing out your study is more effective than cramming, but using an app you can tailor your own spaced repetition schedule, allowing you to efficiently create reliable memories for any material you like.
Michael Nielsen has a nice thread on his use of spaced repetition on Twitter:
The use of spaced repetition memory systems has changed my life over the past couple of years. Here's a few things I've found helpful:
He covers how he chooses what to put into his review system, what the right amount of information is for each item, and what memory alone won’t give you (understanding of the process which uses the memorised items). Nielsen is pretty enthusiastic about the benefits:
The single biggest change is that memory is no longer a haphazard event, to be left to chance. Rather, I can guarantee I will remember something, with minimal effort: it makes memory a choice.
There are lots of apps/programmes which can help you run a spaced repetition system, but Nielsen used Anki (ankiweb.net), which is open source, and has desktop and mobile clients (which sync between themselves, which is useful if you want to add information while at a computer, then review it on your mobile while you wait in line for coffee or whatever).
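Anki's actual scheduler is more elaborate (a variant of SuperMemo's SM-2 algorithm), but the core idea is simple: each successful recall multiplies the gap before the next review, and a failed recall resets it. A minimal sketch of that idea, with my own simplified numbers rather than Anki's real parameters:

```python
def next_review(interval_days: float, ease: float, recalled: bool):
    """Toy spaced-repetition update (SM-2-like, heavily simplified).

    interval_days: current gap between reviews, in days
    ease: multiplier controlling how fast the gap grows
    recalled: whether the card was answered correctly
    """
    if recalled:
        interval_days = max(1.0, interval_days) * ease  # grow the gap
        ease = min(ease + 0.1, 3.0)                     # card is getting easier
    else:
        interval_days = 1.0                             # relearn from scratch
        ease = max(ease - 0.2, 1.3)                     # review more often
    return interval_days, ease

# A card recalled correctly on three consecutive reviews: the gaps
# stretch out, so well-known cards cost almost no review time.
interval, ease = 1.0, 2.5
for _ in range(3):
    interval, ease = next_review(interval, ease, recalled=True)
    print(f"see it again in {interval:.1f} days")
```

The expanding intervals are what make the system efficient: you are always tested just as a memory is about to fade, and material you know well barely takes up any of your review session.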
Checking Anki out, it seems pretty nice, and I’ve realised I can use it to overcome a cognitive bias we all suffer from: a tendency to forget facts which are inconvenient for our beliefs.
Charles Darwin notes this in his autobiography:
“I had, also, during many years, followed a golden rule, namely, that whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from the memory than favourable ones. Owing to this habit, very few objections were raised against my views which I had not at least noticed and attempted to answer.”
(Darwin, 1856/1958, p123).
I have notebooks, and I share Darwin’s habit of forgetting “unfavourable” facts, but I wonder if my thinking might be improved by not just noting those facts, but keeping them in memory – using a spaced repetition system. I’m going to give it a go.
For more on the science, see this recent review for educators: Weinstein, Y., Madan, C. R., & Sumeracki, M. A. (2018). Teaching the science of learning. Cognitive research: principles and implications, 3(1), 2.
I note that Anki-based spaced repetition also does a side serving of retrieval practice and interleaving (other effective learning techniques).
It’s a great illustrated read about the scientific history of the ideas behind ‘persuasive technology’, and ends with a plea that perhaps we can hijack our weakness for variable reward schedules for better ends:
What if we set up a variable reward system to reward ourselves for the time spent away from our phones & physically connecting with others? Even time spent meditating or reading without technological distractions is a heroic endeavor worthy of a prize
Which isn’t a bad idea, but the pattern of the reward schedule is only one factor in what makes an activity habit forming. The timing of a reward is more important than its reliability – it’s easier to train in habits with immediate rather than delayed rewards. The timing is so crucial that in the animal learning literature even a delay of 2 seconds between a lever press and the delivery of a food pellet impairs learning in rats. In experiments we did with humans, a delay of 150ms was enough to hinder our participants connecting their own actions with a training signal.
So the dilemma for persuasive technology, and anyone who wants to free themselves from its hold, is not just how phones/emails/social media structure our rewards, but also the fact that they allow gratification at almost any moment. There are always new notifications, new news, and so phones let us have zero delay for the reward of checking our phones. If you want to focus on other things – being a successful parent, friend or human – the delays on those rewards are far larger (not to mention more nebulous).
The way I like to think about it is the conflict between the impatient, narrow, smaller self – the self that likes sweets and gossip and all things immediate gratification – and the wider, wiser self – the self that invests in the future and cares about the bigger picture. That self can win out, does win out as we make our stumbling journey into adulthood, but my hunch is we’re going to need a different framework from the one of reinforcement learning to do it.
George Ainslie’s book Breakdown of Will is what happens if you go so deep into the reinforcement learning paradigm you explode its reductionism and reinvent the notion of the self. Mind-alteringly good.
Putting a student at the centre of their own learning seems like fundamental pedagogy. The Constructivist approach to education emphasises the need for knowledge to be reassembled in the mind of the learner, and the related impossibility of its direct transmission from the mind of the teacher. Believe this, and student input into how they learn must follow.
At the same time, we know there is a deep neurobiological connection between the machinery of reward in our brain, and that of learning. Both functions seem to be entangled in the subcortical circuitry of a network known as the basal ganglia. It’s perhaps not surprising that curiosity, which we all know personally to be a powerful motivator of learning, activates the same subcortical circuitry involved in the pleasurable anticipation of reward. Further, curiosity enhances memory, even for things you learn while your curiosity is aroused about something else.
This neurobiological alignment of enjoyment and learning isn’t mere coincidence. When building learning algorithms for embedding in learning robots, the basic rules of learning from experience have to be augmented with a drive to explore – curiosity! – so that they don’t become stuck repeating suboptimal habits. Whether it is motivated by curiosity or other factors, exploration seems to support enhanced learning in a range of domains from simple skills to more complex ideas.
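The “drive to explore” added to learning algorithms can be as simple as occasionally acting at random. A toy illustration of my own construction (the task and numbers are invented for this sketch, not taken from any study mentioned here): an agent learning a two-armed bandit by trial and error gets stuck repeating a suboptimal habit when it is purely greedy, but a little curiosity-like exploration lets it find the better option.

```python
import random

def run_agent(epsilon: float, steps: int = 5000, seed: int = 0) -> int:
    """Two-armed bandit: arm 0 pays off 30% of the time, arm 1 pays 70%.

    A greedy agent (epsilon=0) latches onto whichever arm looks good
    early on; with exploration (epsilon>0) it samples both arms and
    discovers the better one. Returns the arm the agent ends up preferring.
    """
    rng = random.Random(seed)
    payoff = [0.3, 0.7]                # true reward probabilities
    value = [0.0, 0.0]                 # running reward estimates
    counts = [0, 0]
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(2)     # explore: try a random arm
        else:
            arm = 0 if value[0] >= value[1] else 1  # exploit best estimate
        reward = 1.0 if rng.random() < payoff[arm] else 0.0
        counts[arm] += 1
        value[arm] += (reward - value[arm]) / counts[arm]  # incremental mean
    return 0 if value[0] >= value[1] else 1

# The greedy agent never tries arm 1, so it can never learn arm 1 is
# better; the exploring agent samples both and settles on the good arm.
```

The design point is the one in the paragraph above: without exploration the agent’s own early habits determine everything it ever experiences, so suboptimal habits are self-sustaining.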
Obviously we learn best when motivated, and when learning is fun, and letting students explore their curiosity serves both. However, putting the trajectory of their experience into students’ hands can go awry.
False beliefs impede learning
One reason is false beliefs about how much we know, or how we learn best. Psychologists studying memory have long documented such metacognitive errors, which include overconfidence, and a mistaken reliance on our familiarity with a thing as a guide to how well we understand it, or how well we’ll be able to recall it when tested (recognition and recall are in fact different cognitive processes). Sure enough, when tested in experiments people will over-rely on ineffective study strategies (like rereading, or reviewing the answers to questions, rather than testing their ability to generate the answers from the questions). Cramming is another ineffective study strategy, with experiment after experiment showing the benefit of spreading out your study rather than massing it all together. Obviously this requires being more organised, but my belief is that a metacognitive error supports students’ over-reliance on cramming – cramming feels good, because, for a moment, you feel familiar with all the information. The problem is that this feel-good familiarity isn’t the kind of memory that will support recall in an exam, but immature learners often don’t realise the extent of that.
In agreement with these findings from psychologists, education scholars have reacted against pure student-led or discovery learning, with one review summarising the findings from multiple distinct research programmes taking place over three decades: “In each case, guided discovery was more effective than pure discovery in helping students learn and transfer”.
The solution: balancing guided and discovery learning
This leaves us at a classic “middle way”, where pure student-led or teacher-led learning is ruled out. Some kind of guided exploration, structured study, or student choice in learning is obviously a necessity, but we’re not sure how much.
There’s an exciting future for research which informs us what the right blend of guided and discovery learning is, and which students and topics suit which exact blend. One strand of this is to take the cognitive psychology experiments which demonstrate a benefit of active choice learning over passive instruction and to tweak them so that we can see when passive instruction can be used to jump-start or augment active choice learning. One experiment from Kyle MacDonald and Michael Frank of Stanford University used a highly abstract concept learning task in which participants use trial and error to figure out a categorisation of different shapes. Previous research had shown that people learned faster if they were allowed to choose their own examples to receive feedback on, but this latest iteration of the experiment from MacDonald and Frank showed that an initial session of passive learning, where the examples were chosen for the learner, boosted performance even further. Presumably this effect is due to the scaffolding in the structure of the concept-space that the passive learning gives the learner. This experiment, and myriad others like it, make it possible to show when and how active learning and instructor-led learning can be blended.
Education is about more than students learning the material on the syllabus. There is a meta-goal of producing students who are better able to learn for themselves. The same cognitive machinery in all of us might push us towards less effective strategies. The simple fact of being located within our own selfish consciousness means that even the best performers in the world need a coach to help them learn. But as we mature we can learn to better avoid pitfalls in our learning and evolve into better self-determining students. Ultimately the best education needs to keep its focus on that need to help each of us take on more and more responsibility for how we learn, whether that means submitting to others’ choices or exploring things for ourselves – or, often, a bit of both.
You’ve probably heard of “brain training exercises” – puzzles, tasks and drills which claim to keep you mentally agile. Maybe, especially if you’re an older person, you’ve even bought the book, or the app, in the hope of staving off mental decline. The idea of brain training has widespread currency, but is that due to science, or empty marketing?
The review team, led by Dan Simons of the University of Illinois, set out to inspect all the literature which brain training companies cited in their promotional material – in effect, taking them at their word, with the rationale that the best evidence in support of brain training exercises would be that cited by the companies promoting them.
The chairman says it works
A major finding of the review is the poverty of the supporting evidence for these supposedly scientific exercises. Simons’ team found that half of the brain training companies that promoted their products as being scientifically validated didn’t cite any peer-reviewed journal articles, relying instead on things like testimonials from scientists (including the company founders). Of the companies which did cite evidence for brain training, many cited general research on neuroplasticity, but nothing directly relevant to the effectiveness of what they promote.
The key issue for claims around brain training is whether practising these exercises will help you in general, or on unrelated tasks. Nobody doubts that practising a crossword will help you get better at crosswords, but will it improve your memory, your IQ or your ability to skim read email? Such effects are called transfer effects, and so-called “far transfer” (transfer to a very different task than that trained) is the ultimate goal of brain training studies. What we know about transfer effects is reviewed in Simons’ paper.
As well as trawling the company websites, the reviewers inspected a list provided by an industry group (Cognitive Training Data) of some 132 scientific papers claiming to support the efficacy of brain training. Of these, 106 reported new data (rather than being reviews themselves). Of those 106, 71 used a proper control group, so that the effects of the brain training could be isolated. Of those 71, only 49 had a so-called “active control” group, in which the control participants actually did something rather than being ignored by the researchers. (An active control is important if you want to distinguish the benefit of your treatment from the benefits of expectation or responding to researchers’ attentions.) Of these 49, about half of the results came from just six studies.
Overall, the reviewers conclude, no study which is cited in support of brain training products meets the gold standard for best research practices, and few even approached the standard of a good randomised control trial (although note their cut off for considering papers missed this paper from late last year).
A bit premature
The implications, they argue, are that claims for general benefits of brain training are premature. There’s excellent evidence for benefits of training specific to the task trained on, they conclude, less evidence for enhancement on closely related tasks and little evidence that brain training enhances performance on distantly related tasks or everyday cognitive performance.
The flaws in the studies supporting the benefits of brain training aren’t unique to the study of brain training. Good research is hard and all studies have flaws. Assembling convincing evidence for a treatment takes years, with evidence required from multiple studies and from different types of studies. Indeed, it may yet be that some kind of cognitive training can be shown to have the general benefits that are hoped for from existing brain training exercises. What this review shows is not that brain training can’t work, merely that promotion of brain training exercises is – at the very least – premature based on the current scientific evidence.
Yet in a 2014 survey of US adults, over 50% had heard of brain training exercises and gave some credence to their performance-enhancing powers. Even the name “brain training”, the authors of the review admit, is a concession to marketing – this is how people know these exercises, despite their development having little to do with the brain directly.
The widespread currency of brain training isn’t because of overwhelming evidence of benefits from neuroscience and psychological science, as the review shows, but it does rely on the appearance of being scientifically supported. The billion-dollar market in brain training is parasitic on the credibility of neuroscience and psychology. It also taps into our lazy desire to address complex problems with simple, purchasable, solutions (something written about at length by Ben Goldacre in his book Bad Science).
The Simons review ends with recommendations for researchers into brain training, and for journalists reporting on the topic. My favourite was their emphasis that any treatment needs to be considered for its costs, as well as its benefits. By this standard there is no commercial brain training product which has been shown to have greater benefits than something you can do for free. Also important is the opportunity cost: what could you be doing in the time you invest in brain training? The reviewers deliberately decided to focus on brain training, so they didn’t cover the proven and widespread benefits of exercise for mental function, but I’m happy to tell you now that a brisk walk round the park with a friend is not only free, and not only more fun, but has better scientific support for its cognitive-enhancing powers than all the brain training products which are commercially available.
I had a piece in the Guardian on Saturday, ‘The way you’re revising may let you down in exams – and here’s why’. In it I talk about a pervasive feature of our memories: that we tend to overestimate how much of a memory is ‘ours’, and how little is actually shared with other people, or the environment (see also the illusion of explanatory depth). This memory trap can combine with our instinct to make things easy for ourselves and result in us thinking we are learning when really we’re just flattering our feeling of familiarity with a topic.
Here’s the start of the piece:
Even the most dedicated study plan can be undone by a failure to understand how human memory works. Only when you’re aware of the trap set for us by overconfidence, can you most effectively deploy the study skills you already know about.
… even the best [study] advice can be useless if you don’t realise why it works. Understanding one fundamental principle of human memory can help you avoid wasting time studying the wrong way.
I go on to give four evidence-based pieces of revision advice, all of which – I hope – use psychology to show that some of our intuitions about how to study can’t be trusted.
A few classic studies help to define the way we think about the science of learning. A classic study isn’t classic just because it uncovered a new fact, but because it neatly demonstrates a profound truth about how we learn – often at the same time showing up our unjustified assumptions about how our minds work.
My picks for five classics of learning were:
Bartlett’s “War of the Ghosts”
Skinner’s operant conditioning
work on dissociable memory systems by Larry Squire and colleagues
de Groot’s studies of expertise in chess grandmasters, and ….
Anders Ericsson’s work on deliberate practice (of ‘ten thousand hours’ fame)
Obviously, that’s just my choice (and you can read my reasons in the article). Did I choose right? Or is there a classic study of learning I missed? Answers in the comments.
A new trial shows the benefits of online ‘brain training’ exercises including improvements in everyday tasks, such as shopping, cooking and managing home finances.
What they actually did
A team led by Clive Ballard of King’s College London recruited people to a trial of online “brain training” exercises. Nearly 7,000 people over the age of 50 took part, and they were randomly assigned to one of three groups. One group did reasoning and problem solving tasks. A second group practised cognitive skills tasks, such as memory and attention training, and a third control group did a task which involved looking for information on the internet.
After six months, the reasoning and cognitive skills groups showed benefits compared with the control group. The main measure of the study was participants’ own reports of their ability to cope with daily activities. This was measured using something called the instrumental activities of daily living scale. (To give an example, you get a point if you are able to prepare your meals without assistance, and no points if you need help). The participants also showed benefits in short-term memory, judgements of grammatical accuracy and ability to learn new words.
Many of these benefits looked as if they accrued after just three months of regular practice, completing an average of five sessions a week. The benefits also seemed to affect those who went into the trial with the lowest performance, suggesting that such exercises may benefit those who are at risk of mild cognitive impairment (a precursor to dementia).
How plausible is this?
This is gold-standard research. The study was designed to the highest standards, as would be required if you were testing a new drug: a double-blind randomised control trial in which participants were assigned at random to the different treatment groups, and weren’t told which group they were in (nor what the researcher’s theory was). Large numbers of people took part, meaning that the study had a reasonable chance of detecting an effect of the treatment if it was there. The study design was also pre-registered on a database of clinical trials, meaning that the results couldn’t be buried if they turned out to be different from what the researchers (or funders) wanted, and the researchers declared in advance what their analysis would focus on.
So, overall, this is serious evidence that cognitive training exercises may bring some benefits, not just on similar cognitive tasks, but also on the everyday activities that are important for independent living among the older population.
This kind of research is what “brain training” needs. Too many people – including those who just want to make some money – have leapt on the idea without the evidence that these kind of tasks can benefit anything other than performance on similar tasks. Because the evidence for broad benefits of cognitive training exercises is sparse, this study makes an important contribution to the supporters’ camp, although it far from settles the matter.
Why might you still be sceptical? Well there are some potential flaws in this study. It is useful to speculate on the effect these flaws might have had, even if only as an exercise to draw out the general lessons for interpreting this kind of research.
First up is the choice of control task. The benefits of the exercises tested in this research are only relative benefits compared with the scores of those who carried out the control task. If a different control task had been chosen maybe the benefits wouldn’t look so large. For example, we know that physical exercise has long-term and profound benefits for cognitive function. If the control group had been going for a brisk walk everyday, maybe the relative benefits of these computerised exercises would have vanished.
Another possible distortion of the figures could have arisen as a result of people dropping out during the course of the trial. If people who were likely to score well were more likely to drop out of the control group (perhaps because it wasn’t challenging enough), then this would leave poor performers in the control group and so artificially inflate the relative benefits of being in the cognitive exercises group. More people did drop out of the control group, but it isn’t clear from reading the paper if the researchers’ analysis took steps to account for the effect this might have had on the results.
And finally, the really impressive result from this study is the benefit for the activities of daily living scale (the benefit for other cognitive abilities perhaps isn’t too surprising). This suggests a broad benefit of the cognitive exercises, something which other studies have had difficulty showing. However, it is important to note that this outcome was based on a self-report by the participants. There wasn’t any independent or objective verification, meaning that something as simple as people feeling more confident about themselves after having completed the study could skew the results.
None of these three possible flaws mean we should ignore this result, but questions like these mean that we will need follow up research before we can be certain that cognitive training brings benefits on mental function in older adults.
For now, the implications of the current state of brain training research are:
Don’t pay money for any “brain training” programme. There isn’t any evidence that commercially available exercises have any benefit over the kinds of tasks and problems you can access for free.
Do exercise. Your brain is a machine that runs on blood, and it is never too late to improve the blood supply to the brain through increased physical activity. How long have you been on the computer? Could it be time for a brisk walk round the garden or to the shops? (Younger people, take note: exercise in youth benefits mental function in older age.)
A key feature of this study was that the exercises in the treatment group got progressively more difficult as the participants practised. The real benefit may not be from these exercises as such, but from continually facing new mental challenges. So, whatever your hobbies, perhaps – just perhaps – make sure you are learning something new as well as enjoying whatever you already know.
Want to enhance your memory for facts? Tom Stafford explains a counterintuitive method for retaining information.
If I asked you to sit down and remember a list of phone numbers or a series of facts, how would you go about it? There’s a fair chance that you’d be doing it wrong.
One of the interesting things about the mind is that even though we all have one, we don’t have perfect insight into how to get the best from it. This is in part because of flaws in our ability to think about our own thinking, which is called metacognition. Studying this self-reflective thought process reveals that the human species has mental blind spots.
One area where these blind spots are particularly large is learning. We’re actually surprisingly bad at having insight into how we learn best.
Researchers Jeffrey Karpicke and Henry Roediger III set out to look at one aspect: how testing can consolidate our memory of facts. In their experiment they asked college students to learn pairs of Swahili and English words. So, for example, they had to learn that if they were given the Swahili word ‘mashua’ the correct response was ‘boat’. They could have used the sort of facts you might get on a high-school quiz (e.g. “Who wrote the first computer programs?”/”Ada Lovelace”), but the use of Swahili meant that there was little chance their participants could use any background knowledge to help them learn. After the pairs had all been learnt, there would be a final test a week later.
Now if many of us were revising this list we might study the list, test ourselves and then repeat this cycle, dropping items we got right. This makes studying (and testing) quicker and allows us to focus our effort on the things we haven’t yet learnt. It’s a plan that seems to make perfect sense, but it’s a plan that is disastrous if we really want to learn properly.
Karpicke and Roediger asked students to prepare for a test in various ways, and compared their success – for example, one group kept testing themselves on all items without dropping what they were getting right, while another group stopped testing themselves on their correct answers.
On the final exam differences between the groups were dramatic. While dropping items from study didn’t have much of an effect, the people who dropped items from testing performed relatively poorly: they could only remember about 35% of the word pairs, compared to 80% for people who kept testing items after they had learnt them.
It seems the effective way to learn is to practice retrieving items from memory, not trying to cement them in there by further study. Moreover, dropping items entirely from your revision, which is the advice given by many study guides, is wrong. You can stop studying them if you’ve learnt them, but you should keep testing what you’ve learnt if you want to remember them at the time of the final exam.
Finally, the researchers had the neat idea of asking their participants how well they would remember what they had learnt. All groups guessed at about 50%. This was a large overestimate for those who dropped items from testing (and an underestimate for those who kept testing learnt items).
So it seems that we have a metacognitive blind spot for which revision strategies will work best, making this a situation where we need to be guided by the evidence, and not our instinct. But the evidence has a moral for teachers as well: there’s more to testing than finding out what students know – tests can also help us remember.
Released on 6th of June 1984, Tetris is 30 years old today. Here’s a video where I try and explain something of the psychology of Tetris:
All credit for the graphics to Andrew Twist. What I say in the video is based on an article I wrote a while back for BBC Future.
As well as hijacking the minds and twitchy fingers of puzzle-gamers for 30 years, Tetris has also been involved in some important psychological research.
My favourite is Kirsh and Maglio’s work on “epistemic action”, which showed how Tetris players prefer to rotate the blocks in the game world rather than mentally. This use of the world in synchrony with your mental representations is, I argue, part of what makes the game so immersive.
The BBC is reporting that a UK teachers union “is calling for urgent action over the impact of modern technology on children’s ability to learn” and that “some pupils were unable to concentrate or socialise properly” due to what they perceive as ‘over-use’ of digital technology.
Thanks to evidence reviewed by neuroscientist Kathryn Mills in a recent paper (pdf), we know that we’ve really got no reason to worry about technology having adverse effects on kids’ brains.
It may not be that the teachers’ union is completely mistaken, however. They may be on to something but maybe just not what they think they’re onto.
To make sense of the confusion, you need to check out an elegant study completed by psychologists Robert Weis and Brittany Cerankosky who decided to test the psychological effects of giving young boys video game consoles.
They asked for families to take part who did not have a video-game system already in their home, had a parent interested in purchasing a system for their use, and where the kid had no history of developmental, behavioural, medical, or learning problems.
They ran a randomised controlled trial, or RCT, where 6 to 9-year-old boys were first given neuropsychological tests to measure their cognitive abilities (memory, concentration and problem-solving) and then randomly assigned to get a video game console.
The families in the control group were promised a console at the end of the study, by the way, so they didn’t think ‘oh sod it’ and go and buy one anyway.
So, we have half the kids with a spanking brand-new console, and, as part of the trial, the amount of time the kids spent gaming and doing their school work was measured throughout, as was any reporting of behavioural problems. At the end of the study their academic progress was measured and their cognitive abilities were tested again.
The results were clear: kids who got video game consoles were worse off academically compared to their non-console-owning peers – their progress in reading and writing had suffered.
But this wasn’t due to an impact on their concentration, memory, problem-solving or behaviour – their neuropsychological and social performance was completely unaffected.
By looking at how much time the kids spent on the consoles, they found that reduced academic performance was due to the fact that kids in the console-owning families started spending less time doing their homework.
In other words, if your kids play a lot of computer games instead of doing homework they may well appear worse off, and from the teachers’ point-of-view, might seem a little slowed-down compared to their peers, but this is not due to cognitive changes.
Interestingly, teachers may not be in the best position to see this distinction very well because they tend, like the rest of us, to measure ability by performance in the tasks they set and not in comparison to neuropsychological test performance.
The solution is not to panic about technology, as this same conclusion probably applies to anything that displaces homework (too many piano lessons will have the same effect), but good parental management of out-of-school time is clearly important.
Link to locked study on the effects of video games.
In the present study, we analyzed data from a very large sample (N = 854,064) of players of an online game involving rapid perception, decision making, and motor responding. Use of game data allowed us to connect, for the first time, rich details of training history with measures of performance from participants engaged for a sustained amount of time in effortful practice. We showed that lawful relations exist between practice amount and subsequent performance, and between practice spacing and subsequent performance. Our methodology allowed an in situ confirmation of results long established in the experimental literature on skill acquisition. Additionally, we showed that greater initial variation in performance is linked to higher subsequent performance, a result we link to the exploration/exploitation trade-off from the computational framework of reinforcement learning.
From touching wood for good luck, to walking around ladders to avoid bad luck, we all have little routines or superstitions, which make little sense when you stop to think about them. And they are not always done to bring us luck. I wait until just after the kettle has boiled to pour the water for a cup of tea, rather than pouring just before it boils. I do not know why I feel the need to do this, I am sure it cannot make a difference to the drink.
So, why do I and others repeat these curious habits? Behind the seemingly irrational acts of kettle boiling, ball bouncing or stomach slapping lies something that tells us about what makes animals succeed in their continuing evolutionary struggles.
We refer to something that we do without thinking as being a habit. This is precisely why habits are useful – they do not take up mental effort. Our brains have mechanisms for acquiring new routines, and part of what makes us, and other creatures, successful is the ability to create these habits.
Even pigeons can develop superstitious habits, as psychologist B. F. Skinner famously showed in an experiment. Skinner would begin a lecture by placing a pigeon in a cage with an automatic feeder that delivered a food pellet every 15 seconds. At the start of the lecture Skinner would let the audience observe the ordinary, passive behaviour of the pigeon, before covering the box. After fifty minutes he would uncover the box and show that different pigeons developed different behaviours. One bird would be turning counter-clockwise three times before looking in the food basket, another would be thrusting its head into the top left corner. In other words, all pigeons struck upon some particular ritual that they would do over and over again.
Skinner’s explanation for this strange behaviour is as straightforward as it is ingenious. Although we know the food is delivered regardless of the pigeon’s behaviour, the pigeon doesn’t know this. So imagine yourself in the position of the pigeon; your brain knows very little about the world of men, or cages, or automatic food dispensers. You strut around your cage for a while, you decide to turn counter-clockwise three times, and right at that moment some food appears. What should you do to make that happen again? The obvious answer is that you should repeat what you have just been doing. You repeat that action and – lo! – it works, food arrives.
From this seed, argued Skinner, superstition develops. Superstitions take over behaviour because our brains try and repeat whatever actions precede success, even if we cannot see how they have had their influence. Faced with the choice of figuring out how the world works and calculating the best outcome (which is the sensible rational thing to do), or repeating whatever you did last time before something good happened, we are far more likely to choose the latter. Or to put it another way: “if it ain’t broke, don’t fix it”, regardless of the cause.
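This credit-assignment process is easy to simulate. The sketch below is my own toy illustration (not Skinner’s procedure, and the action names are invented): an agent whose only rule is “prefer whatever you did just before food arrived”, fed on a fixed timer regardless of what it does. One arbitrary action typically snowballs into a dominant “ritual”.

```python
import random

# Toy sketch of superstitious conditioning: food arrives on a timer,
# independent of behaviour, but whatever action happened to come just
# before it gets the credit. Action names are purely illustrative.
ACTIONS = ["turn_left", "peck_corner", "bob_head", "flap"]

def run(steps=3000, feed_interval=15, seed=0):
    rng = random.Random(seed)
    weights = {a: 1.0 for a in ACTIONS}   # preference for each action
    counts = {a: 0 for a in ACTIONS}
    last_action = None
    for t in range(1, steps + 1):
        # choose an action in proportion to current preferences
        total = sum(weights.values())
        r, acc = rng.uniform(0, total), 0.0
        for a, w in weights.items():
            acc += w
            if r <= acc:
                last_action = a
                break
        counts[last_action] += 1
        # food arrives on a fixed schedule, regardless of behaviour...
        if t % feed_interval == 0:
            weights[last_action] += 0.5   # ...but the last action gets the credit

    return counts, weights

counts, weights = run()
# Positive feedback: an action that happens to precede food gets chosen
# more often, so it precedes food more often, and so on – a ritual forms.
```

The key design point is the positive-feedback loop: nothing in the rule checks whether the action actually caused the reward, which is exactly the gap superstition slips through.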
University of Cambridge psychologist Tony Dickinson has taken the investigation of habits one step further. Dickinson trains rats to press a lever for food and perform another action (usually pulling a chain) for water. The animals can now decide which reward they would like most. If you give them water before the experiment they press the lever for food, if you give them food beforehand they pull the chain for water.
But something strange happens if the animals keep practising these actions beyond the point at which they have effectively learnt them – they seem to “forget” about the specific effects of each action. After this “overtraining”, you feed the animal food before the experiment and they keep on pressing the lever to produce food, regardless of the fact that they have just been fed. The rat has developed a habit, something it does just because the opportunity is there, without thinking about the outcome.
Sound like anyone we know? To a psychologist, lots of human rituals look a lot like the automatic behaviours developed by Skinner’s pigeons or Dickinson’s rats. Chunks of behaviour that do not truly have an effect on the world, but which get stuck in our repertoire of actions.
And when the stakes are high – such as with sports – there is even more pressure on our brains to “capture” whatever behaviours might be important for success. Some rituals can help a sportsperson to relax and get “in the zone” as part of a well-established routine before and during a big game. But some of the habits you see put my kettle-boiling routine to shame. Tiger Woods always wears red on the last day of a golf tournament, because he says it is his “power colour”. In baseball, Wade Boggs claimed he hit better if he ate chicken the night before. Soccer’s Kolo Toure once missed the start of the second half because he refused to come out – superstition dictated he had to be the last player to re-emerge from the dressing room, but on that occasion he was stuck there waiting for a stricken teammate to finish treatment.
We cling to these habits because we – or ancient animal parts of our brains – do not want to risk finding out what happens if we change. The rituals survive despite seeming irrational because they are coded in parts of our brains that are designed by evolution not to think about reasons. They just repeat what seemed to work last time. This explains why having personal rituals is a normal part of being human. It is part of our inheritance as intelligent animals, a strategy that works in the long term, even though it clearly does not make sense for every individual act.
Link: My columns at BBC Future
Link: UK readers – you’ll have to try it via here
Decades old research into how memory works should have revolutionised University teaching. It didn’t.
If you’re a student, what I’m about to tell you will let you change how you study so that it is more effective, more enjoyable and easier. If you work at a University, you – like me – should hang your head in shame that we’ve known this for decades but still teach the way we do.
There’s a dangerous idea in education that students are receptacles, and teachers are responsible for providing content that fills them up. This model encourages us to test students by the amount of content they can regurgitate, to focus overly on statements rather than skills in assessment and on syllabuses rather than values in teaching. It also encourages us to believe that we should try and learn things by trying to remember them. Sounds plausible, perhaps, but there’s a problem. Research into the psychology of memory shows that intention to remember is a very minor factor in whether you remember something or not. Far more important than whether you want to remember something is how you think about the material when you encounter it.
A classic experiment by Hyde and Jenkins (1973) illustrates this. These researchers gave participants lists of words as memory items and later tested their recall. To affect how participants thought about the words, half were told to rate the pleasantness of each word, and half were told to check whether the word contained the letters ‘e’ or ‘g’. This manipulation was designed to affect ‘depth of processing’. Participants in the rating-pleasantness condition had to think about what each word meant, and relate it to themselves (how they felt about it) – “deep processing”. Participants in the letter-checking condition just had to look at the shape of the letters; they didn’t even have to read the word if they didn’t want to – “shallow processing”. The second, independent, manipulation concerned whether participants knew that they would be tested later on the words. Half of each group were told this – the “intentional learning” condition – and half weren’t told, so for them the test came as a surprise – the “incidental learning” condition.
I’ve made a graph so you can see the effects of these two manipulations.
As you can see, there isn’t much difference between the intentional and incidental learning conditions. Whether or not a participant wanted to remember the words didn’t affect how many words they remembered. Instead, the major effect is due to how participants thought about the words when they encountered them. Participants who thought deeply about the words remembered nearly twice as many as participants who only thought shallowly about the words, regardless of whether they intended to remember them or not.
The implications for how we teach and learn should be clear. Wanting to remember, or telling people to remember, isn’t effective. If you want to remember something you need to think about it deeply. This means thinking about what the material you are trying to remember means, both in relation to other material you are trying to learn, and to yourself. Other research in memory has shown the importance of schema – memory patterns and structures – for recall. As teachers, we try and organise our course material for the convenience of students, to best help them understand it. Unfortunately, this organisation – the schema – for the material then becomes part of the assessment and something which students try to remember. What this research suggests is that, merely in terms of remembering, it would be more effective for students to come up with their own organisation for course material.
If you are a student the implication of this study and those like it is clear: don’t stress yourself with revision where you read and re-read textbooks and course notes. You’ll remember better (and understand much better) if you try and re-organise the material you’ve been given in your own way.
If you are a teacher, like me, then this research raises some disturbing questions. At a University the main form of teaching we do is the lecture, which puts the student in a passive role and, essentially, asks them to “remember this” – an instruction we know to be ineffective. Instead, we should be thinking hard, always, about how to create teaching experiences in which students are more active, and about creating courses in which students are permitted and encouraged to come up with their own organisation of material, rather than just forced to regurgitate ours.
Reference: Hyde, T. S., & Jenkins, J. J. (1973). Recall for words as a function of semantic, graphic, and syntactic orienting tasks. Journal of Verbal Learning and Verbal Behavior, 12(5), 471–480.
Business Week has an important article on how internet companies are using the massive data sets collected from the minutiae of users’ behaviour to influence customer choices.
The article is a useful insight into how tech companies are basing their entire profit model on the ability to model and manipulate human behaviour but the implication for psychology is, perhaps, more profound.
Psychological theories and ideas about how the mind works seem to play a small, if not absent, role in these models, which are almost entirely based on deriving mathematical relationships from massive data sets.
Sometimes the objective is simply to turn people on. Zynga, the maker of popular Facebook games such as CityVille and FarmVille, collects 60 billion data points per day—how long people play games, when they play them, what they’re buying, and so forth. The Wants (Zynga’s term is “data ninjas”) troll this information to figure out which people like to visit their friends’ farms and cities, the most popular items people buy, and how often people send notes to their friends.
Discovery: People enjoy the games more if they receive gifts from their friends, such as the virtual wood and nails needed to build a digital barn. As for the poor folks without many friends who aren’t having as much fun, the Wants came up with a solution. “We made it easier for those players to find the parts elsewhere in the game, so they relied less on receiving the items as gifts,” says Ken Rudin, Zynga’s vice-president for analytics.
Although the example given might seem trivial, it is a massive generator of profit and can be applied to any sort of online behaviour.
What’s striking is that the relationships between the context, motivations, evaluation and behaviour of the users are not being described in terms of how the mind or brain understands and responds to the situation, but purely as statistical relationships.
It is psychology devoid of psychology. Rather than the wisdom of crowds approach, it’s the behaviour of zombies model. Unsurprisingly, none of the entrepreneurs mentioned are cognitive scientists. They’re all mathematicians.
I am reminded of the Wired article ‘The End of Theory’, which warned that big-data-crunching computers could solve scientific problems in the same way. The generated mathematical model ‘works’ but the model is uninterpretable and does not help us understand anything about what’s being studied.
Similarly, while the experimental psychologist’s dream for more than a century has been to work with data sets large enough to give confidence in our conclusions about the mind, the reality, now being realised, may actually make the mind redundant in much of the commercial world.
An amusing YouTube video demonstrates Ivan Pavlov’s principle of classical conditioning with an air gun, a novelty alarm and a reluctant college roommate.
Pavlov discovered that we learn to associate an established response with a new event simply by repeatedly pairing the new event with a situation that already caused the response. Famously, he could trigger salivation in a dog with just the sound of a bell, simply by ringing a bell every time food was presented.
This video uses exactly the same principle, but instead of food, an airgun pellet is fired at a college roommate causing a painful reaction, and instead of a bell, an annoying novelty alarm is sounded.
The Economist has a great article on how computer models of how bees, ants and birds operate in swarms are being deployed as ‘artificial intelligence’ systems to solve previously unassailable problems.
To be honest, the premise of the piece is a little too grand to be plausible: the introductory paragraph announces “The search for artificial intelligence modelled on human brains has been a dismal failure. AI based on ant behaviour, though, is having some success.”
This is really not true, as artificial intelligence has actually been a great success when applied to limited and well-defined problems. The article really just explains how the study of swarm intelligence has allowed us to tackle a new set of limited and well-defined problems that were previously out of easy reach.
However, it does give some fantastic examples of how swarm behaviour – where the combination of simple individual behaviours can solve complex problems – can be applied in practice:
In particular, Dr Dorigo was interested to learn that ants are good at choosing the shortest possible route between a food source and their nest. This is reminiscent of a classic computational conundrum, the travelling-salesman problem. Given a list of cities and their distances apart, the salesman must find the shortest route needed to visit each city once. As the number of cities grows, the problem gets more complicated. A computer trying to solve it will take longer and longer, and suck in more and more processing power. The reason the travelling-salesman problem is so interesting is that many other complex problems, including designing silicon chips and assembling DNA sequences, ultimately come down to a modified version of it.
Ants solve their own version using chemical signals called pheromones. When an ant finds food, she takes it back to the nest, leaving behind a pheromone trail that will attract others. The more ants that follow the trail, the stronger it becomes. The pheromones evaporate quickly, however, so once all the food has been collected, the trail soon goes cold. Moreover, this rapid evaporation means long trails are less attractive than short ones, all else being equal. Pheromones thus amplify the limited intelligence of the individual ants into something more powerful.
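The pheromone mechanism described in the excerpt translates directly into an algorithm. Below is a toy sketch of ant-colony optimisation for a tiny travelling-salesman instance – a simplified illustration of the idea (pheromone reinforcement plus evaporation), not Dorigo’s full Ant System, and all parameter values are made up for the example:

```python
import random

# Toy ant-colony optimisation for a small travelling-salesman instance.
def tour_length(tour, dist):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def aco_tsp(dist, n_ants=20, n_iters=50, rho=0.5, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]      # pheromone on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                # prefer edges with more pheromone and shorter distance
                opts = list(unvisited)
                wts = [tau[i][j] * (1.0 / dist[i][j]) ** 2 for j in opts]
                nxt = rng.choices(opts, weights=wts)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append(tour)
            L = tour_length(tour, dist)
            if L < best_len:
                best_tour, best_len = tour, L
        # evaporation: old trails fade quickly
        tau = [[(1 - rho) * t for t in row] for row in tau]
        # reinforcement: shorter tours deposit more pheromone
        for tour in tours:
            L = tour_length(tour, dist)
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += 1.0 / L
                tau[b][a] += 1.0 / L
    return best_tour, best_len

# Five cities spaced along a line: the optimal round trip has length 8.
pts = [0, 1, 2, 3, 4]
dist = [[abs(a - b) for b in pts] for a in pts]
```

Evaporation is what makes this more than blind reinforcement: long, inefficient trails fade before they accumulate pheromone, so short routes win by default – exactly the amplification of limited individual intelligence the article describes.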
Link to Economist article ‘Riders on a Swarm’. Link to Wikipedia article on swarm intelligence.