reinforcing your wiser self

Nautilus has a piece by David Perezcassar on how technology takes advantage of our animal instinct for variable reward schedules (Unreliable rewards trap us into addictive cell phone use, but they can also get us out).

It’s a great illustrated read about the scientific history of the ideas behind ‘persuasive technology’, and ends with a plea that perhaps we can hijack our weakness for variable reward schedules for better ends:

What if we set up a variable reward system to reward ourselves for the time spent away from our phones & physically connecting with others? Even time spent meditating or reading without technological distractions is a heroic endeavor worthy of a prize.

Which isn’t a bad idea, but the pattern of the reward schedule is only one factor in what makes an activity habit-forming. The timing of a reward is more important than its reliability – it’s easier to train in habits with immediate rewards than with delayed ones. Timing is so crucial that in the animal learning literature even a delay of 2 seconds between a lever press and the delivery of a food pellet impairs learning in rats. In experiments we did with humans, a delay of 150ms was enough to hinder our participants from connecting their own actions with a training signal.
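To see why even small delays matter so much, here’s a toy simulation (my own illustration, not a model of either experiment): a simple delta-rule learner whose credit for an action is carried by an eligibility trace that decays during the delay before the reward arrives. All parameters are invented.

```python
def train(delay_steps, trials=100, lr=0.1, trace_decay=0.5):
    """Toy delta-rule learner. A 'lever press' earns a reward of 1.0
    after delay_steps timesteps; credit for the press is carried by an
    eligibility trace that decays each step, so longer delays dilute
    the learning signal. All parameters are invented for illustration."""
    value = 0.0  # learned value of pressing the lever
    for _ in range(trials):
        eligibility = trace_decay ** delay_steps  # trace left when reward arrives
        value += lr * eligibility * (1.0 - value)  # delta-rule update
    return value

immediate = train(delay_steps=0)  # association learned almost completely
delayed = train(delay_steps=4)    # same reward, but learning is much slower
```

With these made-up numbers the immediate-reward learner ends up valuing the action far more highly than the delayed-reward learner after the same number of trials – the qualitative pattern, not the quantities, is the point.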

So the dilemma for persuasive technology, and for anyone who wants to free themselves from its hold, is not just how phones/emails/social media structure our rewards, but also the fact that they allow gratification at almost any moment. There are always new notifications, new news, so checking your phone delivers its reward with zero delay. If you want to focus on other things, like being a successful parent, friend or human, the rewards come with far larger delays (not to mention being more nebulous).

The way I like to think about it is as a conflict between the impatient, narrow, smaller self – the self that likes sweets and gossip and all things immediate gratification – and the wider, wiser self – the self that invests in the future and cares about the bigger picture. That self can win out, does win out, as we make our stumbling journey into adulthood, but my hunch is we’re going to need a different framework from that of reinforcement learning to do it.

Nautilus article: Unreliable rewards trap us into addictive cell phone use, but they can also get us out

Previously: a post about reinforcement schedules, and how they might be used to break technology compulsion (from 2006 – just sayin’)

George Ainslie’s book Breakdown of Will is what happens if you go so deep into the reinforcement learning paradigm you explode its reductionism and reinvent the notion of the self. Mind-alteringly good.

Do students know what’s good for them?

Of course they do, and of course they don’t.

Putting a student at the centre of their own learning seems like fundamental pedagogy. The Constructivist approach to education emphasises the need for knowledge to be reassembled in the mind of the learner, and the related impossibility of its direct transmission from the mind of the teacher. Believe this, and student input into how they learn must follow.

At the same time, we know there is a deep neurobiological connection between the machinery of reward in our brain, and that of learning. Both functions seem to be entangled in the subcortical circuitry of a network known as the basal ganglia. It’s perhaps not surprising that curiosity, which we all know personally to be a powerful motivator of learning, activates the same subcortical circuitry involved in the pleasurable anticipation of reward. Further, curiosity enhances memory, even for things you learn while your curiosity is aroused about something else.

This neurobiological alignment of enjoyment and learning isn’t mere coincidence. When building learning algorithms for embedding in learning robots, the basic rules of learning from experience have to be augmented with a drive to explore – curiosity! – so that they don’t become stuck repeating suboptimal habits. Whether it is motivated by curiosity or other factors, exploration seems to support enhanced learning in a range of domains from simple skills to more complex ideas.
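The point about learning algorithms needing an exploration drive can be sketched with a toy two-armed bandit (my own illustration; the payoffs, the optimistic head start and the bonus scheme are all invented): a purely greedy learner can lock onto a mediocre option it happened to value first, while a “curiosity” bonus for rarely-tried options lets it escape that suboptimal habit.

```python
import random

def run_bandit(explore_bonus, steps=500, seed=0):
    """Toy two-armed bandit: arm 0 pays off 40% of the time, arm 1 pays
    off 80% of the time. The learner starts optimistic about arm 0 (a
    head start for the bad habit). With no exploration bonus it never
    tries arm 1; a curiosity bonus for rarely-tried arms, which shrinks
    as an arm is sampled, drives it to discover the better option.
    All numbers are invented for illustration."""
    rng = random.Random(seed)
    payoffs = [0.4, 0.8]
    values = [0.5, 0.0]  # initial estimates: optimistic about the worse arm
    counts = [1, 1]
    total = 0.0
    for _ in range(steps):
        # Score each arm: estimated value plus a decaying curiosity bonus
        scores = [values[a] + explore_bonus / counts[a] for a in range(2)]
        a = scores.index(max(scores))
        r = 1.0 if rng.random() < payoffs[a] else 0.0
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]  # incremental mean
        total += r
    return total / steps

greedy = run_bandit(explore_bonus=0.0)   # stuck on the mediocre arm
curious = run_bandit(explore_bonus=1.0)  # explores, then exploits the better arm
```

The greedy learner’s average reward stays near the worse arm’s payoff; the curious learner’s ends up substantially higher – a crude analogue of why curiosity had to be built into learning robots.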

Obviously we learn best when motivated and when learning is fun, and letting students explore their curiosity allows both. However, putting the trajectory of their experience into students’ hands can go awry.

False beliefs impede learning

One reason is false beliefs about how much we know, or about how we learn best. Psychologists studying memory have long documented such metacognitive errors, which include overconfidence, and a mistaken reliance on our familiarity with a thing as a guide to how well we understand it, or how well we’ll be able to recall it when tested (recognition and recall are in fact different cognitive processes). Sure enough, when tested in experiments people over-rely on ineffective study strategies (like rereading, or reviewing the answers to questions, rather than testing their ability to generate the answers from the questions).

Cramming is another ineffective study strategy, with experiment after experiment showing the benefit of spreading out your study rather than massing it all together. Spreading study out requires being more organised, but my belief is that a metacognitive error also supports students’ over-reliance on cramming – cramming feels good because, for a moment, you feel familiar with all the information. The problem is that this feel-good familiarity isn’t the kind of memory that will support recall in an exam, and immature learners often don’t realise the extent of that.
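The spaced-versus-massed contrast can be made concrete with a toy forgetting model (entirely my own sketch; the stability rule and every number are invented, not fitted to any data): recall decays with time since the last review, and reviews spaced after partial forgetting build more durable memory than reviews crammed together.

```python
import math

def recall_after(study_times, test_time):
    """Toy spacing model. Each review multiplies memory 'stability' by
    (1 + gap since the previous review) - a crude stand-in for the
    spacing effect, where harder, spaced retrievals strengthen memory
    more. Recall then decays exponentially with time since the last
    review, scaled by accumulated stability. Times are in days.
    All numbers are invented for illustration."""
    stability = 1.0
    last = study_times[0]
    for t in study_times[1:]:
        stability *= 1.0 + (t - last)
        last = t
    return math.exp(-(test_time - last) / stability)

massed = recall_after([0, 0.1, 0.2, 0.3], test_time=7)  # one cramming session
spaced = recall_after([0, 2, 4, 6], test_time=7)        # spread over the week
```

Under these made-up assumptions the four crammed sessions leave almost nothing by exam day, while the same four sessions spread across the week leave recall near ceiling – the qualitative pattern the experiments show.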

In agreement with these findings from psychologists, education scholars have reacted against pure student-led or discovery learning, with one review summarising the findings from multiple distinct research programmes taking place over three decades: “In each case, guided discovery was more effective than pure discovery in helping students learn and transfer”.

The solution: balancing guided and discovery learning

This leaves us at a classic “middle way”, where both pure student-led and pure teacher-led learning are ruled out. Some kind of guided exploration, structured study, or student choice in learning is clearly necessary, but we’re not sure how much.

There’s an exciting future for research which tells us what the right blend of guided and discovery learning is, and which students and topics suit which exact blend. One strand of this is to take the cognitive psychology experiments which demonstrate a benefit of active choice learning over passive instruction and to tweak them so that we can see when passive instruction can be used to jump-start or augment active choice learning. One experiment from Kyle MacDonald and Michael Frank of Stanford University used a highly abstract concept learning task in which participants use trial and error to figure out a categorisation of different shapes. Previous research had shown that people learned faster if they were allowed to choose their own examples to receive feedback on, but this latest iteration of the experiment from MacDonald and Frank showed that an initial session of passive learning, where the examples were chosen for the learner, boosted performance even further. Presumably this effect is due to the scaffolding in the structure of the concept-space that the passive learning gives the learner. Experiments like this make it possible to show when and how active learning and instructor-led learning can best be blended.

Education is about more than students learning the material on the syllabus. There is a meta-goal of producing students who are better able to learn for themselves. The same cognitive machinery in all of us might push us towards less effective strategies. The simple fact of being located within our own selfish consciousness means that even the best performers in the world need a coach to help them learn. But as we mature we can learn to better avoid pitfalls in our learning and evolve into better self-determining students. Ultimately the best education needs to keep its focus on that need to help each of us take on more and more responsibility for how we learn, whether that means submitting to others’ choices or exploring things for ourselves – or, often, a bit of both.

This post originally appeared on the NPJ ‘Science of Learning’ Community

Does ‘brain training’ work?

You’ve probably heard of “brain training exercises” – puzzles, tasks and drills which claim to keep you mentally agile. Maybe, especially if you’re an older person, you’ve even bought the book, or the app, in the hope of staving off mental decline. The idea of brain training has widespread currency, but is that due to science, or empty marketing?

Now a major new review, published in Psychology in the Public Interest, sets out to systematically examine the evidence for brain training. The results should give you pause before spending any of your time and money on brain training, but they also highlight what happens when research and commerce become entangled.

The review team, led by Dan Simons of the University of Illinois, set out to inspect all the literature which brain training companies cited in their promotional material – in effect, taking them at their word, with the rationale that the best evidence in support of brain training exercises would be that cited by the companies promoting them.

The chairman says it works

A major finding of the review is the poverty of the supporting evidence for these supposedly scientific exercises. Simons’ team found that half of the brain training companies that promoted their products as being scientifically validated didn’t cite any peer-reviewed journal articles, relying instead on things like testimonials from scientists (including the company founders). Of the companies which did cite evidence for brain training, many cited general research on neuroplasticity, but nothing directly relevant to the effectiveness of what they promote.

The key issue for claims around brain training is whether practising these exercises will help you in general, or on unrelated tasks. Nobody doubts that practising crosswords will help you get better at crosswords, but will it improve your memory, your IQ or your ability to skim-read email? Such effects are called transfer effects, and so-called “far transfer” (transfer to a very different task from the one trained) is the ultimate goal of brain training studies. What we know about transfer effects is reviewed in Simons’ paper.

Doing puzzles makes you, well, good at doing puzzles.

As well as trawling the company websites, the reviewers inspected a list provided by an industry group (Cognitive Training Data) of some 132 scientific papers claiming to support the efficacy of brain training. Of these, 106 reported new data (rather than being reviews themselves). Of those 106, 71 used a proper control group, so that the effects of the brain training could be isolated. Of those 71, only 49 had a so-called “active control” group, in which the control participants actually did something rather than being ignored by the researchers. (An active control is important if you want to distinguish the benefit of your treatment from the benefits of expectation or of responding to researchers’ attentions.) Of these 49, about half of the results came from just six studies.

Overall, the reviewers conclude, no study which is cited in support of brain training products meets the gold standard for best research practices, and few even approached the standard of a good randomised control trial (although note that their cut-off for considering papers missed this paper from late last year).

A bit premature

The implications, they argue, are that claims for general benefits of brain training are premature. There’s excellent evidence for benefits of training specific to the task trained on, they conclude, less evidence for enhancement on closely related tasks and little evidence that brain training enhances performance on distantly related tasks or everyday cognitive performance.

The flaws in the studies supporting the benefits of brain training aren’t unique to the study of brain training. Good research is hard and all studies have flaws. Assembling convincing evidence for a treatment takes years, with evidence required from multiple studies and from different types of studies. Indeed, it may yet be that some kind of cognitive training can be shown to have the general benefits that are hoped for from existing brain training exercises. What this review shows is not that brain training can’t work, merely that promotion of brain training exercises is – at the very least – premature based on the current scientific evidence.

Yet in a 2014 survey of US adults, over 50% had heard of brain training exercises and gave some credence to their performance-enhancing powers. Even the name “brain training”, the authors of the review admit, is a concession to marketing – this is how people know these exercises, despite their development having little to do with the brain directly.

The widespread currency of brain training isn’t due to overwhelming evidence of benefits from neuroscience and psychological science, as the review shows, but it does rely on the appearance of being scientifically supported. The billion-dollar market in brain training is parasitic on the credibility of neuroscience and psychology. It also taps into our lazy desire to address complex problems with simple, purchasable solutions (something Ben Goldacre writes about at length in his book Bad Science).

The Simons review ends with recommendations for researchers into brain training, and for journalists reporting on the topic. My favourite was their emphasis that any treatment needs to be considered for its costs, as well as its benefits. By this standard there is no commercial brain training product which has been shown to have greater benefits than something you can do for free. Also important is the opportunity cost: what could you be doing in the time you invest in brain training? The reviewers deliberately decided to focus on brain training, so they didn’t cover the proven and widespread benefits of exercise for mental function, but I’m happy to tell you now that a brisk walk round the park with a friend is not only free, and not only more fun, but has better scientific support for its cognitive-enhancing powers than all the brain training products which are commercially available.

The Conversation

Tom Stafford, Lecturer in Psychology and Cognitive Science, University of Sheffield

This article was originally published on The Conversation. Read the original article.

The memory trap

I had a piece in the Guardian on Saturday, ‘The way you’re revising may let you down in exams – and here’s why’. In it I talk about a pervasive feature of our memories: that we tend to overestimate how much of a memory is ‘ours’, and underestimate how much is actually shared with other people, or the environment (see also the illusion of explanatory depth). This memory trap can combine with our instinct to make things easy for ourselves, and result in us thinking we are learning when really we’re just flattering our feeling of familiarity with a topic.

Here’s the start of the piece:

Even the most dedicated study plan can be undone by a failure to understand how human memory works. Only when you’re aware of the trap set for us by overconfidence, can you most effectively deploy the study skills you already know about.
… even the best [study] advice can be useless if you don’t realise why it works. Understanding one fundamental principle of human memory can help you avoid wasting time studying the wrong way.

I go on to give four evidence-based pieces of revision advice, all of which – I hope – use psychology to show that some of our intuitions about how to study can’t be trusted.

Link: The way you’re revising may let you down in exams – and here’s why

Previously at the Guardian by me:

The science of learning: five classic studies

Five secrets to revising that can improve your grades

5 classic studies of learning

I have a piece in the Guardian, ‘The science of learning: five classic studies’. Here’s the intro:

A few classic studies help to define the way we think about the science of learning. A classic study isn’t classic just because it uncovered a new fact, but because it neatly demonstrates a profound truth about how we learn – often at the same time showing up our unjustified assumptions about how our minds work.

My picks for five classics of learning were:

  • Bartlett’s “War of the Ghosts”
  • Skinner’s operant conditioning
  • work on dissociable memory systems by Larry Squire and colleagues
  • de Groot’s studies of expertise in chess grandmasters, and ….
  • Anders Ericsson’s work on deliberate practice (of ‘ten thousand hours’ fame)

Obviously, that’s just my choice (and you can read my reasons in the article). Did I choose right? Or is there a classic study of learning I missed? Answers in the comments.

Link: ‘The science of learning: five classic studies’

a gold-standard study on brain training

The headlines

The Telegraph: Alzheimer’s disease: Online brain training “improves daily lives of over-60s”

Daily Mail: The quiz that makes over-60s better cooks: Computer brain games ‘stave off mental decline’

Yorkshire Post: Brain training study is “truly significant”

The story

A new trial shows the benefits of online ‘brain training’ exercises including improvements in everyday tasks, such as shopping, cooking and managing home finances.

What they actually did

A team led by Clive Ballard of King’s College London recruited people to a trial of online “brain training” exercises. Nearly 7,000 people over the age of 50 took part, and they were randomly assigned to one of three groups. One group did reasoning and problem solving tasks. A second group practised cognitive skills tasks, such as memory and attention training, and a third control group did a task which involved looking for information on the internet.

After six months, the reasoning and cognitive skills groups showed benefits compared with the control group. The main measure of the study was participants’ own reports of their ability to cope with daily activities. This was measured using something called the instrumental activities of daily living scale. (To give an example, you get a point if you are able to prepare your meals without assistance, and no points if you need help). The participants also showed benefits in short-term memory, judgements of grammatical accuracy and ability to learn new words.

Many of these benefits looked as if they accrued after just three months of regular practice, completing an average of five sessions a week. The benefits also extended to those who went into the trial with the lowest performance, suggesting that such exercises may benefit those at risk of mild cognitive impairment (a precursor to dementia).

How plausible is this?

This is gold-standard research. The study was designed to the highest standards, as would be required if you were testing a new drug: a double-blind randomised control trial in which participants were assigned at random to the different treatment groups, and weren’t told which group they were in (nor what the researcher’s theory was). Large numbers of people took part, meaning that the study had a reasonable chance of detecting an effect of the treatment if it was there. The study design was also pre-registered on a database of clinical trials, meaning that the results couldn’t be buried if they turned out to be different from what the researchers (or funders) wanted, and the researchers declared in advance what their analysis would focus on.

So, overall, this is serious evidence that cognitive training exercises may bring some benefits, not just on similar cognitive tasks, but also on the everyday activities that are important for independent living among the older population.

Tom’s take

This kind of research is what “brain training” needs. Too many people – including those who just want to make some money – have leapt on the idea without evidence that these kinds of tasks can benefit anything other than performance on similar tasks. Because the evidence for broad benefits of cognitive training exercises is sparse, this study makes an important contribution to the supporters’ camp, although it is far from settling the matter.

Why might you still be sceptical? Well there are some potential flaws in this study. It is useful to speculate on the effect these flaws might have had, even if only as an exercise to draw out the general lessons for interpreting this kind of research.

First up is the choice of control task. The benefits of the exercises tested in this research are only relative benefits compared with the scores of those who carried out the control task. If a different control task had been chosen, maybe the benefits wouldn’t look so large. For example, we know that physical exercise has long-term and profound benefits for cognitive function. If the control group had been going for a brisk walk every day, maybe the relative benefits of these computerised exercises would have vanished.

Or just go for a walk

Another possible distortion of the figures could have arisen as a result of people dropping out during the course of the trial. If people who were likely to score well were more likely to drop out of the control group (perhaps because it wasn’t challenging enough), then this would leave poor performers in the control group and so artificially inflate the relative benefits of being in the cognitive exercises group. More people did drop out of the control group, but it isn’t clear from reading the paper if the researchers’ analysis took steps to account for the effect this might have had on the results.

And finally, the really impressive result from this study is the benefit for the activities of daily living scale (the benefit for other cognitive abilities perhaps isn’t too surprising). This suggests a broad benefit of the cognitive exercises, something which other studies have had difficulty showing. However, it is important to note that this outcome was based on a self-report by the participants. There wasn’t any independent or objective verification, meaning that something as simple as people feeling more confident about themselves after having completed the study could skew the results.

None of these three possible flaws means we should ignore this result, but questions like these mean that we will need follow-up research before we can be certain that cognitive training brings benefits for mental function in older adults.

For now, the implications of the current state of brain training research are:

Don’t pay money for any “brain training” programme. There isn’t any evidence that commercially available exercises have any benefit over the kinds of tasks and problems you can access for free.

Do exercise. Your brain is a machine that runs on blood, and it is never too late to improve the blood supply to the brain through increased physical activity. How long have you been on the computer? Could it be time for a brisk walk round the garden or to the shops? (Younger people, take note: exercise in youth benefits mental function in older age.)

A key feature of this study was that the exercises in the treatment group got progressively more difficult as the participants practised. The real benefit may not be from these exercises as such, but from continually facing new mental challenges. So, whatever your hobbies, perhaps – just perhaps – make sure you are learning something new as well as enjoying whatever you already know.

Read more

The original study: The Effect of an Online Cognitive Training Package in Healthy Older Adults: An Online Randomized Controlled Trial

Oliver Burkeman writes:

The New Yorker (2013):

The Conversation

This article was originally published on The Conversation. Read the original article.

A simple trick to improve your memory

Want to enhance your memory for facts? Tom Stafford explains a counterintuitive method for retaining information.

If I asked you to sit down and remember a list of phone numbers or a series of facts, how would you go about it? There’s a fair chance that you’d be doing it wrong.

One of the interesting things about the mind is that even though we all have one, we don’t have perfect insight into how to get the best from it. This is in part because of flaws in our ability to think about our own thinking, which is called metacognition. Studying this self-reflective thought process reveals that the human species has mental blind spots.

One area where these blind spots are particularly large is learning. We’re actually surprisingly bad at having insight into how we learn best.

Researchers Jeffrey Karpicke and Henry Roediger III set out to look at one aspect: how testing can consolidate our memory of facts. In their experiment they asked college students to learn pairs of Swahili and English words. So, for example, they had to learn that if they were given the Swahili word ‘mashua’ the correct response was ‘boat’. They could have used the sort of facts you might get on a high-school quiz (e.g. “Who wrote the first computer programs?”/”Ada Lovelace”), but the use of Swahili meant that there was little chance their participants could use any background knowledge to help them learn. After the pairs had all been learnt, there would be a final test a week later.

Now if many of us were revising this list we might study the list, test ourselves and then repeat this cycle, dropping items we got right. This makes studying (and testing) quicker and allows us to focus our effort on the things we haven’t yet learnt. It’s a plan that seems to make perfect sense, but it’s a plan that is disastrous if we really want to learn properly.

Karpicke and Roediger asked students to prepare for a test in various ways, and compared their success – for example, one group kept testing themselves on all items without dropping what they were getting right, while another group stopped testing themselves on their correct answers.

On the final exam differences between the groups were dramatic. While dropping items from study didn’t have much of an effect, the people who dropped items from testing performed relatively poorly: they could only remember about 35% of the word pairs, compared to 80% for people who kept testing items after they had learnt them.

It seems the effective way to learn is to practise retrieving items from memory, not to try to cement them in there by further study. Moreover, dropping items entirely from your revision, which is the advice given by many study guides, is wrong. You can stop studying items once you’ve learnt them, but you should keep testing what you’ve learnt if you want to remember it by the time of the final exam.
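A deterministic toy model makes the logic of the two strategies concrete (my own sketch, not Karpicke and Roediger’s analysis; every number is invented and not fitted to their data): retrieval attempts strengthen a memory, and once the dropping strategy stops testing an item, its strength fades before the final exam.

```python
def final_recall_strength(keep_testing, sessions=4):
    """Deterministic toy model of the keep-testing vs drop-items
    comparison. Each retrieval attempt adds a boost to memory strength;
    under the dropping strategy the item is retrieved once, then left
    alone, and its strength decays between sessions. All numbers are
    invented for illustration only."""
    strength = 0.5
    for session in range(sessions):
        if keep_testing or session == 0:
            strength = min(1.0, strength + 0.15)  # retrieval practice boost
        else:
            strength *= 0.9  # no retrieval: the memory fades between sessions
    return strength * 0.8  # further forgetting over the week before the test

keep = final_recall_strength(keep_testing=True)   # keeps testing learnt items
drop = final_recall_strength(keep_testing=False)  # drops items once correct
```

With these made-up parameters the keep-testing learner arrives at the exam with much higher memory strength than the dropping learner, mirroring the direction (though not the exact numbers) of the experimental result.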

Finally, the researchers had the neat idea of asking their participants how well they would remember what they had learnt. All groups guessed at about 50%. This was a large overestimate for those who dropped items from testing (and an underestimate for those who kept testing learnt items).

So it seems that we have a metacognitive blind spot for which revision strategies will work best. This makes it a situation where we need to be guided by the evidence, not our instinct. But the evidence holds a moral for teachers as well: there’s more to testing than finding out what students know – tests can also help us remember.

Read more: Why cramming for tests often fails

This is my BBC Future column from last week. The original is here