Do students know what’s good for them?

Of course they do, and of course they don’t.

Putting a student at the centre of their own learning seems like fundamental pedagogy. The Constructivist approach to education emphasises the need for knowledge to be reassembled in the mind of the learner, and the related impossibility of its direct transmission from the mind of the teacher. Believe this, and student input into how they learn must follow.

At the same time, we know there is a deep neurobiological connection between the machinery of reward in our brain, and that of learning. Both functions seem to be entangled in the subcortical circuitry of a network known as the basal ganglia. It’s perhaps not surprising that curiosity, which we all know personally to be a powerful motivator of learning, activates the same subcortical circuitry involved in the pleasurable anticipation of reward. Further, curiosity enhances memory, even for things you learn while your curiosity is aroused about something else.

This neurobiological alignment of enjoyment and learning isn’t mere coincidence. When building learning algorithms for robots, the basic rules of learning from experience have to be augmented with a drive to explore – curiosity! – so that the robots don’t become stuck repeating suboptimal habits. Whether it is motivated by curiosity or other factors, exploration seems to support enhanced learning in a range of domains, from simple skills to more complex ideas.
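To make the exploration point concrete, here is a minimal sketch in Python of the standard “epsilon-greedy” idea, using an invented two-option environment rather than any real robot controller: a learner that only ever exploits its current best guess can get stuck with a mediocre habit, while one that explores a fraction of the time discovers the genuinely better option.

```python
import random

# Toy two-option "bandit": the payoff probabilities below are invented for
# illustration, standing in for a robot's choice between a familiar habit
# and an unexplored alternative.
REWARDS = {"habit": 0.3, "better_option": 0.7}

def run(epsilon: float, trials: int = 1000, seed: int = 0) -> str:
    rng = random.Random(seed)
    estimates = {action: 0.0 for action in REWARDS}
    counts = {action: 0 for action in REWARDS}
    for _ in range(trials):
        if rng.random() < epsilon:                    # explore (the "curiosity" drive)
            action = rng.choice(list(REWARDS))
        else:                                         # exploit the current best estimate
            action = max(estimates, key=estimates.get)
        reward = 1.0 if rng.random() < REWARDS[action] else 0.0
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]  # running average
    return max(estimates, key=estimates.get)

print("purely greedy learner settles on:", run(epsilon=0.0))   # stuck on "habit"
print("exploring learner settles on:    ", run(epsilon=0.1))   # finds "better_option"
```

The exact exploration rule doesn’t matter much for the point; what matters is that some fraction of choices is spent gathering information rather than maximising immediate reward.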

Obviously we learn best when we’re motivated and when learning is fun, and letting students explore their curiosity is one way to get both. However, putting the trajectory of their experience into students’ hands can go awry.

False beliefs impede learning

One reason is false beliefs about how much we know, or how we learn best. Psychologists studying memory have long documented such metacognitive errors, which include overconfidence, and a mistaken reliance on our familiarity with a thing as a guide to how well we understand it, or how well we’ll be able to recall it when tested (recognition and recall are in fact different cognitive processes). Sure enough, when tested in experiments people will over-rely on ineffective study strategies (like rereading, or reviewing the answers to questions, rather than testing their ability to generate the answers from the questions). Cramming is another ineffective study strategy, with experiment after experiment showing the benefit of spreading out your study rather than massing it all together. Obviously this requires being more organised, but my belief is that a metacognitive error supports students’ over-reliance on cramming – cramming feels good, because, for a moment, you feel familiar with all the information. The problem is that this feel-good familiarity isn’t the kind of memory that will support recall in an exam, but immature learners often don’t realise the extent of that.

In agreement with these findings from psychologists, education scholars have reacted against pure student-led or discovery learning, with one review summarising the findings from multiple distinct research programmes taking place over three decades: “In each case, guided discovery was more effective than pure discovery in helping students learn and transfer”.

The solution: balancing guided and discovery learning

This leaves us at a classic “middle way”, where purely student-led or purely teacher-led learning is ruled out. Some blend of guided exploration, structured study and student choice in learning is obviously necessary, but we’re not sure of the exact mix.

There’s an exciting future for research which tells us what the right blend of guided and discovery learning is, and which students and topics suit which exact blend. One strand of this is to take the cognitive psychology experiments which demonstrate a benefit of active choice learning over passive instruction and tweak them, so that we can see when passive instruction can be used to jump-start or augment active choice learning. One experiment from Kyle MacDonald and Michael Frank of Stanford University used a highly abstract concept learning task in which participants use trial and error to figure out a categorisation of different shapes. Previous research had shown that people learned faster if they were allowed to choose their own examples to receive feedback on, but this latest iteration of the experiment from MacDonald and Frank showed that an initial session of passive learning, where the examples were chosen for the learner, boosted performance even further. Presumably this effect is due to the scaffolding that passive learning gives the learner in the structure of the concept-space. This experiment, and myriad others like it, make it possible to show when and how active learning and instructor-led learning can best be blended.

Education is about more than students learning the material on the syllabus. There is a meta-goal of producing students who are better able to learn for themselves. The same cognitive machinery in all of us might push us towards less effective strategies. The simple fact of being located within our own selfish consciousness means that even the best performers in the world need a coach to help them learn. But as we mature we can learn to better avoid pitfalls in our learning and evolve into better self-determining students. Ultimately the best education needs to keep its focus on that need to help each of us take on more and more responsibility for how we learn, whether that means submitting to others’ choices or exploring things for ourselves – or, often, a bit of both.

This post originally appeared on the NPJ ‘Science of Learning’ Community

The hidden history of war on terror torture

The Hidden Persuaders project has interviewed neuropsychologist Tim Shallice about his opposition to the British government’s use of ‘enhanced interrogation’ in the Northern Ireland conflict of the 1970s – a practice eventually abandoned as torture.

Shallice is little known to the wider public but is one of the most important and influential neuropsychologists of his generation, having pioneered the systematic study of neurological problems as a window on typical cognitive function.

One of his first papers was not on brain injury, however; it was an article titled ‘Ulster depth interrogation techniques and their relation to sensory deprivation research’, in which he set out a cognitive basis for why the ‘five techniques’ – wall-standing, hooding, white noise, sleep deprivation, and deprivation of food and drink – amounted to torture.

Shallice traces a link between the use of these techniques and research on sensory deprivation – which was investigated both by academic scientists out of scientific curiosity and, as we learned later, by intelligence services trying to understand ‘brainwashing’.

The use of these techniques in Northern Ireland was the subject of an official investigation, and Shallice and other researchers testified to the Parker Committee, which led Prime Minister Edward Heath to ban the practice.

If those techniques sound eerily familiar, it is because they formed the basis of interrogation practices at Guantanamo Bay and other notorious sites in the ‘war on terror’.

The Hidden Persuaders is a research project at Birkbeck, University of London, which is investigating the history of ‘brainwashing’. It traces the practice back to its use by the British during the colonisation of Yemen; the British seem to have borrowed it from the KGB.

And if you want to read about the modern-day effects of the abusive techniques, The New York Times has just published a disturbing feature article about the long-term consequences of being tortured in Guantanamo and other ‘black sites’, following up with many of the people subjected to the brutal techniques.

Link to Hidden Persuaders interview with Tim Shallice.
Link to NYT on long-term legacy of war on terror torture.

Does ‘brain training’ work?

You’ve probably heard of “brain training exercises” – puzzles, tasks and drills which claim to keep you mentally agile. Maybe, especially if you’re an older person, you’ve even bought the book, or the app, in the hope of staving off mental decline. The idea of brain training has widespread currency, but is that due to science, or empty marketing?

Now a major new review, published in Psychological Science in the Public Interest, sets out to systematically examine the evidence for brain training. The results should give you pause before spending any of your time and money on brain training, but they also highlight what happens when research and commerce become entangled.

The review team, led by Dan Simons of the University of Illinois, set out to inspect all the literature which brain training companies cited in their promotional material – in effect, taking them at their word, with the rationale that the best evidence in support of brain training exercises would be that cited by the companies promoting them.

The chairman says it works

A major finding of the review is the poverty of the supporting evidence for these supposedly scientific exercises. Simons’ team found that half of the brain training companies that promoted their products as being scientifically validated didn’t cite any peer-reviewed journal articles, relying instead on things like testimonials from scientists (including the company founders). Of the companies which did cite evidence for brain training, many cited general research on neuroplasticity, but nothing directly relevant to the effectiveness of what they promote.

The key issue for claims around brain training is whether practising these exercises will help you in general, or on unrelated tasks. Nobody doubts that practising a crossword will help you get better at crosswords, but will it improve your memory, your IQ or your ability to skim-read email? Such effects are called transfer effects, and so-called “far transfer” (transfer to a very different task than that trained) is the ultimate goal of brain training studies. What we know about transfer effects is reviewed in Simons’ paper.

Doing puzzles makes you, well, good at doing puzzles.

As well as trawling the company websites, the reviewers inspected a list provided by an industry group (Cognitive Training Data) of some 132 scientific papers claiming to support the efficacy of brain training. Of these, 106 reported new data (rather than being reviews themselves). Of those 106, 71 used a proper control group, so that the effects of the brain training could be isolated. Of those 71, only 49 had a so-called “active control” group, in which the control participants actually did something rather than being ignored by the researchers. (An active control is important if you want to distinguish the benefit of your treatment from the benefits of expectation or of responding to researchers’ attentions.) And of these 49, about half of the results came from just six studies.

Overall, the reviewers conclude, no study which is cited in support of brain training products meets the gold standard for best research practices, and few even approached the standard of a good randomised controlled trial (although note that their cut-off for considering papers missed this paper from late last year).

A bit premature

The implications, they argue, are that claims for general benefits of brain training are premature. There’s excellent evidence for benefits of training specific to the task trained on, they conclude, less evidence for enhancement on closely related tasks and little evidence that brain training enhances performance on distantly related tasks or everyday cognitive performance.

The flaws in the studies supporting the benefits of brain training aren’t unique to the study of brain training. Good research is hard and all studies have flaws. Assembling convincing evidence for a treatment takes years, with evidence required from multiple studies and from different types of studies. Indeed, it may yet be that some kind of cognitive training can be shown to have the general benefits that are hoped for from existing brain training exercises. What this review shows is not that brain training can’t work, merely that promotion of brain training exercises is – at the very least – premature based on the current scientific evidence.

Yet in a 2014 survey of US adults, over 50% had heard of brain training exercises and gave some credence to their performance-enhancing powers. Even the name “brain training”, the authors of the review admit, is a concession to marketing – this is how people know these exercises, despite their development having little to do with the brain directly.

The widespread currency of brain training isn’t due to overwhelming evidence of benefits from neuroscience and psychological science, as the review shows, but it does rely on the appearance of being scientifically supported. The billion-dollar market in brain training is parasitic on the credibility of neuroscience and psychology. It also taps into our lazy desire to address complex problems with simple, purchasable solutions (something written about at length by Ben Goldacre in his book Bad Science).

The Simons review ends with recommendations for researchers into brain training, and for journalists reporting on the topic. My favourite was their emphasis that any treatment needs to be considered for its costs, as well as its benefits. By this standard there is no commercial brain training product which has been shown to have greater benefits than something you can do for free. Also important is the opportunity cost: what could you be doing in the time you invest in brain training? The reviewers deliberately decided to focus on brain training, so they didn’t cover the proven and widespread benefits of exercise for mental function, but I’m happy to tell you now that a brisk walk round the park with a friend is not only free, and not only more fun, but has better scientific support for its cognitive-enhancing powers than all the brain training products which are commercially available.


Tom Stafford, Lecturer in Psychology and Cognitive Science, University of Sheffield

This article was originally published on The Conversation. Read the original article.

Hallucinating sleep researchers

I just stumbled across a fascinating 2002 paper where pioneering sleep researcher Allan Hobson describes the effect of a precisely located stroke he suffered. It affected the medulla in his brain stem, important for regulating sleep, and caused total insomnia and a suppression of dreaming.

In one fascinating section, Hobson describes the hallucinations he experienced, likely due to his inability to sleep or dream, which included disconnected body parts and a hallucinated Robert Stickgold – another well known sleep researcher.

Between Days 1 and 10 I could visually perceive a vault over my supine body immediately upon closing my eyes. The vault resembled the bottom of a swimming pool but the gunitelike surface of the vault could be not only aqua, but also white or beige and, more rarely, engraved obsidian or of a gauzelike nature mixed with ice or glass crystals.

There were three categories of formed imagery that appeared on these surfaces. In the first category of geologic forms the imagery tended to be protomorphic and crude but often gave way to the more elaborate structures of category two inanimate sculptural forms.

The most amusing of these (which occurred on the fourth night) were enormous lucite telephone/computers. But there were also tables and tableaux in which the geologic forms sometimes took unusual and bizarre shapes. One that I recall is a TV-set-like representation of a tropical landscape.

In category three, the most elaborate forms have human anatomical elements, including long swirling flesh, columns that metamorphosed into sphincters, nipples, and crotches, but these were never placed in real bodies.

In fact whole body forms almost never emerged. Instead I saw profiles of faces and profiles of bodies which were often inextricably mixed with penises, noses, lips, eyebrows; torsos arose out of the sculptural columns of flesh and sank back into them again.

The most fully realized human images include my wife, featuring her lower anatomy and (most amusingly) a Peter Pan-like Robert Stickgold and two fairies enjoying a bedtime story. While visual disturbances are quite common in Wallenberg’s syndrome, they have only been reported to occur in waking with eyes open.

Blurring of vision (which I had), and the tendency of objects to appear to move called oscillopsia (which I did not have), are attributed to the disturbed oculomotor and vestibular physiology.


Link to locked report of Hobson’s stroke.

A literary case of the exploding head

One of the most commented-upon posts on this blog is this one from 2009, ‘Exploding head syndrome’. The name stems from the 1920s, and describes an under-documented and mysterious condition in which the sufferer experiences a viscerally loud explosion, as if occurring inside their own head.

I’m reading V.S. Naipaul’s “The Enigma of Arrival”, and the autobiographical main character experiences the same thing. Here we are on p93 of my edition of that novel:

In this dream there occurred always, at a critical moment in the dream narrative, what I can only describe as an explosion in my head. It was how every dream ended, with this explosion that threw me flat on my back, in the presence of people, in a street, a crowded room, or wherever, threw me into this degraded posture in the midst of standing people, threw me into the posture of sleep in which I found myself when I awakened. The explosion was so loud, so reverberating and slow in my head that I felt, with the part of my brain that miraculously could still think and draw conclusions, that I couldn’t possibly survive, that I was in fact dying, that the explosion this time, in this dream, regardless of the other dreams that had revealed themselves at the end as dreams, would kill, that I was consciously living through, or witnessing, my own death. And when I awoke my head felt queer, shaken up, exhausted; as though some discharge in my brain had in fact occurred.

The Enigma of Arrival on Goodreads
Vaughan’s 2009 post on Exploding Head Syndrome
Wikipedia: Exploding head syndrome

How curiosity can save you from political tribalism

Neither intelligence nor education can stop you from forming prejudiced opinions – but an inquisitive attitude may help you make wiser judgements.

Ask a left-wing Brit what they believe about the safety of nuclear power, and you can guess their answer. Ask a right-wing American about the risks posed by climate change, and you can also make a better guess than if you didn’t know their political affiliation. Issues like these feel like they should be informed by science, not our political tribes, but sadly, that’s not what happens.

Psychology has long shown that education and intelligence won’t stop your politics from shaping your broader worldview, even when the resulting beliefs don’t match the hard evidence. Instead, your ability to weigh up the facts may depend on a less well-recognised trait – curiosity.

The political lens

There is now a mountain of evidence to show that politics doesn’t just help predict people’s views on some scientific issues; it also affects how they interpret new information. This is why it is a mistake to think that you can somehow ‘correct’ people’s views on an issue by giving them more facts, since study after study has shown that people have a tendency to selectively reject facts that don’t fit with their existing views.

This leads to the odd situation that the people who are most extreme in their anti-science views – for example, sceptics of the risks of climate change – are often more scientifically informed than those who hold the same anti-science views, but less strongly.

But smarter people shouldn’t be susceptible to prejudice swaying their opinions, right? Wrong. Other research shows that people with the most education, highest mathematical abilities, and the strongest tendencies to be reflective about their beliefs are the most likely to resist information which should contradict their prejudices. This undermines the simplistic assumption that prejudices are the result of too much gut instinct and not enough deep thought. Rather, people who have the facility for deeper thought about an issue can use those cognitive powers to justify what they already believe and find reasons to dismiss apparently contrary evidence.

It’s a messy picture, and at first looks like a depressing one for those who care about science and reason. A glimmer of hope can be found in new research from a collaborative team of philosophers, film-makers and psychologists led by Dan Kahan of Yale University.

Kahan and his team were interested in politically biased information processing, but also in studying the audience for scientific documentaries and using this research to help film-makers. They developed two scales. The first measured a person’s scientific background: a fairly standard set of questions asking about knowledge of basic scientific facts and methods, as well as quantitative judgement and reasoning. The second scale was more innovative: it was designed to measure something related but independent – a person’s curiosity about scientific issues, not how much they already knew. It was also innovative in how that curiosity was measured. As well as asking some questions, the researchers gave people choices about what material to read as part of a survey about reactions to the news. If an individual chose to read science stories rather than sport or politics, their science curiosity score was marked up.
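Purely as an illustration of the behavioural idea – the items, topics and weighting below are invented stand-ins, not Kahan’s actual instrument – a scale like this boils down to crediting freely chosen science content alongside self-report answers:

```python
# Hypothetical composite curiosity score: self-report items plus credit for
# freely choosing science stories. Item scoring and weights are invented.
from typing import List

def curiosity_score(self_report: List[int], chosen_topics: List[str]) -> float:
    """Combine Likert-style answers (1-5) with the share of science choices."""
    survey_part = sum(self_report) / (5 * len(self_report))              # 0..1
    science_share = chosen_topics.count("science") / len(chosen_topics)  # 0..1
    return 0.5 * survey_part + 0.5 * science_share

# Example respondent: moderately curious answers, picked science 2 of 4 times.
print(curiosity_score([4, 3, 5], ["science", "sport", "science", "politics"]))  # 0.65
```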

Armed with their scales, the team then set out to see how they predicted people’s opinions on public issues which should be informed by science. With the scientific knowledge scale the results were depressingly predictable. The left-wing participants – liberal Democrats – tended to judge issues such as global warming or fracking as significant risks to human health, safety or prosperity. The right-wing participants – conservative Republicans – were less likely to judge the issues as significant risks. What’s more, the liberals with more scientific background were most concerned about the risks, while the conservatives with more scientific background were least concerned. That’s right – higher levels of scientific education result in greater polarisation between the groups, not less.

So much for scientific background, but scientific curiosity showed a different pattern. Differences between liberals and conservatives still remained – on average there was still a noticeable gap in their estimates of the risks – but their opinions were at least heading in the same direction. For fracking, for example, more scientific curiosity was associated with more concern, for both liberals and conservatives.

The team confirmed this using an experiment which gave participants a choice of science stories, either in line with their existing beliefs, or surprising to them. Those participants who were high in scientific curiosity defied the predictions and selected stories which contradicted their existing beliefs – this held true whether they were liberal or conservative.

And, in case you are wondering, the results hold for issues where political liberalism is associated with anti-science beliefs, such as attitudes to GMOs or vaccinations.

So, curiosity might just save us from using science to confirm our identity as members of a political tribe. It also shows that to promote a greater understanding of public issues, it is as important for educators to try and convey their excitement about science and the pleasures of finding out stuff, as it is to teach people some basic curriculum of facts.

This is my BBC Future column from last week. The original is here. My ebook ‘For argument’s sake: evidence that reason can change minds’ is out now.

Making the personal, geospatial

CC licensed photo by Flickr user Paul Townsend. Click for origin.

There is an old story in London, and it goes like this. Following extensive rioting, there is an impassioned debate about the state of society, with some saying it shows moral decay while others claim it demonstrates the desperation of poverty.

In 1886, London hosted one of its regular retellings when thousands of unemployed people trashed the West End during two days of violent disturbances.

In the weeks of consternation that followed, the press stumbled on the work of wealthy shipowner Charles Booth, who had begun an unprecedented project – mapping poverty across the entire city.

He started the project because he thought Henry Hyndman was bullshitting.

Hyndman, a rather too earnest social campaigner, claimed that 1 in 4 Londoners lived in poverty, a figure Booth scoffed at as a gross exaggeration.

So Booth paid for an impressive team of researchers and sent them out to interview the officials who assessed families for compulsory schooling. From their reports he created a map – initially of the East End, eventually reaching as far west as Hammersmith – of every house and the social state of the family within it.

Each dwelling was classified into one of seven gradations – from “Wealthy; upper middle and upper classes” to “Lowest class; vicious, semi-criminal”. For the first time, deprivation could be seen etched into London’s social landscape.

I suspect that the term ‘vicious’ referred to its older meaning – ‘given to vice’ – rather than cruel. But what Booth created, for the first time and in exceptional detail, was a map of social environments.

The map is amazingly detailed: literally a house-by-house mapping of the whole of London.

The results showed that Hyndman was indeed wrong, but not in the direction Booth assumed. Booth found that 1 in 3 Londoners lived below the poverty line.

If you know a bit about the capital today, you can see how many of the deprived areas from 1886 are still some of the most deprived in 2016.

So I was fascinated when I read about a new study that allows poverty to be mapped from the air, using machine learning to analyse satellite images of Nigeria, Tanzania, Uganda, Malawi, and Rwanda.

But rather than pre-defining what counts as an image of a wealthy area (swimming pools, perhaps?) compared to an impoverished one (unpaved roads, maybe?), they trained a neural network to learn its own associations between image properties and income on an initial set of training data, before trying it out on new data sets.

The neural network could explain up to 75% of the variation in the local economy.
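For a rough sense of what “explaining 75% of the variation” means, here is a minimal, hedged sketch of the general recipe – image-derived features feeding a regression whose cross-validated R² is the fraction of variance explained. The data are synthetic and the model is a plain ridge regression; this illustrates the idea, not the study’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the real pipeline: pretend each village is summarised
# by a vector of image-derived features (in the study itself, a neural network
# learned such features directly from the satellite images), and that average
# consumption is partly predictable from those features plus noise.
rng = np.random.default_rng(0)
n_villages, n_features = 500, 20
image_features = rng.normal(size=(n_villages, n_features))
true_weights = rng.normal(size=n_features)
consumption = image_features @ true_weights + rng.normal(scale=2.0, size=n_villages)

# Regress the economic outcome on the image features; cross-validated R^2 is
# the "fraction of variation explained" quoted for studies like this one.
model = Ridge(alpha=1.0)
r2 = cross_val_score(model, image_features, consumption, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f}")
```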

Knowing both the extent and geography of poverty is massively important. It allows a macro view of something that manifests in very local ways, mapping it to street corners, housing blocks and small settlements.

It makes the vast forces of the economy, personal.

Link to Booth’s poverty map.
Link to Science reporting of satellite mapping study.