The gender similarities hypothesis

There is a popular notion that men and women are very different in their cognitive abilities. The evidence for this may be weaker than you expect. Janet Hyde advances what she calls the ‘gender similarities hypothesis’, ‘which holds that males and females are similar on most, but not all, psychological variables’. In a 2016 review she states:

According to meta-analyses, however, among both children and adults, females perform equally to males on mathematics assessments. The gender difference in verbal skills is small and varies depending on the type of skill assessed (e.g., vocabulary, essay writing). The gender difference in 3D mental rotation shows a moderate advantage for males.

So of three celebrated examples of differences in ability, only one – mental rotation – actually shows a moderate gender difference, and one – verbal skills – a small one. Other abilities show negligible or no gender differences, Hyde concludes. Gender differences in ability may be overinflated in the popular imagination.

Worth noting is that the name of the game here isn’t to find gender differences in behaviour. That’s too easy: women wear more make-up, for example, and men are more likely to wear trousers. The game is to find a measure which reflects some more fundamental aspect of mental capacity. Hence the focus on vocabulary size, mental rotation ability, maths ability and the like. These may be less subject to the vagaries of exactly what is expected of each gender, but that’s a shaky assumption. Indeed, it would be weird if different roles and expectations for men and women didn’t produce different motivations and opportunities to practise cognitive abilities such as these.

The real challenge is to find immutable gender differences, or to track differences in how abilities develop under different conditions. Without this evidence, we’re not going to be sure which gender differences are immutable, and which are contingent on the specific psychological history of particular men and particular women living in our particular societies.

One way of addressing this challenge is to look at how gender differences change across different societies, or across time as society changes. A 2014 study, ‘The changing face of cognitive gender differences in Europe’, did just that, showing that less gender-restricted educational opportunity tended to decrease some gender differences but magnify others. In other words, increasing equality in educational attainment didn’t simply shrink every difference between the sexes.

You can read my take on this in this piece for The Conversation: Are women and men forever destined to think differently?

The Gender Similarities Hypothesis: Hyde, J. S. (2005). The gender similarities hypothesis. American Psychologist, 60(6), 581-592.

2016 update: Hyde, J. S. (2016). Sex and cognition: gender and cognitive functions. Current Opinion in Neurobiology, 38, 53-56.

Previously: Gender brain blogging: Sex differences in brain size, no male and female brain types.

no male and female brain types

What would it mean for there to be a “male brain” or a “female brain”? Human genitals are mostly easy to categorise just by sight as either male or female. It makes sense to talk about there being different male and female types of genitals. What would it mean for the same to be true of brains? Daphna Joel and colleagues, in a 2015 paper Sex beyond the genitalia: The human brain mosaic have a proposal on what needs to hold for us to be able to say there are distinct male and female varieties of brains:

1. particular brain features must be highly dimorphic (i.e., little overlap between the forms of these features in males and females).
and
2. those features which are dimorphic must be consistent for each brain (i.e. a brain has only “male” or only “female” features).

They analyse MRI scans of 1400 human brains and find that these conditions don’t hold. There is extensive overlap, so that categorical brains, defined like this, just don’t exist. They write:

…analyses of internal consistency reveal that brains with features that are consistently at one end of the “maleness-femaleness” continuum are rare. Rather, most brains are comprised of unique “mosaics” of features, some more common in females compared with males, some more common in males compared with females, and some common in both females and males…Our study demonstrates that, although there are sex/gender differences in the brain, human brains do not belong to one of two distinct categories: male brain/female brain.

So the easy gender categorisation we can do on the genitals doesn’t translate to the (usually-unseen) anatomy of the brain. The ‘male/female brain’ doesn’t exist in the same way as the male/female sex organs.
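Joel and colleagues’ consistency criterion can be illustrated with a toy simulation (my sketch, not their actual analysis pipeline – the number of features, the per-feature effect size and the “midpoint” labelling rule are all arbitrary assumptions). Even when every feature differs on average between the sexes, brains that land on the same side of the male–female continuum for every feature turn out to be rare:

```python
import random

# Toy sketch (NOT Joel et al.'s pipeline): simulate brain "features" whose
# distributions differ modestly by sex, then count how many simulated brains
# are internally consistent, i.e. "male-typical" or "female-typical" on
# *every* feature. N_FEATURES and the per-feature effect size D are
# illustrative assumptions.

random.seed(42)

N_BRAINS, N_FEATURES = 1000, 10
D = 0.5  # assumed per-feature effect size: moderate overlap between sexes

def simulate(sex):
    # each feature ~ Normal(shifted mean for males, sd = 1)
    shift = D if sex == "M" else 0.0
    return [random.gauss(shift, 1.0) for _ in range(N_FEATURES)]

brains = [simulate("M") for _ in range(N_BRAINS // 2)] + \
         [simulate("F") for _ in range(N_BRAINS // 2)]

midpoint = D / 2  # call a feature "male-typical" above this, "female-typical" below

def consistent(brain):
    sides = {feature > midpoint for feature in brain}
    return len(sides) == 1  # every feature falls on the same side

frac = sum(consistent(b) for b in brains) / len(brains)
print(f"internally consistent brains: {frac:.1%}")
```

With overlapping per-feature distributions like these, only a tiny fraction of simulated brains is consistently at one end of the continuum – most are “mosaics”, which is the shape of Joel et al.’s finding.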

Context for this is that there are differences between the average male and average female brain (for overall size, at least, these differences are large). Although there may not be categorical types, a follow-up analysis showed that it is possible to classify the brains used in the Joel paper as belonging to a man or a woman with somewhere between 69% and 77% accuracy. A related study, on a different data set, claimed 93% classification accuracy.

Paper: Joel, D., Berman, Z., Tavor, I., Wexler, N., Gaber, O., Stein, Y., … & Liem, F. (2015). Sex beyond the genitalia: The human brain mosaic. Proceedings of the National Academy of Sciences, 112(50), 15468-15473.

Responses: Del Giudice, M., Lippa, R. A., Puts, D. A., Bailey, D. H., Bailey, J. M., & Schmitt, D. P. (2016). Joel et al.’s method systematically fails to detect large, consistent sex differences. Proceedings of the National Academy of Sciences, 113(14), E1965-E1965.

Chekroud, A. M., Ward, E. J., Rosenberg, M. D., & Holmes, A. J. (2016). Patterns in the human brain mosaic discriminate males from females. Proceedings of the National Academy of Sciences, 113(14), E1968-E1968.

The responses are linked to in Debra Soh’s LA Times article Are gender feminists and transgender activists undermining science?

Betteridge’s Law

Previously: gender brain blogging

Sex differences in brain size

Next time someone asks you “Are men and women’s brains different?”, you can answer, without hesitation, “Yes”. Not only do they tend to be found in different types of bodies, but they are different sizes. Men’s are typically larger by something like 130 cubic centimeters.

Not only are they actually larger, but they are larger even once you take into account body size (i.e. men’s brains are bigger even when accounting for the fact that heavier and/or taller people will tend to have bigger heads and brains, and that men tend to be heavier and taller than women). And this is despite the fact that there is no difference in brain size at birth – the sex difference in brain volume development seems to begin around age two. (Side note: no difference in brain volume between male and female cats.)

But is this difference in brain volume a lot? There’s substantial variation in brain volume between individuals, within each sex as well as between the sexes. What does ~130cc mean in the context of this variation? One way of thinking about it is in terms of standardised effect size, which expresses the difference between two population averages in standard units based on the variation within those populations.

Here’s a good example – we all know that men are taller than women. Not all men are taller than all women, but men tend to be taller. With the effect size, we can precisely express this vague idea of ‘tend to be’. The (Cohen’s d) effect size statistic of the height difference between men and women is ~1.72.

What this means is that the distribution of heights in the two populations can be visualised like this:

With this spread of heights, the average man is taller than 95.7% of women.

Estimates of the effect size of total brain volume vary, but a reasonable value is about 1.3, which looks like this:

This means that the average man has a larger brain, by volume, than 90% of the female population.

For reference, psychology experiments typically look at phenomena with effect sizes of the order of ~0.4, which looks like this:

This means that the average of group A exceeds 65.5% of group B.
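Assuming – as these visualisations do – two normal distributions with equal spread, the percentage figures quoted here follow directly from evaluating the standard normal cumulative distribution function at d. A minimal sketch:

```python
from math import erf, sqrt

# For two equal-variance normal distributions separated by Cohen's d,
# the fraction of group B below the average of group A is the standard
# normal CDF evaluated at d.

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

for label, d in [("height", 1.72), ("brain volume", 1.3), ("typical psych effect", 0.4)]:
    print(f"{label}: d = {d} -> average exceeds {phi(d):.1%} of the other group")
```

Running this reproduces the figures above: roughly 95.7%, 90% and 65.5% respectively.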

In this context, human sexual dimorphism in brain volume is an extremely large effect.

So when they ask “Are men and women’s brains different?”, you can unhesitatingly say, “yes”. And when they ask “And what does that mean for differences in how they think” you can say “Ah, now that’s a different issue”.

Link: meta-analysis of male-female differences in brain structure.

Kristoffer Magnusson’s awesome interactive effect size visualisation

Previously: gendered brain blogging

Edit 8/2/17: Andy Fugard pointed out that there are many different measures of effect size, and I only discuss/use one: the Cohen’s d effect size. I’ve edited the text to make this clearer.

Edit 2 (8/2/17): Kevin Mitchell points out this paper that claims sex differences in brain size are already apparent in neonates

How to overcome bias

How do you persuade somebody of the facts? Asking them to be fair, impartial and unbiased is not enough. To explain why, psychologist Tom Stafford analyses a classic scientific study.

One of the tricks our mind plays is to highlight evidence which confirms what we already believe. If we hear gossip about a rival we tend to think “I knew he was a nasty piece of work”; if we hear the same about our best friend we’re more likely to say “that’s just a rumour”. If you don’t trust the government then a change of policy is evidence of their weakness; if you do trust them the same change of policy can be evidence of their inherent reasonableness.

Once you learn about this mental habit – called confirmation bias – you start seeing it everywhere.

This matters when we want to make better decisions. Confirmation bias is OK as long as we’re right, but all too often we’re wrong, and we only pay attention to the deciding evidence when it’s too late.

How we should protect our decisions from confirmation bias depends on why, psychologically, confirmation bias happens. There are, broadly, two possible accounts, and a classic experiment from researchers at Princeton University pits the two against each other, revealing in the process a method for overcoming bias.

The first theory of confirmation bias is the most common. It’s the one you can detect in expressions like “You just believe what you want to believe”, or “He would say that, wouldn’t he?”, or when someone is accused of seeing things a particular way because of who they are, what their job is or which friends they have. Let’s call this the motivational theory of confirmation bias. It has a clear prescription for correcting the bias: change people’s motivations and they’ll stop being biased.

The alternative theory of confirmation bias is more subtle. The bias doesn’t exist because we only believe what we want to believe, but instead because we fail to ask the correct questions about new information and our own beliefs. This is a less neat theory, because there could be one hundred reasons why we reason incorrectly – everything from limitations of memory to inherent faults of logic. One possibility is that we simply have a blindspot in our imagination for the ways the world could be different from how we first assume it is. Under this account the way to correct confirmation bias is to give people a strategy to adjust their thinking. We assume people are already motivated to find out the truth, they just need a better method. Let’s call this the cognition theory of confirmation bias.

Thirty years ago, Charles Lord and colleagues published a classic experiment which pitted these two theories against each other. Their study built on an earlier persuasion experiment which had shown a kind of confirmation bias they called ‘biased assimilation’. Here, participants were recruited who had strong pro- or anti-death penalty views and were presented with evidence that seemed to support the continuation or abolition of the death penalty. Obviously, depending on what you already believe, this evidence is either confirmatory or disconfirmatory. The original finding was that the nature of the evidence didn’t matter as much as what people started out believing. Confirmatory evidence strengthened people’s views, as you’d expect, but so did disconfirmatory evidence. That’s right: anti-death penalty people became more anti-death penalty when shown pro-death penalty evidence (and vice versa). A clear example of biased reasoning.

For their follow-up study, Lord and colleagues re-ran the biased assimilation experiment, but testing two types of instructions for assimilating evidence about the effectiveness of the death penalty as a deterrent for murder. The motivational instructions told participants to be “as objective and unbiased as possible”, to consider themselves “as a judge or juror asked to weigh all of the evidence in a fair and impartial manner”. The alternative, cognition-focused, instructions were silent on the desired outcome of the participants’ consideration, instead focusing only on the strategy to employ: “Ask yourself at each step whether you would have made the same high or low evaluations had exactly the same study produced results on the other side of the issue.” So, for example, if presented with a piece of research that suggested the death penalty lowered murder rates, the participants were asked to analyse the study’s methodology and imagine the results pointed the opposite way.

They called this the “consider the opposite” strategy, and the results were striking. Instructed to be fair and impartial, participants showed the exact same biases when weighing the evidence as in the original experiment. Pro-death penalty participants thought the evidence supported the death penalty. Anti-death penalty participants thought it supported abolition. Wanting to make unbiased decisions wasn’t enough. The “consider the opposite” participants, on the other hand, completely overcame the biased assimilation effect – they weren’t driven to rate the studies which agreed with their preconceptions as better than the ones that disagreed, and didn’t become more extreme in their views regardless of which evidence they read.

The finding is good news for our faith in human nature. It isn’t that we don’t want to discover the truth, at least in the microcosm of reasoning tested in the experiment. All people needed was a strategy which helped them overcome the natural human short-sightedness to alternatives.

The moral for making better decisions is clear: wanting to be fair and objective alone isn’t enough. What’s needed are practical methods for correcting our limited reasoning – and a major limitation is our imagination for how else things might be. If we’re lucky, someone else will point out these alternatives, but if we’re on our own we can still take advantage of crutches for the mind like the “consider the opposite” strategy.

This is my BBC Future column from last week. You can read the original here. My ebook For argument’s sake: Evidence that reason can change minds is out now.

Can boy monkeys throw?

Aimed throwing is a gendered activity – men are typically better at it than women (by about 1 standard deviation, some studies claim). Obviously this could be due to differential practice, which is in turn due to cultural bias in what men vs women are expected to be good at and enjoy (some say “not so” to this practice-effect explanation).

Monkeys are interesting because they are close evolutionary relatives, but don’t have human gender expectations. So we note with interest this 2000 study which claims no difference in throwing accuracy between male and female Capuchin monkeys. In fact, the female monkeys were (non-significantly) more accurate than the males (perhaps due to throwing as part of Capuchin female sexual displays?).

Elsewhere, a review of cross-species gender differences in spatial ability finds “most of the hypotheses [that male mammals have better spatial ability than females] are either logically flawed or, as yet, have no substantial support. Few of the data exclusively support or exclude any current hypotheses”.

Chimps are closer relatives to humans than monkeys, but although there is a literature on gendered differences in object use/preference among chimps, I couldn’t immediately find anything on gendered differences in throwing among chimps. Possibly because few scientists want to get near a chimp when it is flinging sh*t around.

Cite: Westergaard, G. C., Liv, C., Haynie, M. K., & Suomi, S. J. (2000). A comparative study of aimed throwing by monkeys and humans. Neuropsychologia, 38(11), 1511-1517.

Previously: gendered brain blogging

Gender brain blogging

I’ve started teaching a graduate seminar on the cognitive neuroscience of sex-differences. The ambition is to carry out a collective close-reading of Cordelia Fine’s “Delusions of Gender: The Real Science Behind Sex Differences” (US: “How Our Minds, Society, and Neurosexism Create Difference“). Week by week the class is going to extract the arguments and check the references from each chapter of Fine’s book.

I mention this to explain why there is likely to be an increase in the number of gender-themed posts by me to mindhacks.com.

Here’s Fine summarising her argument in the introduction to the 2010 book:

There are sex differences in the brain. There are also large […] sex differences in who does what and who achieves what. It would make sense if these facts were connected in some way, and perhaps they are. But when we follow the trail of contemporary science we discover a surprising number of gaps, assumptions, inconsistencies, poor methodologies and leaps of faith.

This is a book about how science works, and how it is made to work, as much as it is a book about gender. It’s the Bad Science of cognitive neuroscience. Essential.

The troubled friendship of Tversky and Kahneman

Daniel Kahneman, by Pat Kinsella (detail)
Daniel Kahneman, by Pat Kinsella for the Chronicle Review (detail)

Writer Michael Lewis’s new book, “The Undoing Project: The Friendship That Changed Our Minds”, is about two of the most important figures in modern psychology, Amos Tversky and Daniel Kahneman.

In this extract for the Chronicle of Higher Education, Lewis describes the emotional tension between the pair towards the end of their collaboration. It’s a compelling ‘behind the scenes’ view of the human side to the foundational work of the heuristics and biases programme in psychology, as well as being brilliantly illustrated by Pat Kinsella.

One detail that caught my eye is this response by Amos Tversky to a critique of the work he did with Kahneman. As well as being something I’ve wanted to write myself on occasion, it illustrates the forthrightness which made Tversky a productive and difficult colleague:

the objections you raised against our experimental method are simply unsupported. In essence, you engage in the practice of criticizing a procedural departure without showing how the departure might account for the results obtained. You do not present either contradictory data or a plausible alternative interpretation of our findings. Instead, you express a strong bias against our method of data collection and in favor of yours. This position is certainly understandable, yet it is hardly convincing.

Link: A Bitter Ending: Daniel Kahneman, Amos Tversky, and the limits of collaboration

Annette Karmiloff-Smith has left the building

The brilliant developmental neuropsychologist Annette Karmiloff-Smith has passed away, and one of the brightest lights in the psychology of children’s development has been dimmed.

She actually started her professional life as a simultaneous interpreter for the UN and then went on to study psychology and trained with Jean Piaget.

Karmiloff-Smith went into neuropsychology and started rethinking some of the assumptions about how cognition is organised in the brain – assumptions which, until then, had almost entirely been based on studies of adults with brain injury.

These studies showed that some mental abilities could be independently impaired after brain damage suggesting that there was a degree of ‘modularity’ in the organisation of cognitive functions.

But Karmiloff-Smith investigated children with developmental disorders, like autism or Williams syndrome, and showed that what seemed to be the ‘natural’ organisation of the brain in adults was actually a result of development itself – an approach she called neuroconstructivism.

In other words, developmental disorders were not ‘knocking out’ specific abilities but affecting the dynamics of neurodevelopment as the child interacted with the world.

If you want to hear more of Karmiloff-Smith’s life and work, her interview on BBC Radio 4’s The Life Scientific is well worth a listen.
 

Link to page of remembrance for Annette Karmiloff-Smith.

echo chambers: old psych, new tech

If you were surprised by the result of the Brexit vote in the UK or by the Trump victory in the US, you might live in an echo chamber – a self-reinforcing world of people who share the same opinions as you. Echo chambers are a problem, and not just because it means some people make incorrect predictions about political events. They threaten our democratic conversation, splitting up the common ground of assumption and fact that is needed for diverse people to talk to each other.

Echo chambers aren’t just a product of the internet and social media, however, but of how those things interact with fundamental features of human nature. Understand these features of human nature and maybe we can think creatively about ways to escape them.

Built-in bias

One thing that drives echo chambers is our tendency to associate with people like us. Sociologists call this homophily. We’re more likely to make connections with people who are similar to us. That’s true for ethnicity, age, gender, education and occupation (and, of course, geography), as well as a range of other dimensions. We’re also more likely to lose touch with people who aren’t like us, further strengthening the niches we find ourselves in. Homophily is one reason obesity can seem contagious – people who are at risk of gaining weight are disproportionately more likely to hang out with each other and share an environment that encourages obesity.

Another factor that drives the echo chamber is our psychological tendency to seek information that confirms what we already know – often called confirmation bias. Worse, even when presented with evidence to the contrary, we show a tendency to dismiss it and even harden our convictions. This means that even if you break into someone’s echo chamber armed with facts that contradict their view, you’re unlikely to persuade them with those facts alone.

News as information and identity

More and more of us get our news primarily from social media and use that same social media to discuss the news.

Social media takes our natural tendencies to associate with like-minded people and to seek information that confirms our convictions, and amplifies both. Dan Kahan, professor of law and psychology at Yale, describes each of us as switching between two modes of information processing – identity-affirming and truth-seeking. The result is that for issues that, for whatever reason, become associated with a group identity, even the most informed or well-educated can believe radically different things, because believing those things is tied up with signalling group identity more than with the pursuit of evidence.

Mitigating human foibles

Where we go from here isn’t clear. The fundamentals of human psychology won’t just go away, but they do change depending on the environment we’re in. If technology and the technological economy reinforce the echo chamber, we can work to reshape these forces so as to mitigate it.

We can recognise that a diverse and truth-seeking media is a public good. That means it is worth supporting – both in established forms like the BBC, and in new forms like Wikipedia and The Conversation.

We can support alternative funding models for non-public media. Paying for news may seem old-fashioned, but there are long-term benefits. New ways of doing it are popping up. Services such as Blendle let you access news stories that are behind a pay wall by offering a pay-per-article model.

Technology can also help with individual solutions to the echo chamber, if you’re so minded. For Twitter users, otherside.site lets you view the feed of any other Twitter user, so if you want to know what Nigel Farage or Donald Trump read on Twitter, you can. (I wouldn’t bother with Trump. He only follows 41 people – mostly family and his own businesses. Now that’s an echo chamber.)

For Facebook users, politecho.org is a browser extension that shows the political biases of your friends and Facebook newsfeed. If you want a shortcut, this Wall Street Journal article puts Republican and Democratic Facebook feeds side-by-side.

Of course, these things don’t remove the echo chamber, but they do highlight the extent to which you’re in one, and – as with other addictions – recognising that you have a problem is the first step to recovery.

This article was originally published on The Conversation. Read the original article.

Is psychosis an ‘immune disorder’?

A fascinating new study has just been published which found evidence for the immune system attacking a neuroreceptor in the brain in a small proportion of people with psychosis. It’s an interesting study that probably reflects what’s going to be a cultural tipping point for the idea of ‘immune system mental health problems’ or ‘madness as inflammation disorder’ but it’s worth being a little wary of the coming hype.

This new study, published in The Lancet Psychiatry, did blood tests on people who presented with their first episode of psychosis and looked for antibodies that attack specific receptors in the brain. Receptors are what receive neurotransmitters – the brain’s chemical signals – and allow information to be transferred around the nervous system, so disruption to these can cause brain disturbances.

The most scientifically interesting finding is that the research team found a type of antibody that attacks NMDA receptors in 7 patients (3%) out of 228, but zero controls.

The study found markers for other neuroreceptors that the immune system was attacking, but the reason the NMDA finding is so crucial is because it shows evidence of a condition called anti-NMDA receptor encephalitis which is known to cause episodes of psychosis that can be indistinguishable from ‘regular’ psychosis but for which the best treatment is dealing with the autoimmune problem.

It was only discovered in 2007 but there has been a long-running suspicion that it may be the best explanation for a small minority of cases of psychosis which can be easily misdiagnosed as schizophrenia.

Importantly, the findings from this research have been supported by another independent study that has just been published online. The two studies used different ranges for the concentration of NMDA antibodies they measured, but they came up with roughly the same figures.

It also chimes with a growing debate about the role of the immune system in mental health. A lot of this evidence is circumstantial but suggestive. For example, many of the genes associated (albeit weakly) with the diagnosis of schizophrenia are involved in the immune system – particularly in coding proteins for the major histocompatibility complex.

However, it’s worth being a little circumspect about this new enthusiasm for thinking of psychosis as an ‘immune disorder’.

Importantly, these new studies did blood tests, rather than checking cerebrospinal fluid – the fluid your brain floats in, which lies on the other side of the blood-brain barrier – so we can’t be sure that these antibodies were actually affecting the brain in everyone found to have them. It’s likely, but not certain.

Also, we’re not sure to what extent anti-NMDA antibodies contribute to the chance of developing psychosis in general. Certainly there are some cases where they seem to be the main cause, but we’re not sure how far that holds for everyone.

It’s also worth bearing in mind that the science over the role of the genes associated with the schizophrenia diagnosis in the immune system is certainly not settled. A recent large study compared the role of these genes in schizophrenia to known autoimmune disorders and concluded that the genes just don’t look like they’re actually impacting on the immune system.

There’s also a constant background of cultural enthusiasm in psychiatry for identifying ‘biomarkers’, and anything that looks like a clear common biological pathway, even for a small number of cases of a ‘psychiatric’ problem, gets a lot of airtime.

Curiously, in this case, Hollywood may also play a part.

A film called Brain on Fire has just been shown at film festivals and is being tested for a possible big release. It’s based on the (excellent) book of the same name by journalist Susannah Cahalan and describes her experience of developing psychosis, only for it later to be discovered that she had anti-NMDA receptor encephalitis.

Hollywood has historically had a big effect on discussions about mental health and you can be sure that if the movie becomes a hit, popular media will be alive with discussions on ‘whether your mental health problems are really an immune problem’.

But taking a less glitzy view, in terms of these new studies, they probably reflect that a small percentage of people with psychosis, maybe 1-2%, have NMDA receptor-related immune problems that play an important role in the generation of their mental health difficulties.

It’s important not to underestimate the importance of these findings. It could potentially translate into more effective treatment for millions of people a year globally.

But in terms of psychosis as a whole, for which we know social adversity in its many forms plays a massive role, it’s just a small piece of the puzzle.
 

Link to locked Lancet Psychiatry study.

How liars create the illusion of truth

Repetition makes a fact seem more true, regardless of whether it is or not. Understanding this effect can help you avoid falling for propaganda, says psychologist Tom Stafford.

“Repeat a lie often enough and it becomes the truth” is a law of propaganda often attributed to the Nazi Joseph Goebbels. Among psychologists, something like this is known as the “illusion of truth” effect. Here’s how a typical experiment on the effect works: participants rate how true trivia items are, things like “A prune is a dried plum”. Sometimes these items are true (like that one), but sometimes participants see a parallel version which isn’t true (something like “A date is a dried plum”).

After a break – of minutes or even weeks – the participants do the procedure again, but this time some of the items they rate are new, and some they saw before in the first phase. The key finding is that people tend to rate items they’ve seen before as more likely to be true, regardless of whether they are true or not, and seemingly for the sole reason that they are more familiar.

So, here, captured in the lab, seems to be the source for the saying that if you repeat a lie often enough it becomes the truth. And if you look around yourself, you may start to think that everyone from advertisers to politicians are taking advantage of this foible of human psychology.

But a reliable effect in the lab isn’t necessarily an important effect on people’s real-world beliefs. If you really could make a lie sound true by repetition, there’d be no need for all the other techniques of persuasion.

One obstacle is what you already know. Even if a lie sounds plausible, why would you set what you know aside just because you heard the lie repeatedly?

Recently, a team led by Lisa Fazio of Vanderbilt University set out to test how the illusion of truth effect interacts with our prior knowledge. Would it affect our existing knowledge? They used paired true and un-true statements, but also split their items according to how likely participants were to know the truth (so “The Pacific Ocean is the largest ocean on Earth” is an example of a “known” item, which also happens to be true, and “The Atlantic Ocean is the largest ocean on Earth” is an un-true item, for which people are likely to know the actual truth).

Their results show that the illusion of truth effect worked just as strongly for known as for unknown items, suggesting that prior knowledge won’t prevent repetition from swaying our judgements of plausibility.

To cover all bases, the researchers performed one study in which the participants were asked to rate how true each statement seemed on a six-point scale, and one where they just categorised each fact as “true” or “false”. Repetition pushed the average item up the six-point scale, and increased the odds that a statement would be categorised as true. For statements that were actually fact or fiction, known or unknown, repetition made them all seem more believable.

At first this looks like bad news for human rationality, but – and I can’t emphasise this strongly enough – when interpreting psychological science, you have to look at the actual numbers.

What Fazio and colleagues actually found is that the biggest influence on whether a statement was judged to be true was… whether it actually was true. The repetition effect couldn’t mask the truth. With or without repetition, people were still more likely to believe the actual facts as opposed to the lies.

This shows something fundamental about how we update our beliefs – repetition has the power to make things sound more true, even when we know differently, but it doesn’t over-ride that knowledge.

The next question has to be, why might that be? The answer is to do with the effort it takes to be rigidly logical about every piece of information you hear. If every time you heard something you assessed it against everything you already knew, you’d still be thinking about breakfast at supper-time. Because we need to make quick judgements, we adopt shortcuts – heuristics which are right more often than wrong. Relying on how often you’ve heard something to judge how truthful it feels is just one strategy. Any universe where truth gets repeated more often than lies, even if only 51% vs 49%, will be one where this is a quick and dirty rule for judging facts.
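You can check that arithmetic with a toy simulation (my own sketch – the 51/49 split and every other number here is made up for illustration). A listener who judges truth purely by familiarity ends up right slightly more often than wrong, and the stronger the bias towards repeating truths, the better the heuristic does:

```python
import random

def familiarity_heuristic_accuracy(p_true_repeat=0.51, n_statements=100_000, seed=0):
    """Simulate a world where true statements get repeated slightly more
    often than false ones, and a listener judges truth purely by whether
    a statement feels familiar. Returns the fraction of correct judgements."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_statements):
        is_true = rng.random() < 0.5                      # half the statements are true
        # Probability of having heard it before depends (slightly) on truth
        p_repeat = p_true_repeat if is_true else 1 - p_true_repeat
        heard_before = rng.random() < p_repeat
        judged_true = heard_before                        # the heuristic: familiar => true
        if judged_true == is_true:
            correct += 1
    return correct / n_statements
```

In a 51/49 world the heuristic scores about 51% correct – barely better than chance, but better, which is all a quick and dirty rule needs to be; crank the repetition bias up and the heuristic gets correspondingly more reliable.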

If repetition was the only thing that influenced what we believed we’d be in trouble, but it isn’t. We can all bring to bear more extensive powers of reasoning, but we need to recognise they are a limited resource. Our minds are prey to the illusion of truth effect because our instinct is to use short-cuts in judging how plausible something is. Often this works. Sometimes it is misleading.

Once we know about the effect we can guard against it. Part of this is double-checking why we believe what we do – if something sounds plausible, is it because it really is true, or have we just been told that repeatedly? This is why scholars are so mad about providing references – so we can track the origin of any claim, rather than having to take it on faith.

But part of guarding against the illusion is the obligation it puts on us to stop repeating falsehoods. We live in a world where the facts matter, and should matter. If you repeat things without bothering to check if they are true, you are helping to make a world where lies and truth are easier to confuse. So, please, think before you repeat.

This is my BBC Future column from the other week, the original is here. For more on this topic, see my ebook : For argument’s sake: evidence that reason can change minds (smashwords link here)

reinforcing your wiser self

Nautilus has a piece by David Perezcassar on how technology takes advantage of our animal instinct for variable reward schedules (Unreliable rewards trap us into addictive cell phone use, but they can also get us out).

It’s a great illustrated read about the scientific history of the ideas behind ‘persuasive technology’, and ends with a plea that perhaps we can hijack our weakness for variable reward schedules for better ends:

What if we set up a variable reward system to reward ourselves for the time spent away from our phones & physically connecting with others? Even time spent meditating or reading without technological distractions is a heroic endeavor worthy of a prize.

Which isn’t a bad idea, but the pattern of the reward schedule is only one factor in what makes an activity habit forming. The timing of a reward is more important than its reliability – it’s easier to train in habits with immediate rather than delayed rewards. The timing is so crucial that in the animal learning literature even a delay of 2 seconds between a lever press and the delivery of a food pellet impairs learning in rats. In experiments we did with humans, a delay of 150ms was enough to hinder our participants connecting their own actions with a training signal.

So the dilemma for persuasive technology, and anyone who wants to free themselves from its hold, is not just how phones/emails/social media structure our rewards, but also the fact that they allow gratification at almost any moment. There are always new notifications, new news, and so phones let us have zero delay for the reward of checking our phones. If you want to focus on other things – like being a successful parent, friend or human – the delays on those rewards are far larger (not to mention more nebulous).

The way I like to think about it is the conflict between the impatient, narrow, smaller self – the self that likes sweets and gossip and all things immediate gratification – and the wider, wiser self – the self that invests in the future and cares about the bigger picture. That self can win out, does win out as we make our stumbling journey into adulthood, but my hunch is we’re going to need a different framework from the one of reinforcement learning to do it.
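That conflict between the two selves can even be put in numbers. Under hyperbolic discounting – the value function the animal learning literature suggests, V = A / (1 + kD) for an amount A at delay D – preferences flip as a reward draws near: viewed from a distance the larger-later reward wins, but up close the smaller-sooner one takes over. Here’s a minimal sketch (the reward sizes, delays and k are made-up numbers, purely for illustration):

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Hyperbolically discounted value: V = amount / (1 + k * delay)."""
    return amount / (1 + k * delay)

def preferred(now_to_small, k=1.0):
    """Which option wins, judged `now_to_small` time units before the
    smaller-sooner reward arrives? Hypothetical choice: 10 units soon
    versus 20 units four time units later."""
    v_small = hyperbolic_value(10, now_to_small, k)
    v_large = hyperbolic_value(20, now_to_small + 4, k)
    return "smaller-sooner" if v_small > v_large else "larger-later"
```

Judged ten steps out, `preferred(10)` picks the larger-later reward (20/15 beats 10/11); judged one step out, `preferred(1)` flips to the smaller-sooner one (10/2 beats 20/6) – the impatient self winning at the last minute.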

Nautilus article: Unreliable rewards trap us into addictive cell phone use, but they can also get us out

Mindhacks.com: post about reinforcement schedules, and how they might be used to break technology compulsion (from 2006 – just sayin’)

George Ainslie’s book Breakdown of Will is what happens if you go so deep into the reinforcement learning paradigm you explode its reductionism and reinvent the notion of the self. Mind-alteringly good.

Do students know what’s good for them?

Of course they do, and of course they don’t.

Putting a student at the centre of their own learning seems like fundamental pedagogy. The Constructivist approach to education emphasises the need for knowledge to be reassembled in the mind of the learner, and the related impossibility of its direct transmission from the mind of the teacher. Believe this, and student input into how they learn must follow.

At the same time, we know there is a deep neurobiological connection between the machinery of reward in our brain, and that of learning. Both functions seem to be entangled in the subcortical circuitry of a network known as the basal ganglia. It’s perhaps not surprising that curiosity, which we all know personally to be a powerful motivator of learning, activates the same subcortical circuitry involved in the pleasurable anticipation of reward. Further, curiosity enhances memory, even for things you learn while your curiosity is aroused about something else.

This neurobiological alignment of enjoyment and learning isn’t mere coincidence. When building learning algorithms for embedding in learning robots, the basic rules of learning from experience have to be augmented with a drive to explore – curiosity! – so that they don’t become stuck repeating suboptimal habits. Whether it is motivated by curiosity or other factors, exploration seems to support enhanced learning in a range of domains from simple skills to more complex ideas.
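The robot example can be sketched in a few lines. In a minimal two-armed bandit (my own toy construction – the function names, payoffs and epsilon value are all made up for illustration), a purely greedy learner latches onto the first action that ever paid off, while a small “curiosity” probability of trying something at random is enough to find the better option:

```python
import random

def run_bandit(epsilon, steps=1000, seed=0):
    """Two-armed bandit: arm 0 always pays 1.0, arm 1 always pays 2.0.
    An epsilon-greedy learner explores with probability epsilon and
    otherwise picks the arm with the highest estimated payoff.
    Returns total reward collected."""
    rng = random.Random(seed)
    rewards = [1.0, 2.0]
    estimates = [0.0, 0.0]
    counts = [0, 0]
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(2)                            # explore: curiosity
        else:
            arm = max(range(2), key=lambda a: estimates[a])   # exploit best guess
        r = rewards[arm]
        counts[arm] += 1
        estimates[arm] += (r - estimates[arm]) / counts[arm]  # running mean update
        total += r
    return total
```

With epsilon = 0 the learner tries arm 0 first, finds it pays, and repeats it forever – a stable but suboptimal habit. With epsilon = 0.1 the occasional random try discovers the better arm and the learner ends up collecting close to twice the reward.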

Obviously we learn best when motivated, and when learning is fun, and allowing us to explore our curiosity is a way to achieve both. However, putting the trajectory of their experience into students’ hands can go awry.

False beliefs impede learning

One reason is false beliefs about how much we know, or how we learn best. Psychologists studying memory have long documented such metacognitive errors, which include overconfidence, and a mistaken reliance on our familiarity with a thing as a guide to how well we understand it, or how well we’ll be able to recall it when tested (recognition and recall are in fact different cognitive processes). Sure enough, when tested in experiments people will over-rely on ineffective study strategies (like rereading, or reviewing the answers to questions, rather than testing their ability to generate the answers from the questions). Cramming is another ineffective study strategy, with experiment after experiment showing the benefit of spreading out your study rather than massing it all together. Obviously this requires being more organised, but my belief is that a metacognitive error supports students’ over-reliance on cramming – cramming feels good, because, for a moment, you feel familiar with all the information. The problem is that this feel-good familiarity isn’t the kind of memory that will support recall in an exam, but immature learners often don’t realise the extent of that.

In agreement with these findings from psychologists, education scholars have reacted against pure student-led or discovery learning, with one review summarising the findings from multiple distinct research programmes taking place over three decades: “In each case, guided discovery was more effective than pure discovery in helping students learn and transfer”.

The solution: balancing guided and discovery learning

This leaves us at a classic “middle way”, where pure student-led or teacher-led learning is ruled out. Some kind of guided exploration, structured study, or student choice in learning is obviously a necessity, but we’re not sure how much.

There’s an exciting future for research which informs us what the right blend of guided and discovery learning is, and which students and topics suit which exact blend. One strand of this is to take the cognitive psychology experiments which demonstrate a benefit of active choice learning over passive instruction and to tweak them so that we can see when passive instruction can be used to jump-start or augment active choice learning. One experiment from Kyle MacDonald and Michael Frank of Stanford University used a highly abstract concept learning task in which participants use trial and error to figure out a categorisation of different shapes. Previous research had shown that people learned faster if they were allowed to choose their own examples to receive feedback on, but this latest iteration of the experiment from MacDonald and Frank showed that an initial session of passive learning, where the examples were chosen for the learner, boosted performance even further. Presumably this effect is due to the scaffolding in the structure of the concept-space that the passive learning gives the learner. Experiments like this make it possible to show when and how active learning and instructor-led learning can be blended.

Education is about more than students learning the material on the syllabus. There is a meta-goal of producing students who are better able to learn for themselves. The same cognitive machinery in all of us might push us towards less effective strategies. The simple fact of being located within our own selfish consciousness means that even the best performers in the world need a coach to help them learn. But as we mature we can learn to better avoid pitfalls in our learning and evolve into better self-determining students. Ultimately the best education needs to keep its focus on that need to help each of us take on more and more responsibility for how we learn, whether that means submitting to others’ choices or exploring things for ourselves – or, often, a bit of both.

This post originally appeared on the NPJ ‘Science of Learning’ Community

The hidden history of war on terror torture

The Hidden Persuaders project has interviewed neuropsychologist Tim Shallice about his opposition to the British government’s use of ‘enhanced interrogation’ in the Northern Ireland conflict of the 1970s – a practice eventually abandoned as torture.

Shallice is little known to the wider public but is one of the most important and influential neuropsychologists of his generation, having pioneered the systematic study of neurological problems as a window on typical cognitive function.

One of his first papers, however, was not on brain injury but an article titled ‘Ulster depth interrogation techniques and their relation to sensory deprivation research’, in which he set out a cognitive basis for why the ‘five techniques’ – wall-standing, hooding, white noise, sleep deprivation, and deprivation of food and drink – amounted to torture.

Shallice traces a link between the use of these techniques and research on sensory deprivation – which was investigated both by regular scientists for reasons of scientific curiosity, and as we learned later, by intelligence services while trying to understand ‘brain washing’.

The use of these techniques in Northern Ireland was subject to an official investigation, and Shallice and other researchers testified to the Parker Committee, which led Prime Minister Edward Heath to ban the practice.

If those techniques sound eerily familiar, it is because they formed the basis of interrogation practices at Guantanamo Bay and other notorious sites in the ‘war on terror’.

The Hidden Persuaders is a research project at Birkbeck, University of London, which is investigating the history of ‘brainwashing’. It traces the practice to its use by the British during the colonisation of Yemen, who seemed to have borrowed it from the KGB.

And if you want to read about the modern day effects of the abusive techniques, The New York Times has just published a disturbing feature article about the long-term consequences of being tortured in Guantanamo and other ‘black sites’ by following up many of the people subjected to the brutal techniques.

Link to Hidden Persuaders interview with Tim Shallice.
Link to NYT on long-term legacy of war on terror torture.

Does ‘brain training’ work?

You’ve probably heard of “brain training exercises” – puzzles, tasks and drills which claim to keep you mentally agile. Maybe, especially if you’re an older person, you’ve even bought the book, or the app, in the hope of staving off mental decline. The idea of brain training has widespread currency, but is that due to science, or empty marketing?

Now a major new review, published in Psychological Science in the Public Interest, sets out to systematically examine the evidence for brain training. The results should give you pause before spending any of your time and money on brain training, but they also highlight what happens when research and commerce become entangled.

The review team, led by Dan Simons of the University of Illinois, set out to inspect all the literature which brain training companies cited in their promotional material – in effect, taking them at their word, with the rationale that the best evidence in support of brain training exercises would be that cited by the companies promoting them.

The chairman says it works

A major finding of the review is the poverty of the supporting evidence for these supposedly scientific exercises. Simons’ team found that half of the brain training companies that promoted their products as being scientifically validated didn’t cite any peer-reviewed journal articles, relying instead on things like testimonials from scientists (including the company founders). Of the companies which did cite evidence for brain training, many cited general research on neuroplasticity, but nothing directly relevant to the effectiveness of what they promote.

The key issue for claims around brain training is whether practising these exercises will help you in general, or on unrelated tasks. Nobody doubts that practising a crossword will help you get better at crosswords, but will it improve your memory, your IQ or your ability to skim read email? Such effects are called transfer effects, and so-called “far transfer” (transfer to a very different task from that trained) is the ultimate goal of brain training studies. What we know about transfer effects is reviewed in Simons’ paper.

Doing puzzles makes you, well, good at doing puzzles.

As well as trawling the company websites, the reviewers inspected a list provided by an industry group (Cognitive Training Data) of some 132 scientific papers claiming to support the efficacy of brain training. Of these, 106 reported new data (rather than being reviews themselves). Of those 106, 71 used a proper control group, so that the effects of the brain training could be isolated. Of those 71, only 49 had a so-called “active control” group, in which the control participants actually did something rather than being ignored by the researchers. (An active control is important if you want to distinguish the benefit of your treatment from the benefits of expectation or responding to researchers’ attentions.) Of these 49, about half of the results came from just six studies.

Overall, the reviewers conclude, no study which is cited in support of brain training products meets the gold standard for best research practices, and few even approached the standard of a good randomised controlled trial (although note that their cut-off for considering papers missed this paper from late last year).

A bit premature

The implications, they argue, are that claims for general benefits of brain training are premature. There’s excellent evidence for benefits of training specific to the task trained on, they conclude, less evidence for enhancement on closely related tasks and little evidence that brain training enhances performance on distantly related tasks or everyday cognitive performance.

The flaws in the studies supporting the benefits of brain training aren’t unique to the study of brain training. Good research is hard and all studies have flaws. Assembling convincing evidence for a treatment takes years, with evidence required from multiple studies and from different types of studies. Indeed, it may yet be that some kind of cognitive training can be shown to have the general benefits that are hoped for from existing brain training exercises. What this review shows is not that brain training can’t work, merely that promotion of brain training exercises is – at the very least – premature based on the current scientific evidence.

Yet in a 2014 survey of US adults, over 50% had heard of brain training exercises and gave some credence to their performance-enhancing powers. Even the name “brain training”, the authors of the review admit, is a concession to marketing – this is how people know these exercises, despite their development having little to do with the brain directly.

The widespread currency of brain training isn’t because of overwhelming evidence of benefits from neuroscience and psychological science, as the review shows, but it does rely on the appearance of being scientifically supported. The billion-dollar market in brain training is parasitic on the credibility of neuroscience and psychology. It also taps into our lazy desire to address complex problems with simple, purchasable, solutions (something written about at length by Ben Goldacre in his book Bad Science).

The Simons review ends with recommendations for researchers into brain training, and for journalists reporting on the topic. My favourite was their emphasis that any treatment needs to be considered for its costs, as well as its benefits. By this standard there is no commercial brain training product which has been shown to have greater benefits than something you can do for free. Also important is the opportunity cost: what could you be doing in the time you invest in brain training? The reviewers deliberately decided to focus on brain training, so they didn’t cover the proven and widespread benefits of exercise for mental function, but I’m happy to tell you now that a brisk walk round the park with a friend is not only free, and not only more fun, but has better scientific support for its cognitive-enhancing powers than all the brain training products which are commercially available.

The Conversation

Tom Stafford, Lecturer in Psychology and Cognitive Science, University of Sheffield

This article was originally published on The Conversation. Read the original article.

Hallucinating sleep researchers

I just stumbled across a fascinating 2002 paper where pioneering sleep researcher Allan Hobson describes the effect of a precisely located stroke he suffered. It affected the medulla in his brain stem, important for regulating sleep, and caused total insomnia and a suppression of dreaming.

In one fascinating section, Hobson describes the hallucinations he experienced, likely due to his inability to sleep or dream, which included disconnected body parts and a hallucinated Robert Stickgold – another well known sleep researcher.

Between Days 1 and 10 I could visually perceive a vault over my supine body immediately upon closing my eyes. The vault resembled the bottom of a swimming pool but the gunitelike surface of the vault could be not only aqua, but also white or beige and, more rarely, engraved obsidian or of a gauzelike nature mixed with ice or glass crystals.

There were three categories of formed imagery that appeared on these surfaces. In the first category of geologic forms the imagery tended to be protomorphic and crude but often gave way to the more elaborate structures of category two inanimate sculptural forms.

The most amusing of these (which occurred on the fourth night) were enormous lucite telephone/computers. But there were also tables and tableaux in which the geologic forms sometimes took unusual and bizarre shapes. One that I recall is a TV-set-like representation of a tropical landscape.

In category three, the most elaborate forms have human anatomical elements, including long swirling flesh, columns that metamorphosed into sphincters, nipples, and crotches, but these were never placed in real bodies.

In fact whole body forms almost never emerged. Instead I saw profiles of faces and profiles of bodies which were often inextricably mixed with penises, noses, lips, eyebrows; torsos arose out of the sculptural columns of flesh and sank back into them again.

The most fully realized human images include my wife, featuring her lower anatomy and (most amusingly) a Peter Pan-like Robert Stickgold and two fairies enjoying a bedtime story. While visual disturbances are quite common in Wallenberg’s syndrome, they have only been reported to occur in waking with eyes open.

Blurring of vision (which I had), and the tendency of objects to appear to move called oscillopsia (which I did not have), are attributed to the disturbed oculomotor and vestibular physiology.


Link to locked report of Hobson’s stroke.