Sex differences in cognition are small

Lately I’ve been thinking about sex differences in brain and cognition. There are undeniable differences in the physical size of the brain, and of different brain areas, even if there are no categorically ‘female’ and ‘male’ brains. These physical differences do not translate directly into commensurate differences in cognition. Indeed, there is support for a ‘gender similarities hypothesis’, which asserts that on most measures there is no difference between men and women.

Most, but maybe not all. There are a few areas of fundamental cognitive ability where gender differences seem to persist – mental rotation, vocabulary and maybe maths. But these differences are small. To see how small, I put them on the same chart with the physical differences and a few other behavioural differences for perspective.

Standardised mean differences (Cohen’s d effect size) for various gender differences in brain, behaviour and cognition:

[Chart: gender_effects]

References and calculations are at the end of this post, below the fold. And if you need a primer on what is meant by standardised difference, go here.
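If you prefer code to prose, the standardised mean difference (Cohen’s d) used throughout the chart can be computed directly from two groups’ summary statistics. A minimal sketch – the group means, SDs and sample sizes below are made up purely for illustration:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardised mean difference: the gap between two group means,
    expressed in units of the pooled within-group standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Illustrative (made-up) numbers: a 2-point gap between group means with a
# within-group SD of 10 gives a 'small' effect of d = 0.2.
print(round(cohens_d(52, 10, 100, 50, 10, 100), 2))  # → 0.2
```

The point of dividing by the pooled SD is that differences measured on wildly different scales (centimetres of height, cubic centimetres of brain, test scores) all end up in the same standard units, which is what makes a chart like the one above possible.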

Even with these small observed differences in cognition, we don’t know what proportion is due to contingent facts, such as the different experiences and expectations men and women encounter in their lifetimes, and what proportion is an immutable consequence of genetic differences between the sexes.

One possible explanation for the mismatch between physical differences in the brain and cognitive differences is that structural differences between male and female brains may actually serve to support functional similarity, not difference.

For more, so much more, on this, see the special issue of Journal of Neuroscience Research (January/February 2017) on An Issue Whose Time Has Come: Sex/Gender Influences on Nervous System Function.

Includes: Grabowska, A. (2017). Sex on the brain: Are gender‐dependent structural and functional differences associated with behavior?. Journal of Neuroscience Research, 95(1-2), 200-212.

Previously: Gender brain blogging


The gender similarities hypothesis

There is a popular notion that men and women are very different in their cognitive abilities. The evidence for this may be weaker than you expect. Janet Hyde advances what she calls the ‘gender similarities hypothesis’, ‘which holds that males and females are similar on most, but not all, psychological variables’. In a 2016 review she states:

According to meta-analyses, however, among both children and adults, females perform equally to males on mathematics assessments. The gender difference in verbal skills is small and varies depending on the type of skill assessed (e.g., vocabulary, essay writing). The gender difference in 3D mental rotation shows a moderate advantage for males.

So of three celebrated examples of differences in ability, only one actually shows a moderate gender difference (and one a small one). Other abilities show no or negligible gender differences, Hyde concludes. Gender differences in ability may be inflated in the popular imagination.

Worth noting is that the name of the game here isn’t to find gender differences in behaviour. That’s too easy: women wear more make-up, for example, and men are more likely to wear trousers. The game is to find a measure which reflects some more fundamental aspect of mental capacity – hence the focus on vocabulary size, mental rotation ability, maths ability and the like. These may be less subject to the vagaries of exactly what is expected of each gender, but that’s a shaky assumption. Indeed, it would be weird if different roles and expectations for men vs women didn’t produce different motivations and opportunities to practise cognitive abilities such as these.

The real challenge is to find immutable gender differences, or to track differences in how abilities develop under different conditions. Without this evidence, we’re not going to be sure which gender differences are immutable, and which are contingent on the specific psychological history of particular men and particular women living in our particular societies.

One way of addressing this challenge is to look at how gender differences change across different societies, or across time as society changes. A 2014 study, ‘The changing face of cognitive gender differences in Europe’, did just that, showing that less gender-restricted educational opportunities tended to decrease some gender differences but not others – indeed, increasing equality in educational attainment magnified some differences between the sexes.

You can read my take on this in this piece for The Conversation: Are women and men forever destined to think differently?

The Gender Similarities Hypothesis: Hyde, J. S. (2005). The gender similarities hypothesis. American Psychologist, 60(6), 581-592.

2016 update: Hyde, J. S. (2016). Sex and cognition: gender and cognitive functions. Current Opinion in Neurobiology, 38, 53-56.

Previously: Gender brain blogging: Sex differences in brain size, no male and female brain types.

no male and female brain types

What would it mean for there to be a “male brain” or a “female brain”? Human genitals are mostly easy to categorise just by sight as either male or female. It makes sense to talk about there being different male and female types of genitals. What would it mean for the same to be true of brains? Daphna Joel and colleagues, in a 2015 paper Sex beyond the genitalia: The human brain mosaic have a proposal on what needs to hold for us to be able to say there are distinct male and female varieties of brains:

1. particular brain features must be highly dimorphic (i.e., little overlap between the forms of these features in males and females).
and
2. those features which are dimorphic must be consistent for each brain (i.e. a brain has only “male” or only “female” features).

They analyse MRI scans of 1400 human brains and find that these conditions don’t hold. There is extensive overlap, so that categorical brains, defined like this, just don’t exist. They write:

…analyses of internal consistency reveal that brains with features that are consistently at one end of the “maleness-femaleness” continuum are rare. Rather, most brains are comprised of unique “mosaics” of features, some more common in females compared with males, some more common in males compared with females, and some common in both females and males…Our study demonstrates that, although there are sex/gender differences in the brain, human brains do not belong to one of two distinct categories: male brain/female brain.

So the easy gender categorisation we can do on the genitals doesn’t translate to the (usually-unseen) anatomy of the brain. The ‘male/female brain’ doesn’t exist in the same way as the male/female sex organs.

Context for this is that there are differences between the average male and average female brain (for overall size, at least, these differences are large). Although there may not be categorical types, a follow-up analysis showed that it is possible to classify the brains used in the Joel paper as belonging to a man or a woman with somewhere between 69% and 77% accuracy. A related study, on a different data set, claimed 93% classification accuracy.

Paper: Joel, D., Berman, Z., Tavor, I., Wexler, N., Gaber, O., Stein, Y., … & Liem, F. (2015). Sex beyond the genitalia: The human brain mosaic. Proceedings of the National Academy of Sciences, 112(50), 15468-15473.

Responses: Del Giudice, M., Lippa, R. A., Puts, D. A., Bailey, D. H., Bailey, J. M., & Schmitt, D. P. (2016). Joel et al.’s method systematically fails to detect large, consistent sex differences. Proceedings of the National Academy of Sciences, 113(14), E1965-E1965.

Chekroud, A. M., Ward, E. J., Rosenberg, M. D., & Holmes, A. J. (2016). Patterns in the human brain mosaic discriminate males from females. Proceedings of the National Academy of Sciences, 113(14), E1968-E1968.

The responses are linked to in Debra Soh’s LA Times article Are gender feminists and transgender activists undermining science?

Betteridge’s Law

Previously: gender brain blogging

Sex differences in brain size

Next time someone asks you “Are men and women’s brains different?”, you can answer, without hesitation, “Yes”. Not only do they tend to be found in different types of bodies, but they are different sizes. Men’s are typically larger by something like 130 cubic centimeters.

Not only are they actually larger, but they are larger even once you take into account body size (i.e. men’s brains are bigger even when accounting for the fact that heavier and/or taller people tend to have bigger heads and brains, and that men tend to be heavier and taller than women). And this is despite the fact that there is no difference in brain size at birth – the sex difference in brain volume development seems to begin around age two. (Side note: there is no difference in brain volume between male and female cats.)

But is this difference in brain volume a lot? There’s substantial variation between individuals, including within each sex. What does ~130cc mean in the context of this variation? One way of thinking about it is in terms of standardised effect size, which measures the difference between two population averages in standard units based on the variation within those populations.

Here’s a good example – we all know that men are taller than women. Not all men are taller than all women, but men tend to be taller. With the effect size, we can precisely express this vague idea of ‘tend to be’. The (Cohen’s d) effect size statistic of the height difference between men and women is ~1.72.

What this means is that the distribution of heights in the two populations can be visualised like this:

[Chart: mf_heights]

With this spread of heights, the average man is taller than 95.7% of women.

Estimates of the effect size of total brain volume vary, but a reasonable value is about ~1.3, which looks like this:

[Chart: mf_brains]

This means that the average man has a larger brain, by volume, than 90% of the female population.

For reference, psychology experiments typically look at phenomena with effect sizes of the order of ~0.4, which looks like this:

[Chart: mf_0p4]

This means that the average of group A exceeds 65.5% of group B.
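These percentages follow directly from the effect size: assuming two normal distributions with equal variance, the proportion of group B that the average of group A exceeds is the standard normal CDF evaluated at d (sometimes called Cohen’s U3). A quick sketch:

```python
import math

def percent_exceeded(d):
    """Proportion of group B below the average of group A (Cohen's U3),
    assuming both groups are normal with equal variance."""
    # math.erf gives the standard normal CDF via Phi(x) = (1 + erf(x/sqrt(2))) / 2
    return 0.5 * (1 + math.erf(d / math.sqrt(2)))

for d in (1.72, 1.3, 0.4):
    print(f"d = {d}: average of A exceeds {100 * percent_exceeded(d):.1f}% of B")
```

Running this reproduces the 95.7%, ~90% and 65.5% figures above for heights, brain volumes and a typical psychology effect respectively.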

In this context, human sexual dimorphism in brain volume is an extremely large effect.

So when they ask “Are men and women’s brains different?”, you can unhesitatingly say, “yes”. And when they ask “And what does that mean for differences in how they think?” you can say “Ah, now that’s a different issue”.

Link: meta-analysis of male-female differences in brain structure

Kristoffer Magnusson’s awesome interactive effect size visualisation

Previously: gendered brain blogging

Edit 8/2/17: Andy Fugard pointed out that there are many different measures of effect size, and I only discuss/use one: the Cohen’s d effect size. I’ve edited the text to make this clearer.

Edit 2 (8/2/17): Kevin Mitchell points out this paper that claims sex differences in brain size are already apparent in neonates

How to overcome bias

How do you persuade somebody of the facts? Asking them to be fair, impartial and unbiased is not enough. To explain why, psychologist Tom Stafford analyses a classic scientific study.

One of the tricks our mind plays is to highlight evidence which confirms what we already believe. If we hear gossip about a rival we tend to think “I knew he was a nasty piece of work”; if we hear the same about our best friend we’re more likely to say “that’s just a rumour”. If you don’t trust the government then a change of policy is evidence of their weakness; if you do trust them the same change of policy can be evidence of their inherent reasonableness.

Once you learn about this mental habit – called confirmation bias – you start seeing it everywhere.

This matters when we want to make better decisions. Confirmation bias is OK as long as we’re right, but all too often we’re wrong, and we only pay attention to the deciding evidence when it’s too late.

How we should protect our decisions from confirmation bias depends on why, psychologically, confirmation bias happens. There are, broadly, two possible accounts, and a classic experiment from researchers at Princeton University pits the two against each other, revealing in the process a method for overcoming bias.

The first theory of confirmation bias is the most common. It’s the one you can detect in expressions like “You just believe what you want to believe”, or “He would say that, wouldn’t he?”, or when someone is accused of seeing things a particular way because of who they are, what their job is or which friends they have. Let’s call this the motivational theory of confirmation bias. It has a clear prescription for correcting the bias: change people’s motivations and they’ll stop being biased.

The alternative theory of confirmation bias is more subtle. The bias doesn’t exist because we only believe what we want to believe, but instead because we fail to ask the correct questions about new information and our own beliefs. This is a less neat theory, because there could be one hundred reasons why we reason incorrectly – everything from limitations of memory to inherent faults of logic. One possibility is that we simply have a blindspot in our imagination for the ways the world could be different from how we first assume it is. Under this account the way to correct confirmation bias is to give people a strategy to adjust their thinking. We assume people are already motivated to find out the truth, they just need a better method. Let’s call this the cognition theory of confirmation bias.

Thirty years ago, Charles Lord and colleagues published a classic experiment which pitted these two methods against each other. Their study used a persuasion experiment which previously had shown a kind of confirmation bias they called ‘biased assimilation’. Here, participants were recruited who had strong pro- or anti-death penalty views and were presented with evidence that seemed to support the continuation or abolition of the death penalty. Obviously, depending on what you already believe, this evidence is either confirmatory or disconfirmatory. Their original finding showed that the nature of the evidence didn’t matter as much as what people started out believing. Confirmatory evidence strengthened people’s views, as you’d expect, but so did disconfirmatory evidence. That’s right, anti-death penalty people became more anti-death penalty when shown pro-death penalty evidence (and vice versa). A clear example of biased reasoning.

For their follow-up study, Lord and colleagues re-ran the biased assimilation experiment, but testing two types of instructions for assimilating evidence about the effectiveness of the death penalty as a deterrent for murder. The motivational instructions told participants to be “as objective and unbiased as possible”, to consider themselves “as a judge or juror asked to weigh all of the evidence in a fair and impartial manner”. The alternative, cognition-focused, instructions were silent on the desired outcome of the participants’ consideration, instead focusing only on the strategy to employ: “Ask yourself at each step whether you would have made the same high or low evaluations had exactly the same study produced results on the other side of the issue.” So, for example, if presented with a piece of research that suggested the death penalty lowered murder rates, the participants were asked to analyse the study’s methodology and imagine the results pointed the opposite way.

They called this the “consider the opposite” strategy, and the results were striking. Instructed to be fair and impartial, participants showed the exact same biases when weighing the evidence as in the original experiment. Pro-death penalty participants thought the evidence supported the death penalty. Anti-death penalty participants thought it supported abolition. Wanting to make unbiased decisions wasn’t enough. The “consider the opposite” participants, on the other hand, completely overcame the biased assimilation effect – they weren’t driven to rate the studies which agreed with their preconceptions as better than the ones that disagreed, and didn’t become more extreme in their views regardless of which evidence they read.

The finding is good news for our faith in human nature. It isn’t that we don’t want to discover the truth, at least in the microcosm of reasoning tested in the experiment. All people needed was a strategy which helped them overcome the natural human short-sightedness to alternatives.

The moral for making better decisions is clear: wanting to be fair and objective alone isn’t enough. What’s needed are practical methods for correcting our limited reasoning – and a major limitation is our imagination for how else things might be. If we’re lucky, someone else will point out these alternatives, but if we’re on our own we can still take advantage of crutches for the mind like the “consider the opposite” strategy.

This is my BBC Future column from last week. You can read the original here. My ebook For argument’s sake: Evidence that reason can change minds is out now.

Can boy monkeys throw?

Aimed throwing is a gendered activity – men are typically better at it than women (by about 1 standard deviation, some studies claim). Obviously this could be due to differential practice, which is in turn due to cultural bias in what men vs women are expected to be good at and enjoy (some say “not so” to this practice-effect explanation).

Monkeys are interesting because they are close evolutionary relatives, but don’t have human gender expectations. So we note with interest this 2000 study which claims no difference in throwing accuracy between male and female Capuchin monkeys. In fact, the female monkeys were (non-significantly) more accurate than the males (perhaps due to throwing as part of Capuchin female sexual displays?).

Elsewhere, a review of cross-species gender differences in spatial ability finds “most of the hypotheses [that male mammals have better spatial ability than females] are either logically flawed or, as yet, have no substantial support. Few of the data exclusively support or exclude any current hypotheses“.

Chimps are closer relatives to humans than monkeys, but although there is a literature on gendered differences in object use/preference among chimps, I couldn’t immediately find anything on gendered differences in throwing among chimps. Possibly because few scientists want to get near a chimp when it is flinging sh*t around.

Cite: Westergaard, G. C., Liv, C., Haynie, M. K., & Suomi, S. J. (2000). A comparative study of aimed throwing by monkeys and humans. Neuropsychologia, 38(11), 1511-1517.

Previously: gendered brain blogging

Gender brain blogging

I’ve started teaching a graduate seminar on the cognitive neuroscience of sex differences. The ambition is to carry out a collective close-reading of Cordelia Fine’s “Delusions of Gender: The Real Science Behind Sex Differences” (US: “How Our Minds, Society, and Neurosexism Create Difference“). Week by week the class is going to extract the arguments and check the references from each chapter of Fine’s book.

I mention this to explain why there is likely to be an increase in the number of gender-themed posts by me to mindhacks.com.

Here’s Fine summarising her argument in the introduction to the 2010 book:

There are sex differences in the brain. There are also large […] sex differences in who does what and who achieves what. It would make sense if these facts were connected in some way, and perhaps they are. But when we follow the trail of contemporary science we discover a surprising number of gaps, assumptions, inconsistencies, poor methodologies and leaps of faith.

This is a book about how science works and how it is made to work as much as it is a book about gender. It’s the Bad Science of cognitive neuroscience. Essential.

The troubled friendship of Tversky and Kahneman

Daniel Kahneman, by Pat Kinsella for the Chronicle Review (detail)

Writer Michael Lewis’s new book, “The Undoing Project: The Friendship That Changed Our Minds”, is about two of the most important figures in modern psychology, Amos Tversky and Daniel Kahneman.

In this extract for the Chronicle of Higher Education, Lewis describes the emotional tension between the pair towards the end of their collaboration. It’s a compelling ‘behind the scenes’ view of the human side to the foundational work of the heuristics and biases programme in psychology, as well as being brilliantly illustrated by Pat Kinsella.

One detail that caught my eye is this response by Amos Tversky to a critique of the work he did with Kahneman. As well as being something I’ve wanted to write myself on occasion, it illustrates the forthrightness which made Tversky a productive and difficult colleague:

the objections you raised against our experimental method are simply unsupported. In essence, you engage in the practice of criticizing a procedural departure without showing how the departure might account for the results obtained. You do not present either contradictory data or a plausible alternative interpretation of our findings. Instead, you express a strong bias against our method of data collection and in favor of yours. This position is certainly understandable, yet it is hardly convincing.

Link: A Bitter Ending: Daniel Kahneman, Amos Tversky, and the limits of collaboration

Annette Karmiloff-Smith has left the building

The brilliant developmental neuropsychologist Annette Karmiloff-Smith has passed away and one of the brightest lights into the psychology of children’s development has been dimmed.

She actually started her professional life as a simultaneous interpreter for the UN and then went on to study psychology and trained with Jean Piaget.

Karmiloff-Smith went into neuropsychology and started rethinking some of the assumptions about how cognition is organised in the brain – assumptions which, until then, had almost entirely been based on studies of adults with brain injury.

These studies showed that some mental abilities could be independently impaired after brain damage suggesting that there was a degree of ‘modularity’ in the organisation of cognitive functions.

But Karmiloff-Smith investigated children with developmental disorders, like autism or Williams syndrome, and showed that what seemed to be the ‘natural’ organisation of the brain in adults was actually a result of development itself – an approach she called neuroconstructivism.

In other words, developmental disorders were not ‘knocking out’ specific abilities but affecting the dynamics of neurodevelopment as the child interacted with the world.

If you want to hear more of Karmiloff-Smith’s life and work, her interview on BBC Radio 4’s The Life Scientific is well worth a listen.

Link to page of remembrance for Annette Karmiloff-Smith.

echo chambers: old psych, new tech

If you were surprised by the result of the Brexit vote in the UK or by the Trump victory in the US, you might live in an echo chamber – a self-reinforcing world of people who share the same opinions as you. Echo chambers are a problem, and not just because it means some people make incorrect predictions about political events. They threaten our democratic conversation, splitting up the common ground of assumption and fact that is needed for diverse people to talk to each other.

Echo chambers aren’t just a product of the internet and social media, however, but of how those things interact with fundamental features of human nature. Understand these features of human nature and maybe we can think creatively about ways to escape them.

Built-in bias

One thing that drives echo chambers is our tendency to associate with people like us. Sociologists call this homophily. We’re more likely to make connections with people who are similar to us. That’s true for ethnicity, age, gender, education and occupation (and, of course, geography), as well as a range of other dimensions. We’re also more likely to lose touch with people who aren’t like us, further strengthening the niches we find ourselves in. Homophily is one reason obesity can seem contagious – people who are at risk of gaining weight are disproportionately more likely to hang out with each other and share an environment that encourages obesity.

Another factor that drives the echo chamber is our psychological tendency to seek information that confirms what we already know – often called confirmation bias. Worse, even when presented with evidence to the contrary, we show a tendency to dismiss it and even harden our convictions. This means that even if you break into someone’s echo chamber armed with facts that contradict their view, you’re unlikely to persuade them with those facts alone.

News as information and identity

More and more of us get our news primarily from social media and use that same social media to discuss the news.

Social media takes our natural tendencies to associate with like-minded people and to seek information that confirms our convictions, and amplifies them. Dan Kahan, professor of law and psychology at Yale, describes each of us as switching between two modes of information processing: identity-affirming and truth-seeking. The result is that for issues which, for whatever reason, become associated with a group identity, even the most informed or well-educated can believe radically different things, because believing those things is tied up with signalling group identity more than with a pursuit of evidence.

Mitigating human foibles

Where we go from here isn’t clear. The fundamentals of human psychology won’t just go away, but they do change depending on the environment we’re in. If technology and the technological economy reinforce the echo chamber, we can work to reshape these forces so as to mitigate it.

We can recognise that a diverse and truth-seeking media is a public good. That means it is worth supporting – both in established forms like the BBC, and in new forms like Wikipedia and The Conversation.

We can support alternative funding models for non-public media. Paying for news may seem old-fashioned, but there are long-term benefits. New ways of doing it are popping up. Services such as Blendle let you access news stories that are behind a pay wall by offering a pay-per-article model.

Technology can also help with individual solutions to the echo chamber, if you’re so minded. For Twitter users, otherside.site lets you view the feed of any other Twitter user, so if you want to know what Nigel Farage or Donald Trump read on Twitter, you can. (I wouldn’t bother with Trump. He only follows 41 people – mostly family and his own businesses. Now that’s an echo chamber.)

For Facebook users, politecho.org is a browser extension that shows the political biases of your friends and Facebook newsfeed. If you want a shortcut, this Wall Street Journal article puts Republican and Democratic Facebook feeds side-by-side.

Of course, these things don’t remove the echo chamber, but they do highlight the extent to which you’re in one, and – as with other addictions – recognising that you have a problem is the first step to recovery.

This article was originally published on The Conversation. Read the original article.

rational judges, not extraneous factors in decisions

The graph below tells a dramatic story of irrationality. Presented in the 2011 paper ‘Extraneous factors in judicial decisions’, it shows the outcome of parole board decisions, as ruled by judges, against the order in which those decisions were made. The circles show the meal breaks taken by the judges.

[Chart: parole_decisions]

As you can see, the decisions change the further the judge gets from his/her last meal, dramatically decreasing from around a 65% chance of a favourable decision if you are the first case after a meal break, to close to 0% if you are the last case in a long series before a break.

In their paper, the original authors argue that this effect of order truly is due to the judges’ hunger, and not a confound introduced by some other factor which affects the order of cases and their chances of success (the lawyers sit outside the closed doors of the court, for example, so can’t time their best cases to come just after a break – they don’t know when the judge is taking a meal; the effect survives additional analysis where severity of the prisoner’s crime and length of sentence are factored in; and so on). The interpretation is that as the judges tire they fall back more and more on a simple heuristic – playing safe and refusing parole.

This seeming evidence of the irrationality of judges has been cited hundreds of times, in economics, psychology and legal scholarship. Now, a new analysis by Andreas Glöckner in the journal Judgment and Decision Making questions these conclusions.

Glöckner’s analysis doesn’t prove that extraneous factors weren’t influencing the judges, but he shows how the same effect could be produced by entirely rational judges interacting with the protocols required by the legal system.

The main analysis works like this: we know that favourable rulings take longer than unfavourable ones (~7 mins vs ~5 mins), and we assume that judges are able to guess how long a case will take to rule on before they begin it (from clues like the thickness of the file, the types of request made, the representation the prisoner has and so on). Finally, we assume judges have a time limit in mind for each of the three sessions of the day, and will avoid starting cases which they estimate will overrun the time limit for the current session.

It turns out that this kind of rational time-management is sufficient to generate the drops in favourable outcomes. How this occurs isn’t straightforward, and interacts with a quirk of the original authors’ data presentation (specifically, their graph plots the order number of cases, while the number of cases in each session varied from day to day – so, for example, it shows that the 12th case after a break is least likely to be judged favourably, but there wasn’t always a 12th case in each session, and sessions containing more unfavourable cases were more likely to contribute to this data point).
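To see how the mechanism can work, here is a toy simulation – my own sketch, not Glöckner’s actual model, with illustrative durations, session length and case mix. A judge who refuses to start a case expected to overrun the session produces, in aggregate, a favourable rate that falls with position in the session, simply because late positions only occur in sessions packed with quick, unfavourable cases:

```python
import random

random.seed(1)

FAV_TIME, UNFAV_TIME = 7, 5   # favourable rulings take longer (minutes)
SESSION_LIMIT = 60            # minutes available before a scheduled break

def run_session(cases):
    """A 'rational' judge takes the break rather than start a case whose
    expected duration would overrun the session."""
    elapsed, outcomes = 0, []
    for favourable in cases:
        duration = FAV_TIME if favourable else UNFAV_TIME
        if elapsed + duration > SESSION_LIMIT:
            break  # remaining cases wait for the next session
        outcomes.append(favourable)
        elapsed += duration
    return outcomes

# Simulate many sessions, each a 50/50 mix of would-be favourable and
# unfavourable cases, and tally the favourable rate by position.
started, favoured = [0] * 12, [0] * 12
for _ in range(50_000):
    cases = [random.random() < 0.5 for _ in range(12)]
    for position, fav in enumerate(run_session(cases)):
        started[position] += 1
        favoured[position] += fav

for position in (0, 9, 10):
    rate = favoured[position] / started[position]
    print(f"case {position + 1:2d}: P(favourable) = {rate:.2f}")
```

Every case here had a 50% chance of deserving a favourable ruling, yet the observed favourable rate declines with position in the session – an order effect produced with no hunger involved.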

This story of claim and counter-claim shows why psychologists prefer experiments, since only then can you truly isolate causal explanations (if you are a judge and willing to go without lunch, please get in touch). It also shows the benefit of simulations for extending the horizons of our intuition. Glöckner’s achievement is to show in detail how some reasonable assumptions – including that of a rational judge – can generate a pattern which hitherto seemed explainable only by the influence of an irrelevant factor on the judges’ decisions. This doesn’t settle the matter, but it does mean we can’t be so confident that this graph shows what it is often claimed to show. The judges’ decisions may not be irrational after all, and the timing of the judges’ meal breaks may not be influencing parole decision outcomes.

Original finding: Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011). Extraneous factors in judicial decisions. Proceedings of the National Academy of Sciences, 108(17), 6889-6892.

New analysis: Glöckner, A. (2016). The irrational hungry judge effect revisited: Simulations reveal that the magnitude of the effect is overestimated. Judgment and Decision Making, 11(6), 601-610.

Elsewhere I have written about how evidence of human irrationality is often over-egged: For argument’s sake: evidence that reason can change minds


Is psychosis an ‘immune disorder’?

A fascinating new study has just been published which found evidence for the immune system attacking a neuroreceptor in the brain in a small proportion of people with psychosis. It’s an interesting study that probably reflects what’s going to be a cultural tipping point for the idea of ‘immune system mental health problems’ or ‘madness as inflammation disorder’ but it’s worth being a little wary of the coming hype.

This new study, published in The Lancet Psychiatry, did blood tests on people who presented with their first episode of psychosis and looked for antibodies that attack specific receptors in the brain. Receptors are what receive neurotransmitters – the brain’s chemical signals – and allow information to be transferred around the nervous system, so disruption to these can cause brain disturbances.

The most scientifically interesting finding is that the research team found a type of antibody that attacks NMDA receptors in 7 patients (3%) out of 228, but zero controls.

The study found antibodies targeting other neuroreceptors too, but the reason the NMDA finding is so crucial is that it shows evidence of a condition called anti-NMDA receptor encephalitis, which is known to cause episodes of psychosis that can be indistinguishable from ‘regular’ psychosis but for which the best treatment is dealing with the autoimmune problem.

It was only discovered in 2007 but there has been a long-running suspicion that it may be the best explanation for a small minority of cases of psychosis which can be easily misdiagnosed as schizophrenia.

Importantly, the findings from this research have been supported by another independent study that has just been published online. The two studies used different ranges for the concentration of NMDA antibodies they measured, but they came up with roughly the same figures.

It also chimes with a growing debate about the role of the immune system in mental health. A lot of this evidence is circumstantial but suggestive. For example, many of the genes associated (albeit weakly) with the diagnosis of schizophrenia are involved in the immune system – particularly in coding proteins for the major histocompatibility complex.

However, it’s worth being a little circumspect about this new enthusiasm for thinking of psychosis as an ‘immune disorder’.

Importantly, these new studies did blood tests rather than checking cerebrospinal fluid – the fluid your brain floats in, which lies on the other side of the blood-brain barrier – so we can’t be sure that these antibodies were actually affecting the brain in everyone found to have them. It’s likely, but not certain.

Also, we’re not sure to what extent anti-NMDA antibodies contribute to the chance of developing psychosis in every case. Certainly there are some cases where they seem to be the main cause, but we’re not sure whether that holds for all.

It’s also worth bearing in mind that the science over the role of the genes associated with the schizophrenia diagnosis in the immune system is certainly not settled. A recent large study compared the role of these genes in schizophrenia to known autoimmune disorders and concluded that the genes just don’t look like they’re actually impacting on the immune system.

There’s also a constant background of cultural enthusiasm in psychiatry for identifying ‘biomarkers’, and anything that looks like a clear common biological pathway, even for a small number of cases of a ‘psychiatric’ problem, gets a lot of airtime.

Curiously, in this case, Hollywood may also play a part.

A film called Brain On Fire has just been shown at film festivals and is being tested for a possible big release. It’s based on the (excellent) book of the same name by journalist Susannah Cahalan and describes her experience of developing psychosis, only for it later to be discovered that she had anti-NMDA receptor encephalitis.

Hollywood has historically had a big effect on discussions about mental health and you can be sure that if the movie becomes a hit, popular media will be alive with discussions on ‘whether your mental health problems are really an immune problem’.

But taking a less glitzy view, these new studies probably reflect the fact that a small percentage of people with psychosis, maybe 1-2%, have NMDA receptor-related immune problems that play an important role in the generation of their mental health difficulties.

It’s important not to underestimate the importance of these findings. They could potentially translate into more effective treatment for millions of people a year globally.

But in terms of psychosis as a whole, for which we know social adversity in its many forms plays a massive role, it’s just a small piece of the puzzle.
 

Link to locked Lancet Psychiatry study.

How liars create the illusion of truth

Repetition makes a fact seem more true, regardless of whether it is or not. Understanding this effect can help you avoid falling for propaganda, says psychologist Tom Stafford.

“Repeat a lie often enough and it becomes the truth”, is a law of propaganda often attributed to the Nazi Joseph Goebbels. Among psychologists something like this is known as the “illusion of truth” effect. Here’s how a typical experiment on the effect works: participants rate how true trivia items are, things like “A prune is a dried plum”. Sometimes these items are true (like that one), but sometimes participants see a parallel version which isn’t true (something like “A date is a dried plum”).

After a break – of minutes or even weeks – the participants do the procedure again, but this time some of the items they rate are new, and some they saw before in the first phase. The key finding is that people tend to rate items they’ve seen before as more likely to be true, regardless of whether they are true or not, and seemingly for the sole reason that they are more familiar.

So, here, captured in the lab, seems to be the source for the saying that if you repeat a lie often enough it becomes the truth. And if you look around yourself, you may start to think that everyone from advertisers to politicians are taking advantage of this foible of human psychology.

But a reliable effect in the lab isn’t necessarily an important effect on people’s real-world beliefs. If you really could make a lie sound true by repetition, there’d be no need for all the other techniques of persuasion.

One obstacle is what you already know. Even if a lie sounds plausible, why would you set what you know aside just because you heard the lie repeatedly?

Recently, a team led by Lisa Fazio of Vanderbilt University set out to test how the illusion of truth effect interacts with our prior knowledge. Would it affect our existing knowledge? They used paired true and un-true statements, but also split their items according to how likely participants were to know the truth (so “The Pacific Ocean is the largest ocean on Earth” is an example of a “known” item, which also happens to be true, and “The Atlantic Ocean is the largest ocean on Earth” is an un-true item, for which people are likely to know the actual truth).

Their results show that the illusion of truth effect worked just as strongly for known as for unknown items, suggesting that prior knowledge won’t prevent repetition from swaying our judgements of plausibility.

To cover all bases, the researchers performed one study in which the participants were asked to rate how true each statement seemed on a six-point scale, and one where they just categorised each fact as “true” or “false”. Repetition pushed the average item up the six-point scale, and increased the odds that a statement would be categorised as true. For statements that were actually fact or fiction, known or unknown, repetition made them all seem more believable.

At first this looks like bad news for human rationality, but – and I can’t emphasise this strongly enough – when interpreting psychological science, you have to look at the actual numbers.

What Fazio and colleagues actually found is that the biggest influence on whether a statement was judged to be true was… whether it actually was true. The repetition effect couldn’t mask the truth. With or without repetition, people were still more likely to believe the actual facts as opposed to the lies.

This shows something fundamental about how we update our beliefs – repetition has the power to make things sound more true, even when we know differently, but it doesn’t override that knowledge.

The next question has to be: why might that be? The answer is to do with the effort it takes to be rigidly logical about every piece of information you hear. If every time you heard something you assessed it against everything you already knew, you’d still be thinking about breakfast at supper-time. Because we need to make quick judgements, we adopt shortcuts – heuristics which are right more often than wrong. Relying on how often you’ve heard something to judge how truthful it feels is just one such strategy. Any universe where truth gets repeated more often than lies, even if only 51% vs 49%, will be one where this is a quick and dirty rule for judging facts.
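That 51% vs 49% intuition can be checked with a toy simulation (every number here – 1,000 statements, a million overheard repetitions, the 51/49 split – is an invented illustration): in a world where truths are repeated even slightly more often than lies, the crude rule “believe whatever you’ve heard more often than average” is right far more often than chance.

```python
import random

def repetition_heuristic_accuracy(n_statements=1000, n_utterances=1_000_000,
                                  p_truth=0.51, seed=1):
    # The first half of the statements are true, the rest false. Each
    # overheard utterance repeats a random true statement with probability
    # p_truth, otherwise a random false one.
    rng = random.Random(seed)
    half = n_statements // 2
    counts = [0] * n_statements
    for _ in range(n_utterances):
        if rng.random() < p_truth:
            counts[rng.randrange(half)] += 1
        else:
            counts[half + rng.randrange(half)] += 1
    # The heuristic: call a statement true if it was heard more often
    # than the average statement was.
    threshold = n_utterances / n_statements
    correct = sum((c > threshold) == (i < half) for i, c in enumerate(counts))
    return correct / n_statements

accuracy = repetition_heuristic_accuracy()
# Well above 0.5: even a tiny repetition bias towards the truth makes
# familiarity a usable (if fallible) guide to what is true.
```

The bias in the world does the work: the heuristic itself never inspects the content of any statement, only how often it has been heard.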

If repetition was the only thing that influenced what we believed we’d be in trouble, but it isn’t. We can all bring to bear more extensive powers of reasoning, but we need to recognise they are a limited resource. Our minds are prey to the illusion of truth effect because our instinct is to use short-cuts in judging how plausible something is. Often this works. Sometimes it is misleading.

Once we know about the effect we can guard against it. Part of this is double-checking why we believe what we do – if something sounds plausible, is it because it really is true, or have we just been told that repeatedly? This is why scholars are so mad about providing references – so we can track the origin of any claim, rather than having to take it on faith.

But part of guarding against the illusion is the obligation it puts on us to stop repeating falsehoods. We live in a world where the facts matter, and should matter. If you repeat things without bothering to check if they are true, you are helping to make a world where lies and truth are easier to confuse. So, please, think before you repeat.

This is my BBC Future column from the other week, the original is here. For more on this topic, see my ebook : For argument’s sake: evidence that reason can change minds (smashwords link here)

reinforcing your wiser self

Nautilus has a piece by David Perezcassar on how technology takes advantage of our animal instinct for variable reward schedules (Unreliable rewards trap us into addictive cell phone use, but they can also get us out).

It’s a great illustrated read about the scientific history of the ideas behind ‘persuasive technology’, and ends with a plea that perhaps we can hijack our weakness for variable reward schedules for better ends:

What if we set up a variable reward system to reward ourselves for the time spent away from our phones & physically connecting with others? Even time spent meditating or reading without technological distractions is a heroic endeavor worthy of a prize

Which isn’t a bad idea, but the pattern of the reward schedule is only one factor in what makes an activity habit-forming. The timing of a reward is more important than its reliability – it’s easier to train habits with immediate rewards than with delayed ones. The timing is so crucial that in the animal learning literature even a delay of 2 seconds between a lever press and the delivery of a food pellet impairs learning in rats. In experiments we did with humans, a delay of 150ms was enough to hinder our participants connecting their own actions with a training signal.
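To see why even short delays matter, here is a toy simulation of a naive learner that credits each reward to whatever action it has just taken (the two-lever setup, the learning rate and this simple contiguity rule are my own illustrative assumptions, not a model taken from the animal-learning literature):

```python
import random

def train(delay, n_steps=5000, alpha=0.1, eps=0.1, seed=0):
    """Lever 0 always pays a reward, delivered `delay` steps after the
    press. The learner credits every arriving reward to the action it has
    just taken, so delayed rewards often reinforce the wrong lever."""
    rng = random.Random(seed)
    q = [0.0, 0.0]          # learned value of lever 0 and lever 1
    pending = []            # arrival times of rewards already earned
    for t in range(n_steps):
        # epsilon-greedy choice between the two levers
        if rng.random() < eps or q[0] == q[1]:
            a = rng.randrange(2)
        else:
            a = 0 if q[0] > q[1] else 1
        if a == 0:
            pending.append(t + delay)       # lever 0 schedules a reward
        r = 1.0 if pending and pending[0] <= t else 0.0
        if r:
            pending.pop(0)
        q[a] += alpha * (r - q[a])          # credit goes to the latest action
    return q

q_immediate = train(delay=0)
q_delayed = train(delay=2)
```

With immediate delivery the learner cleanly separates the two levers; with even a two-step delay the reward often lands after a different action, and the two learned values end up nearly indistinguishable.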

So the dilemma for persuasive technology, and anyone who wants to free themselves from its hold, is not just how phones/emails/social media structure our rewards, but also the fact that they allow gratification at almost any moment. There are always new notifications, new news, and so phones let us have zero delay for the reward of checking our phones. If you want to focus on other things, like being a successful parent, friend or human, the delays on the rewards are far larger (not to mention the rewards themselves more nebulous).

The way I like to think about it is as the conflict between the impatient, narrow, smaller self – the self that likes sweets and gossip and all things immediate gratification – and the wider, wiser self – the self that invests in the future and cares about the bigger picture. That self can win out, and does win out as we make our stumbling journey into adulthood, but my hunch is we’re going to need a different framework from that of reinforcement learning to do it.

Nautilus article: Unreliable rewards trap us into addictive cell phone use, but they can also get us out

Mindhacks.com: post about reinforcement schedules, and how they might be used to break technology compulsion (from 2006 – just sayin’)

George Ainslie’s book Breakdown of Will is what happens if you go so deep into the reinforcement learning paradigm you explode its reductionism and reinvent the notion of the self. Mind-alteringly good.

Do students know what’s good for them?

Of course they do, and of course they don’t.

Putting a student at the centre of their own learning seems like fundamental pedagogy. The Constructivist approach to education emphasises the need for knowledge to be reassembled in the mind of the learner, and the related impossibility of its direct transmission from the mind of the teacher. Believe this, and student input into how they learn must follow.

At the same time, we know there is a deep neurobiological connection between the machinery of reward in our brain, and that of learning. Both functions seem to be entangled in the subcortical circuitry of a network known as the basal ganglia. It’s perhaps not surprising that curiosity, which we all know personally to be a powerful motivator of learning, activates the same subcortical circuitry involved in the pleasurable anticipation of reward. Further, curiosity enhances memory, even for things you learn while your curiosity is aroused about something else.

This neurobiological alignment of enjoyment and learning isn’t mere coincidence. When building learning algorithms for embedding in learning robots, the basic rules of learning from experience have to be augmented with a drive to explore – curiosity! – so that they don’t become stuck repeating suboptimal habits. Whether it is motivated by curiosity or other factors, exploration seems to support enhanced learning in a range of domains from simple skills to more complex ideas.

Obviously we learn best when motivated, and when learning is fun, and allowing us to explore our curiosity is a way to allow both. However, putting the trajectory of their experience into students’ hands can go awry.

False beliefs impede learning

One reason is false beliefs about how much we know, or how we learn best. Psychologists studying memory have long documented such metacognitive errors, which include overconfidence, and a mistaken reliance on our familiarity with a thing as a guide to how well we understand it, or how well we’ll be able to recall it when tested (recognition and recall are in fact different cognitive processes). Sure enough, when tested in experiments people will over-rely on ineffective study strategies (like rereading, or reviewing the answers to questions, rather than testing their ability to generate the answers from the questions). Cramming is another ineffective study strategy, with experiment after experiment showing the benefit of spreading out your study rather than massing it all together. Obviously this requires being more organised, but my belief is that a metacognitive error supports students’ over-reliance on cramming – cramming feels good, because, for a moment, you feel familiar with all the information. The problem is that this feel-good familiarity isn’t the kind of memory that will support recall in an exam, but immature learners often don’t realise the extent of that.

In agreement with these findings from psychologists, education scholars have reacted against pure student-led or discovery learning, with one review summarising the findings from multiple distinct research programmes taking place over three decades: “In each case, guided discovery was more effective than pure discovery in helping students learn and transfer”.

The solution: balancing guided and discovery learning

This leaves us at a classic “middle way”, where pure student-led or teacher-led learning is ruled out. Some kind of guided exploration, structured study, or student choice in learning is obviously a necessity, but we’re not sure how much.

There’s an exciting future for research which informs us what the right blend of guided and discovery learning is, and which students and topics suit which exact blend. One strand of this is to take the cognitive psychology experiments which demonstrate a benefit of active choice learning over passive instruction and to tweak them so that we can see when passive instruction can be used to jump-start or augment active choice learning. One experiment from Kyle MacDonald and Michael Frank of Stanford University used a highly abstract concept learning task in which participants use trial and error to figure out a categorisation of different shapes. Previous research had shown that people learned faster if they were allowed to choose their own examples to receive feedback on, but this latest iteration of the experiment from MacDonald and Frank showed that an initial session of passive learning, where the examples were chosen for the learner, boosted performance even further. Presumably this effect is due to the scaffolding in the structure of the concept-space that the passive learning gives the learner. Experiments like this make it possible to show when and how active learning and instructor-led learning can be blended.

Education is about more than students learning the material on the syllabus. There is a meta-goal of producing students who are better able to learn for themselves. The same cognitive machinery in all of us might push us towards less effective strategies. The simple fact of being located within our own selfish consciousness means that even the best performers in the world need a coach to help them learn. But as we mature we can learn to better avoid pitfalls in our learning and evolve into better self-determining students. Ultimately the best education needs to keep its focus on that need to help each of us take on more and more responsibility for how we learn, whether that means submitting to others’ choices or exploring things for ourselves – or, often, a bit of both.

This post originally appeared on the NPJ ‘Science of Learning’ Community