Gender brain blogging

I’ve started teaching a graduate seminar on the cognitive neuroscience of sex differences. The ambition is to carry out a collective close reading of Cordelia Fine’s “Delusions of Gender: The Real Science Behind Sex Differences” (US: “How Our Minds, Society, and Neurosexism Create Difference”). Week by week the class is going to extract the arguments and check the references from each chapter of Fine’s book.

I mention this to explain why there is likely to be an increase in the number of gender-themed posts by me to mindhacks.com.

Here’s Fine summarising her argument in the introduction to the 2010 book:

There are sex differences in the brain. There are also large […] sex differences in who does what and who achieves what. It would make sense if these facts were connected in some way, and perhaps they are. But when we follow the trail of contemporary science we discover a surprising number of gaps, assumptions, inconsistencies, poor methodologies and leaps of faith.

This is a book about how science works, and how it is made to work, as much as it is a book about gender. It’s the Bad Science of cognitive neuroscience. Essential.

The troubled friendship of Tversky and Kahneman

Daniel Kahneman, by Pat Kinsella for the Chronicle Review (detail)

Writer Michael Lewis’s new book, “The Undoing Project: A Friendship That Changed Our Minds”, is about two of the most important figures in modern psychology, Amos Tversky and Daniel Kahneman.

In this extract for the Chronicle of Higher Education, Lewis describes the emotional tension between the pair towards the end of their collaboration. It’s a compelling ‘behind the scenes’ view of the human side to the foundational work of the heuristics and biases programme in psychology, as well as being brilliantly illustrated by Pat Kinsella.

One detail that caught my eye is this response by Amos Tversky to a critique of the work he did with Kahneman. As well as being something I’ve wanted to write myself on occasion, it illustrates the forthrightness that made Tversky both a productive and a difficult colleague:

the objections you raised against our experimental method are simply unsupported. In essence, you engage in the practice of criticizing a procedural departure without showing how the departure might account for the results obtained. You do not present either contradictory data or a plausible alternative interpretation of our findings. Instead, you express a strong bias against our method of data collection and in favor of yours. This position is certainly understandable, yet it is hardly convincing.

Link: A Bitter Ending: Daniel Kahneman, Amos Tversky, and the limits of collaboration

echo chambers: old psych, new tech

If you were surprised by the result of the Brexit vote in the UK or by the Trump victory in the US, you might live in an echo chamber – a self-reinforcing world of people who share the same opinions as you. Echo chambers are a problem, and not just because it means some people make incorrect predictions about political events. They threaten our democratic conversation, splitting up the common ground of assumption and fact that is needed for diverse people to talk to each other.

Echo chambers aren’t just a product of the internet and social media, however, but of how those things interact with fundamental features of human nature. Understand these features of human nature and maybe we can think creatively about ways to escape them.

Built-in bias

One thing that drives echo chambers is our tendency to associate with people like us. Sociologists call this homophily. We’re more likely to make connections with people who are similar to us. That’s true for ethnicity, age, gender, education and occupation (and, of course, geography), as well as a range of other dimensions. We’re also more likely to lose touch with people who aren’t like us, further strengthening the niches we find ourselves in. Homophily is one reason obesity can seem contagious – people who are at risk of gaining weight are disproportionately more likely to hang out with each other and share an environment that encourages obesity.
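To make that mechanism concrete, here is a minimal sketch in Python – a toy model of my own, not taken from the sociology literature – in which agents simply drop ties to dissimilar others and rewire at random. Every parameter is illustrative:

```python
# Toy homophily model: agents prune dissimilar ties and rewire randomly.
# All sizes and step counts are illustrative assumptions.
import random

N_AGENTS, N_FRIENDS, STEPS = 100, 5, 20_000

opinion = [random.choice([0, 1]) for _ in range(N_AGENTS)]
friends = [random.sample([j for j in range(N_AGENTS) if j != i], N_FRIENDS)
           for i in range(N_AGENTS)]

def like_minded_share():
    """Fraction of ties that connect agents with the same opinion."""
    same = sum(opinion[i] == opinion[j]
               for i in range(N_AGENTS) for j in friends[i])
    return same / (N_AGENTS * N_FRIENDS)

print(f"before: {like_minded_share():.2f} of ties are like-minded")
for _ in range(STEPS):
    i = random.randrange(N_AGENTS)
    j = random.choice(friends[i])
    if opinion[i] != opinion[j]:              # lose touch with the dissimilar...
        friends[i].remove(j)
        candidates = [k for k in range(N_AGENTS)
                      if k != i and k not in friends[i]]
        friends[i].append(random.choice(candidates))  # ...and meet someone new
print(f"after:  {like_minded_share():.2f} of ties are like-minded")
```

Even though the rewiring is random, the one-way pruning of dissimilar ties is enough to leave almost every connection like-minded.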

Another factor that drives the echo chamber is our psychological tendency to seek information that confirms what we already know – often called confirmation bias. Worse, even when presented with evidence to the contrary, we show a tendency to dismiss it and even harden our convictions. This means that even if you break into someone’s echo chamber armed with facts that contradict their view, you’re unlikely to persuade them with those facts alone.

News as information and identity

More and more of us get our news primarily from social media and use that same social media to discuss the news.

Social media takes our natural tendencies to associate with like-minded people and to seek information that confirms our convictions, and amplifies them. Dan Kahan, professor of law and psychology at Yale, describes each of us as switching between two modes of information processing – identity-affirming and truth-seeking. The result is that for issues which, for whatever reason, become associated with a group identity, even the most informed or well-educated can believe radically different things, because believing those things is tied up with signalling group identity more than with pursuing the evidence.

Mitigating human foibles

Where we go from here isn’t clear. The fundamentals of human psychology won’t just go away, but they do change depending on the environment we’re in. If technology and the technological economy reinforce the echo chamber, we can work to reshape these forces so as to mitigate it.

We can recognise that a diverse and truth-seeking media is a public good. That means it is worth supporting – both in established forms like the BBC, and in new forms like Wikipedia and The Conversation.

We can support alternative funding models for non-public media. Paying for news may seem old-fashioned, but there are long-term benefits. New ways of doing it are popping up. Services such as Blendle let you access news stories that are behind a pay wall by offering a pay-per-article model.

Technology can also help with individual solutions to the echo chamber, if you’re so minded. For Twitter users, otherside.site lets you view the feed of any other Twitter user, so if you want to know what Nigel Farage or Donald Trump read on Twitter, you can. (I wouldn’t bother with Trump. He only follows 41 people – mostly family and his own businesses. Now that’s an echo chamber.)

For Facebook users, politecho.org is a browser extension that shows the political biases of your friends and Facebook newsfeed. If you want a shortcut, this Wall Street Journal article puts Republican and Democratic Facebook feeds side-by-side.

Of course, these things don’t remove the echo chamber, but they do highlight the extent to which you’re in one, and – as with other addictions – recognising that you have a problem is the first step to recovery.

This article was originally published on The Conversation. Read the original article.

rational judges, not extraneous factors in decisions

The graph tells a dramatic story of irrationality, presented in the 2011 paper “Extraneous factors in judicial decisions”. What it shows is the outcome of parole board decisions, as ruled by judges, plotted against the order in which those decisions were made. The circles show the meal breaks taken by the judges.

As you can see, the outcomes change the further the judge gets from his/her last meal: the chance of a favourable decision drops dramatically from around 65% if you are the first case after a meal break to close to 0% if you are the last case in a long series before a break.

In their paper, the original authors argue that this effect of order truly is due to the judges’ hunger, and not a confound introduced by some other factor which affects both the order of cases and their chances of success (the lawyers sit outside the closed doors of the court, for example, so they can’t time their best cases to come just after a break – they don’t know when the judge is taking a meal; the effect survives additional analyses in which the severity of the prisoner’s crime and the length of sentence are factored in; and so on). The interpretation is that as the judges tire they fall back more and more on a simple heuristic – playing safe and refusing parole.

This seeming evidence of the irrationality of judges has been cited hundreds of times, in economics, psychology and legal scholarship. Now a new analysis by Andreas Glöckner in the journal Judgment and Decision Making questions these conclusions.

Glöckner’s analysis doesn’t prove that extraneous factors weren’t influencing the judges, but he shows how the same effect could be produced by entirely rational judges interacting with the protocols required by the legal system.

The main analysis works like this: we know that favourable rulings take longer than unfavourable ones (~7 mins vs ~5 mins), and we assume that judges are able to guess how long a case will take to rule on before they begin it (from clues like the thickness of the file, the types of request made, the representation the prisoner has and so on). Finally, we assume judges have a time limit in mind for each of the three sessions of the day, and will avoid starting cases which they estimate will overrun the time limit for the current session.

It turns out that this kind of rational time-management is sufficient to generate the drops in favourable outcomes. How this occurs isn’t straightforward, and interacts with a quirk of the original authors’ data presentation (specifically, their graph plots outcomes against the order number of cases, while the number of cases in each session varied from day to day – so, for example, it shows that the 12th case after a break is least likely to be judged favourably, but there wasn’t always a 12th case in a session; sessions containing more unfavourable cases were more likely to contribute to this data point).
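Here is a minimal simulation of that account – my own sketch, not Glöckner’s code. The ~7 and ~5 minute durations come from the analysis described above; the session length and the base rate of favourable rulings are assumptions for illustration:

```python
# Toy version of the rational time-management account of the
# "hungry judge" effect. Durations follow the post (~7 min favourable,
# ~5 min unfavourable); session budget and base rate are assumed.
import random

FAVOURABLE_MIN, UNFAVOURABLE_MIN = 7, 5
SESSION_BUDGET_MIN = 60          # assumed session length
P_FAVOURABLE = 0.35              # assumed base rate of favourable cases
N_DAYS = 50_000

def simulate_session():
    """Rule on cases until the next one would overrun the time budget."""
    outcomes, elapsed = [], 0
    while True:
        favourable = random.random() < P_FAVOURABLE
        duration = FAVOURABLE_MIN if favourable else UNFAVOURABLE_MIN
        if elapsed + duration > SESSION_BUDGET_MIN:
            return outcomes      # case deferred to the next session
        outcomes.append(favourable)
        elapsed += duration

# Pool outcomes by ordinal position, as in the original graph.
by_position = {}
for _ in range(N_DAYS):
    for i, fav in enumerate(simulate_session()):
        by_position.setdefault(i, []).append(fav)

for i in sorted(by_position):
    rate = sum(by_position[i]) / len(by_position[i])
    print(f"case {i + 1}: {rate:.2f} favourable (n={len(by_position[i])})")
```

Every simulated ruling is drawn independently of “hunger”, yet pooling by ordinal position reproduces a favourable rate that starts at the base rate and collapses towards zero just before each break.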

This story of claim and counter-claim shows why psychologists prefer experiments, since only then can you truly isolate causal explanations (if you are a judge and willing to go without lunch, please get in touch). It also shows the benefit of simulations for extending the horizons of our intuition. Glöckner’s achievement is to show in detail how some reasonable assumptions – including that of a rational judge – can generate a pattern which hitherto seemed explainable only by the influence of an irrelevant factor on the judges’ decisions. This doesn’t settle the matter, but it does mean we can’t be so confident that this graph shows what it is often claimed to show. The judges’ decisions may not be irrational after all, and the timing of the judges’ meal breaks may not be influencing parole outcomes.

Original finding: Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011). Extraneous factors in judicial decisions. Proceedings of the National Academy of Sciences, 108(17), 6889-6892.

New analysis: Glöckner, A. (2016). The irrational hungry judge effect revisited: Simulations reveal that the magnitude of the effect is overestimated. Judgment and Decision Making, 11(6), 601-610.

Elsewhere I have written about how evidence of human irrationality is often over-egged: For argument’s sake: evidence that reason can change minds


How liars create the illusion of truth

Repetition makes a fact seem more true, regardless of whether it is or not. Understanding this effect can help you avoid falling for propaganda, says psychologist Tom Stafford.

“Repeat a lie often enough and it becomes the truth” is a law of propaganda often attributed to the Nazi Joseph Goebbels. Among psychologists, something like this is known as the “illusion of truth” effect. Here’s how a typical experiment on the effect works: participants rate how true trivia items are, things like “A prune is a dried plum”. Sometimes these items are true (like that one), but sometimes participants see a parallel version which isn’t true (something like “A date is a dried plum”).

After a break – of minutes or even weeks – the participants do the procedure again, but this time some of the items they rate are new, and some they saw before in the first phase. The key finding is that people tend to rate items they’ve seen before as more likely to be true, regardless of whether they are true or not, and seemingly for the sole reason that they are more familiar.

So, here, captured in the lab, seems to be the source for the saying that if you repeat a lie often enough it becomes the truth. And if you look around, you may start to think that everyone from advertisers to politicians is taking advantage of this foible of human psychology.

But a reliable effect in the lab isn’t necessarily an important effect on people’s real-world beliefs. If you really could make a lie sound true by repetition, there’d be no need for all the other techniques of persuasion.

One obstacle is what you already know. Even if a lie sounds plausible, why would you set what you know aside just because you heard the lie repeatedly?

Recently, a team led by Lisa Fazio of Vanderbilt University set out to test how the illusion of truth effect interacts with our prior knowledge. Would it affect our existing knowledge? They used paired true and un-true statements, but also split their items according to how likely participants were to know the truth (so “The Pacific Ocean is the largest ocean on Earth” is an example of a “known” item, which also happens to be true, and “The Atlantic Ocean is the largest ocean on Earth” is an un-true item, for which people are likely to know the actual truth).

Their results show that the illusion of truth effect worked just as strongly for known as for unknown items, suggesting that prior knowledge won’t prevent repetition from swaying our judgements of plausibility.

To cover all bases, the researchers performed one study in which the participants were asked to rate how true each statement seemed on a six-point scale, and one where they just categorised each fact as “true” or “false”. Repetition pushed the average item up the six-point scale, and increased the odds that a statement would be categorised as true. For statements that were actually fact or fiction, known or unknown, repetition made them all seem more believable.

At first this looks like bad news for human rationality, but – and I can’t emphasise this strongly enough – when interpreting psychological science, you have to look at the actual numbers.

What Fazio and colleagues actually found is that the biggest influence on whether a statement was judged to be true was… whether it actually was true. The repetition effect couldn’t mask the truth. With or without repetition, people were still more likely to believe the actual facts as opposed to the lies.

This shows something fundamental about how we update our beliefs – repetition has the power to make things sound more true, even when we know differently, but it doesn’t over-ride that knowledge.
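As a toy illustration of that pattern (with made-up effect sizes, not Fazio and colleagues’ data), imagine ratings on the six-point scale driven by both truth and repetition, with truth weighted more heavily:

```python
# Toy model: ratings rise with repetition, but truth dominates.
# The weights and noise level are illustrative assumptions.
import random

W_TRUTH, W_REPEAT, NOISE_SD = 1.5, 0.5, 0.8

def rating(is_true, repeated):
    """A rating on a rough six-point scale, as in one of the studies."""
    score = (3.0 + W_TRUTH * is_true + W_REPEAT * repeated
             + random.gauss(0, NOISE_SD))
    return min(6.0, max(1.0, score))

for is_true in (True, False):
    for repeated in (True, False):
        ratings = [rating(is_true, repeated) for _ in range(10_000)]
        mean = sum(ratings) / len(ratings)
        print(f"true={is_true!s:5} repeated={repeated!s:5} mean={mean:.2f}")
```

Repetition nudges every mean rating upwards, but true items stay rated above false ones – the shape of the actual finding.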

The next question has to be: why might that be? The answer is to do with the effort it takes to be rigidly logical about every piece of information you hear. If every time you heard something you assessed it against everything you already knew, you’d still be thinking about breakfast at supper-time. Because we need to make quick judgements, we adopt shortcuts – heuristics which are right more often than they are wrong. Relying on how often you’ve heard something to judge how truthful it feels is just one such strategy. Any universe where truth gets repeated more often than lies, even if only 51% vs 49%, will be one where this is a quick and dirty rule for judging facts.
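That back-of-the-envelope claim is easy to check with a toy simulation (the 51%/49% split is the only number carried over from the argument above; the rest is illustrative):

```python
# Toy check: if truths get repeated slightly more often than lies,
# "familiar means true" beats chance.
import random

P_REPEAT_IF_TRUE = 0.51   # truths get repeated slightly more often...
P_REPEAT_IF_LIE = 0.49    # ...than lies, as in the 51% vs 49% above
N = 1_000_000

correct = 0
for _ in range(N):
    is_true = random.random() < 0.5          # half of all statements are true
    p_repeat = P_REPEAT_IF_TRUE if is_true else P_REPEAT_IF_LIE
    repeated = random.random() < p_repeat
    correct += (repeated == is_true)         # heuristic: familiar means true

print(f"accuracy of the familiarity heuristic: {correct / N:.3f}")  # ~0.51
```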

If repetition was the only thing that influenced what we believed, we’d be in trouble, but it isn’t. We can all bring to bear more extensive powers of reasoning, but we need to recognise that they are a limited resource. Our minds are prey to the illusion of truth effect because our instinct is to use short-cuts in judging how plausible something is. Often this works. Sometimes it is misleading.

Once we know about the effect we can guard against it. Part of this is double-checking why we believe what we do – if something sounds plausible, is it because it really is true, or have we just been told that repeatedly? This is why scholars are so mad about providing references – so we can track the origin of any claim, rather than having to take it on faith.

But part of guarding against the illusion is the obligation it puts on us to stop repeating falsehoods. We live in a world where the facts matter, and should matter. If you repeat things without bothering to check if they are true, you are helping to make a world where lies and truth are easier to confuse. So, please, think before you repeat.

This is my BBC Future column from the other week, the original is here. For more on this topic, see my ebook : For argument’s sake: evidence that reason can change minds (smashwords link here)

reinforcing your wiser self

Nautilus has a piece by David Perezcassar on how technology takes advantage of our animal instinct for variable reward schedules (Unreliable rewards trap us into addictive cell phone use, but they can also get us out).

It’s a great illustrated read about the scientific history of the ideas behind ‘persuasive technology’, and ends with a plea that perhaps we can hijack our weakness for variable reward schedules for better ends:

What if we set up a variable reward system to reward ourselves for the time spent away from our phones & physically connecting with others? Even time spent meditating or reading without technological distractions is a heroic endeavor worthy of a prize

Which isn’t a bad idea, but the pattern of the reward schedule is only one factor in what makes an activity habit-forming. The timing of a reward is more important than its reliability – it’s easier to train in habits with immediate rather than delayed rewards. The timing is so crucial that in the animal learning literature even a delay of 2 seconds between a lever press and the delivery of a food pellet impairs learning in rats. In experiments we did with humans, a delay of 150 ms was enough to hinder our participants connecting their own actions with a training signal.
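To see why delay is so destructive, here is a toy model of the credit-assignment problem – my own illustration, not our actual experiment – in which a learner credits whatever it did most recently whenever a reward arrives:

```python
# Toy credit-assignment model: delayed rewards get credited to the
# wrong action. Actions and trial counts are illustrative assumptions.
import random

ACTIONS = ["press_lever", "groom", "sniff"]   # hypothetical action set
REWARDED = "press_lever"                      # only this action earns reward

def train(delay_steps, trials=30_000):
    """Count how often each action gets credited for a reward."""
    credit = {a: 0 for a in ACTIONS}
    pending = []                              # countdowns for earned rewards
    for _ in range(trials):
        action = random.choice(ACTIONS)       # explore at random
        if action == REWARDED:
            pending.append(delay_steps)
        delivered = [d for d in pending if d <= 0]
        pending = [d - 1 for d in pending if d > 0]
        for _ in delivered:
            credit[action] += 1               # credit the most recent action
    return credit

print("no delay:", train(0))   # credit lands only on press_lever
print("delay 2: ", train(2))   # credit spread across all three actions
```

With no delay all the credit lands on the lever press; with a delay of just two steps it is spread evenly across whatever actions happened to intervene.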

So the dilemma for persuasive technology, and for anyone who wants to free themselves from its hold, is not just how phones/emails/social media structure our rewards, but also the fact that they allow gratification at almost any moment. There are always new notifications, new news, and so phones give us zero delay on the reward of checking our phones. If you want to focus on other things – like being a successful parent, friend or human – the delays on those rewards are far larger (not to mention the rewards more nebulous).

The way I like to think about it is as a conflict between the impatient, narrow, smaller self – the self that likes sweets and gossip and all things immediate gratification – and the wider, wiser self – the self that invests in the future and cares about the bigger picture. That self can win out, and does win out as we make our stumbling journey into adulthood, but my hunch is we’re going to need a different framework from reinforcement learning to do it.

Nautilus article: Unreliable rewards trap us into addictive cell phone use, but they can also get us out

Mindhacks.com: post about reinforcement schedules, and how they might be used to break technology compulsion (from 2006 – just sayin’)

George Ainslie’s book Breakdown of Will is what happens if you go so deep into the reinforcement learning paradigm you explode its reductionism and reinvent the notion of the self. Mind-alteringly good.