This map shows what white Europeans associate with race – and it makes for uncomfortable reading

This new map shows how easily white Europeans associate black faces with negative ideas.

Since 2002, hundreds of thousands of people around the world have logged onto a website run by Harvard University called Project Implicit and taken an “implicit association test” (IAT), a rapid-response task which measures how easily you can pair items from different categories.

To create this new map, we used data from a version of the test which presents white or black faces and positive or negative words. The result shows how easily our minds automatically make the link between the categories – what psychologists call an “implicit racial attitude”.

Each country on the map is coloured according to the average score of test takers from that country. Redder countries show higher average bias, bluer countries show lower average bias, as the scale on the top of the map shows.

Like a similar map which had been made for US states, our map shows variation in the extent of racial bias – but all European countries are racially biased when comparing blacks versus whites.

In every country in Europe, people are slower to associate blackness with positive words such as “good” or “nice” and faster to associate blackness with negative concepts such as “bad” or “evil”. But they are quicker to make the link between blackness and negative concepts in the Czech Republic or Lithuania than they are in Slovenia, the UK or Ireland.

No country had an average score below zero, which would reflect positive associations with blackness. In fact, none had an average score that was even close to zero, which would reflect neither positive nor negative racial associations.

A screenshot from the online IAT test.
IAT, Project Implicit

Implicit bias

Overall, we have scores for 288,076 white Europeans, collected between 2002 and 2015, with sample sizes for each country shown on the left-hand side.

Because of the design of the test, it is very difficult to deliberately control your score. Many people, including those who sincerely hold non-racist or even anti-racist beliefs, record positive scores on the test – that is, scores reflecting anti-black associations. The exact meaning of implicit attitudes, and of the IAT itself, is controversial, but we believe they reflect the automatic associations we hold in our minds, associations that develop over years of immersion in the social world.
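For those curious about the mechanics, here is a rough sketch of how a score like this is computed. It is a simplification of the standard IAT “D” scoring – the real Project Implicit algorithm also deals with practice blocks, error penalties and trial exclusions – and the response times in it are invented purely for illustration.

```python
# Simplified IAT-style score: compare mean response times between the two
# pairing conditions and divide by the pooled standard deviation.
# (The real scoring algorithm is more elaborate; this is only a sketch.)
from statistics import mean, stdev

def iat_d_score(black_negative_rts, black_positive_rts):
    """Positive values mean responses were slower when black faces were
    paired with positive words (an anti-black implicit association);
    a score of zero would mean no difference between the two conditions."""
    pooled_sd = stdev(black_negative_rts + black_positive_rts)
    return (mean(black_positive_rts) - mean(black_negative_rts)) / pooled_sd

# Hypothetical response times (in milliseconds) for a single test taker
black_negative_trials = [650, 700, 620, 680, 710]  # black faces + negative words
black_positive_trials = [820, 790, 850, 760, 800]  # black faces + positive words

print(round(iat_d_score(black_negative_trials, black_positive_trials), 2))
```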

Although we, as individuals, may not hold racist beliefs, the ideas we associate with race may be constructed by a culture which describes people of different ethnicities in consistent ways, and ways which are consistently more or less positive. Looked at like this, the IAT – which at best is a weak measure of individual psychology – may be most useful if individuals’ scores are aggregated to provide a reflection on the collective social world we inhabit.

The results shown in this map give detail to what we already expected – that across Europe racial attitudes are not neutral. Blackness has negative associations for white Europeans, and there are some interesting patterns in how the strength of these negative associations varies across the continent.

North and west Europe have, on average, weaker anti-black associations – although the associations are still anti-black. As you move south and east the strength of negative associations tends to increase – but not everywhere. The Balkans look like an exception, compared to surrounding countries. Is this because of some quirk in how people in the Balkans heard about Project Implicit, or because their prejudices aren’t orientated around a white-black axis? For now, we can only speculate.

Open questions

When interpreting the map there are at least two important qualifications to bear in mind.

The first is that the scores only reflect racial attitudes in one dimension: pairing white/black with goodness/badness. Our feelings about ethnicity have many more dimensions which aren’t captured by this measure.

The second is that the data comes from Europeans who visit the US Project Implicit website, which is in English. We can be certain that the sample reflects a subset of the European population which is more internet-savvy than is typical. They are probably also younger, and more cosmopolitan. These factors are likely to lead us to understate the extent of implicit racism in each country, so that the true levels of implicit racism are probably higher than shown on this map.

This new map is possible because Project Implicit release their data via the Open Science Framework. This site allows scientists to share the raw materials and data from their experiments, allowing anyone to check their working, or re-analyse the data, as we have done here. I believe that open tools and publishing methods like these are necessary to make science better and more reliable.
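As a rough illustration of the kind of re-analysis this makes possible, the sketch below shows how per-country averages like those on the map could be computed once the data has been downloaded. The file name and column names here are placeholders, not the actual variable names in the Project Implicit export.

```python
# Sketch of the aggregation behind the map: average IAT score per country,
# restricted to white European respondents. Column names ("d_score",
# "country", "participant_race", "region") and the file name are
# placeholders, not the real variable names in the Project Implicit data.
import pandas as pd

iat = pd.read_csv("race_iat_2002_2015.csv")   # hypothetical download from the OSF

white_europeans = iat[(iat["participant_race"] == "white") &
                      (iat["region"] == "Europe")]

country_bias = (white_europeans
                .groupby("country")["d_score"]
                .agg(["mean", "count"])        # average bias and sample size
                .sort_values("mean", ascending=False))

print(country_bias)
```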

This article was originally published on The Conversation. Read the original article.

Edit 4/5/17: The colour scale chosen for this map emphasises the differences between countries. While that’s most important for working out what drives IAT scores, the main take-away from the map is that all of Europe is far from neutral. That conclusion is better conveyed by a continuous colour scale, as used in this version of the map here.

Why women don’t report sexual harassment

Julie A. Woodzicka (Washington and Lee University) and Marianne LaFrance (Yale) report an experiment reminiscent of Milgram’s famous studies of obedience to authority. Reminiscent both because it highlights the gap between how we imagine we’ll respond under pressure and how we actually do respond, and because it’s hard to imagine an ethics review board allowing it.

The study, reported in the Journal of Social Issues in 2001, involved the following (in their own words):

we devised a job interview in which a male interviewer asked female job applicants sexually harassing questions interspersed with more typical questions asked in such contexts.

The three sexually harassing questions were (1) Do you have a boyfriend? (2) Do people find you desirable? and (3) Do you think it is important for women to wear bras to work?

Participants, all women, average age 22, did not know they were in an experiment and were recruited through posters and newspaper adverts for a research assistant position.

The results illuminated what targets of harassment do not do. First, no one refused to answer: Interviewees answered every question irrespective of whether it was harassing or nonharassing. Second, among those asked the harassing questions, few responded with any form of confrontation or repudiation. Nonetheless, the responses revealed a variety of ways that respondents attempted to circumvent the situation posed by harassing questions.

Just as with the Milgram experiment, these results contrast with how participants from a companion study imagined they would respond when the scenario was described to them:

The majority (62%) anticipated that they would either ask the interviewer why he had asked the question or tell him that it was inappropriate. Further, over one quarter of the participants (28%) indicated that they would take more drastic measures by either leaving the interview or rudely confronting the interviewer. Notably, a large number of respondents (68%) indicated that they would refuse to answer at least one of the three harassing questions.

Part of the difference, the researchers argue, is that women imagining the harassing situation over-estimate the anger they will feel. When confronted with actual harassment, fear replaces anger, they claim. Women asked the harassing questions reported significantly higher levels of fear than women asked the merely surprising questions. Coding of facial expressions during the (secretly videoed) interviews revealed that the harassed women also smiled more – fake (non-Duchenne) smiles, presumably aimed at appeasing a harasser of whom they felt afraid.

The research report doesn’t indicate what, if any, ethical review process the experiment was subject to.

Obviously it is an important topic, with disturbing and plausible findings. The researchers note that courts have previously interpreted inaction following harassment as indicative of some level of consent. But, despite the real-world relevance, is the topic important enough to justify employing a man to sexually harass unsuspecting women?

Reference: Woodzicka, J. A., & LaFrance, M. (2001). Real versus imagined gender harassment. Journal of Social Issues, 57(1), 15-30.

Previously: a series of Gender Brain Blogging

Much more previously: an essay I wrote arguing that moral failures are often defined by failures of imagination, not of reason: The Narrative Escape

The Social Priming Studies in “Thinking Fast and Slow” are not very replicable

In Daniel Kahneman’s “Thinking Fast and Slow” he introduces research on social priming – the idea that subtle cues in the environment may have significant, reliable effects on behaviour. In that book, published in 2011, Kahneman writes “disbelief is not an option” about these results. Since then, the evidence against the reliability of social priming research has been mounting.

In a new analysis, ‘Reconstruction of a Train Wreck: How Priming Research Went off the Rails’, Ulrich Schimmack, Moritz Heene, and Kamini Kesavan review chapter 4 of Thinking Fast and Slow, picking out the references which provide evidence for social priming and calculating how statistically reliable they are.

Their conclusion:

The results are eye-opening and jaw-dropping.  The chapter cites 12 articles and 11 of the 12 articles have an R-Index below 50.  The combined analysis of 31 studies reported in the 12 articles shows 100% significant results with average (median) observed power of 57% and an inflation rate of 43%.  …readers of… “Thinking Fast and Slow” should not consider the presented studies as scientific evidence that subtle cues in their environment can have strong effects on their behavior outside their awareness.

The argument is that a pattern of 100% significant results is close to impossible, even if the effects were real, given the weak statistical power of the studies to detect true effects.
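To make the arithmetic concrete, here is a back-of-the-envelope version of that argument, using the figures quoted above and (as I understand them) Schimmack’s definitions of inflation and the R-Index.

```python
# Back-of-the-envelope version of the argument, using the figures quoted
# above. Inflation is the gap between the reported success rate and the
# median observed power; the R-Index subtracts that inflation from the
# observed power again.
median_observed_power = 0.57   # median observed power across the 31 studies
success_rate = 1.00            # 100% of the reported results were significant

inflation = success_rate - median_observed_power   # 0.43
r_index = median_observed_power - inflation        # 0.14, well below 0.50

# If each of the 31 studies really had a 57% chance of reaching significance,
# the probability that every single one would do so is vanishingly small.
p_all_significant = median_observed_power ** 31    # roughly 3 in 100 million

print(f"inflation rate: {inflation:.0%}")
print(f"R-Index: {r_index:.0%}")
print(f"P(31 of 31 significant): {p_all_significant:.1e}")
```

Even granting that the underlying effects exist, studies with that level of power simply would not produce an unbroken run of 31 significant results.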

Remarkably, Kahneman responds in the comments:

What the blog gets absolutely right is that I placed too much faith in underpowered studies. …I have changed my views about the size of behavioral priming effects – they cannot be as large and as robust as my chapter suggested.

The original analysis, and Kahneman’s response, are worth reading in full. Together they give a potted history of the replication crisis and a summary of some of its prime causes (e.g. file drawer effects), as well as showing how psychological scientists can make, and respond to, critique in a mature way.

Original analysis: ‘Reconstruction of a Train Wreck: How Priming Research Went off the Rails‘, Ulrich Schimmack, Moritz Heene, and Kamini Kesavan. (Is it a paper? Is it a blogpost? Who knows?!)

Kahneman’s response

The troubled friendship of Tversky and Kahneman

Daniel Kahneman, by Pat Kinsella for the Chronicle Review (detail)

Writer Michael Lewis’s new book, “The Undoing Project: A Friendship That Changed Our Minds”, is about two of the most important figures in modern psychology, Amos Tversky and Daniel Kahneman.

In this extract for the Chronicle of Higher Education, Lewis describes the emotional tension between the pair towards the end of their collaboration. It’s a compelling ‘behind the scenes’ view of the human side to the foundational work of the heuristics and biases programme in psychology, as well as being brilliantly illustrated by Pat Kinsella.

One detail that caught my eye is this response by Amos Tversky to a critique of the work he did with Kahneman. As well as being something I’ve wanted to write myself on occasion, it illustrates the forthrightness which made Tversky a productive and difficult colleague:

the objections you raised against our experimental method are simply unsupported. In essence, you engage in the practice of criticizing a procedural departure without showing how the departure might account for the results obtained. You do not present either contradictory data or a plausible alternative interpretation of our findings. Instead, you express a strong bias against our method of data collection and in favor of yours. This position is certainly understandable, yet it is hardly convincing.

Link: A Bitter Ending: Daniel Kahneman, Amos Tversky, and the limits of collaboration

Echo chambers: old psych, new tech

If you were surprised by the result of the Brexit vote in the UK or by the Trump victory in the US, you might live in an echo chamber – a self-reinforcing world of people who share the same opinions as you. Echo chambers are a problem, and not just because it means some people make incorrect predictions about political events. They threaten our democratic conversation, splitting up the common ground of assumption and fact that is needed for diverse people to talk to each other.

Echo chambers aren’t just a product of the internet and social media, however, but of how those things interact with fundamental features of human nature. Understand these features of human nature and maybe we can think creatively about ways to escape them.

Built-in bias

One thing that drives echo chambers is our tendency to associate with people like us. Sociologists call this homophily. We’re more likely to make connections with people who are similar to us. That’s true for ethnicity, age, gender, education and occupation (and, of course, geography), as well as a range of other dimensions. We’re also more likely to lose touch with people who aren’t like us, further strengthening the niches we find ourselves in. Homophily is one reason obesity can seem contagious – people who are at risk of gaining weight are disproportionately more likely to hang out with each other and share an environment that encourages obesity.

Another factor that drives the echo chamber is our psychological tendency to seek information that confirms what we already know – often called confirmation bias. Worse, even when presented with evidence to the contrary, we show a tendency to dismiss it and even harden our convictions. This means that even if you break into someone’s echo chamber armed with facts that contradict their view, you’re unlikely to persuade them with those facts alone.

News as information and identity

More and more of us get our news primarily from social media and use that same social media to discuss the news.

Social media takes our natural tendencies to associate with like-minded people and to seek out information that confirms our convictions, and amplifies them. Dan Kahan, professor of law and psychology at Yale, describes each of us switching between two modes of information processing – identity affirming and truth seeking. The result is that for issues which, for whatever reason, become associated with a group identity, even the most informed or well educated can believe radically different things, because believing those things is tied up with signalling group identity more than with pursuing the evidence.

Mitigating human foibles

Where we go from here isn’t clear. The fundamentals of human psychology won’t just go away, but they do change depending on the environment we’re in. If technology and the technological economy reinforce the echo chamber, we can work to reshape these forces so as to mitigate it.

We can recognise that a diverse and truth-seeking media is a public good. That means it is worth supporting – both in established forms like the BBC, and in new forms like Wikipedia and The Conversation.

We can support alternative funding models for non-public media. Paying for news may seem old-fashioned, but there are long-term benefits. New ways of doing it are popping up. Services such as Blendle let you access news stories that are behind a pay wall by offering a pay-per-article model.

Technology can also help with individual solutions to the echo chamber, if you’re so minded. For Twitter users, otherside.site lets you view the feed of any other Twitter user, so if you want to know what Nigel Farage or Donald Trump reads on Twitter, you can. (I wouldn’t bother with Trump. He only follows 41 people – mostly family and his own businesses. Now that’s an echo chamber.)

For Facebook users, politecho.org is a browser extension that shows the political biases of your friends and Facebook newsfeed. If you want a shortcut, this Wall Street Journal article puts Republican and Democratic Facebook feeds side-by-side.

Of course, these things don’t remove the echo chamber, but they do highlight the extent to which you’re in one, and – as with other addictions – recognising that you have a problem is the first step to recovery.

This article was originally published on The Conversation. Read the original article.

The mechanics of subtle discrimination: measuring ‘microaggression’

Many people don’t even realise that they are discriminating based on race or gender. And they won’t believe that their unconscious actions have consequences until they see scientific evidence. Here it is.

The country in which I live has laws forbidding discrimination on the grounds of ethnicity, religion, sexuality or sex. We’ve come a long way since the days when the reverse was true – when homosexuality was illegal, for instance, or when women were barred from voting. But this doesn’t mean that prejudice is over, of course. Nowadays we need to be as concerned about subtler strains of prejudice as about the kind of loud-mouthed racism and sexism that makes us ashamed of the past.

Subtle prejudice is the domain of unjustified assumptions, dog-whistles, and plain failure to make the effort to include people who are different from ourselves, or who don’t fit our expectations. One word for the expressions of subtle prejudice is ‘microaggressions’. These are things such as repeating a thoughtless stereotype, or too readily dismissing someone’s viewpoint – actions that may seem unworthy of comment, but can nevertheless marginalise an individual.

The people perpetrating these microaggressions may be completely unaware that they hold a prejudiced view. Psychologists distinguish between our explicit attitudes – the beliefs and feelings we’ll admit to – and our implicit attitudes – the beliefs and feelings revealed by our actions. So, for example, you might say that you are not sexist, and you might even say that you are anti-sexist, but if you interrupt women more than men in meetings you would be displaying a sexist implicit attitude – one very different from the non-sexist explicit attitude you profess.

‘Culture of victimhood’

The thing about subtle prejudice is that it is by definition subtle – lots of small differences in how people are treated, small asides, little jibes, ambiguous differences in how we treat one person compared to another. This makes it hard to measure, and hard to address, and – for some people – hard to take seriously.

This is the skeptical line of thought: when people complain about being treated differently in small ways they are being overly sensitive, trying to lay claim to a culture of victimhood. Small differences are just that – small. They don’t have large influences on life outcomes and aren’t where we should focus our attention.

Now you will have your own intuitions about that view, but my interest is in how you could test the idea that a thousand small cuts do add up. A classic experiment on the way race affects our interactions shows not only the myriad ways in which race can affect how we treat people, but shows in a clever way that even the most privileged of us would suffer if we were all subjected to subtle discrimination.

In the early 1970s, a team led by Carl Word at Princeton University recruited white students for an experiment they were told was about assessing the quality of job candidates. Unbeknown to them, the experiment was really about how they treated the supposed job candidates, and whether that treatment differed depending on whether the candidate was white or black.

Despite believing their task was to find the best candidate, the white recruits treated candidates differently based on their race – sitting further away from them, and displaying fewer signs of engagement such as making eye-contact or leaning in during conversation. Follow-up work more recently has shown that this is still true, and that these nonverbal signs of friendliness weren’t related to their explicit attitudes, so operate independently from the participants’ avowed beliefs about race and racism.

So far the Princeton experiment probably doesn’t tell anyone who has been treated differently because of their race anything they didn’t know from painful experience. The black candidates in this experiment were treated less well than the white candidates, not just in the nonverbal signals the interviewers gave off: on average they were also given 25% less time in their interviews. This alone would be an injustice, but how big a disadvantage is it to be treated like this?

Word’s second experiment gives us a handle on this. After collecting these measurements of nonverbal behaviour the research team recruited some new volunteers and trained them to react in the manner of the original experimental subjects. That is, they were trained to treat interview candidates as the original participants had treated white candidates: making eye contact, smiling, sitting closer, allowing them to speak for longer. And they were also trained to produce the treatment the black candidates received: less eye contact, fewer smiles and so on. All candidates were to be treated politely and fairly, with only the nonverbal cues varying.

Next, the researchers recruited more white Princeton undergraduates to play the role of job candidates, and they were randomly assigned to be nonverbally treated like the white candidates in the first experiment, or like the black candidates.

The results allow us to see the self-fulfilling prophecy of discrimination. The candidates who received the “black” nonverbal signals delivered a worse interview performance, as rated by independent judges. They made far more speech errors, in the form of hesitations, stutters, mistakes and incomplete sentences, and they chose to sit further away from the interviewer following a mid-interview interruption which caused them to retake their chairs.

It isn’t hard to see that in a winner-takes-all situation like a job interview, such differences could be enough to lose you a job opportunity. What’s remarkable is that the participants’ performance had been harmed by nonverbal differences of the kind that many of us might produce without intending or realising. Furthermore, the effect was seen in students from Princeton University, one of the world’s elite universities. If even a white, privileged elite suffer under this treatment we might expect even larger effects for people who don’t walk into high-pressure situations with those advantages.

Experiments like these don’t offer the whole truth about discrimination. Problems like racism are patterned by so much more than individual attitudes, and often supported by explicit prejudice as well as subtle prejudice. Racism will affect candidates before, during and after job interviews in many more ways than I’ve described. What this work does show is one way in which, even with good intentions, people’s reactions to minority groups can have powerful effects. Small differences can add up.

This is my BBC Future column from last week. The original is here.