a literary case of the exploding head

One of the most commented-upon posts on this blog is this one from 2009, ‘Exploding head syndrome‘. The condition was first described in the 1920s, and remains under-documented and mysterious: the sufferer experiences a viscerally loud explosion that seems to occur inside their own head.

I’m reading V. S. Naipaul’s “The Enigma of Arrival”, and the autobiographical main character experiences the same thing. Here it is, on p93 of my edition of that novel:

In this dream there occurred always, at a critical moment in the dream narrative, what I can only describe as an explosion in my head. It was how every dream ended, with this explosion that threw me flat on my back, in the presence of people, in a street, a crowded room, or wherever, threw me into this degraded posture in the midst of standing people, threw me into the posture of sleep in which I found myself when I awakened. The explosion was so loud, so reverberating and slow in my head that I felt, with the part of my brain that miraculously could still think and draw conclusions, that I couldn’t possibly survive, that I was in fact dying, that the explosion this time, in this dream, regardless of the other dreams that had revealed themselves at the end as dreams, would kill, that I was consciously living through, or witnessing, my own death. And when I awoke my head felt queer, shaken up, exhausted; as though some discharge in my brain had in fact occurred.

The Enigma of Arrival on Goodreads
Vaughan’s 2009 post on Exploding Head Syndrome
Wikipedia: Exploding head syndrome

How curiosity can save you from political tribalism

Neither intelligence nor education can stop you from forming prejudiced opinions – but an inquisitive attitude may help you make wiser judgements.

Ask a left-wing Brit what they believe about the safety of nuclear power, and you can guess their answer. Ask a right-wing American about the risks posed by climate change, and you can also make a better guess than if you didn’t know their political affiliation. Issues like these feel like they should be informed by science, not our political tribes, but sadly, that’s not what happens.

Psychology has long shown that education and intelligence won’t stop your politics from shaping your beliefs about scientific issues, even when those beliefs don’t match the hard evidence. Instead, your ability to weigh up the facts may depend on a less well-recognised trait – curiosity.

The political lens

There is now a mountain of evidence to show that politics doesn’t just help predict people’s views on some scientific issues; it also affects how they interpret new information. This is why it is a mistake to think that you can somehow ‘correct’ people’s views on an issue by giving them more facts, since study after study has shown that people have a tendency to selectively reject facts that don’t fit with their existing views.

This leads to the odd situation that the people who hold the most extreme anti-science views – skeptics of the risks of climate change, for example – can be more scientifically informed than those who hold the same views less strongly.

But smarter people shouldn’t be susceptible to prejudice swaying their opinions, right? Wrong. Other research shows that people with the most education, the highest mathematical abilities, and the strongest tendencies to reflect on their beliefs are the most likely to resist information that contradicts their prejudices. This undermines the simplistic assumption that prejudice results from too much gut instinct and not enough deep thought. Rather, people with the facility for deeper thought about an issue can use those cognitive powers to justify what they already believe, and to find reasons to dismiss apparently contrary evidence.

It’s a messy picture, and at first looks like a depressing one for those who care about science and reason. A glimmer of hope can be found in new research from a collaborative team of philosophers, film-makers and psychologists led by Dan Kahan of Yale University.

Kahan and his team were interested in politically biased information processing, but also in studying the audience for scientific documentaries and using this research to help film-makers. They developed two scales. The first measured a person’s scientific background: a fairly standard set of questions about knowledge of basic scientific facts and methods, as well as quantitative judgement and reasoning. The second scale was more innovative, designed to measure something related but independent – a person’s curiosity about scientific issues, not how much they already knew. The way it was measured was innovative too. As well as asking questions, the researchers gave people choices about what material to read as part of a survey about reactions to news. If an individual chose to read science stories rather than sport or politics, their science curiosity score was marked up.

Armed with their scales, the team then set out to see how well each one predicted people’s opinions on public issues which should be informed by science. With the scientific knowledge scale the results were depressingly predictable. The left-wing participants – liberal Democrats – tended to judge issues such as global warming or fracking as significant risks to human health, safety or prosperity. The right-wing participants – conservative Republicans – were less likely to judge the issues as significant risks. What’s more, the liberals with more scientific background were the most concerned about the risks, while the conservatives with more scientific background were the least concerned. That’s right – higher levels of scientific education result in greater polarisation between the groups, not less.

So much for scientific background; scientific curiosity showed a different pattern. Differences between liberals and conservatives remained – on average there was still a noticeable gap in their risk estimates – but their opinions were at least heading in the same direction. For fracking, for example, more scientific curiosity was associated with more concern among both liberals and conservatives.

The team confirmed this using an experiment which gave participants a choice of science stories, either in line with their existing beliefs, or surprising to them. Those participants who were high in scientific curiosity defied the predictions and selected stories which contradicted their existing beliefs – this held true whether they were liberal or conservative.

And, in case you are wondering, the results hold for issues in which political liberalism is associated with the anti-science beliefs, such as attitudes to GMOs or vaccination.

So, curiosity might just save us from using science to confirm our identity as members of a political tribe. It also shows that to promote a greater understanding of public issues, it is as important for educators to try and convey their excitement about science and the pleasures of finding out stuff, as it is to teach people some basic curriculum of facts.

This is my BBC Future column from last week. The original is here. My ebook ‘For argument’s sake: evidence that reason can change minds’ is out now

The mechanics of subtle discrimination: measuring ‘microaggression’

Many people don’t even realise that they are discriminating based on race or gender. And they won’t believe that their unconscious actions have consequences until they see scientific evidence. Here it is.

The country in which I live has laws forbidding discrimination on the grounds of ethnicity, religion, sexuality or sex. We’ve come a long way since the days when the reverse was true – when homosexuality was illegal, for instance, or when women were barred from voting. But this doesn’t mean that prejudice is over, of course. Nowadays we need to be as concerned about subtler strains of prejudice as about the kind of loud-mouthed racism and sexism that makes us ashamed of the past.

Subtle prejudice is the domain of unjustified assumptions, dog-whistles, and plain failure to make the effort to include people who are different from ourselves, or who don’t fit our expectations. One word for the expressions of subtle prejudice is ‘microaggressions’. These are things such as repeating a thoughtless stereotype, or too readily dismissing someone’s viewpoint – actions that may seem unworthy of comment, but can nevertheless marginalise an individual.

The people perpetrating these microaggressions may be completely unaware that they hold a prejudiced view. Psychologists distinguish between our explicit attitudes – the beliefs and feelings we’ll admit to – and our implicit attitudes – the beliefs and feelings revealed by our actions. So, for example, you might say that you are not sexist, you might even say that you are anti-sexist, but if you interrupt women more than men in meetings you are displaying a sexist implicit attitude – one very different from the non-sexist explicit attitude you profess.

‘Culture of victimhood’

The thing about subtle prejudice is that it is by definition subtle – lots of small differences in how people are treated, small asides, little jibes, ambiguous differences in how we treat one person compared to another. This makes it hard to measure, and hard to address, and – for some people – hard to take seriously.

This is the skeptical line of thought: when people complain about being treated differently in small ways they are being overly sensitive, trying to lay claim to a culture of victimhood. Small differences are just that – small. They don’t have large influences on life outcomes and aren’t where we should focus our attention.

Now you will have your own intuitions about that view, but my interest is in how you could test the idea that a thousand small cuts do add up. A classic experiment on the way race affects our interactions shows not only the myriad ways in which race can affect how we treat people, but shows in a clever way that even the most privileged of us would suffer if we were all subjected to subtle discrimination.

In the early 1970s, a team led by Carl Word at Princeton University recruited white students for an experiment they were told was about assessing the quality of job candidates. Unbeknown to them, the experiment was really about how they treated the supposed job candidates, and whether that treatment differed depending on whether the candidates were white or black.

Despite believing their task was to find the best candidate, the white recruits treated candidates differently based on their race – sitting further away from them, and displaying fewer signs of engagement, such as making eye contact or leaning in during conversation. More recent follow-up work has shown that this is still true, and that these nonverbal signs of friendliness were unrelated to the interviewers’ explicit attitudes, so they operate independently of the participants’ avowed beliefs about race and racism.

So far the Princeton experiment probably doesn’t tell anyone who has been treated differently because of their race anything they didn’t know from painful experience. The black candidates in this experiment were treated less well than the white candidates, not only in the nonverbal signals the interviewers gave off: on average, they were also given 25% less time in interview. This alone would be an injustice, but how big a disadvantage is it to be treated like this?

Word’s second experiment gives us a handle on this. After collecting these measurements of nonverbal behaviour the research team recruited some new volunteers and trained them to react in the manner of the original experimental subjects. That is, they were trained to treat interview candidates as the original participants had treated white candidates: making eye contact, smiling, sitting closer, allowing them to speak for longer. And they were also trained to produce the treatment the black candidates received: less eye contact, fewer smiles and so on. All candidates were to be treated politely and fairly, with only the nonverbal cues varying.

Next, the researchers recruited more white Princeton undergraduates to play the role of job candidates, and they were randomly assigned to be nonverbally treated like the white candidates in the first experiment, or like the black candidates.

The results allow us to see the self-fulfilling prophecy of discrimination. The candidates who received the “black” nonverbal signals delivered a worse interview performance, as rated by independent judges. They made far more speech errors, in the form of hesitations, stutters, mistakes and incomplete sentences, and they chose to sit further away from the interviewer following a mid-interview interruption which caused them to retake their chairs.

It isn’t hard to see that in a winner-takes-all situation like a job interview, such differences could be enough to lose you a job opportunity. What’s remarkable is that the participants’ performance had been harmed by nonverbal differences of the kind that many of us might produce without intending or realising. Furthermore, the effect was seen in students from Princeton University, one of the world’s elite universities. If even a white, privileged elite suffer under this treatment we might expect even larger effects for people who don’t walk into high-pressure situations with those advantages.

Experiments like these don’t offer the whole truth about discrimination. Problems like racism are patterned by so much more than individual attitudes, and often supported by explicit prejudice as well as subtle prejudice. Racism will affect candidates before, during and after job interviews in many more ways than I’ve described. What this work does show is one way in which, even with good intentions, people’s reactions to minority groups can have powerful effects. Small differences can add up.

This is my BBC Future column from last week. The original is here.

Serendipity in psychological research

Dorothy Bishop has an excellent post, ‘Ten serendipitous findings in psychology’, in which she lists ten celebrated discoveries which occurred by happy accident.

Each discovery is interesting in itself, but Prof Bishop puts the discoveries in the context of the recent discussion about preregistration (declaring in advance what you are looking for and how you’ll look). Does preregistration hinder serendipity? Absolutely not, says Bishop – not least because the context of ‘discovery’ is never a one-off experiment.

Note that, in all cases, having made the initial unexpected observation – either from unstructured exploratory research, or in the course of investigating something else – the researchers went on to shore up the findings with further, hypothesis-driven experiments. What they did not do is to report just the initial observation, embellished with statistics, and then move on, as if the presence of a low p-value guaranteed the truth of the result.

(It’s hard not to read into these comments a criticism of some academic journals which seem happy to publish single experiments reporting surprising findings.)

Bishop’s list contains three findings from electrophysiology (recording brain cell activity directly with electrodes), which I think is notable. In these cases neural recording acts in place of a microscope, allowing fairly direct observation of the system the scientist is investigating, at a level of detail hitherto unavailable. It isn’t surprising to me that, given a new tool of observation, the prepared mind of a scientist will make serendipitous discoveries. The catch is whether such observational tools exist for the rest of psychology. Many psychologists use their intuition to decide where to look, and experiments to test whether their intuition is correct. The important serendipitous discoveries from electrophysiology suggest that measures which are new ways of observing, rather than merely tests of ideas, must also be important for psychological discovery. Do such observational measures exist?

Good tests make children fail – here’s why

Many parents and teachers are critical of the Standardised Assessment Tests (SATs) that have recently been taken by primary school children. One common complaint is that they are too hard. Teachers at my son’s school sent children home with example questions to quiz their parents on, hoping to show that getting full marks is next to impossible.

Invariably, when parents try out these tests, they focus on the most difficult or confusing items. Some parents and teachers can be heard complaining on social media that if they get questions wrong, surely the tests are too hard for ten-year-olds.

But how hard should tests for children be?

As a psychologist, I know we have some well-developed principles that can help us address the question. If we look at the SATs as measures of some kind of underlying ability, then we can turn to one of the oldest branches of psychology – “psychometrics” – for some guidance.

Getting it just right

A good test shouldn’t be too hard. If most people get most questions wrong, then you have what is called a “floor effect”. The result is that you can’t tell any difference in ability between the people taking the test.

If we started the school sports day high jump with the bar at two metres high (close to the world record), then we’d finish sports day with everybody getting the same – zero successful jumps – and no information about how good anyone is at the high jump.

But at the same time, a good test shouldn’t be too easy. If most people get everything right, then the effect is, as you might expect, called a “ceiling effect”. If everybody gets everything right then again we don’t get any information from the test.

The key idea is that tests must discriminate. In psychometric terms, the value of a test is about the match between the thing it is supposed to measure and the difficulty of the items on the test. If I wanted to gauge maths ability in six-year-olds and I gave them all an A-Level paper, we can presume that nearly everyone would score zero. Although the A-Level paper might be a good test, it is completely uninformative if it is badly matched to the ability of the people taking the test.

Here’s the rub: for a test to be sensitive to differences in ability, it must contain items which people get wrong. Actually, there’s a precise answer to the proportion that you should get wrong – in the most sensitive test it should be half of the items. Questions which you are 50% likely to get right are the ones which are most informative.
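
To see where that 50% figure comes from, here is a minimal sketch – my own illustration under classical test theory assumptions, not anything from the SATs themselves. For a pass-or-fail question answered correctly by a proportion p of test-takers, the variance of responses is p(1 − p), a simple measure of the item’s power to tell test-takers apart, and it peaks at p = 0.5:

    # Toy illustration: a pass/fail item's response variance p * (1 - p)
    # is its raw capacity to discriminate between test-takers.

    def item_information(p: float) -> float:
        """Response variance of a 0/1 item with proportion-correct p."""
        return p * (1 - p)

    for p in (0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0):
        print(f"proportion correct {p:.1f} -> discriminating power {item_information(p):.2f}")

An item everyone passes (p = 1) or everyone fails (p = 0) has zero variance: it sorts nobody from anybody.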

How we feel about measuring and labelling children according to their skill at taking these tests is a big issue, but it is important that we recognise that this is what tests do. A well designed test will make all children get some items wrong – it is inherent in their design. It is up to us how we conceptualise that: whether tests are an unnecessary distraction from true education, or a necessary challenge we all need to be exposed to.

Better tests?

If you adopt this psychometric perspective, it becomes clear that the tests we use are an inefficient way of measuring any individual child’s particular ability to do the test. Most children will be asked a bunch of questions which are too easy for them, before they get to the informative ones at the edge of their ability. Then they will go on to attempt a bunch of questions which are far too hard. And pity the people for whom the test is poorly matched to their ability and consists mostly of questions they’ll get wrong – which is both uninformative in psychometric terms, and dispiriting emotionally.

A hundred years ago, when we began our modern fixation with testing and measuring, this waste was hard to avoid: many uninformative and potentially depressing questions had to be asked, simply because all children took the same exam paper.

Nowadays, however, examiners can administer tests via computer, and algorithmically identify the most informative questions for each child’s ability – making the tests shorter, more accurate, and less focused on the experience of failure. You could throw in enough easy questions that no child would ever have the experience of getting most of the questions wrong. But still there’s no getting around the fact that an informative test has to contain questions most people sitting it will get wrong.
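
To make the idea concrete, here is a toy sketch of how such an adaptive test might work – my own illustration under a simple Rasch model, not the algorithm any real examination board uses, and with a crude fixed-step update where real systems would use maximum-likelihood or Bayesian estimation:

    import math
    import random

    def p_correct(ability: float, difficulty: float) -> float:
        # Rasch model: the chance of a correct answer rises with ability
        return 1 / (1 + math.exp(difficulty - ability))

    def adaptive_test(true_ability: float, difficulties: list, n_items: int = 10) -> float:
        ability = 0.0                  # start from an average prior guess
        remaining = list(difficulties)
        for _ in range(n_items):
            # Most informative item: roughly a 50% chance of success,
            # i.e. difficulty closest to the current ability estimate
            item = min(remaining, key=lambda d: abs(d - ability))
            remaining.remove(item)
            correct = random.random() < p_correct(true_ability, item)
            # Nudge the estimate up after a success, down after a failure
            ability += 0.5 if correct else -0.5
        return ability

    # Ten questions, drawn adaptively from a bank spanning easy to hard
    print(adaptive_test(true_ability=1.2, difficulties=[x / 4 for x in range(-12, 13)]))

Each answer refines the estimate, so the test homes in on questions at the edge of a child’s ability rather than wasting time on ones that are far too easy or far too hard.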

Even a good test can measure an educationally irrelevant ability (such as merely the ability to do the test, or memorise abstract grammar rules), or be used in ways that harm children. But having difficult items isn’t a problem with the SATs, it’s a problem with all tests.


This article was originally published on The Conversation. Read the original article.

information theory and psychology

I have read a good deal more about information theory and psychology than I can or care to remember. Much of it was a mere association of new terms with old and vague ideas. Presumably the hope was that a stirring in of new terms would clarify the old ideas by a sort of sympathetic magic.

From: John R. Pierce’s 1961 An introduction to information theory: symbols, signals and noise. Plus ça change.

Pierce’s book is really quite wonderful and contains lots of chatty asides and examples, such as:

Gottlob Burmann, a German poet who lived from 1737 to 1805, wrote 130 poems, including a total of 20,000 words, without once using the letter R. Further, during the last seventeen years of his life, Burmann even omitted the letter from his daily conversation.

The two word games that trick almost everyone

Playing two classic schoolyard games can help us understand everything from sexism to the power of advertising.

There’s a word game we used to play at my school, or a sort of trick, and it works like this. You tell someone they have to answer some questions as quickly as possible, and then you fire the following at them:

“What’s one plus four?!”
“What’s five plus two?!”
“What’s seven take away three?!”
“Name a vegetable?!”

Nine times out of ten, people answer the last question with “carrot”.

Now I don’t think the magic is in the maths questions. Probably they just warm your respondent up to answering questions rapidly. What is happening is that, for most people, most of the time, in all sorts of circumstances, carrot is simply the first vegetable that comes to mind.

This seemingly banal fact reveals something about how our minds organise information. There are dozens of vegetables, and depending on your love of fresh food you might recognise a good proportion. If you had to list them you’d probably forget a few you know, easily reaching a dozen and then slowing down. And when you’re pressured to name just one as quickly as possible, you forget even more and just reach for the most obvious vegetable you can think of – and often that’s a carrot.

In cognitive science, we say the carrot is “prototypical” – for our idea of a vegetable, it occupies the centre of the web of associations which defines the concept. You can test prototypicality directly by timing how long it takes someone to answer whether the object in question belongs to a particular category. We take longer to answer “yes” if asked “is a penguin a bird?” than if asked “is a robin a bird?”, for instance. Even when we know penguins are birds, the idea of penguins takes longer to connect to the category “bird” than more typical species.
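
If you wanted to try this yourself, a category-verification probe needs nothing more than a timer. Here is a bare-bones sketch – my own illustration of the method, not code from any published study, and real experiments control stimulus presentation and timing far more carefully:

    import time

    # Time "is X a CATEGORY?" answers for a typical vs an atypical member
    trials = [("robin", "bird"), ("penguin", "bird")]

    for item, category in trials:
        start = time.perf_counter()
        answer = input(f"Is a {item} a {category}? (y/n) ")
        elapsed = time.perf_counter() - start
        print(f"{item}: answered {answer!r} in {elapsed:.2f} s")

Prototypicality predicts a faster “yes” for the robin than for the penguin.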

So, something about our experience of school dinners, being told they’ll help us see in the dark, the 37 million tons of carrots the world consumes each year, and cartoon characters from Bugs Bunny to Olaf the Snowman, has helped carrots work their way into our minds as the prime example of a vegetable.

The benefit to this system of mental organisation is that the ideas which are most likely to be associated are also the ones which spring to mind when you need them. If I ask you to imagine a costumed superhero, you know they have a cape, can probably fly and there’s definitely a star-shaped bubble when they punch someone. Prototypes organise our experience of the world, telling us what to expect, whether it is a superhero or a job interview. Life would be impossible without them.

The drawback is that the things which connect together because of familiarity aren’t always the ones which should connect together because of logic. Another game we used to play proves this point. You ask someone to play along again and this time you ask them to say “Milk” 20 times as fast as they can. Then you challenge them to snap-respond to the question “What do cows drink?”. The fun is in seeing how many people answer “milk”. A surprising number do, allowing you to crow “Cows drink water, stupid!”. We drink milk, and the concept is closely connected to the idea of cows, so it is natural to accidentally pull out the answer “milk” when we’re fishing for the first thing that comes to mind in response to the ideas “drink” and “cow”.

Having a mind which supplies ready answers based on association is better than a mind which never supplies ready answers, but it can also produce blunders that are much more damaging than claiming cows drink milk. Every time we assume the doctor is a man and the nurse is a woman, we’re falling victim to the ready answers of our mental prototypes of those professions. Such prototypes, however mistaken, may also underlie our readiness to assume a man will be a better CEO, or that a philosophy professor won’t be a woman. If you let them guide how the world should be, rather than what it might be, you get into trouble pretty quickly.

Advertisers know the power of prototypes too, of course, which is why so much advertising appears to be style over substance. Their job isn’t to deliver a persuasive message, as such. They don’t want you to actively believe anything about their product being provably fun, tasty or healthy. Instead, they just want fun, taste or health to spring to mind when you think of their product (and the reverse). Worming their way into our mental associations is worth billions of dollars to the advertising industry, and it is based on a principle no more complicated than a childhood game which tries to trick you into saying “carrots”.

This is my BBC Future column from last week. The original is here. And, yes, I know that baby cows actually do drink milk.