Spike activity 29-04-2016

Quick links from the past week in mind and brain news:

This is how it feels to learn your memories are fiction. Good BBC Future piece on confabulation from an event with the fantastic Headway East London. However, it's not as rare as the strapline claims.

Neuroskeptic covers an interesting study on the neural precursors of spontaneous thoughts.

Who Will Debunk The Debunkers? Good FiveThirtyEight piece on how debunking memes can themselves become myth and rumour.

Psychological Science in the Public Interest has a long, detailed, impressive review article on the causes of differences in sexual orientation.

Good piece in Gizmodo on why the brain’s ‘pain matrix’ probably isn’t a ‘pain matrix’. Ignore the headline; the piece has nothing at all to do with how pain is ‘diagnosed’.

PrimeMind has an excellent piece on the false dream of less sleep and why you can almost never win against sleep deprivation.

Science probably does advance one funeral at a time, reports Vox, covering an intriguing study.

The Atlantic reports on a new meta-analysis suggesting the harmful effects of spanking children based on correlative evidence. Should we be doing RCTs of controversial social interventions? asked Ben Goldacre last year.

The impressive ‘dictionary in the brain’ study has been fairly badly reported – lots of mention of words ‘located’ in the brain and brain areas ‘lighting up’. Stat has a short but appropriate critique.

The search for the terrorist ‘type’

BBC World Service has an excellent radio documentary on the history and practice of terrorist profiling.

Unlike many pieces on the psychology of terrorism, which tend to take a Hollywood view of the problem, it’s an insightful, critical and genuinely enlightening piece on the false promises and possibilities of applied psychology in the service of stopping terrorists.

Crucially, it looks at how the practice developed over time and how it’s been affected by the ‘war on terror’.

For decades researchers, academics and psychologists have wanted to know what kind of person becomes a terrorist. If there are pre-existing traits which make someone more likely to kill for their beliefs – well, that would be worth knowing… It’s a story which begins decades ago. But, with the threat from killers acting for so-called Islamic State, finding an answer has never felt more pressing.

Recommended.
 
Link to programme webpage, streaming and mp3.

The Devil’s Wager: when a wrong choice isn’t an error

The Devil looks you in the eyes and offers you a bet. Pick a number, and if you successfully guess the total he’ll roll with two dice, you get to keep your soul. If any other number comes up, you go to burn in eternal hellfire.

You call “7” and the Devil rolls the dice.

A two and a four, so the total is 6 — that’s bad news.

But let’s not dwell on the incandescent pain of your infinite and inescapable future; let’s think about your choice immediately before the dice were rolled.

Did you make a mistake? Was choosing “7” an error?

In one sense, obviously yes. You should have chosen 6.

But in another important sense you made the right choice. There are more combinations of dice outcomes that add to 7 than to any other number. The chances of winning if you bet 7 are higher than for any other single number.
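The combinatorics are easy to check for yourself. A quick Python sketch (just a generic enumeration, nothing specific to the original wager) counts the ways each total can come up on two fair dice:

```python
from collections import Counter

# Enumerate every outcome of two fair six-sided dice and count the totals.
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))

best = max(counts, key=counts.get)
print(best, counts[best])   # 7 is the most likely total: 6 of the 36 combinations
print(counts[6])            # a total of 6 can come up only 5 ways
```

So calling “7” wins 6/36 of the time, against 5/36 for “6” – the best single bet available, even though it lost this particular roll.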

The distinction is between a particular choice which happens to be wrong, and a choice strategy which is actually as good as you can do in the circumstances. If we replace the Devil’s Wager with the situations the world presents you, and your choice of number with your actions in response, then we have a handle on what psychologists mean when they talk about “cognitive error” or “bias”.

In psychology, the interesting errors are not decisions that just happen to turn out wrong. The interesting errors are decisions which people systematically get wrong, and get wrong in a particular way. As well as being predictable, these errors are interesting because they must be happening for a reason.

If you met a group of people who always bet “6” when gambling with the Devil, you’d be an incurious person if you assumed they were simply idiots. That judgement doesn’t lead anywhere. Instead, you’d want to find out what they believe that makes them think that’s the right choice strategy. Similarly, when psychologists find that people will pay more to keep something than they’d pay to obtain it, or are influenced by irrelevant information in judgements of risk, there’s no profit in labelling this “irrationality” and leaving it at that. The interesting question is why these choices seem common to so many people. What is it about our minds that disposes us to make these same errors, to have in common the same choice strategies?

You can get traction on the shape of possible answers from the Devil’s Wager example. In this scenario, why would you bet “6” rather than “7”? Here are three possible general reasons, each explained in terms of the Devil’s Wager, along with a real example.

 

1. Strategy is optimised for a different environment

If you expected the Devil to roll a single loaded die, rather than a fair pair of dice, then calling “6” would be the best strategy, rather than a sub-optimal one.

Analogously, you can understand a psychological bias by understanding which environment it is intended to match. If I love sugary foods so much it makes me fat, part of the explanation may be that my sugar cravings evolved at a point in human history when starvation was a bigger risk than obesity.

 

2. Strategy is designed for a bundle of choices

If you know you’ll only get to pick one number to cover multiple bets, your best strategy is to pick a number which works best over all bets. So if the Devil is going to give you best of ten, and most of the time he’ll roll a single loaded die, and only sometimes roll two fair dice, then “6” will give you the best total score, even though it is less likely to win for the two-fair-dice wager.
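This can be made concrete with a minimal sketch. The specific numbers here are invented for illustration: assume the loaded die shows 6 half the time (the other five faces splitting the rest equally), and that the Devil rolls the single loaded die on 70% of bets and two fair dice on the other 30%:

```python
P_LOADED_ROLL = 0.7   # chance the Devil rolls the single loaded die (assumed)
P_PAIR_ROLL = 0.3     # chance he rolls two fair dice instead (assumed)

def p_win(call):
    """Overall chance that `call` matches the Devil's total on one bet."""
    # Loaded die: shows 6 with probability 0.5, each other face 0.1 (assumed).
    if call == 6:
        p_single = 0.5
    elif 1 <= call <= 5:
        p_single = 0.1
    else:
        p_single = 0.0  # a single die can never total 7 or more
    # Two fair dice: count the combinations that sum to `call`.
    ways = sum(1 for a in range(1, 7) for b in range(1, 7) if a + b == call)
    p_pair = ways / 36
    return P_LOADED_ROLL * p_single + P_PAIR_ROLL * p_pair

print(p_win(6))  # ≈ 0.39: best over the whole bundle of bets
print(p_win(7))  # = 0.05: only wins when the two fair dice happen to come up
```

Under these made-up numbers, always calling “6” wins far more often over the run of bets, even though it is strictly the wrong call whenever the two fair dice are rolled.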

In general, what looks like a poor choice may be the result of a strategy which treats a class of decisions as the same, and produces a good answer for that whole set. It is premature to call our decision making irrational if we look at a single choice, which is the focus of the psychologist’s experiment, and not the related set of choices of which it is part.

An example from the literature may be the Mere Exposure Effect, where we favour something we’ve seen before merely because we’ve seen it before. In experiments, this preference looks truly arbitrary, because the experimenter decided which stimuli to expose us to and which to withhold, but in everyday life our familiarity with things tracks important variables such as how common, safe or sought out things are. The Mere Exposure Effect may result from a feature of our minds that assumes, all other things being equal, that familiar things are preferable, and that’s probably a good general strategy.

 

3. Strategy uses a different cost/benefit analysis

Obviously, we’re assuming everyone wants to save their soul and avoid damnation. If you felt like you didn’t deserve heaven, harps and angel wings, or that hellfire sounds comfortably warm, then you might avoid making the bet-winning optimal choice.

By extension, we should only call a choice irrational or suboptimal if we know what people are trying to optimise. For example, it looks like people systematically under-explore new ways of doing things when learning skills. Is this reliance on habit, similar to confirmation bias when exploring competing hypotheses, irrational? Well, in the sense that it slows your learning down, it isn’t optimal, but if it exists because exploration carries a risk (you might get the action catastrophically wrong, you might hurt yourself), or because the important thing is to minimise the cost of acting (and habitual movements require less energy), then it may in fact be better than reckless exploration.

 

So if we see a perplexing behaviour, we might reach for one of these explanations: the behaviour is right for a different environment, a wider set of choices, or a different cost/benefit analysis. Only when we are confident that we understand the environment (whether evolutionary or of training) which drives the behaviour, the general class of choices of which it is part, and the cost-benefit function the people making the choices are using, should we confidently say a choice is an error. Even then it is pretty unprofitable to call such behaviour irrational – we’d want to know why people make the error. Are they unable to calculate the right response? Are they misperceiving the situation?

A seemingly irrational behaviour is a good place to start investigating the psychology of decision making, but labelling behaviour irrational is a terrible place to stop. The topic really starts to get interesting when we start to ask why particular behaviours exist, and try to understand their rationality.

 

Previously/elsewhere:

Irrational? Decisions and decision making in context
My ebook: For argument’s sake: evidence that reason can change minds, which explores our over-enthusiasm for evidence that we’re irrational.

Spike activity 22-04-2016

Quick links from the past week in mind and brain news:

Nautilus has a fascinating piece on the science of practice and improving skills – not the same as just gaining experience.

The science behind the stoner lore of different strains of weed having distinctly different highs is taken apart by a great article in PrimeMind.

Science reports on recent findings from a cadaver study that cast doubt on whether tDCS can actually stimulate the brain at all.

Does mental illness enhance creativity? A good balanced look at the evidence from BBC Future.

Slate asks: Think Psychology’s Replication Crisis Is Bad? Welcome to the One in Medicine.

Should Therapists Write About Patients? Important personal piece published in The New York Times.

The Guardian has a brief first-person piece: The secret life of a trainee brain surgeon.

A data geek may have resurrected the much maligned field of serial killer profiling. Good piece in Boston Magazine.

A brief hallucinatory twilight

CC Licensed Photo by Flickr user Risto Kuulasmaa. Click for source.

I’ve got an article in The Atlantic on the hypnagogic state – the brief hallucinatory period between wakefulness and sleep – and how it is being increasingly used as a tool to make sense of consciousness.

There is a brief time, between waking and sleep, when reality begins to warp. Rigid conscious thought starts to dissolve into the gently lapping waves of early stage dreaming and the world becomes a little more hallucinatory, your thoughts a little more untethered. Known as the hypnagogic state, it has received only erratic attention from researchers over the years, but a recent series of studies have renewed interest in this twilight period, with the hope it can reveal something fundamental about consciousness itself.

The hypnagogic state has been better dealt with by artists and writers over the years – Coleridge’s poem Kubla Khan apparently emerged out of hypnagogic reverie, albeit fuelled by opium.

It has received only occasional attention from scientists, however. More recently, a spate of studies has come out showing some genuine mainstream interest in understanding hypnagogia as an interesting source of information about how consciousness is deconstructed as we enter sleep.

 

Link to article in The Atlantic on the hypnagogic state.

Irrational? Decisions and decision making in context

Nassim Nicholas Taleb, author of Fooled by Randomness:

Finally put my finger on what is wrong with the common belief in psychological findings that people “irrationally” overestimate tail probabilities, calling it a “bias”. Simply, these experimenters assume that people make a single decision in their lifetime! The entire field of psychology of decisions missed the point.

His argument seems to be that risks seem different if you view them from a lifetime perspective, where you might make choices about the same risk again and again, rather than considering them as one-offs. What might be a mistake for a one-off risk could be a sensible strategy for the same risk repeated in a larger set.

He goes on to take a swipe at ‘Nudges’, the idea that you can base policies around various phenomena from the psychology of decision making. “Clearly”, he adds, “psychologists do not know how to use ‘probability'”.

This is maddeningly ignorant, but does have a grain of truth to it. The major part of the psychology of decision making is understanding why things that look like bias or error exist. If a phenomenon, such as overestimating low probability events, is pervasive, it must be for a reason. A choice that looks irrational when considered on its own might be the result of a sensible strategy when considered over a lifetime, or even over evolutionary time.

Some great research in decision making tries to go beyond simple bias phenomena and ask what underlying choice is being optimised by our cognitive architecture. This approach gives us the Simple Heuristics That Make Us Smart of Gerd Gigerenzer (which Taleb definitely knows about since he was a visiting fellow in Gigerenzer’s lab), as well as work which shows that people estimate risks differently if they experience the outcomes rather than being told about them, work which shows that our perceptual-motor system (which is often characterised as an optimal decision maker) has the same amount of bias as our more cognitive decisions, and work which shows that other animals, with less cognitive/representational capacity, make analogues of many classic decision making errors. This is where the interesting work in decision making is happening, and it all very much takes account of the wider context of individual decisions. So saying that the entire field missed the point seems… odd.

But the grain of truth in the accusation is that the psychology of decision making has been popularised in a way that focusses on one-off decisions. The nudges of behavioural economics tend to be dramatic examples of small interventions which have large effects in one-off measures, such as how giving people smaller plates makes them eat less. The problem with these interventions is that even if they work in the lab, they tend not to work long-term outside the lab. People are often doing what they do for a reason – and if you don’t affect the reasons, the old behaviour reasserts itself as people simply adapt to any nudge you’ve introduced. Although the British government is noted for introducing a ‘Nudge Unit’ to apply behavioural science in government policies, less well known is a House of Lords Science and Technology Committee report, ‘Behavioural Change’, which highlights the limitations of this approach (and is well worth reading to get an idea of the importance of ideas beyond ‘nudging’ in behavioural change).

Taleb is right that we need to drop the idea that biases in decision making automatically attest to our irrationality. As often as not they reflect a deeper rationality in how our minds deal with risk, choice and reward. What’s sad is that he doesn’t recognise how much work on how to better understand bias already exists.

An echo of your former self

CC Licensed Image by Flickr user Karen Axelrad. Click for source.

The journal Neurology has a brief case study reporting an intriguing form of auditory hallucination – hearing someone speaking in the voice of the last person you spoke to.

The phenomenon is called palinacousis. It usually takes the form of hallucinating an echo or repetition of the voice you’re listening to, and it’s particularly associated with problems with the temporal lobes.

This case is a little different, however.

A 70-year-old right-handed white man was brought by his wife to the emergency room due to odd behavior for 2 days… According to the patient, he could not explain why people talking to him sounded strange, speaking in different voices which he heard before. For example, he would talk to a man and would hear him as talking with the voice of the woman he previously talked to. He thought it was funny and he could not concentrate on what the other person was saying because he would be laughing…

On occasion, he complained of hearing a very low-pitched intonation in people’s voices, including his own. At other times, he would hear a cyclical pattern of sounds that transitioned from noisy to silent. His most disturbing auditory symptoms persisted for several days and presented in 2 distinct forms. At first, he described hearing his deceased mother’s voice speaking to him through other people’s speech. Later on, he mentioned that after talking to one person, he would hear a second person speaking to him in the first person’s voice. He would also sometimes hear his voice as if it was the voice of the person he just spoke to. During physical therapy, the patient reported that therapist voices would suddenly change to those of people he had heard on television, which provoked uncontrollable fits of laughter.

In this case, the gentleman didn’t have damage to his temporal lobes, but a bleed that affected his right parietal lobe, which may have led to the atypical form of this hallucination.

In a recent paper, Sam Wilkinson and I noted that palinacousis is one example of an auditory hallucination that typically isn’t experienced as if you’re being communicated to by an external, illusory agent – these are perhaps the least common type, as most people hear hallucinated voices that appear to have some social characteristics.

However, it seems there’s even a social version of palinacousis, where the echo is of someone’s voice transposed onto the current speaker.
 

Link to PubMed entry for case study.