information theory and psychology

I have read a good deal more about information theory and psychology than I can or care to remember. Much of it was a mere association of new terms with old and vague ideas. Presumably the hope was that a stirring in of new terms would clarify the old ideas by a sort of sympathetic magic.

From: John R. Pierce’s 1961 An introduction to information theory: symbols, signals and noise. Plus ça change.

Pierce’s book is really quite wonderful and contains lots of chatty asides and examples, such as:

Gottlob Burmann, a German poet who lived from 1737 to 1805, wrote 130 poems, including a total of 20,000 words, without once using the letter R. Further, during the last seventeen years of his life, Burmann even omitted the letter from his daily conversation.

The two word games that trick almost everyone

Playing two classic schoolyard games can help us understand everything from sexism to the power of advertising.

There’s a word game we used to play at my school, or a sort of trick, and it works like this. You tell someone they have to answer some questions as quickly as possible, and then you rush at them with the following:

“What’s one plus four?!”
“What’s five plus two?!”
“What’s seven take away three?!”
“Name a vegetable?!”

Nine times out of 10 people answer the last question with “Carrot”.

Now I don’t think the magic is in the maths questions. Probably they just warm your respondent up to answering questions rapidly. What is happening is that, for most people, most of the time, in all sorts of circumstances, carrot is simply the first vegetable that comes to mind.

This seemingly banal fact reveals something about how our minds organise information. There are dozens of vegetables, and depending on your love of fresh food you might recognise a good proportion. If you had to list them you’d probably forget a few you know, easily reaching a dozen and then slowing down. And when you’re pressured to name just one as quickly as possible, you forget even more and just reach for the most obvious vegetable you can think of – and often that’s a carrot.

In cognitive science, we say the carrot is “prototypical” – for our idea of a vegetable, it occupies the centre of the web of associations which defines the concept. You can test prototypicality directly by timing how long it takes someone to answer whether the object in question belongs to a particular category. We take longer to answer “yes” if asked “is a penguin a bird?” than if asked “is a robin a bird?”, for instance. Even when we know penguins are birds, the idea of penguins takes longer to connect to the category “bird” than more typical species.
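If you want to try this at home, here’s a minimal Python sketch of such a category-verification task. The stimuli and the console-based timing are my own illustrative choices, not the procedure of any published study, and typing speed will add noise to the reaction times:

```python
# A minimal sketch of a category-verification task, assuming a
# cooperative participant at a console. Stimuli and setup are
# illustrative only; real experiments use calibrated displays and
# response boxes rather than typed input.
import time

TRIALS = [
    ("robin", "bird"),      # prototypical member: expect a fast "yes"
    ("penguin", "bird"),    # atypical member: expect a slower "yes"
    ("carrot", "vegetable"),
    ("artichoke", "vegetable"),
]

def run_trial(item, category):
    """Ask one yes/no question and time the response."""
    start = time.perf_counter()
    answer = input(f"Is a {item} a {category}? (y/n) ").strip().lower()
    return answer, time.perf_counter() - start

if __name__ == "__main__":
    for item, category in TRIALS:
        answer, rt = run_trial(item, category)
        print(f"{item} -> {category}: answered {answer!r} in {rt:.2f}s")
```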

So, something about our experience of school dinners, being told they’ll help us see in the dark, the 37 million tons of carrots the world consumes each year, and cartoon characters from Bugs Bunny to Olaf the Snowman, has helped carrots work their way into our minds as the prime example of a vegetable.

The benefit to this system of mental organisation is that the ideas which are most likely to be associated are also the ones which spring to mind when you need them. If I ask you to imagine a costumed superhero, you know they have a cape, can probably fly and there’s definitely a star-shaped bubble when they punch someone. Prototypes organise our experience of the world, telling us what to expect, whether it is a superhero or a job interview. Life would be impossible without them.

The drawback is that the things which connect together because of familiarity aren’t always the ones which should connect together because of logic. Another game we used to play proves this point. You ask someone to play along again and this time you ask them to say “Milk” 20 times as fast as they can. Then you challenge them to snap-respond to the question “What do cows drink?”. The fun is in seeing how many people answer “milk”. A surprising number do, allowing you to crow “Cows drink water, stupid!”. We drink milk, and the concept is closely connected to the idea of cows, so it is natural to accidentally pull out the answer “milk” when we’re fishing for the first thing that comes to mind in response to the ideas “drink” and “cow”.

Having a mind which supplies ready answers based on association is better than a mind which never supplies ready answers, but it can also produce blunders that are much more damaging than claiming cows drink milk. Every time we assume the doctor is a man and the nurse is a woman, we’re falling victim to the ready answers of our mental prototypes of those professions. Such prototypes, however mistaken, may also underlie our readiness to assume a man will be a better CEO, or a philosophy professor won’t be a woman. If you let them guide how the world should be, rather than what it might be, you get into trouble pretty quickly.

Advertisers know the power of prototypes too, of course, which is why so much advertising appears to be style over substance. Their job isn’t to deliver a persuasive message, as such. They don’t want you to actively believe anything about their product being provably fun, tasty or healthy. Instead, they just want fun, taste or health to spring to mind when you think of their product (and the reverse). Worming their way into our mental associations is worth billions of dollars to the advertising industry, and it is based on a principle no more complicated than a childhood game which tries to trick you into saying “carrots”.

This is my BBC Future column from last week. The original is here. And, yes, I know that baby cows actually do drink milk.

Is there a child mental health crisis?

It is now common for media reports to mention a ‘child mental health crisis’, with claims that anxiety and depression in children are rising to catastrophic levels. The evidence behind these claims can be a little hard to track down, and when you do find it there seems to be little evidence for a ‘crisis’ – but there are still reasons for us to be concerned.

The commonest claim is something to the effect that ‘current children show a 70% increase in rates of mental illness’, and this is usually sourced to the website of the UK child mental health charity Young Minds, which states that “Among teenagers, rates of depression and anxiety have increased by 70% in the past 25 years, particularly since the mid 1980’s”.

This is referenced to a pdf report by the Mental Health Foundation which references a “paper presented by Dr Lynne Friedli”, which probably means this pdf report which finally references this 2004 study by epidemiologist Stephan Collishaw.

Does this study show convincing evidence for a 70% increase in teenage mental health problems in the last 25 years? In short, no, for two important reasons.

The first is that the data is quite mixed – with both flatlines and increases at different times and in different groups – and the few statistically significant results may well be false positives, because the study doesn’t correct for the large number of analyses it runs.
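To see why uncorrected multiple testing matters, here’s a back-of-envelope Python sketch of how false positives pile up. It assumes independent tests on pure noise at the conventional 0.05 threshold; the test counts are arbitrary, and this is not a reanalysis of the actual study:

```python
# How false positives accumulate over many uncorrected significance
# tests, assuming independent tests on pure noise at alpha = 0.05.
alpha = 0.05
for n_tests in (1, 5, 10, 20, 40):
    p_any = 1 - (1 - alpha) ** n_tests      # P(at least one "hit")
    expected = alpha * n_tests              # expected number of "hits"
    print(f"{n_tests:2d} tests: P(>=1 false positive) = {p_any:.2f}, "
          f"expected false positives = {expected:.1f}")
```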

The second is that it looked at a 25-year period but only up to 1999 – so it is now 17 years out-of-date.

Lots of studies have been published since then, which we’ll look at in a minute, but these findings prompted the Nuffield Foundation to collect another phase of data in 2008 in exactly the same way as this original study, and they found that “the overall level of teenage mental health problems is no longer on the increase and may even be in decline.”

Putting the two together, we get the sort of mixed picture that is common in this literature, making it hard to say whether there genuinely is an increase in child mental health problems or not.

This is reflected in the data reported by three recent review papers in the area. Two articles focused on data from rating scales – questionnaires given to parents, teachers and occasionally children – and one paper focused on population studies that use diagnosis.

The first thing to say is that there is no stand-out, clear finding that child mental health problems are increasing in general, because the results are so mixed. It’s also worth saying that even where there is evidence of an increase, the effects are small to moderate. And because there is not a lot of data, the conclusions are quite provisional.

So is there evidence for a ‘child mental health crisis’? Probably not. Are there things to be concerned about? Yes, there are.

Here’s perhaps what we can make out in terms of rough trends from the data.

It doesn’t seem there is an increase in child mental health problems for young children, that is, those below about 12. If anything, their mental health has been improving since the early 2000s. Here, however, the data is scarcest.

Globally, and lumping all children together, there is no convincing evidence for an increase in child mental health problems. One review of rating scale data suggests there is an increase; the other, which used the more rigorous systematic review approach, suggests not – in line with the data from the review of diagnostic studies.

However, there does seem to be a trend for an increase in anxiety and depression in teenage girls. And data from the UK particularly does seem to show a mild-moderate upward trend for mental health problems in adolescents in general, in comparison to other countries where the data is much more mixed. Again, though, the data isn’t as solid as it needs to be.

This leaves open some important questions, though. If we’re talking about a crisis, maybe the levels were already too high, so even a drop means we’re still at ‘crisis level’. So one of the most important questions is: what would be an acceptable level of mental health problems in children?

The first answer that comes to mind is ‘zero’, and not unreasonably – but considering that some mental health problems arise from largely unavoidable life stresses, bereavements, natural disasters and accidents, it would be unrealistic to expect that no child suffered periods of disabling anxiety or depression.

This also raises the question of where we decide to make the cut-off for ‘emotional problems’ or ‘emotional disorders’ in comparison to ‘healthy emotions’. We need anxiety, sadness and anger but they can also become disabling. Deciding where we draw the line is key in answering questions about child mental health.

So there is no way of answering the question about ‘acceptable levels of mental health problems’ without raising the question of the appropriateness of how we define problems.

Similarly, a very common finding is huge variation between countries and cultures. Concepts, reporting, and the experience of emotions can vary greatly between different cultural groups, making it difficult to make direct comparisons across the globe.

For example, the broadly Western understanding of anxiety as a distinct psychological and emotional experience which can be understood separately from its bodily effects is not one shared by many cultures.

It’s worth saying that cultural variation occurs not only between peoples but also over time. Are children more likely to report emotional distress in 2016 compared to 1974 even if they feel the same? Really, we don’t know.

All of which brings us to the question: why is there so much talk about a ‘mental health crisis’ in young people if there is no strong data that there is one?

Partly this is because the mental health of children is often a way of expressing concerns about societal changes. It’s “won’t someone think of the children” given a clinical sheen. But it is also important to realise that consultations and treatment for child mental health problems have genuinely rocketed, probably because of greater awareness and better treatment.

In the UK at least, it’s also clear that talk of a ‘child mental health crisis’ can refer to two things: concerns about rising levels of mental health problems, but also concerns about the ragged state of child mental health services in Britain. There is a crisis in that more children are being referred for treatment and the underfunded services are barely keeping their heads above water.

So talk of a ‘crisis in rising levels of child mental health problems’ is, on balance, an exaggeration, but we shouldn’t dismiss the trends that the data do suggest.

One of the strongest is the rise in anxiety and depression in teenage girls. We clearly have a long way to go, but the world has never been safer, more equal and more full of opportunities for our soon-to-be women. Yet there seems to be a growing minority of girls affected by anxiety and depression.

At the very least, it should make us think about whether the society we are building is appropriately supporting the future 50% of the adult population.

The memory trap

I had a piece in the Guardian on Saturday, ‘The way you’re revising may let you down in exams – and here’s why’. In it I talk about a pervasive feature of our memories: that we tend to overestimate how much of a memory is ‘ours’, and how little is actually shared with other people, or the environment (see also the illusion of explanatory depth). This memory trap can combine with our instinct to make things easy for ourselves and result in us thinking we are learning when really we’re just flattering our feeling of familiarity with a topic.

Here’s the start of the piece:

Even the most dedicated study plan can be undone by a failure to understand how human memory works. Only when you’re aware of the trap set for us by overconfidence, can you most effectively deploy the study skills you already know about.
… even the best [study] advice can be useless if you don’t realise why it works. Understanding one fundamental principle of human memory can help you avoid wasting time studying the wrong way.

I go on to give four evidence-based pieces of revision advice, all of which – I hope – use psychology to show that some of our intuitions about how to study can’t be trusted.

Link: The way you’re revising may let you down in exams – and here’s why

Previously at the Guardian by me:

The science of learning: five classic studies

Five secrets to revising that can improve your grades

Spike activity 29-04-2016

Quick links from the past week in mind and brain news:

This is how it feels to learn your memories are fiction. Good BBC Future piece on confabulation from an event with the fantastic Headway East London. However, it’s not as rare as the strapline claims.

Neuroskeptic covers an interesting study on the neural precursors of spontaneous thoughts.

Who Will Debunk The Debunkers? Good FiveThirtyEight piece on how debunking memes can themselves be based on myth and rumour.

Psychological Science in the Public Interest has a long, detailed, impressive review article on the causes of differences in sexual orientation.

Good piece in Gizmodo on why the brain’s ‘pain matrix’ probably isn’t a ‘pain matrix’. Ignore the headline; it has nothing at all to do with how pain is ‘diagnosed’.

PrimeMind has an excellent piece on the false dream of less sleep and why you can almost never win against sleep deprivation.

Science probably does advance one funeral at a time, reports Vox, covering an intriguing study.

The Atlantic reports on a new meta-analysis suggesting harmful effects of spanking children, based on correlational evidence. Should we be doing RCTs of controversial social interventions? asked Ben Goldacre last year.

The impressive ‘dictionary in the brain’ study has been fairly badly reported – lots of mention of words ‘located’ in the brain and brain areas ‘lighting up’. Stat has a short but appropriate critique.

The search for the terrorist ‘type’

BBC World Service has an excellent radio documentary on the history and practice of terrorist profiling.

Unlike many pieces on the psychology of terrorism, which tend to take a Hollywood view of the problem, it’s an insightful, critical and genuinely enlightening piece on the false promises and possibilities of applied psychology in the service of stopping terrorists.

Crucially, it looks at how the practice developed over time and how it’s been affected by the ‘war on terror’.

For decades researchers, academics and psychologists have wanted to know what kind of person becomes a terrorist. If there are pre-existing traits which make someone more likely to kill for their beliefs – well, that would be worth knowing… It’s a story which begins decades ago. But, with the threat from killers acting for so-called Islamic State, finding an answer has never felt more pressing.

Recommended.
 
Link to programme webpage, streaming and mp3.

The Devil’s Wager: when a wrong choice isn’t an error

The Devil looks you in the eyes and offers you a bet. Pick a number, and if you successfully guess the total he’ll roll with two dice, you get to keep your soul. If any other number comes up, you go to burn in eternal hellfire.

You call “7” and the Devil rolls the dice.

A two and a four, so the total is 6 — that’s bad news.

But let’s not dwell on the incandescent pain of your infinite and inescapable future, let’s think about your choice immediately before the dice were rolled.

Did you make a mistake? Was choosing “7” an error?

In one sense, obviously yes. You should have chosen 6.

But in another important sense you made the right choice. There are more combinations of dice outcomes that add to 7 than to any other number. The chances of winning if you bet 7 are higher than for any other single number.
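You can check this with a few lines of Python, enumerating the 36 equally likely outcomes of two fair dice and counting the combinations for each total:

```python
# Enumerate all 36 equally likely rolls of two fair dice and count
# the combinations for each total: 7 has six ways, more than any
# other total, so it maximises the chance of winning the wager.
from collections import Counter
from itertools import product

ways = Counter(a + b for a, b in product(range(1, 7), repeat=2))
for total in sorted(ways):
    print(f"total {total:2d}: {ways[total]} ways, "
          f"P(win) = {ways[total]}/36 = {ways[total] / 36:.3f}")
```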

The distinction is between a particular choice which happens to be wrong, and a choice strategy which is actually as good as you can do in the circumstances. If we replace the Devil’s Wager with the situations the world presents you, and your choice of number with your actions in response, then we have a handle on what psychologists mean when they talk about “cognitive error” or “bias”.

In psychology, the interesting errors are not decisions that just happen to turn out wrong. The interesting errors are decisions which people systematically get wrong, and get wrong in a particular way. As well as being predictable, these errors are interesting because they must be happening for a reason.

If you met a group of people who always bet “6” when gambling with the Devil, you’d be an incurious person if you assumed they were simply idiots. That judgement doesn’t lead anywhere. Instead, you’d want to find out what they believe that makes them think that’s the right choice strategy. Similarly, when psychologists find that people will pay more to keep something than they’d pay to obtain it, or are influenced by irrelevant information in judgements of risk, there’s no profit in labelling this “irrationality” and leaving it at that. The interesting question is why these choices seem common to so many people. What is it about our minds that disposes us to make these same errors, to have in common the same choice strategies?

You can get traction on the shape of possible answers from the Devil’s Wager example. In this scenario, why would you bet “6” rather than “7”? Here are three possible general reasons, each explained in terms of the Devil’s Wager and illustrated with a real example.

 

1. Strategy is optimised for a different environment

If you expected the Devil to roll a single loaded die, rather than a fair pair of dice, then calling “6” would be the best strategy, rather than a sub-optimal one.
Analogously, you can understand a psychological bias by understanding which environment it is intended to match. If I love sugary foods so much it makes me fat, part of the explanation may be that my sugar cravings evolved at a point in human history when starvation was a bigger risk than obesity.

 

2. Strategy is designed for a bundle of choices

If you know you’ll only get to pick one number to cover multiple bets, your best strategy is to pick a number which works best over all bets. So if the Devil is going to give you best of ten, and most of the time he’ll roll a single loaded die, and only sometimes roll two fair dice, then “6” will give you the best total score, even though it is less likely to win for the two-fair-dice wager.
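Here’s that bundled-bet arithmetic as a Python sketch. The mixture weights and the loading of the die are invented for illustration – the scenario above doesn’t specify them – so treat the exact numbers as assumptions:

```python
# Expected per-round win probability for a single call used across a
# bundle of bets. Assumed (not from the text): 70% of rounds roll one
# loaded die (shows 6 with probability 0.5), 30% roll two fair dice.
from collections import Counter
from itertools import product

P_LOADED = 0.7
LOADED_DIE = {6: 0.5, **{face: 0.1 for face in range(1, 6)}}
TWO_FAIR = Counter(a + b for a, b in product(range(1, 7), repeat=2))

def p_win(call):
    p_single = LOADED_DIE.get(call, 0.0)   # a lone die can never total 7
    p_pair = TWO_FAIR.get(call, 0) / 36
    return P_LOADED * p_single + (1 - P_LOADED) * p_pair

for call in (6, 7):
    print(f"call {call}: P(win per round) = {p_win(call):.3f}")
# With these assumptions calling 6 wins about 0.39 of rounds and
# calling 7 only 0.05, so 6 is the better call for the whole bundle.
```

Change the mixture weights and the best call changes too, which is the point: how good a strategy is depends on the whole set of bets it has to cover.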

In general, what looks like a poor choice may be the result of a strategy which treats a class of decisions as the same, and produces a good answer for that whole set. It is premature to call our decision making irrational if we look at a single choice, which is the focus of the psychologist’s experiment, and not the related set of choices of which it is part.

An example from the literature may be the Mere Exposure Effect, where we favour something we’ve seen before merely because we’ve seen it before. In experiments, this preference looks truly arbitrary, because the experimenter decided which stimuli to expose us to and which to withhold, but in everyday life our familiarity with things tracks important variables such as how common, safe or sought-out things are. The Mere Exposure Effect may result from a feature of our minds that assumes, all other things being equal, that familiar things are preferable, and that’s probably a good general strategy.

 

3. Strategy uses a different cost/benefit analysis

Obviously, we’re assuming everyone wants to save their soul and avoid damnation. If you felt that you didn’t deserve heaven, harps and angel wings, or that hellfire sounded comfortably warm, then you might avoid making the bet-winning optimal choice.

By extension, we should only call a choice irrational or suboptimal if we know what people are trying to optimise. For example, it looks like people systematically under-explore new ways of doing things when learning skills. Is this reliance on habit, similar to confirmation bias when exploring competing hypotheses, irrational? Well, in the sense that it slows your learning down, it isn’t optimal; but if it exists because exploration carries a risk (you might get the action catastrophically wrong, you might hurt yourself), or because the important thing is to minimise the cost of acting (and habitual movements require less energy), then it may in fact be better than reckless exploration.
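A toy simulation makes the trade-off vivid. All the payoffs below are invented: the habitual action pays a steady 1.0, while exploring sometimes finds a better action (1.5) but occasionally fails catastrophically (-10.0). Whether under-exploration looks irrational depends entirely on numbers like these, and this toy ignores that exploration can improve future choices:

```python
# Toy explore-vs-habit simulation with invented payoffs. The habit
# pays a reliable 1.0; exploration pays 1.5 but fails badly (-10.0)
# one time in ten. No learning is modelled, so this only illustrates
# the immediate cost/benefit of exploring.
import random

def mean_payoff(explore_rate, n_trials=100_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        if rng.random() < explore_rate:
            total += -10.0 if rng.random() < 0.10 else 1.5
        else:
            total += 1.0
    return total / n_trials

for rate in (0.0, 0.1, 0.5):
    print(f"explore {rate:.0%} of the time: "
          f"mean payoff {mean_payoff(rate):.3f}")
```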

 

So if we see a perplexing behaviour, we might reach for one of these explanations: the behaviour is right for a different environment, a wider set of choices, or a different cost/benefit analysis. Only when we are confident that we understand the environment (whether evolutionary or the product of training) which drives the behaviour, and the general class of choices of which it is part, and that we know which cost-benefit function the people making the choices are using, should we confidently say a choice is an error. Even then it is pretty unprofitable to call such behaviour irrational – we’d want to know why people make the error. Are they unable to calculate the right response? Mis-perceiving the situation?

A seemingly irrational behaviour is a good place to start investigating the psychology of decision making, but labelling behaviour irrational is a terrible place to stop. The topic really starts to get interesting when we start to ask why particular behaviours exist, and try to understand their rationality.

 

Previously/elsewhere:

Irrational? Decisions and decision making in context
My ebook: For argument’s sake: evidence that reason can change minds, which explores our over-enthusiasm for evidence that we’re irrational.