information theory and psychology

I have read a good deal more about information theory and psychology than I can or care to remember. Much of it was a mere association of new terms with old and vague ideas. Presumably the hope was that a stirring in of new terms would clarify the old ideas by a sort of sympathetic magic.

From: John R. Pierce’s 1961 An Introduction to Information Theory: Symbols, Signals and Noise. Plus ça change.

Pierce’s book is really quite wonderful and contains lots of chatty asides and examples, such as:

Gottlob Burmann, a German poet who lived from 1737 to 1805, wrote 130 poems, including a total of 20,000 words, without once using the letter R. Further, during the last seventeen years of his life, Burmann even omitted the letter from his daily conversation.

The two word games that trick almost everyone

Playing two classic schoolyard games can help us understand everything from sexism to the power of advertising.

There’s a word game we used to play at my school, or a sort of trick, and it works like this. You tell someone they have to answer some questions as quickly as possible, and then you fire the following at them:

“What’s one plus four?!”
“What’s five plus two?!”
“What’s seven take away three?!”
“Name a vegetable?!”

Nine times out of 10 people answer the last question with “Carrot”.

Now I don’t think the magic is in the maths questions. Probably they just warm your respondent up to answering questions rapidly. What is happening is that, for most people, most of the time, in all sorts of circumstances, carrot is simply the first vegetable that comes to mind.

This seemingly banal fact reveals something about how our minds organise information. There are dozens of vegetables, and depending on your love of fresh food you might recognise a good proportion. If you had to list them you’d probably forget a few you know, easily reaching a dozen and then slowing down. And when you’re pressured to name just one as quickly as possible, you forget even more and just reach for the most obvious vegetable you can think of – and often that’s a carrot.

In cognitive science, we say the carrot is “prototypical” – for our idea of a vegetable, it occupies the centre of the web of associations which defines the concept. You can test prototypicality directly by timing how long it takes someone to answer whether the object in question belongs to a particular category. We take longer to answer “yes” if asked “is a penguin a bird?” than if asked “is a robin a bird?”, for instance. Even when we know penguins are birds, the idea of penguins takes longer to connect to the category “bird” than more typical species.

So, something about our experience of school dinners, being told they’ll help us see in the dark, the 37 million tons of carrots the world consumes each year, and cartoon characters from Bugs Bunny to Olaf the Snowman, has helped carrots work their way into our minds as the prime example of a vegetable.

The benefit to this system of mental organisation is that the ideas which are most likely to be associated are also the ones which spring to mind when you need them. If I ask you to imagine a costumed superhero, you know they have a cape, can probably fly and there’s definitely a star-shaped bubble when they punch someone. Prototypes organise our experience of the world, telling us what to expect, whether it is a superhero or a job interview. Life would be impossible without them.

The drawback is that the things which connect together because of familiarity aren’t always the ones which should connect together because of logic. Another game we used to play proves this point. You ask someone to play along again and this time you ask them to say “Milk” 20 times as fast as they can. Then you challenge them to snap-respond to the question “What do cows drink?”. The fun is in seeing how many people answer “milk”. A surprising number do, allowing you to crow “Cows drink water, stupid!”. We drink milk, and the concept is closely connected to the idea of cows, so it is natural to accidentally pull out the answer “milk” when we’re fishing for the first thing that comes to mind in response to the ideas “drink” and “cow”.

Having a mind which supplies ready answers based on association is better than a mind which never supplies ready answers, but it can also produce blunders that are much more damaging than claiming cows drink milk. Every time we assume the doctor is a man and the nurse is a woman, we’re falling victim to the ready answers of our mental prototypes of those professions. Such prototypes, however mistaken, may also underlie our readiness to assume a man will be a better CEO, or that a philosophy professor won’t be a woman. If you let them guide how the world should be, rather than what it might be, you get into trouble pretty quickly.

Advertisers know the power of prototypes too, of course, which is why so much advertising appears to be style over substance. Their job isn’t to deliver a persuasive message, as such. They don’t want you to actively believe anything about their product being provably fun, tasty or healthy. Instead, they just want fun, taste or health to spring to mind when you think of their product (and the reverse). Worming their way into our mental associations is worth billions of dollars to the advertising industry, and it is based on a principle no more complicated than a childhood game which tries to trick you into saying “carrots”.

This is my BBC Future column from last week. The original is here. And, yes, I know that baby cows actually do drink milk.

The memory trap

I had a piece in the Guardian on Saturday, ‘The way you’re revising may let you down in exams – and here’s why’. In it I talk about a pervasive feature of our memories: that we tend to overestimate how much of a memory is ‘ours’, and how little is actually shared with other people, or the environment (see also the illusion of explanatory depth). This memory trap can combine with our instinct to make things easy for ourselves and result in us thinking we are learning when really we’re just flattering our feeling of familiarity with a topic.

Here’s the start of the piece:

Even the most dedicated study plan can be undone by a failure to understand how human memory works. Only when you’re aware of the trap set for us by overconfidence, can you most effectively deploy the study skills you already know about.
… even the best [study] advice can be useless if you don’t realise why it works. Understanding one fundamental principle of human memory can help you avoid wasting time studying the wrong way.

I go on to give four evidence-based pieces of revision advice, all of which – I hope – use psychology to show that some of our intuitions about how to study can’t be trusted.

Link: The way you’re revising may let you down in exams – and here’s why

Previously at the Guardian by me:

The science of learning: five classic studies

Five secrets to revising that can improve your grades

The Devil’s Wager: when a wrong choice isn’t an error

The Devil looks you in the eyes and offers you a bet. Pick a number, and if you correctly guess the total he’ll roll on two dice, you get to keep your soul. If any other number comes up, you go to burn in eternal hellfire.

You call “7” and the Devil rolls the dice.

A two and a four, so the total is 6 — that’s bad news.

But let’s not dwell on the incandescent pain of your infinite and inescapable future, let’s think about your choice immediately before the dice were rolled.

Did you make a mistake? Was choosing “7” an error?

In one sense, obviously yes. You should have chosen 6.

But in another important sense you made the right choice. There are more combinations of dice outcomes that add to 7 than to any other number. The chances of winning if you bet 7 are higher than for any other single number.
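
If you want to check the arithmetic, here is a minimal sketch (plain Python, assuming nothing beyond two fair six-sided dice) that enumerates the 36 equally likely outcomes and counts the ways of making each total:

```python
# Enumerate all 36 equally likely outcomes of two fair dice and count
# how many combinations produce each total.
from collections import Counter

ways = Counter(a + b for a in range(1, 7) for b in range(1, 7))

for total in sorted(ways):
    print(f"total {total:2d}: {ways[total]} ways, P = {ways[total]}/36 = {ways[total]/36:.3f}")

# 7 can be made six ways (1+6, 2+5, 3+4, 4+3, 5+2, 6+1), so P(7) = 6/36, about 0.167,
# more than any other single total - which is why "7" is the best single call.
```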

The distinction is between a particular choice which happens to be wrong, and a choice strategy which is actually as good as you can do in the circumstances. If we replace the Devil’s Wager with the situations the world presents you, and your choice of number with your actions in response, then we have a handle on what psychologists mean when they talk about “cognitive error” or “bias”.

In psychology, the interesting errors are not decisions that just happen to turn out wrong. The interesting errors are decisions which people systematically get wrong, and get wrong in a particular way. As well as being predictable, these errors are interesting because they must be happening for a reason.

If you met a group of people who always bet “6” when gambling with the Devil, you’d be an incurious person if you assumed they were simply idiots. That judgement doesn’t lead anywhere. Instead, you’d want to find out what they believe that makes them think that’s the right choice strategy. Similarly, when psychologists find that people will pay more to keep something than they’d pay to obtain it, or are influenced by irrelevant information in their judgements of risk, there’s no profit in labelling this “irrationality” and leaving it at that. The interesting question is why these choices seem common to so many people. What is it about our minds that disposes us to make these same errors, to have in common the same choice strategies?

You can get traction on the shape of possible answers from the Devil’s Wager example. In this scenario, why would you bet “6” rather than “7”? Here are three possible general reasons, each explained in terms of the Devil’s Wager and illustrated with a real example.

 

1. Strategy is optimised for a different environment

If you expected the Devil to roll a single loaded die, rather than a fair pair of dice, then calling “6” would be the best strategy, rather than a sub-optimal one.
Analogously, you can understand a psychological bias by understanding which environment it is intended to match. If I love sugary foods so much it makes me fat, part of the explanation may be that my sugar cravings evolved at a point in human history when starvation was a bigger risk than obesity.

 

2. Strategy is designed for a bundle of choices

If you know you’ll only get to pick one number to cover multiple bets, your best strategy is to pick a number which works best over all bets. So if the Devil is going to give you the best of ten bets, and most of the time he’ll roll a single loaded die, only sometimes rolling two fair dice, then “6” will give you the best total score, even though it is less likely to win on the two-fair-dice rounds.
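
To make the logic concrete, here is an illustrative simulation; the 70/30 mix of bet types and the loading of the single die are invented numbers, chosen only to show how one call can be best for the bundle while sub-optimal for a particular kind of bet within it:

```python
# Hypothetical bundle of bets: most rounds use a single die loaded towards 6,
# a few use two fair dice. Compare "always call 6" with "always call 7".
import random

random.seed(0)

def play_round(call):
    if random.random() < 0.7:  # single loaded die, most of the time (invented mixture)
        roll = random.choices(range(1, 7), weights=[1, 1, 1, 1, 1, 5])[0]
    else:                      # two fair dice, the rest of the time
        roll = random.randint(1, 6) + random.randint(1, 6)
    return roll == call

n = 100_000
for call in (6, 7):
    wins = sum(play_round(call) for _ in range(n))
    print(f"always call {call}: win rate {wins / n:.3f}")

# "6" wins overall (roughly 0.39 vs 0.05 with these made-up numbers), even though
# "7" is the better call for the two-fair-dice rounds considered on their own.
```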

In general, what looks like a poor choice may be the result of a strategy which treats a class of decisions as the same, and produces a good answer for that whole set. It is premature to call our decision making irrational if we look at a single choice, which is the focus of the psychologist’s experiment, and not the related set of choices of which it is part.

An example from the literature may be the Mere Exposure Effect, where we favour something we’ve seen before merely because we’ve seen it before. In experiments, this preference looks truly arbitrary, because the experimenter decided which stimuli to expose us to and which to withhold, but in everyday life our familiarity with things tracks important variables such as how common, safe or sought-out things are. The Mere Exposure Effect may result from a feature of our minds that assumes, all other things being equal, that familiar things are preferable, and that’s probably a good general strategy.

 

3. Strategy uses a different cost/benefit analysis

Obviously, we’re assuming everyone wants to save their soul and avoid damnation. If you felt like you didn’t deserve heaven, harps and angel wings, or that hellfire sounded comfortably warm, then you might avoid making the bet-winning optimal choice.

By extension, we should only call a choice irrational or suboptimal if we know what people are trying to optimise. For example, it looks like people systematically under-explore new ways of doing things when learning skills. Is this reliance on habit, similar to confirmation bias when exploring competing hypotheses, irrational? Well, in the sense that it slows your learning down, it isn’t optimal, but if it exists because exploration carries a risk (you might get the action catastrophically wrong, you might hurt yourself), or because the important thing is to minimise the cost of acting (and habitual movements require less energy), then it may in fact be better than reckless exploration.

 

So if we see a perplexing behaviour, we might reach for one of these explanations: the behaviour is right for a different environment, a wider set of choices, or a different cost/benefit analysis. Only when we are confident that we understand the environment (either evolutionary, or of training) which drives the behaviour, the general class of choices of which it is part, and the cost-benefit function the people making the choices are using, should we confidently say a choice is an error. Even then it is pretty unprofitable to call such behaviour irrational – we’d want to know why people make the error. Are they unable to calculate the right response? Mis-perceiving the situation?

A seemingly irrational behaviour is a good place to start investigating the psychology of decision making, but labelling behaviour irrational is a terrible place to stop. The topic really starts to get interesting when we start to ask why particular behaviours exist, and try to understand their rationality.

 

Previously/elsewhere:

Irrational? Decisions and decision making in context
My ebook: For argument’s sake: evidence that reason can change minds, which explores our over-enthusiasm for evidence that we’re irrational.

Irrational? Decisions and decision making in context

Nassim Nicholas Taleb, author of Fooled by Randomness:

Finally put my finger on what is wrong with the common belief in psychological findings that people “irrationally” overestimate tail probabilities, calling it a “bias”. Simply, these experimenters assume that people make a single decision in their lifetime! The entire field of psychology of decisions missed the point.

His argument seems to be that risks seem different if you view them from a lifetime perspective, where you might make choices about the same risk again and again, rather than considering them as one-offs. What might be a mistake for a one-off risk could be a sensible strategy for the same risk repeated in a larger set.

He goes on to take a swipe at ‘Nudges’, the idea that you can base policies around various phenomena from the psychology of decision making. “Clearly”, he adds, “psychologists do not know how to use ‘probability'”.

This is maddeningly ignorant, but does have a grain of truth to it. The major part of the psychology of decision making is understanding why things that look like bias or error exist. If a phenomenon, such as overestimating low probability events, is pervasive, it must be for a reason. A choice that looks irrational when considered on its own might be the result of a sensible strategy when considered over a lifetime, or even over evolutionary time.

Some great research in decision making tries to go beyond simple bias phenomena and ask what underlying choice is being optimised by our cognitive architecture. This approach gives us the Simple Heuristics That Make Us Smart of Gerd Gigerenzer (which Taleb definitely knows about, since he was a visiting fellow in Gigerenzer’s lab), as well as work which shows that people estimate risks differently if they experience the outcomes rather than being told about them, work which shows that our perceptual-motor system (which is often characterised as an optimal decision maker) has the same amount of bias as our more cognitive decisions, and work which shows that other animals, with less cognitive/representational capacity, make analogues of many classic decision making errors. This is where the interesting work in decision making is happening, and it all very much takes account of the wider context of individual decisions. So saying that the entire field missed the point seems…odd.

But the grain of truth in the accusation is that the psychology of decision making has been popularised in a way that focusses on one-off decisions. The nudges of behavioural economics tend to be dramatic examples of small interventions which have large effects on one-off measures, such as how giving people smaller plates makes them eat less. The problem with these interventions is that even if they work in the lab, they tend not to work long-term outside the lab. People are often doing what they do for a reason – and if you don’t affect the reasons, the old behaviour reasserts itself as people simply adapt to any nudge you’ve introduced. Although the British government is noted for introducing a ‘Nudge Unit’ to apply behavioural science in government policies, less well known is a House of Lords Science and Technology Committee report, ‘Behavioural Change’, which highlights the limitations of this approach (and is well worth reading to get an idea of the importance of ideas beyond ‘nudging’ in behavioural change).

Taleb is right that we need to drop the idea that biases in decision making automatically attest to our irrationality. As often as not they reflect a deeper rationality in how our minds deal with risk, choice and reward. What’s sad is that he doesn’t recognise how much work on how to better understand bias already exists.

Why you forget what you came for when you enter the room

Forgetting why you entered a room is called the “Doorway Effect”, and it may reveal as much about the strengths of human memory as it does the weaknesses, says psychologist Tom Stafford.

We’ve all done it. Run upstairs to get the keys, but forget that it is them we’re looking for once we get to the bedroom. Open the fridge door and reach for the middle shelf, only to realise that we can’t remember why we opened the fridge in the first place. Or wait for a moment to interrupt a friend, only to find that the burning issue that made us want to interrupt has vanished from our minds just as we come to speak: “What did I want to say again?” we ask a confused audience, who all think “how should we know?!”

Although these errors can be embarrassing, they are also common. It’s known as the “Doorway Effect”, and it reveals some important features of how our minds are organised. Understanding this might help us appreciate those temporary moments of forgetfulness as more than just an annoyance (although they will still be annoying).

These features of our minds are perhaps best illustrated by a story about a woman who meets three builders on their lunch break. “What are you doing today?” she asks the first. “I’m putting brick after sodding brick on top of another,” sighs the first. “What are you doing today?” she asks the second. “I’m building a wall,” is the simple reply. But the third builder swells with pride when asked, and replies: “I’m building a cathedral!”

Maybe you heard that story as encouragement to think of the big picture, but to the psychologist in you the important moral is that any action has to be thought of at multiple levels if you are going to carry it out successfully. The third builder might have the most inspiring view of their day-job, but nobody can build a cathedral without figuring out how to successfully put one brick on top of another like the first builder.

As we move through our days our attention shifts between these levels – from our goals and ambitions, to plans and strategies, and to the lowest levels, our concrete actions. When things are going well, often in familiar situations, we keep our attention on what we want, and how we do it seems to take care of itself. If you’re a skilled driver then you manage the gears, indicators and wheel automatically, and your attention is probably caught up in the less routine business of navigating the traffic or talking to your passengers. When things are less routine we have to shift our attention to the details of what we’re doing, taking our minds off the bigger picture for a moment. Hence the pause in conversation as the driver gets to a tricky junction, or the engine starts to make a funny sound.

The way our attention moves up and down the hierarchy of action is what allows us to carry out complex behaviours, stitching together a coherent plan over multiple moments, in multiple places or requiring multiple actions.

The Doorway Effect occurs when our attention moves between levels, and it reflects the reliance of our memories – even memories for what we were about to do – on the environment we’re in.

Imagine that we’re going upstairs to get our keys and forget that it is the keys we came for as soon as we enter the bedroom. Psychologically, what has happened is that the plan (“Keys!”) has been forgotten even in the middle of implementing a necessary part of the strategy (“Go to bedroom!”). Probably the plan itself is part of a larger plan (“Get ready to leave the house!”), which is part of plans on a wider and wider scale (“Go to work!”, “Keep my job!”, “Be a productive and responsible citizen”, or whatever). Each scale requires attention at some point. Somewhere in navigating this complex hierarchy the need for keys popped into mind, and like a circus performer setting plates spinning on poles, your attention focussed on it long enough to construct a plan, but then moved on to the next plate (this time, either walking to the bedroom, or wondering who left their clothes on the stairs again, or what you’re going to do when you get to work or one of a million other things that it takes to build a life).

And sometimes spinning plates fall. Our memories, even for our goals, are embedded in webs of associations. That can be the physical environment in which we form them, which is why revisiting our childhood home can bring back a flood of previously forgotten memories, or it can be the mental environment – the set of things we were just thinking about when that thing popped into mind.

The Doorway Effect occurs because we change both the physical and mental environments, moving to a different room and thinking about different things. That hastily thought up goal, which was probably only one plate among the many we’re trying to spin, gets forgotten when the context changes.

It’s a window into how we manage to coordinate complex actions, matching plans with actions in a way that – most of the time – allows us to put the right bricks in the right place to build the cathedral of our lives.

This is my BBC Future column from Tuesday. The original is here

3 salvoes in the reproducibility crisis

The reproducibility crisis in Psychology rumbles on. For the uninitiated, this is the general brouhaha we’re having over how reliable published psychological research is. I wrote a piece on this in 2013, which now sounds a little complacent, and unnecessarily focussed on just one area of psychology, given the extent of the problems since uncovered in the way research is manufactured (or maybe not, see below). Anyway, in the last week or so there have been three interesting developments.

Despair

Michael Inzlicht blogged his ruminations on the state of the field of social psychology, and they’re not rosy: “We erred, and we erred badly“, he writes. It is a profound testament to the depth of the current concerns about the reliability of psychology when such a senior scientist begins to doubt the reality of some of the phenomena he has built his career investigating.

As someone who has been doing research for nearly twenty years, I now can’t help but wonder if the topics I chose to study are in fact real and robust. Have I been chasing puffs of smoke for all these years?

Don’t panic!

But not everyone is worried. A team of Harvard A-listers, including Timothy Wilson and Daniel Gilbert, have released a press release announcing a commentary on the “Reproducibility Project: Psychology”. This was an attempt to estimate the reliability of a large sample of phenomena from the psychology literature (short introduction in Nature here). The paper from this project was picked as one of the most important of 2015 by the journal Science.

The project is a huge effort, which is open to multiple interpretations. The Harvard team’s press release is headlined “No evidence of a replicability crisis in psychological science” and claimed “reproducibility of psychological science is indistinguishable from 100%”, as well as calling for effort to repair the damage done to the reputation of psychological research. I’d link to the press release, but it looks like between me learning of it yesterday and coming to write about it today this material has been pulled from the internet. The commentary it announced was due to be released on March the 4th, so we wait with bated breath for the good news about why we don’t need to worry about the reliability of psychology research. Come on boys, we need some good news.

UPDATE 3rd March: The website is back! No Evidence for a Replicability Crisis in Psychological Science. Commentary here, and response

…But whatever you do, optimally weight evidence

Speaking of the Reproducibility Project, Alexander Etz produced a great Bayesian reanalysis of the data from that project (possible because it is all open access, via the Open Science Framework). This take on the project is a great example of how open science allows people to more easily build on your results, as well as being a vital complement to the original report – not least because it stops you naively accepting any simple statistical summary of what the reproducibility project ‘means’ (e.g. “30% of studies do not replicate” etc). Etz and Joachim Vandekerckhove have now upgraded the analysis to a paper, which is available (open access, natch) in PLoS One: “A Bayesian Perspective on the Reproducibility Project: Psychology“. And their interpretation of the reliability of psychology, as informed by the reproducibility project?

Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak …The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication…We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature

How to formulate a good resolution

We could spend all year living healthier, more productive lives, so why do we only decide to make the change at the start of the year? BBC Future’s psychologist Tom Stafford explains.

Many of us will start 2016 with resolutions – to get fit, learn a new skill, eat differently. If we really want to do these things, why did we wait until an arbitrary date which marks nothing more important than a timekeeping convention? The answer tells us something important about the psychology of motivation, and about what popular theories of self-control miss out.

What we want isn’t straightforward. At bedtime you might want to get up early and go for a run, but when your alarm goes off you find you actually want a lie-in. When exam day comes around you might want to be the kind of person who spent the afternoons studying, but on each of those afternoons you instead wanted to hang out with your friends.

You could see these contradictions as failures of our self-control: impulses for temporary pleasures manage to somehow override our longer-term interests. One fashionable theory of self-control, proposed by Roy Baumeister at Florida State University, is the ‘ego-depletion’ account. This theory states that self-control is like a muscle: you can exhaust it in the short term, so every temptation you resist makes it more likely that you’ll yield to the next temptation, even if it is a temptation to do something entirely different.

Some lab experiments appear to support this limited resource model of willpower. People who had to resist the temptation to eat chocolates were subsequently less successful at solving difficult puzzles which required the willpower to muster up enough concentration to complete them, for instance. Studies of court records, meanwhile, found that the more decisions a parole board judge makes without a meal break, the less lenient they become. Perhaps at the end of a long morning, the self-control necessary for a more deliberated judgement has sapped away, causing them to rely on a harsher “keep them locked up” policy.

A corollary of the ‘like a muscle’ theory is that in the long term, you can strengthen your willpower with practice. So, for example, Baumeister found that people who were assigned two weeks of trying to keep their back straight whenever possible showed improved willpower when asked back into the lab.

Yet the ‘ego-depletion’ theory has critics. My issue with it is that it reduces our willpower to something akin to oil in a tank. Not only does this seem too simplistic, but it sidesteps the core problem of self-control: who or what is controlling who or what? Why is it even the case that we can want both to yield to a temptation, and want to resist it at the same time?

Also, and more importantly, that theory doesn’t explain why we wait for New Year’s Day to begin exerting our self-control. If your willpower is a muscle, you should start building it up as soon as possible, rather than wait for an arbitrary date.

A battle of wills

Another explanation may answer these questions, although it isn’t as fashionable as ego-depletion. George Ainslie’s book ‘Breakdown of Will‘ puts forward a theory of the self and self-control which uses game theory to explain why we have trouble with our impulses, and why our attempts to control them take the form they do.

Ainslie’s account begins with the idea that we have, within us, a myriad of competing impulses, which exist on different time-scales: the you that wants to stay in bed five more minutes, the you that wants to start the day with a run, the you that wants to be fit for the half-marathon in April. Importantly, the relative power of these impulses changes as they get nearer in time: the early start wins against the lie-in the day before, but it is a different matter at 5am. Ainslie has a detailed account of why this is, and it has some important implications for our self-control.

According to this theory, our preferences are unstable and inconsistent, the product of a war between our competing impulses, good and bad, short and long-term. A New Year’s resolution could therefore be seen as an alliance between these competing motivations, and like any alliance, it can easily fall apart. Addictions are a good example, because the long-term goal (“not to be an alcoholic”) requires the coordination of many small goals (“not to have a drink at 4pm;” “not at 5pm;” “not at 6pm,” and so on), none of which is essential. You can have a drink at 4pm and still be a moderate drinker. You can even have another at 5pm, but somewhere along the line all these small choices add up to a failure to keep to the wider goal. Similarly, if you want to get fit in 2016, you don’t have to go for a jog on 1 January, or even on 2 January, but if you don’t start doing exercise on one particular day then you will never meet your larger goal.

From Ainslie’s perspective willpower is a bargaining game played by the forces within ourselves, and like any conflict of interest, if the boundary between acceptable and unacceptable isn’t clearly defined then small infractions can quickly escalate. For this reason, Ainslie says, resolutions cluster around ‘clean lines’, sharp distinctions around which no quibble is brooked. The line between moderate and problem drinking isn’t clear (and liable to be even less clear around your fourth glass), but the line between teetotal and drinker is crystal.

This is why advice on good habits is often of the form “Do X every day”, and why diets tend towards absolutes: “No gluten;” “No dessert;” “Fasting on Tuesdays and Thursdays”. We know that if we leave the interpretation open to doubt, although our intentions are good, we’ll undermine our resolutions when we’re under the influence of our more immediate impulses.

And, so, Ainslie gives us an answer to why our resolutions start on 1 January. The date is completely arbitrary, but it provides a clean line between our old and new selves.

The practical upshot of the theory is that if you make a resolution, you should formulate it so that at every point in time it is absolutely clear whether you are sticking to it or not. The clear lines are arbitrary, but they help the truce between our competing interests hold.

Good luck for your 2016 resolutions!

Cognitive Sciences Stack Exchange

Cognitive Sciences Stack Exchange is a question and answer forum for Cognitive Science. The Stack Exchange model works well for computer programming and now cogsci.stackexchange.com is one of the 150+ sites in their family, which includes topics as diverse as academia, mythology and pets.

There’s a dedicated community of people answering questions and voting on answers, producing a great resource patterned around the questions people have on Cognitive Science topics.

So head over, if you have questions, or if you can lend an evidence-based, citation-supported hand in working on answers:

Link: Cognitive Sciences Stack Exchange

The Peer Reviewers’ Openness Initiative

“The Peer Reviewers’ Openness Initiative” is a grassroots attempt to promote open science by organising academics’ work as reviewers. All academics spend countless hours on peer review, a task which is unpaid, often pretty thankless, and yet employs their unique and hard-won skills as scholars. We do this, despite misgivings about the current state of scholarly publishing, because we know that good science depends on review and criticism.

Often this work is hampered because papers don’t disclose the data upon which the conclusions were drawn, or even share the materials used in the experiments. When journal articles only appeared in print and space was limited this was excusable. It no longer is.

The Peer Reviewers’ Openness Initiative is a pledge scholars can take, saying that they will not recommend for publication any article which does not make the data, materials and analysis code publicly available. You can read the exact details of the initiative here and you can sign it here.

For the good of society, and for the good of science, everybody should be able to benefit from, and criticise, scientific work in all its details. Good science is open science.

Link: The Peer Reviewers’ Openness Initiative

5 classic studies of learning

I have a piece in the Guardian, ‘The science of learning: five classic studies’. Here’s the intro:

A few classic studies help to define the way we think about the science of learning. A classic study isn’t classic just because it uncovered a new fact, but because it neatly demonstrates a profound truth about how we learn – often at the same time showing up our unjustified assumptions about how our minds work.

My picks for five classics of learning were:

  • Bartlett’s “War of the Ghosts”
  • Skinner’s operant conditioning
  • work on dissociable memory systems by Larry Squire and colleagues
  • de Groot’s studies of expertise in chess grandmasters, and ….
  • Anders Ericsson’s work on deliberate practice (of ‘ten thousand hours’ fame)

Obviously, that’s just my choice (and you can read my reasons in the article). Did I choose right? Or is there a classic study of learning I missed? Answers in the comments.

Link: ‘The science of learning: five classic studies

Why do we forget names?

A reader, Dan, asks “Why do we forget people’s names when we first meet them? I can remember all kinds of other details about a person but completely forget their name. Even after a lengthy, in-depth conversation. It’s really embarrassing.”

Fortunately the answer involves learning something fundamental about the nature of memory. It also provides a solution that can help you to avoid the embarrassing social situation of having spoken to someone for an hour, only to have forgotten their name.

To know why this happens you have to recognise that our memories aren’t a simple filing system, with separate folders for each kind of information and a really brightly coloured folder labelled “Names”.

Rather, our minds are associative. They are built out of patterns of interconnected information. This is why we daydream: you notice that the book you’re reading was printed in Paris, and that Paris is home to the Eiffel Tower, that your cousin Mary visited last summer, and Mary loves pistachio ice-cream. Say, I wonder if she ate a pistachio ice cream while up the Tower? It goes on and on like that, each item connected to every other, not by logic but by coincidence of time, place, how you learnt the information and what it means.

The same associative network means you can guess a question from the answer. Answer: “Eiffel Tower?” Question: “Paris’s most famous landmark.” This makes memory useful, because you can often go as easily from the content to the label as vice versa: “what is in the top drawer?” isn’t a very interesting question, but it becomes so when you want the answer “where are my keys?”.

So memory is built like this on purpose, and now we can see the reason why we forget names. Our memories are amazing, but they respond to how many associations we make with new information, not with how badly we want to remember it.

When you meet someone for the first time you learn their name, but for your memory it is probably an arbitrary piece of information unconnected to anything else you know, and unconnected to all the other things you later learn about them. After your conversation, in which you probably learn about their job, and their hobbies, and their family or whatever, all this information becomes linked in your memory. Imagine you are talking to a guy with a blue shirt who likes fishing and works selling cars, but would rather give it up to sell fishing gear. Now if you can remember one bit of information (“sell cars”) you can follow the chain to the others (“sells cars but wants to give it up”, “wants to give it up to sell fishing gear”, “loves fishing” and so on). The trouble is that your new friend’s name doesn’t get a look in because it is simply a piece of arbitrary information you didn’t connect to anything else about the conversation.

Fortunately, there are ways to strengthen those links so it does become entrenched with the other memories. Here’s how to remember the name, using some basic principles of memory.

First, you should repeat any name said to you. Practice is one of the golden rules of learning: more practice makes stronger memories. In addition, when you use someone’s name you are linking it to yourself, in the physical act of saying it, but also to the current topic of the conversation in your memory (“So, James, just what is it about fishing that makes you love it so much?”).

Second, you should try to link the name you have just learnt to something you already know. It doesn’t matter if the link is completely silly, it is just important that you find some connection to help the name stick in memory. For example, maybe the guy is called James, and your high school buddy was called James, and although this guy is wearing a blue shirt, high school James only ever wore black, so he’d never wear blue. It’s a silly made up association, but it can help you remember.

Finally, you need to try to link their name to something else about them. If it was me I’d grab the first thing to come to mind to bridge between the name and something I’ve learnt about them. For example, James is a sort of biblical name, you get the King James bible after all, and James begins with J, just like Jonah in the bible who was swallowed by the whale, and this James likes fishing, but I bet he prefers catching them to being caught by them.

It doesn’t matter if the links you make are outlandish or weird. You don’t have to tell anyone. In fact, probably it is best if you don’t tell anyone, especially your new friend! But the links will help create a web of association in your memory, and that web will stop their name falling out of your mind when it is time to introduce them to someone else.

And if you’re sceptical, try this quick test. I’ve mentioned three names during this article. I bet you can remember James, who isn’t Jonah. And probably you can remember cousin Mary (or at least what kind of ice cream she likes). But can you remember the name of the reader who asked the question? That’s the only one I introduced without elaborating some connections around the name, and that’s why I’ll bet it is the only one you’ve forgotten.

This is my BBC Future column from last week. The original is here

No more Type I/II error confusion

Type I and Type II errors are, respectively, when you allow a statistical test to convince you of a false effect, and when you allow a statistical test to convince you to dismiss a true effect. Despite being fundamentally important concepts, they are terribly named. Who can ever remember which way around the two errors go? Well now I can, thanks to a comment from a friend I thought so useful I made it into a picture:

[Image: ‘The Boy Who Cried Wolf’ as a mnemonic for Type I and Type II errors]
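
If a worked example helps more than a mnemonic, here is a rough simulation sketch; the sample size, effect size and number of simulated experiments are arbitrary illustrative choices. Under a true null, rejecting at p < .05 is a Type I error (a false alarm); under a real effect, failing to reject is a Type II error (a miss):

```python
# Simulate many two-group experiments to estimate Type I and Type II error rates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, n_sims = 0.05, 20, 5_000

type_i = type_ii = 0
for _ in range(n_sims):
    # Null is true: both groups come from the same distribution.
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        type_i += 1          # the test "found" an effect that isn't there
    # Effect is real: the second group is shifted by half a standard deviation.
    a, b = rng.normal(0, 1, n), rng.normal(0.5, 1, n)
    if stats.ttest_ind(a, b).pvalue >= alpha:
        type_ii += 1         # the test missed an effect that is there

print(f"Type I rate (false alarm): {type_i / n_sims:.3f}  (should hover near {alpha})")
print(f"Type II rate (miss):       {type_ii / n_sims:.3f}  (depends on the test's power)")
```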

a gold-standard study on brain training

The headlines

The Telegraph: Alzheimer’s disease: Online brain training “improves daily lives of over-60s”

Daily Mail: The quiz that makes over-60s better cooks: Computer brain games ‘stave off mental decline’

Yorkshire Post: Brain training study is “truly significant”

The story

A new trial shows the benefits of online ‘brain training’ exercises including improvements in everyday tasks, such as shopping, cooking and managing home finances.

What they actually did

A team led by Clive Ballard of King’s College London recruited people to a trial of online “brain training” exercises. Nearly 7,000 people over the age of 50 took part, and they were randomly assigned to one of three groups. One group did reasoning and problem solving tasks. A second group practised cognitive skills tasks, such as memory and attention training, and a third control group did a task which involved looking for information on the internet.

After six months, the reasoning and cognitive skills groups showed benefits compared with the control group. The main measure of the study was participants’ own reports of their ability to cope with daily activities. This was measured using something called the instrumental activities of daily living scale. (To give an example, you get a point if you are able to prepare your meals without assistance, and no points if you need help). The participants also showed benefits in short-term memory, judgements of grammatical accuracy and ability to learn new words.

Many of these benefits looked as if they accrued after just three months of regular practice, completing an average of five sessions a week. The benefits also seemed to affect those who went into the trial with the lowest performance, suggesting that such exercises may benefit those who are at risk of mild cognitive impairment (a precursor to dementia).

How plausible is this?

This is gold-standard research. The study was designed to the highest standards, as would be required if you were testing a new drug: a double-blind randomised controlled trial in which participants were assigned at random to the different treatment groups, and weren’t told which group they were in (nor what the researchers’ theory was). Large numbers of people took part, meaning that the study had a reasonable chance of detecting an effect of the treatment if it was there. The study design was also pre-registered on a database of clinical trials, meaning that the results couldn’t be buried if they turned out to be different from what the researchers (or funders) wanted, and the researchers declared in advance what their analysis would focus on.

So, overall, this is serious evidence that cognitive training exercises may bring some benefits, not just on similar cognitive tasks, but also on the everyday activities that are important for independent living among the older population.

Tom’s take

This kind of research is what “brain training” needs. Too many people – including those who just want to make some money – have leapt on the idea without the evidence that these kind of tasks can benefit anything other than performance on similar tasks. Because the evidence for broad benefits of cognitive training exercises is sparse, this study makes an important contribution to the supporters’ camp, although it far from settles the matter.

Why might you still be sceptical? Well there are some potential flaws in this study. It is useful to speculate on the effect these flaws might have had, even if only as an exercise to draw out the general lessons for interpreting this kind of research.

First up is the choice of control task. The benefits of the exercises tested in this research are only relative benefits compared with the scores of those who carried out the control task. If a different control task had been chosen maybe the benefits wouldn’t look so large. For example, we know that physical exercise has long-term and profound benefits for cognitive function. If the control group had been going for a brisk walk everyday, maybe the relative benefits of these computerised exercises would have vanished.

Or just go for a walk

Another possible distortion of the figures could have arisen as a result of people dropping out during the course of the trial. If people who were likely to score well were more likely to drop out of the control group (perhaps because it wasn’t challenging enough), then this would leave poor performers in the control group and so artificially inflate the relative benefits of being in the cognitive exercises group. More people did drop out of the control group, but it isn’t clear from reading the paper if the researchers’ analysis took steps to account for the effect this might have had on the results.

And finally, the really impressive result from this study is the benefit for the activities of daily living scale (the benefit for other cognitive abilities perhaps isn’t too surprising). This suggests a broad benefit of the cognitive exercises, something which other studies have had difficulty showing. However, it is important to note that this outcome was based on a self-report by the participants. There wasn’t any independent or objective verification, meaning that something as simple as people feeling more confident about themselves after having completed the study could skew the results.

None of these three possible flaws mean we should ignore this result, but questions like these mean that we will need follow up research before we can be certain that cognitive training brings benefits on mental function in older adults.

For now, the implications of the current state of brain training research are:

Don’t pay money for any “brain training” programme. There isn’t any evidence that commercially available exercises have any benefit over the kinds of tasks and problems you can access for free.

Do exercise. Your brain is a machine that runs on blood, and it is never too late to improve the blood supply to the brain through increased physical activity. How long have you been on the computer? Could it be time for a brisk walk round the garden or to the shops? (Younger people, take note: exercise in youth benefits mental function in older age.)

A key feature of this study was that the exercises in the treatment group got progressively more difficult as the participants practised. The real benefit may not be from these exercises as such, but from continually facing new mental challenges. So, whatever your hobbies, perhaps – just perhaps – make sure you are learning something new as well as enjoying whatever you already know.

Read more

The original study: The Effect of an Online Cognitive Training Package in Healthy Older Adults: An Online Randomized Controlled Trial

Oliver Burkeman writes: http://www.theguardian.com/science/2014/jan/04/can-i-increase-my-brain-power

The New Yorker (2013): http://www.newyorker.com/tech/elements/brain-games-are-bogus

The Conversation

This article was originally published on The Conversation. Read the original article.

Web of illusion: how the internet affects our confidence in what we know

The internet can give us the illusion of knowledge, making us think we are smarter than we really are. Fortunately, there may be a cure for our arrogance, writes psychologist Tom Stafford.

The internet has a reputation for harbouring know-it-alls. Commenters on articles, bloggers, even your old school friends on Facebook all seem to swell with confidence in their understanding of exactly how the world works (and they are eager to share that understanding with everyone and anyone who will listen). Now, new research reveals that just having access to the world’s information can induce an illusion of overconfidence in our own wisdom. Fortunately the research also shares clues as to how that overconfidence can be corrected.

Specifically, we are looking at how the internet affects our thinking about what we know, a topic psychologists call metacognition. When you know you are boasting, you are being dishonest, but you haven’t made any actual error in estimating your ability. If you sincerely believe you know more than you do then you have made an error. The research suggests that an illusion of understanding may actually be incredibly common, and that this metacognitive error emerges in new ways in the age of the internet.

In a new paper, Matt Fisher of Yale University considers a particular type of thinking known as transactive memory, which is the idea that we rely on other people and other parts of the world – books, objects – to remember things for us. If you’ve ever left something you needed for work by the door the night before, then you’ve been using transactive memory.

Part of this phenomenon is the tendency to then confuse what we really know in our personal memories with what we have easy access to – the knowledge that is readily available in the world, or with which we are merely familiar without actually understanding in depth. It can feel like we understand how a car works, the argument goes, when in fact we are merely familiar with making it work: I press the accelerator and it goes forward, and I never notice that I don’t really know how it goes forward.

Fisher and colleagues were interested in how this tendency interacts with the internet age. They asked people to provide answers to factual questions, such as “Why are there time zones?”. Half of the participants were instructed to look up the answers on the internet before answering, half were told not to look up the answers on the internet. Next, all participants were asked how confidently they could explain the answers to a second series of questions (separate, but also factual, questions such as “Why are cloudy nights warmer?” or “How is vinegar made?”).

Sure enough, people who had just been searching the internet for information were significantly more confident about their understanding of the second set of questions. Follow-up studies confirmed that these people really did think the knowledge was theirs: they were still more confident if asked to indicate their response on a scale representing different levels of understanding with pictures of brain-scan activity (a ploy that was meant to emphasise that the information was there, in their heads). The confidence effect even persisted when the control group were provided with the answer material and the internet-search group were instructed to search for a site containing the exact same answer material. Something about actively searching for information on the internet specifically generated an illusion that the knowledge was in the participants’ own heads.

If the feeling of controlling information generates overconfidence in our own wisdom, it might seem that the internet is an engine for turning us all into bores. Fortunately another study, also published this year, suggests a partial cure.

Amanda Ferguson of the University of Toronto and colleagues ran a similar study, except the set-up was in reverse: they asked participants to provide answers first and, if they didn’t know them, search the internet afterwards for the correct information (in the control condition participants who said “I don’t know” were let off the hook and just moved on to the next question). In this set-up, people with access to the internet were actually less willing to give answers in the first place than people in the no-internet condition. For these guys, access to the internet shut them up, rather than encouraging them to claim that they knew it all. Looking more closely at their judgements, it seems the effect wasn’t simply that the fact-checking had undermined their confidence. Those who knew they could fall back on the web to check the correct answer didn’t report feeling less confident within themselves, yet they were still less likely to share the information and show off their knowledge.

So, putting people in a position where they could be fact-checked made them more cautious in their initial claims. The implication I draw from this is that one way of fighting a know-it-all, if you have the energy, is to let them know that they are going to be thoroughly checked on whether they are right or wrong. It might not stop them researching a long answer with the internet, but it should slow them down, and diminish the feeling that just because the internet knows some information, they do too.

It is frequently asked if the internet is changing how we think. The answer, this research shows, is that the internet is giving new fuel to the way we’ve always thought. It can be both a cause of overconfidence, when we mistake the boundary between what we know and what is available to us over the web, and a cause of uncertainty, when we anticipate that we’ll be fact-checked using the web on the claims we make. Our tendencies to overestimate what we know, to use information that is readily available as a substitute for our own knowledge, and to worry about being caught out are all constants in how we think. The internet slots into this tangled cognitive ecosystem, from which endless new forms evolve.

This is my BBC Future column from earlier this week. The original is here

Statistical fallacy impairs post-publication mood

No scientific paper is perfect, but a recent result on the effect of mood on colour perception is getting a particularly rough ride post-publication. Thorstenson and colleagues published their paper this summer in Psychological Science, claiming that people who were sad had impaired colour perception along the blue-yellow colour axis but not along the red-green colour axis. Pubpeer – a site where scholars can anonymously discuss papers after publication – has a critique of the paper, which observes that the paper commits a well-known error in its analysis.

The flaw, anonymous comments suggest, is that a difference between the two types of colour perception is claimed, but this isn’t actually tested by the paper – instead it shows that mood significantly affects blue-yellow perception, but does not significantly affect red-green perception. If there is enough evidence that one effect is significant, but not enough evidence for the second being significant, that doesn’t mean that the two effects are different from each other. Analogously, if you can prove that one suspect was present at a crime scene, but can’t prove the other was, that doesn’t mean that you have proved that the two suspects were in different places.
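
A quick simulation makes the trap vivid; the sample size, effect size and number of runs below are illustrative choices, not figures from the paper. Give two measures exactly the same true effect and see how often one comes out significant while the other does not:

```python
# Two measures with an identical true effect: how often does one reach p < .05
# while the other does not? Differing significance does not mean differing effects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, n, effect, n_sims = 0.05, 30, 0.4, 5_000

mismatches = 0
for _ in range(n_sims):
    p1 = stats.ttest_ind(rng.normal(effect, 1, n), rng.normal(0, 1, n)).pvalue
    p2 = stats.ttest_ind(rng.normal(effect, 1, n), rng.normal(0, 1, n)).pvalue
    if (p1 < alpha) != (p2 < alpha):
        mismatches += 1

print(f"one significant, the other not: {mismatches / n_sims:.2f} of runs")
# A sizeable fraction of runs show the 'significant vs non-significant' pattern
# despite identical underlying effects; the difference itself must be tested directly.
```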

This mistake in analysis – which is far from unique to this paper – is discussed in a classic 2011 paper by Nieuwenhuis and colleagues: Erroneous analyses of interactions in neuroscience: a problem of significance. At the time of writing the sentiment on Pubpeer is that the paper should be retracted – in effect striking it from the scientific record.

With commentary like this, you can see why Pubpeer has previously been the target of legal action by aggrieved researchers who feel the site unfairly maligns their work.

(h/t to Daniël Lakens and jjodx on twitter)

UPDATE 5/11/15: It’s been retracted