The Peer Reviewers’ Openness Initiative

The “Peer Reviewers’ Openness Initiative” is a grassroots attempt to promote open science by organising academics’ work as reviewers. All academics spend countless hours on peer review, a task which is unpaid, often thankless, and yet employs their unique and hard-won skills as scholars. We do this, despite misgivings about the current state of scholarly publishing, because we know that good science depends on review and criticism.

Often this work is hampered because papers don’t disclose the data upon which their conclusions were drawn, or share the materials used in the experiments. When journal articles appeared only in print and space was limited, this was excusable. It no longer is.

The Peer Reviewers’ Openness Initiative is a pledge scholars can take, saying that they will not recommend for publication any article which does not make the data, materials and analysis code publicly available. You can read the exact details of the initiative here and you can sign it here.

For the good of society, and for the good of science, everybody should be able to benefit from, and criticise in all its details, scientific work. Good science is open science.

Link: The Peer Reviewers’ Openness Initiative

5 classic studies of learning

Photo by Wellcome and Flickr user Rebecca-Lee.

I have a piece in the Guardian, ‘The science of learning: five classic studies’. Here’s the intro:

A few classic studies help to define the way we think about the science of learning. A classic study isn’t classic just because it uncovered a new fact, but because it neatly demonstrates a profound truth about how we learn – often at the same time showing up our unjustified assumptions about how our minds work.

My picks for five classics of learning were:

  • Bartlett’s “War of the Ghosts”
  • Skinner’s operant conditioning
  • work on dissociable memory systems by Larry Squire and colleagues
  • de Groot’s studies of expertise in chess grandmasters, and …
  • Anders Ericsson’s work on deliberate practice (of ‘ten thousand hours’ fame)

Obviously, that’s just my choice (and you can read my reasons in the article). Did I choose right? Or is there a classic study of learning I missed? Answers in the comments.

Link: ‘The science of learning: five classic studies’

Why do we forget names?

A reader, Dan, asks “Why do we forget people’s names when we first meet them? I can remember all kinds of other details about a person but completely forget their name. Even after a lengthy, in-depth conversation. It’s really embarrassing.”

Fortunately the answer involves learning something fundamental about the nature of memory. It also provides a solution that can help you to avoid the embarrassing social situation of having spoken to someone for an hour, only to have forgotten their name.

To know why this happens you have to recognise that our memories aren’t a simple filing system, with separate folders for each kind of information and a really brightly coloured folder labelled “Names”.

Rather, our minds are associative. They are built out of patterns of interconnected information. This is why we daydream: you notice that the book you’re reading was printed in Paris, and that Paris is home to the Eiffel Tower, that your cousin Mary visited last summer, and Mary loves pistachio ice-cream. Say, I wonder if she ate a pistachio ice cream while up the Tower? It goes on and on like that, each item connected to every other, not by logic but by coincidence of time, place, how you learnt the information and what it means.

The same associative network means you can guess a question from the answer. Answer: “Eiffel Tower?” Question: “Paris’s most famous landmark.” This makes memory useful, because you can often go as easily from the content to the label as vice versa: “what is in the top drawer?” isn’t a very interesting question, but it becomes so when you want the answer “where are my keys?”.

So memory is built like this on purpose, and now we can see the reason why we forget names. Our memories are amazing, but they respond to how many associations we make with new information, not with how badly we want to remember it.

When you meet someone for the first time you learn their name, but for your memory it is probably an arbitrary piece of information unconnected to anything else you know, and unconnected to all the other things you later learn about them. After your conversation, in which you probably learn about their job, and their hobbies, and their family or whatever, all this information becomes linked in your memory. Imagine you are talking to a guy with a blue shirt who likes fishing and works selling cars, but would rather give it up to sell fishing gear. Now if you can remember one bit of information (“sell cars”) you can follow the chain to the others (“sells cars but wants to give it up”, “wants to give it up to sell fishing gear”, “loves fishing” and so on). The trouble is that your new friend’s name doesn’t get a look in because it is simply a piece of arbitrary information you didn’t connect to anything else about the conversation.
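The chain-following idea can be sketched as a toy data structure. This is only an illustration of the argument above, not a model of real memory: the dictionary of facts, the `recall_chain` helper and the “blue shirt” link are all invented for the example.

```python
from collections import deque

# Toy associative memory: each remembered fact points to the facts
# it was learned alongside (details invented from the example above).
associations = {
    "sells cars": ["wants to sell fishing gear"],
    "wants to sell fishing gear": ["loves fishing"],
    "loves fishing": ["blue shirt"],
}

def recall_chain(cue):
    """Follow associative links outward from a single remembered cue."""
    recalled, queue = [], deque([cue])
    while queue:
        fact = queue.popleft()
        if fact not in recalled:
            recalled.append(fact)
            queue.extend(associations.get(fact, []))
    return recalled

before = recall_chain("sells cars")      # the name never comes up: it was never linked
associations["blue shirt"] = ["James"]   # the mnemonic trick: deliberately forge a link
after = recall_chain("sells cars")
print("James" in before, "James" in after)  # False True
```

Every detail of the conversation is reachable from any other, but the name stays unreachable until you deliberately wire it into the web — which is exactly what the advice below amounts to.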

Fortunately, there are ways to strengthen those links so it does become entrenched with the other memories. Here’s how to remember the name, using some basic principles of memory.

First, you should repeat any name said to you. Practice is one of the golden rules of learning: more practice makes stronger memories. In addition, when you use someone’s name you are linking it to yourself, in the physical act of saying it, but also to the current topic of the conversation in your memory (“So, James, just what is it about fishing that makes you love it so much?”).

Second, you should try to link the name you have just learnt to something you already know. It doesn’t matter if the link is completely silly, it is just important that you find some connection to help the name stick in memory. For example, maybe the guy is called James, and your high school buddy was called James, and although this guy is wearing a blue shirt, high school James only ever wore black, so he’d never wear blue. It’s a silly made up association, but it can help you remember.

Finally, you need to try to link their name to something else about them. If it was me I’d grab the first thing to come to mind to bridge between the name and something I’ve learnt about them. For example, James is a sort of biblical name, you get the King James bible after all, and James begins with J, just like Jonah in the bible who was swallowed by the whale, and this James likes fishing, but I bet he prefers catching them to being caught by them.

It doesn’t matter if the links you make are outlandish or weird. You don’t have to tell anyone. In fact, probably it is best if you don’t tell anyone, especially your new friend! But the links will help create a web of association in your memory, and that web will stop their name falling out of your mind when it is time to introduce them to someone else.

And if you’re sceptical, try this quick test. I’ve mentioned three names during this article. I bet you can remember James, who isn’t Jonah. And probably you can remember cousin Mary (or at least what kind of ice cream she likes). But can you remember the name of the reader who asked the question? That’s the only one I introduced without elaborating some connections around the name, and that’s why I’ll bet it is the only one you’ve forgotten.

This is my BBC Future column from last week. The original is here.

No more Type I/II error confusion

Type I and Type II errors are, respectively, when you allow a statistical test to convince you of a false effect, and when you allow a statistical test to convince you to dismiss a true effect. Despite being fundamentally important concepts, they are terribly named. Who can ever remember which way around the two errors go? Well, now I can, thanks to a comment from a friend I thought so useful I made it into a picture:

[Image: the boy who cried wolf]
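To make the two definitions concrete, here is a small simulation sketch (the coin-flip setup and the cutoffs are my own illustration, not from the original comment). A fair coin plays the role of a true null hypothesis: crying “biased!” about it is a Type I error, while waving through a genuinely loaded coin is a Type II error.

```python
import random

random.seed(42)

def looks_biased(p_heads, n_flips=100, lo=40, hi=61):
    """Declare a coin 'biased' if its head count falls outside [lo, hi)."""
    heads = sum(random.random() < p_heads for _ in range(n_flips))
    return heads < lo or heads >= hi

trials = 10_000
# Type I: a FAIR coin is declared biased -- crying wolf when there is no wolf.
type_1 = sum(looks_biased(0.5) for _ in range(trials)) / trials
# Type II: a loaded coin (60% heads) is declared fair -- dismissing a real wolf.
type_2 = sum(not looks_biased(0.6) for _ in range(trials)) / trials
print(f"Type I rate ~ {type_1:.2f}, Type II rate ~ {type_2:.2f}")
```

Note that with these cutoffs the Type II rate is much larger than the Type I rate: small real effects are easy to miss, which matters for the underpowered studies discussed elsewhere on this blog.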

A gold-standard study on brain training

The headlines

The Telegraph: Alzheimer’s disease: Online brain training “improves daily lives of over-60s”

Daily Mail: The quiz that makes over-60s better cooks: Computer brain games ‘stave off mental decline’

Yorkshire Post: Brain training study is “truly significant”

The story

A new trial shows the benefits of online ‘brain training’ exercises including improvements in everyday tasks, such as shopping, cooking and managing home finances.

What they actually did

A team led by Clive Ballard of King’s College London recruited people to a trial of online “brain training” exercises. Nearly 7,000 people over the age of 50 took part, and they were randomly assigned to one of three groups. One group did reasoning and problem solving tasks. A second group practised cognitive skills tasks, such as memory and attention training, and a third control group did a task which involved looking for information on the internet.

After six months, the reasoning and cognitive skills groups showed benefits compared with the control group. The main measure of the study was participants’ own reports of their ability to cope with daily activities. This was measured using something called the instrumental activities of daily living scale. (To give an example, you get a point if you are able to prepare your meals without assistance, and no points if you need help). The participants also showed benefits in short-term memory, judgements of grammatical accuracy and ability to learn new words.

Many of these benefits looked as if they accrued after just three months of regular practice, completing an average of five sessions a week. The benefits also seemed to affect those who went into the trial with the lowest performance, suggesting that such exercises may benefit those who are at risk of mild cognitive impairment (a precursor to dementia).

How plausible is this?

This is gold-standard research. The study was designed to the highest standards, as would be required if you were testing a new drug: a double-blind randomised controlled trial in which participants were assigned at random to the different treatment groups, and weren’t told which group they were in (nor what the researchers’ theory was). Large numbers of people took part, meaning that the study had a reasonable chance of detecting an effect of the treatment if it was there. The study design was also pre-registered on a database of clinical trials, meaning that the results couldn’t be buried if they turned out to be different from what the researchers (or funders) wanted, and the researchers declared in advance what their analysis would focus on.

So, overall, this is serious evidence that cognitive training exercises may bring some benefits, not just on similar cognitive tasks, but also on the everyday activities that are important for independent living among the older population.

Tom’s take

This kind of research is what “brain training” needs. Too many people – including those who just want to make some money – have leapt on the idea without the evidence that these kinds of tasks can benefit anything other than performance on similar tasks. Because the evidence for broad benefits of cognitive training exercises is sparse, this study makes an important contribution to the supporters’ camp, although it is far from settling the matter.

Why might you still be sceptical? Well, there are some potential flaws in this study. It is useful to speculate on the effect these flaws might have had, even if only as an exercise to draw out the general lessons for interpreting this kind of research.

First up is the choice of control task. The benefits of the exercises tested in this research are only relative benefits compared with the scores of those who carried out the control task. If a different control task had been chosen, maybe the benefits wouldn’t look so large. For example, we know that physical exercise has long-term and profound benefits for cognitive function. If the control group had been going for a brisk walk every day, maybe the relative benefits of these computerised exercises would have vanished.

Or just go for a walk

Another possible distortion of the figures could have arisen as a result of people dropping out during the course of the trial. If people who were likely to score well were more likely to drop out of the control group (perhaps because it wasn’t challenging enough), then this would leave poor performers in the control group and so artificially inflate the relative benefits of being in the cognitive exercises group. More people did drop out of the control group, but it isn’t clear from reading the paper if the researchers’ analysis took steps to account for the effect this might have had on the results.

And finally, the really impressive result from this study is the benefit for the activities of daily living scale (the benefit for other cognitive abilities perhaps isn’t too surprising). This suggests a broad benefit of the cognitive exercises, something which other studies have had difficulty showing. However, it is important to note that this outcome was based on a self-report by the participants. There wasn’t any independent or objective verification, meaning that something as simple as people feeling more confident about themselves after having completed the study could skew the results.

None of these three possible flaws means we should ignore this result, but questions like these mean that we will need follow-up research before we can be certain that cognitive training benefits mental function in older adults.

For now, the implications of the current state of brain training research are:

Don’t pay money for any “brain training” programme. There isn’t any evidence that commercially available exercises have any benefit over the kinds of tasks and problems you can access for free.

Do exercise. Your brain is a machine that runs on blood, and it is never too late to improve the blood supply to the brain through increased physical activity. How long have you been on the computer? Could it be time for a brisk walk round the garden or to the shops? (Younger people, take note: exercise in youth benefits mental function in older age.)

A key feature of this study was that the exercises in the treatment group got progressively more difficult as the participants practised. The real benefit may not be from these exercises as such, but from continually facing new mental challenges. So, whatever your hobbies, perhaps – just perhaps – make sure you are learning something new as well as enjoying whatever you already know.

Read more

The original study: The Effect of an Online Cognitive Training Package in Healthy Older Adults: An Online Randomized Controlled Trial

Oliver Burkeman writes: http://www.theguardian.com/science/2014/jan/04/can-i-increase-my-brain-power

The New Yorker (2013): http://www.newyorker.com/tech/elements/brain-games-are-bogus

The Conversation

This article was originally published on The Conversation. Read the original article.

Web of illusion: how the internet affects our confidence in what we know

The internet can give us the illusion of knowledge, making us think we are smarter than we really are. Fortunately, there may be a cure for our arrogance, writes psychologist Tom Stafford.

The internet has a reputation for harbouring know-it-alls. Commenters on articles, bloggers, even your old school friends on Facebook all seem to swell with confidence in their understanding of exactly how the world works (and they are eager to share that understanding with everyone and anyone who will listen). Now, new research reveals that just having access to the world’s information can induce an illusion of overconfidence in our own wisdom. Fortunately the research also shares clues as to how that overconfidence can be corrected.

Specifically, we are looking at how the internet affects our thinking about what we know, a topic psychologists call metacognition. When you know you are boasting, you are being dishonest, but you haven’t made any actual error in estimating your ability. If you sincerely believe you know more than you do then you have made an error. The research suggests that an illusion of understanding may actually be incredibly common, and that this metacognitive error emerges in new ways in the age of the internet.

In a new paper, Matt Fisher of Yale University considers a particular type of thinking known as transactive memory, which is the idea that we rely on other people and other parts of the world – books, objects – to remember things for us. If you’ve ever left something you needed for work by the door the night before, then you’ve been using transactive memory.

Part of this phenomenon is the tendency to then confuse what we really know in our personal memories, with what we have easy access to, the knowledge that is readily available in the world, or with which we are merely familiar without actually understanding in depth. It can feel like we understand how a car works, the argument goes, when in fact we are merely familiar with making it work. I press the accelerator and it goes forward, neglecting to realise that I don’t really know how it goes forward.

Fisher and colleagues were interested in how this tendency interacts with the internet age. They asked people to provide answers to factual questions, such as “Why are there time zones?”. Half of the participants were instructed to look up the answers on the internet before answering; half were told not to look up the answers on the internet. Next, all participants were asked how confidently they could explain the answers to a second series of questions (separate, but also factual, questions such as “Why are cloudy nights warmer?” or “How is vinegar made?”).

Sure enough, people who had just been searching the internet for information were significantly more confident about their understanding of the second set of questions. Follow-up studies confirmed that these people really did think the knowledge was theirs: they were still more confident if asked to indicate their response on a scale representing different levels of understanding with pictures of brain-scan activity (a ploy that was meant to emphasise that the information was there, in their heads). The confidence effect even persisted when the control group were provided answer material and the internet-search group were instructed to search for a site containing the exact same answer material. Something about actively searching for information on the internet specifically generated an illusion that the knowledge was in the participants’ own heads.

If the feeling of controlling information generates overconfidence in our own wisdom, it might seem that the internet is an engine for turning us all into bores. Fortunately another study, also published this year, suggests a partial cure.

Amanda Ferguson of the University of Toronto and colleagues ran a similar study, except the set-up was in reverse: they asked participants to provide answers first and, if they didn’t know them, search the internet afterwards for the correct information (in the control condition participants who said “I don’t know” were let off the hook and just moved on to the next question). In this set-up, people with access to the internet were actually less willing to give answers in the first place than people in the no-internet condition. For these guys, access to the internet shut them up, rather than encouraging them to claim that they knew it all. Looking more closely at their judgements, it seems the effect wasn’t simply that the fact-checking had undermined their confidence. Those who knew they could fall back on the web to check the correct answer didn’t report feeling less confident within themselves, yet they were still less likely to share the information and show off their knowledge.

So, putting people in a position where they could be fact-checked made them more cautious in their initial claims. The implication I draw from this is that one way of fighting a know-it-all, if you have the energy, is to let them know that they are going to be thoroughly checked on whether they are right or wrong. It might not stop them researching a long answer with the internet, but it should slow them down, and diminish the feeling that just because the internet knows some information, they do too.

It is frequently asked whether the internet is changing how we think. The answer, this research shows, is that the internet is giving new fuel to the way we’ve always thought. It can be a cause of overconfidence, when we mistake the boundary between what we know and what is available to us over the web, and it can be a cause of uncertainty, when we anticipate that we’ll be fact-checked using the web on the claims we make. Our tendencies to overestimate what we know, to use readily available information as a substitute for our own knowledge, and to worry about being caught out are all constants in how we think. The internet slots into this tangled cognitive ecosystem, from which endless new forms evolve.

This is my BBC Future column from earlier this week. The original is here.

Statistical fallacy impairs post-publication mood

No scientific paper is perfect, but a recent result on the effect of mood on colour perception is getting a particularly rough ride post-publication. Thorstenson and colleagues published their paper this summer in Psychological Science, claiming that people who were sad had impaired colour perception along the blue-yellow colour axis but not along the red-green colour axis. Pubpeer – a site where scholars can anonymously discuss papers after publication – has a critique of the paper, which observes that the paper commits a known flaw in its analysis.

The flaw, anonymous comments suggest, is that a difference between the two types of colour perception is claimed, but this isn’t actually tested by the paper – instead it shows that mood significantly affects blue-yellow perception, but does not significantly affect red-green perception. If there is enough evidence that one effect is significant, but not enough evidence that the second is significant, that doesn’t mean that the two effects are different from each other. Analogously, if you can prove that one suspect was present at a crime scene, but can’t prove the other was, that doesn’t mean that you have proved that the two suspects were in different places.

This mistake in analysis – which is far from unique to this paper – is discussed in a classic 2011 paper by Nieuwenhuis and colleagues: Erroneous analyses of interactions in neuroscience: a problem of significance. At the time of writing the sentiment on Pubpeer is that the paper should be retracted – in effect striking it from the scientific record.
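The fallacy is easy to reproduce by simulation. The sketch below uses made-up numbers, not the paper’s data: two measures are given an identical true effect, yet a significance test will still routinely flag one and not the other, simply through sampling noise.

```python
import random
import statistics

random.seed(1)

def significant(sample, z_crit=1.96):
    """Crude large-sample z-test: is the sample mean reliably above zero?"""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return m / se > z_crit

def experiment(n=40, true_effect=0.35):
    """Two measures with the SAME underlying effect size (invented numbers)."""
    blue_yellow = [random.gauss(true_effect, 1) for _ in range(n)]
    red_green = [random.gauss(true_effect, 1) for _ in range(n)]
    return significant(blue_yellow), significant(red_green)

runs = 5000
mismatches = sum(a != b for a, b in (experiment() for _ in range(runs)))
print(f"'one significant, one not' in {mismatches / runs:.0%} of runs")
```

With these (illustrative) parameters the power of each test is only moderate, so roughly half of all simulated experiments show the “one significant, one not” pattern despite identical true effects. Concluding a difference between the measures requires a direct test of the interaction, which is exactly the point of the Nieuwenhuis paper.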

With commentary like this, you can see why Pubpeer has previously been the target of legal action by aggrieved researchers who feel the site unfairly maligns their work.

(h/t to Daniël Lakens and jjodx on twitter)

UPDATE 5/11/15: It’s been retracted.