A Million-Core Silicon Brain

For those of you who like to get your geek on (and rumour has it, a few can be found reading this blog), the Computerphile channel has just posted a video interview with Steve Furber of the Human Brain Project, who talks about the custom hardware that’s going to run their neural net simulations.

Furber is better known as one of the designers of the BBC Micro and the ARM microprocessor, but has more recently been involved in the SpiNNaker project, which is the basis of the Neuromorphic Computing Platform for the Human Brain Project.

Fascinating interview with a man who clearly likes the word toroid.

Spike activity 21-08-2015

Quick links from the past week in mind and brain news:

Be wary of studies that link mental illness with creativity or high IQ. Good piece in The Guardian.

Nautilus has a piece on the lost dream journal of neuroscientist Santiago Ramon y Cajal.

Video games are tackling mental health with mixed results. Great piece in Engadget.

The Globe and Mail asks how we spot the next ‘lone wolf’ terrorist and looks at some of the latest research which has changed what people look for.

A third of young Americans say they aren’t 100% heterosexual according to a YouGov survey. 4% class themselves as ‘completely homosexual’, a further 3% as ‘predominantly homosexual’.

National Geographic reports on a study suggesting that three-quarters of handprints in ancient cave art were left by women.

Psychiatry is reinventing itself thanks to advances in biology says NIMH Chief Thomas Insel in New Scientist. Presumably a very slow reinvention that doesn’t seem to change treatment very much.

Wired report that IBM have a close-to-production neuromorphic chip. Big news.

Most people are resilient after trauma. Good piece in BBC Future.

Psychological science in intelligence service operations

CC Licensed Photo by Flickr user nolifebeforecoffee. Click for source.

I’ve got an article in today’s Observer about how British intelligence services are applying psychological science in their deception and infiltration operations.

Unfortunately, the online version has been given a headline which is both frivolous and wrong (“Britain’s ‘Twitter troops’ have ways of making you think…”). The ‘Twitter troops’ name was given to the UK Army’s ‘influence operations specialists’, the 77th Brigade, which the article is not focused on and which I mention only to note their frivolous nickname.

Actually, the piece focuses on GCHQ’s Joint Threat Research Intelligence Group or JTRIG whose job it is to “discredit, disrupt, delay, deny, degrade, and deter” opponents mainly through online deception operations.

Some of the Snowden leaks have specifically focused on the psychological theory and evidence-base behind their operations which is exactly what I discuss in the article.

Controversially, not only were terrorists and hostile states listed as opponents who could pose a national security threat, but also domestic criminals and activist groups. JTRIG’s work seems primarily to involve electronic communications, and can include practical measures such as hacking computers and flooding phones with junk messages. But it also attempts to influence people socially through deception, infiltration, mass persuasion and, occasionally, it seems, sexual “honeypot” stings. The Human Science Operations Cell appears to be a specialist section of JTRIG dedicated to providing psychological support for this work.

It’s a fascinating story and there’s more at the link below.

Link to article on psychological science in intelligence service ops.

Spike activity 14-07-2015

Quick links from the past week in mind and brain news:

Trends and fashions in the science of neurotransmitters. Neuroskeptic looks at this season’s hottest brain chemicals.

MIT Tech Review has an interesting piece on the new wave of hearing aids designed to enhance normal hearing.

Sorry, Paleo diet aficionados: carbs were probably essential to our evolving brains in early human history. Good piece in The New York Times.

National Geographic has a piece on how some isolated tribes in the Amazon are initiating contact and how it’s causing a rethink of existing policies.

Brain imaging research is often wrong. This researcher wants to change that. Great interview with Russ Poldrack in Vox.

Neurocritic asks: Will machine learning create new diagnostic categories, or just refine the ones we already have?

The Obscure Neuroscience Problem That’s Plaguing VR. Interesting Wired piece on the physiological challenges of virtual reality.

The Atlantic has a thought-provoking article on ‘Learning Empathy From the Dead’ – the effects of corpse dissection on medical students’ empathy.

The amygdala is NOT the brain’s fear center. Joseph LeDoux sings it from his new blog I Got a Mind to Tell You.

Good edition of ABC Radio’s Philosopher’s Zone on dreaming.

Postmortemgirl has a great guide to postmortem brain studies in mental health.

Digital tech, the BMJ, and The Baroness

CC Licensed Photo by Flickr user World Bank Photo Collection. Click for source.

The British Medical Journal just published an editorial by me, Dorothy Bishop and Andrew Przybylski about the debate over digital technology and young people that focuses on Susan Greenfield’s mostly, it has to be said, unhelpful contributions.

Through appearances, interviews, and a recent book Susan Greenfield, a senior research fellow at Lincoln College, Oxford, has promoted the idea that internet use and computer games can have harmful effects on the brain, emotions, and behaviour, and she draws a parallel between the effects of digital technology and climate change. Despite repeated calls for her to publish these claims in the peer reviewed scientific literature, where clinical researchers can check how well they are supported by evidence, this has not happened, and the claims have largely been aired in the media. As scientists working in mental health, developmental neuropsychology, and the psychological impact of digital technology, we are concerned that Greenfield’s claims are not based on a fair scientific appraisal of the evidence, often confuse correlation for causation, give undue weight to anecdote and poor quality studies, and are misleading to parents and the public at large.

It continues from there.

I was also on Channel 4 News last night, debating The Baroness, and they seem to put some of their programme online as YouTube clips, so if our section turns up online I’ll post it here.

UPDATE: It disappeared on the Channel 4 site but it seems to be archived on Yahoo of all places. Either way you can now view it here.

Greenfield was lovely, as on the previous occasion we met. Actually, she didn’t remember meeting me before, despite the fact she specifically invited me to debate her on this topic at an All-Party Parliamentary Group in 2010, but I suspect that what was a markedly atypical experience for me was probably pretty humdrum for her.

Either way, she trotted out the same justifications. ‘I’ve written a book.’ ‘It contains 250 references.’ ‘The internet could trigger autistic-like traits.’

Dorothy Bishop has had a look at those 250 references and they’re not very convincing, but actually our main message is shared by pretty much everyone who’s debated Greenfield over the years: describe your claims in a scientific paper and submit them to a peer-reviewed journal so they can be examined through the rigour of the scientific process.

Oddly, Greenfield continues to publish peer-reviewed papers from her work on the neuroscience of Alzheimer’s disease but refuses to do so for her claims on digital technology and the brain.

It’s a remarkable case of scientific double standards and the public really deserves better.

Link to ‘The debate over digital technology and young people’ in the BMJ.

So the video of my debate with Greenfield is up online but it seems like you can’t embed it so you’ll have to follow this link to watch it.

Watching it back, one thing really stands out: Greenfield’s bizarre and continuing insistence that using the internet could ‘trigger’ autistic-like symptoms in young people, on the grounds that most kids with autism are not diagnosed until age five and many use computers before then.

This shows a fundamental misunderstanding of what autism is, and how diagnosis is done. Autism is diagnosed both on presentation (how you are at the time) and history (how you have been throughout your life) and to get a diagnosis of autism spectrum disorder you have to demonstrate both. So by definition, being ‘turned autistic’ at age 4 or 5 doesn’t even make sense diagnostically, let alone scientifically, as we know autism is a life-long neurodevelopmental condition.

Intuitions about free will and the brain

Libet’s classic experiment on the neuroscience of free will tells us more about our intuition than about our actual freedom

It is perhaps the most famous experiment in neuroscience. In 1983, Benjamin Libet sparked controversy with his demonstration that our sense of free will may be an illusion, a controversy that has only increased ever since.

Libet’s experiment has three vital components: a choice, a measure of brain activity and a clock.

The choice is to move either your left or right arm. In the original version of the experiment this is done by flicking your wrist; in some versions of the experiment it is to raise your left or right finger. Libet’s participants were instructed to “let the urge [to move] appear on its own at any time without any pre-planning or concentration on when to act”. The precise time at which you move is recorded from the muscles of your arm.

The measure of brain activity is taken via electrodes on the scalp. When the electrodes are placed over the motor cortex (roughly along the middle of the head), the electrical signal differs between the right and left sides as you plan and execute a movement with either your left or right hand.

The clock is specially designed to allow participants to discern sub-second changes. This clock has a single dot, which travels around the face of the clock every 2.56 seconds. This means that by reporting position you are reporting time. If we assume you can report position accurately to within a 5-degree angle, that means you can use this clock to report time to within 36 milliseconds – that’s 36 thousandths of a second.
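As a quick check on that arithmetic, here is the calculation in a few lines of code (the 2.56-second revolution and 5-degree accuracy figures are the ones given above):

```python
# Timing resolution of Libet's clock, given the figures above.
revolution_s = 2.56          # one full sweep of the dot, in seconds
accuracy_deg = 5             # assumed accuracy of position reports, in degrees

resolution_ms = (accuracy_deg / 360) * revolution_s * 1000
print(f"{resolution_ms:.1f} ms")  # ~35.6 ms, i.e. roughly 36 thousandths of a second
```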

Putting these ingredients together, Libet took one extra vital measurement. He asked participants to report, using the clock, exactly the point when they made the decision to move.

Physiologists had known for decades that the electrical signals in your brain change a fraction of a second before you actually move. So it was in Libet’s experiment: a fraction of a second before participants moved, a reliable change could be recorded using the electrodes. But the explosive result was when participants reported deciding to move. This occurred in between the electrical change in the brain and the actual movement. This means, as surely as effect follows cause, that the feeling of deciding couldn’t be a timely report of whatever was causing the movement. The electrode recording showed that the decision had – in some sense – already been made before the participants were aware of having made it. The brain signals were changing before the subjective experience of taking the decision occurred.

Had participants’ brains already made the decision? Was the feeling of choosing just an illusion? Controversy has raged ever since. There is far more to the discussion about neuroscience and free will than this one experiment, but its simplicity has allowed it to capture the imagination of many who think our status as biological creatures limits our free will, as well as those who argue that free will survives the challenge of our minds being firmly grounded in our biological brains.

Part of the appeal of the Libet experiment is due to two pervasive intuitions we have about the mind. Without these intuitions the experiment doesn’t seem so surprising.

The first intuition is the feeling that our minds are a separate thing from our physical selves – a natural dualism that pushes us to believe that the mind is a pure, abstract place, free from biological constraints. A moment’s thought about the last time you were grumpy because you were hungry shatters this illusion, but I’d argue that it is still a persistent theme in our thinking. Why else would we be in the least surprised that it is possible to find neural correlates of mental events? If we really believed, in our heart of hearts, that the mind is based in the brain, then we would know that every mental change must have a corresponding change in the brain.

The second pervasive intuition, which makes us surprised by the Libet experiment, is the belief that we know our own minds. This is the belief that our subjective experience of making decisions is an accurate report of how that decision is made. The mind is like a machine – as long as it runs right, we are happily ignorant of how it works. It is only when mistakes or contradictions arise that we’re drawn to look under the hood: Why didn’t I notice that exit? How could I forget that person’s name? Why does the feeling of deciding come after the brain changes associated with decision making?

There’s no reason to think that we are reliable reporters of every aspect of our minds. Psychology, in fact, gives us lots of examples of how we often get things wrong. The feeling of deciding in the Libet experiment may be a complete illusion – maybe the real decision really is made ‘by our brains’ somehow – or maybe it is just that the feeling of deciding is delayed from our actual deciding. Just because we erroneously report the timing of the decision doesn’t mean we weren’t intimately involved in it, in whatever meaningful sense that can be.

More is written about the Libet experiment every year. It has spawned an academic industry investigating the neuroscience of free will. There are many criticisms and rebuttals, with debate raging about how, and whether, the experiment is relevant to the freedom of our everyday choices. Even supporters of Libet have to admit that the situation used in the experiment may be too artificial to be a direct model of real everyday choices. But the basic experiment continues to inspire discussion and provoke new thoughts about the way our freedom is rooted in our brains. And that, I’d argue, is due to the way it helps us confront our intuitions about the way the mind works, and to see that things are more complex than we instinctively imagine.

This is my latest column for BBC Future. The original is here. You may also enjoy this recent post on mindhacks.com Critical strategies for free will experiments

Critical strategies for free will experiments

Benjamin Libet’s experiment on the neuroscience of free will needs little introduction. (If you do need an introduction, it’s the topic of my latest column for BBC Future). His reports that the subjective feeling of making a choice only comes after the brain signals indicating a choice has been made are famous, and have produced controversy ever since they were published in the 1980s.

For a simple experiment, Libet’s paradigm admits of a large number of interpretations, which I think is an important lesson. Here are some common, and less common, critiques of the experiment:

The Disconnect Criticism

The choice required from Libet’s participants was trivial and inconsequential. Moreover, they were specifically told to make the choice without any reason (“let the urge [to move] appear on its own at any time without any pre-planning or concentration on when to act”). A common criticism is that this kind of choice has little to tell us about everyday choices which are considered, consequential, or which we actively try to involve ourselves in.

The timing criticism(s)

Dennett discusses how the original interpretation of the experiment assumes that the choosing self exists at a particular point and at a particular time – so, for example, maybe in some central ‘Cartesian Theatre’ in which information from the motor cortex and visual cortex comes together but which, crucially, does not have direct access to (say) the information about timing gathered by the visual cortex. Even for a freely choosing self, there will be timing delays as information on the clock time is ‘connected up’ with information on when the movement decision was made. These, Dennett argues, could produce the result Libet saw without indicating a fatal compromise for free choice.

My spin on this is that the Libet result shows, minimally, that we don’t accurately know the timing of our decisions, but inaccurate judgements about the timing of our decisions don’t mean that we don’t actually make the consequential decisions themselves.

Spontaneous activity

Aaron Schurger and colleagues have a nice paper in which they argue that Libet’s results can be explained by variations in spontaneous activity before actions are taken. They argue that the movement system is constantly experiencing sub-threshold variation in activity, so that at any particular point in time you are more or less close to performing any particular act. Participants in the Libet paradigm, asked to make a spontaneous act, take advantage of this variability – effectively lowering their threshold for action and waiting until the covert fluctuations are large enough to trigger a movement. Importantly, this reading weakens the link between the ‘onset’ of movements and the delayed subjective experience of making a movement. If the movement is triggered by random fluctuations (observable in the rise of the electrode signal) then there isn’t a distinct ‘decision to act’ in the motor system, so we can’t say that the subjective decision to act reliably comes afterwards.
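To make that concrete, here is a minimal toy simulation of the idea (this is not Schurger and colleagues’ actual model, and every parameter is made up): a leaky accumulator driven by noise alone, which triggers a ‘movement’ whenever spontaneous fluctuations happen to cross a lowered threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def wait_for_spontaneous_movement(threshold=0.25, leak=0.1, noise=0.1,
                                  dt=0.01, max_t=120.0):
    """Toy leaky accumulator driven by noise alone: a 'movement' fires
    whenever the spontaneous fluctuations happen to cross the threshold."""
    x, t = 0.0, 0.0
    while t < max_t:
        x += -leak * x * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= threshold:
            return t
    return np.nan

waits = [wait_for_spontaneous_movement() for _ in range(200)]
print(f"median wait until a 'spontaneous' act: {np.nanmedian(waits):.1f} s")
```

The point of the sketch is that there is no discrete ‘decision signal’ anywhere in the code: the act is triggered whenever the background fluctuations drift over the threshold.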

The ‘only deterministic on average’ criticism

The specific electrode signal which is used to time the decision to move in the brain is called the readiness potential (RP). Electrode readings are highly variable, so the onset of the RP is a statistical artefact, produced by averaging over many trials (40 in Libet’s case). This means we lose the ability to detect, trial-by-trial, the relation between the brain activity related to movement and the subjective experience. Libet reports this in his original paper [1] (‘only the average RP for the whole series could be meaningfully recorded’, p634). On occasion the subjective decision time (which Libet calls W) comes before the time of even the average RP, not after (p635: “instances in which individual W time preceded onset time of averaged RP numbered zero in 26 series [out of 36]” – which means that 28% of series saw at least one instance of W occurring before the RP).
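A toy simulation makes the averaging point concrete (purely an illustration: the waveform shape, noise level and units are invented): a slow ramp that is invisible in any single noisy trial becomes visible only once 40 trials are averaged, so the ‘onset’ is a property of the average rather than of any individual trial.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(-2.0, 0.0, 0.01)            # seconds; movement happens at t = 0

def trial(onset=-0.55, noise_sd=2.0):
    """Toy readiness potential: a slow ramp starting at `onset`, buried in noise."""
    ramp = np.where(t > onset, t - onset, 0.0)
    return ramp + rng.normal(0.0, noise_sd, t.size)

trials = np.array([trial() for _ in range(40)])   # 40 trials, as in a Libet series
average = trials.mean(axis=0)

# In a single trial the ramp (peak ~0.55) is swamped by noise (SD 2.0);
# averaging 40 trials cuts the noise by roughly sqrt(40), so the ramp shows up.
print(f"single-trial baseline noise SD:  {trials[0][t < -0.6].std():.2f}")
print(f"averaged-trace baseline noise SD: {average[t < -0.6].std():.2f}")
```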

The experiment showed strong reliability, but not complete reliability (the difference is described by Libet as ‘generally’ occurring and as being ‘fairly consistent’, p636). What happened next to Libet’s result illustrates a common trick of psychologists. A statistical pattern is discovered and then reality is described as if the pattern is the complete description: “The brain change occurs before the choice”.

Although such generalities are very useful, they are misleading if we forget that they are only true on average, not always true. I don’t think Libet’s experiment would have the imaginative hold it does if the result was summarised as “The brain change usually occurs before the choice”.

A consistent, but not universal, pattern in the brain before a choice has the flavour of a prediction, rather than a compulsion. Sure, before we make a choice there are antecedents in the brain – it would be weird if there weren’t – but since these don’t have any necessary consequence for what we choose, so what?

To my mind the demonstration that you can use fMRI to reproduce the Libet effect, but with brain signals changing up to 10 seconds before the movement (and with above-chance accuracy at predicting the movement made), only reinforces this point. We all believe that the mind has something to do with the brain, so finding patterns in the brain at one point which predict actions in the mind at a later point isn’t surprising. The fMRI result, and perhaps Libet’s experiment, rely as much on our false intuitions about dualism as on conclusively demonstrating anything new about free will.

Link: my column Why do we intuitively believe we have free will?

Fifty psychological terms to just, well, be aware of

CC Licensed Photo by Flickr user greeblie. Click for source.

Frontiers in Psychology has just published an article on ‘Fifty psychological and psychiatric terms to avoid’. These sorts of “here’s how to talk about” articles are popular but themselves can often be misleading, and the same applies to this one.

The article supposedly contains 50 “inaccurate, misleading, misused, ambiguous, and logically confused words and phrases”.

The first thing to say is that by recommending that people avoid certain words or phrases, the article is violating its own recommendations. That may seem like a trivial point but it isn’t when you’re giving advice about how to use language in scientific discussion.

It’s fine to use even plainly wrong terms to discuss how they’re used, the multiple meanings and misconceptions behind them. In fact, a lot of scientific writing does exactly this. When there are misconceptions that may cloud people’s understanding, it’s best to address them head on rather than avoid them.

Sometimes following the recommendations for ‘phrases to avoid’ would actually hinder this process.

For example, the piece recommends you avoid the term ‘autism epidemic’ as there is no good evidence that there is an actual epidemic. But this is not advice about language, it’s just an empirical point. According to this list, all the research that has used the term to discuss the actual evidence contrary to the popular idea should have avoided the term and presumably referred to it as ‘the concept that shall not be named’.

The article also recommends against using ‘ambiguous’ words but this recommendation would basically kill the English language as many words have multiple meanings – like the word ‘meaning’ for example – but that doesn’t mean you should avoid them.

If you’re a fan of pedantry you may want to go through the article and highlight where the authors have used other ambiguous psychological phrases (starter for 10, “memory”) and post it to some obscure corner of the internet.

Many of the recommendations also rely on you agreeing with the narrow definition and limits of use that the authors premise their argument on. Do you agree that “antidepressant medication” means that the medication has a selective and specific effect on depression and no other conditions – as the authors suggest? Or do you think this just describes a property of the medication? This is exactly how medication description works throughout medicine. Aspirin is an analgesic medication and an anti-inflammatory medication, as well as having other properties. No banning needed here.

And in fact, this sort of naming is just a property of language. If you talk about an ‘off-road vehicle’, and someone pipes up to tell you “actually, off-road vehicles can also go on-road so I recommend you avoid that description” I recommend you ignore them.

The same applies to many of the definitions in this list. The ‘chemical imbalance’ theory of depression is not empirically supported, so don’t claim it is, but feel free to use the phrase if you want to discuss this misconception. Some conditions genuinely do involve a chemical imbalance though – like the accumulation of copper in Wilson’s disease – so you can use the phrase accurately in this case, being aware of how it’s misused in other contexts. Don’t avoid it, just use it clearly.

With ‘Lie detector test’, it’s true that no accurate test has ever been devised to detect lies. But you may be writing about research which is trying to develop one or research that has tested the idea. ‘No difference between groups’ is fine if there is genuinely no difference in your measure between the groups (i.e. they both score exactly the same).

Some of the recommendations are essentially based on the premise that you ‘shouldn’t use the term except for how it was first defined or defined where we think is the authoritative source’. This is just daft advice. Terms evolve over time. Definitions shift and change. The article recommends against using ‘Fetish’ except for in its DSM-5 definition, despite the fact this is different to how it’s used commonly and how it’s widely used in other academic literature. ‘Splitting’ is widely used in a form to mean ‘team splitting’ which the article says is ‘wrong’. It isn’t wrong – the term has just evolved.

I think philosophers would be surprised to hear ‘reductionism’ is a term to be avoided – given the massive literature on reductionism. Similarly, sociologists might be a bit baffled by ‘medical model’ being a banned phrase, given the debates over it and, unsurprisingly, its meaning.

Some of the advice is just plain wrong. Don’t use “Prevalence of trait X”, says the article, because apparently prevalence only applies to things that are either present or absent and “not dimensionally distributed in the population, such as personality traits and intelligence”. But many traits are defined by cut-off scores along dimensionally defined constructs, making them categorical. If you couldn’t talk about prevalence in this way, we’d be unable to talk about the prevalence of intellectual disability (widely defined as involving an IQ of less than 70) or dementia – which is diagnosed by a cut-off score on dimensionally varying neuropsychological test performance.
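To see why a cut-off makes ‘prevalence’ perfectly meaningful for a dimensional trait, here is a toy calculation, assuming the conventional IQ scaling (mean 100, standard deviation 15):

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)   # conventional IQ scaling
prevalence = iq.cdf(70)             # proportion falling below the cut-off of 70
print(f"expected prevalence below IQ 70: {prevalence:.1%}")   # roughly 2.3%
```

The trait itself varies continuously, but the cut-off turns it into something you either meet or don’t, and a prevalence follows directly.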

Some of the recommended terms to avoid are probably best avoided in most contexts (“hard-wired”, “love molecule”) and some are inherently self-contradictory (“Observable symptom”, “Hierarchical stepwise regression”) but again, use them if you want to discuss how they’re used.

I have to say, the piece reminds me of Steven Pinker’s criticism of ‘language mavens’ who have come up with rules for their particular version of English which they decide others must follow.

To be honest, I think the Frontiers in Psychology article is well worth reading. It’s a great guide to how some concepts are used in different ways, but it’s not good advice for things to avoid.

The best advice is probably: communicate clearly, bearing in mind that terms and concepts can have multiple meanings and your audience may not be aware of which you want to communicate, so make an effort to clarify where needed.

Link to Frontiers in Psychology article.

Laughter as a window on the infant mind

What makes a baby laugh? The answer might reveal a lot about the making of our minds, says Tom Stafford.

What makes babies laugh? It sounds like one of the most fun questions a researcher could investigate, but there’s a serious scientific reason why Caspar Addyman wants to find out.

He’s not the first to ask this question. Darwin studied laughter in his infant son, and Freud formed a theory that our tendency to laugh originates in a sense of superiority. So we take pleasure in seeing another’s suffering – slapstick-style pratfalls and accidents being good examples – because it isn’t us.

The great psychologist of human development, Jean Piaget, thought that babies’ laughter could be used to see into their minds. If you laugh, you must ‘get the joke’ to some degree – a good joke is balanced in between being completely unexpected and confusing and being predictable and boring. Studying when babies laugh might therefore be a great way of gaining insight into how they understand the world, he reasoned. But although he proposed this in the 1940s, this idea remains to be properly tested. Despite the fact that some very famous investigators have studied the topic, it has been neglected by modern psychology.

Addyman, of Birkbeck, University of London, is out to change that. He believes we can use laughter to get at exactly how infants understand the world. He’s completed the world’s largest and most comprehensive survey of what makes babies laugh, presenting his initial results at the International Conference on Infant Studies, Berlin, last year. Via his website he surveyed more than 1000 parents from around the world, asking them questions about when, where and why their babies laugh.

The results are – like the research topic – heart-warming. A baby’s first smile comes at about six weeks, their first laugh at about three and a half months (although some took three times as long to laugh, so don’t worry if your baby hasn’t cracked its first cackle just yet). Peekaboo is a sure-fire favourite for making babies laugh (for a variety of reasons I’ve written about here), but tickling is the single most reported reason that babies laugh.

Importantly, from the very first chuckle, the survey responses show that babies are laughing with other people, and at what they do. The mere physical sensation of something being ticklish isn’t enough. Nor is it enough to see something disappear or appear suddenly. It’s only funny when an adult makes these things happen for the baby. This shows that way before babies walk, or talk, they – and their laughter – are social. If you tickle a baby they apparently laugh because you are tickling them, not just because they are being tickled.

What’s more, babies don’t tend to laugh at people falling over. They are far more likely to laugh when they fall over themselves, rather than when someone else does, or when other people are happy, rather than when they are sad or unpleasantly surprised. From these results, Freud’s theory (which, in any case, was developed from clinical interviews with adults rather than any rigorous formal study of actual children) looks dead wrong.

Although parents report that boy babies laugh slightly more than girl babies, both genders find mummy and daddy equally funny.

Addyman continues to collect data, and hopes that as the results become clearer he’ll be able to use his analysis to show how laughter tracks babies’ developing understanding of the world – how surprise gives way to anticipation, for example, as their ability to remember objects comes online.

Despite the scientific potential, baby laughter is, as a research topic, “strangely neglected”, according to Addyman. Part of the reason is the difficulty of making babies laugh reliably in the lab, although he plans to tackle this in the next phase of the project. But partly the topic has been neglected, he says, because it isn’t viewed as a subject for ‘proper’ science to look into. This is a prejudice Addyman hopes to overturn – for him, the study of laughter is certainly no joke.

This is my BBC Future column from Tuesday. The original is here. If you are a parent you can contribute to the science of how babies develop at Dr Addyman’s babylaughter.net (specialising in laughter) or at babylovesscience.com (which covers humour as well as other topics).

Spike activity 24-07-2015

Quick links from the past week in mind and brain news:

Why does the concept of ‘schizophrenia’ still persist? Great post from Psychodiagnosticator.

Nature reviews two new movies on notorious psychology experiments: the Stanford Prison Experiment and Milgram’s obedience experiments.

Can the thought of money make people more conservative? Another social priming effect bites the dust; Neuroskeptic has a great analysis.

The Psychologist has a transcript of a recent ‘teenagers debunked’ talk at the Latitude Festival.

Oliver Sacks’s excellent autobiography On The Move, serialised on BBC Radio 4. Streaming only, online for a month only, but definitely worth it.

Science reports a new study finding that the ‘rise in autism’ is likely due to diagnostic substitution as intellectual disability diagnoses have fallen by the same amount.

Great piece in the New England Journal of Medicine on placebo effects in medicine.

The New York Times has an op-ed on ‘Psychiatry’s Identity Crisis’.

Brain Crash is an innovative online documentary from the BBC where you have to piece together a car crash and brain injury from other people’s memories.

Gamasutra has an absolutely fascinating piece on innovative behavioural approaches to abusive gamers.

Are online experiment participants paying attention?

Online testing is sure to play a large part in the future of psychology. Using Mechanical Turk or other crowdsourcing sites for research, psychologists can quickly and easily gather data for any study where the responses can be provided online. One concern, however, is that online samples may be less motivated to pay attention to the tasks they are participating in. Not only is nobody watching how they do these online experiments, the whole experience is framed as a work-for-cash gig, so there is pressure to complete any activity as quickly and with as little effort as possible. To the extent that online participants are satisficing or skimping on their attention, can we trust the data?

A newly submitted paper uses data from the Many Labs 3 project, which recruited over 3000 participants from both online and university campus samples, to test the idea that online samples are different from the traditional offline samples used by academic psychologists.

The findings strike a note of optimism, if you’re into online testing (perhaps less so if you use traditional university samples):

Mechanical Turk workers report paying more attention and exerting more effort than undergraduate students. Mechanical Turk workers were also more likely to pass an instructional manipulation check than undergraduate students. Based on these results, it appears that concerns over participant inattentiveness may be more applicable to samples recruited from traditional university participant pools than from Mechanical Turk

This fits with previous reports showing high consistency when classic effects are tested online, and with reports that satisficing may have been very high in offline samples – we just weren’t testing for it.

However, an issue I haven’t seen discussed is whether, because of the relatively small pool of participants taking experiments on MTurk, online participants have an opportunity to get familiar with typical instructional manipulation checks (AKA ‘catch questions’, which are designed to check if you are paying attention). If online participants adapt to our manipulation checks, then the very experiments which set out to test if they are paying more attention may not be reliable.
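For readers who haven’t met one, an instructional manipulation check typically looks something like the sketch below (an invented item, not one taken from the paper under discussion): the instructions quietly tell you to ignore the obvious question, so only attentive readers ‘pass’.

```python
# An invented instructional manipulation check ('catch question'), roughly in the
# style used in online studies: the instructions override the apparent question.
imc_item = {
    "instructions": ("To show you are reading carefully, please ignore the question "
                     "below, select 'Other', and type 'I read the instructions'."),
    "question": "Which of these sports do you follow most closely?",
    "options": ["Football", "Basketball", "Tennis", "Other"],
}

def passed_check(selected: str, free_text: str) -> bool:
    """A participant 'passes' only by following the instructions, not the question."""
    return selected == "Other" and "read the instructions" in free_text.lower()

print(passed_check("Other", "I read the instructions"))  # True
print(passed_check("Tennis", ""))                        # False: likely satisficing
```

The worry raised above is precisely that frequent MTurk workers may have seen items like this many times before, so passing them stops being a clean measure of attention.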

Link: new paper – Graduating from Undergrads: Are Mechanical Turk Workers More Attentive than Undergraduate Participants?

This paper provides a useful overview: Conducting perception research over the internet: a tutorial review

Conspiracy theory as character flaw

Philosophy professor Quassim Cassam has a piece in Aeon arguing that conspiracy theorists should be understood in terms of the intellectual vices. It is a dead-end, he says, to try to understand the reasons someone gives for believing a conspiracy theory. Consider someone called Oliver who believes that 9/11 was an inside job:

Usually, when philosophers try to explain why someone believes things (weird or otherwise), they focus on that person’s reasons rather than their character traits. On this view, the way to explain why Oliver believes that 9/11 was an inside job is to identify his reasons for believing this, and the person who is in the best position to tell you his reasons is Oliver. When you explain Oliver’s belief by giving his reasons, you are giving a ‘rationalising explanation’ of his belief.

The problem with this is that rationalising explanations take you only so far. If you ask Oliver why he believes 9/11 was an inside job he will, of course, be only too pleased to give you his reasons: it had to be an inside job, he insists, because aircraft impacts couldn’t have brought down the towers. He is wrong about that, but at any rate that’s his story and he is sticking to it. What he has done, in effect, is to explain one of his questionable beliefs by reference to another no less questionable belief.

So the problem is not their beliefs as such, but why the person came to have the whole set of (misguided) beliefs in the first place. The way to understand conspiracists is in terms of their intellectual character, Cassam argues – the vices and virtues that guide us as thinking beings.

A problem with this account is that – looking at the current evidence – character flaws don’t seem that strong a predictor of conspiracist beliefs. The contrast is with the factors that have demonstrable influence on people’s unusual beliefs. For example, we know that social influence and common cognitive biases have a large, and measurable, effect on what we believe. The evidence isn’t so good on how intellectual character traits such as closed/open-mindedness, skepticism/gullibility are constituted and might affect conspiracist beliefs. That could be because the personality/character trait approach is inherently limited, or just that there is more work to do. One thing is certain, whatever the intellectual vices are that lead to conspiracy theory beliefs, they are not uncommon. One study suggested that 50% of the public endorse at least one conspiracy theory.

Link: Bad Thinkers by Quassim Cassam

Paper on personality and conspiracy theories: Unanswered questions: A preliminary investigation of personality and individual difference predictors of 9/11 conspiracist beliefs

Paper on widespread endorsement of conspiracy theories: Conspiracy Theories and the Paranoid Style(s) of Mass Opinion

Previously on Mindhacks.com That’s what they want you to believe

And as a side note, this view that the problem with conspiracy theorists isn’t the beliefs themselves helps explain why throwing facts at them doesn’t help – better to highlight the fallacies in how they are thinking.

Spike activity 13-07-2015

A slightly belated Spike Activity to capture some of the responses to the APA report plus quick links from the past week in mind and brain news:

APA makes a non-apology on Twitter and gets panned in response.

“the organization’s long-standing ethics director, Stephen Behnke, had been removed from his position as a result of the report and signaled that other firings or sanctions could follow” according to the Washington Post.

Psychologist accused of enabling US torture backed by former FBI chief, reports The Guardian. The wrangling begins.

PsychCentral editor John Grohol resigns from the APA in protest at the ethical failings.

Remarkable comments from long-time anti-torture campaigners Stephen Soldz and Steven Reisner made to a board meeting of the APA: “I see that some of the people who need to go are in this room. That in itself tells me that you don’t really yet understand the seriousness of your situation.”

European Federation of Psychology Associations releases statement on APA revelations: “Interrogations are a NO-GO zone for psychologists” – which seems to confuse interrogations, which can be done ethically and benefit from psychological input, and torture, which cannot.

Jean Maria Arrigo, the psychologist who warned of torture collusion and was subjected to a smear campaign is vindicated by the report, reports The Guardian.

And now on to more pleasant, non-torture, non-complete institutional breakdown in ethical responsibility news…

What It’s Like to Be Profoundly ‘Face-Blind’. Interesting piece from the Science of Us.

Wired reports that Bitcoins can be ‘stolen from your brain’. A bit of an exaggeration but a fascinating story nonetheless.

Could Travelling Waves Upset Cognitive Neuroscience? asks Neuroskeptic.

The New Yorker has a great three-part series on sleep and sleeplessness.

Robotic shelves! MIT Tech Review has the video. To the bunkers!

APA facilitated CIA torture programme at highest levels

The long-awaited independent report, commissioned by the American Psychological Association, into the role of the organisation in the CIA’s torture programme has found direct collusion at the highest levels of the APA to ensure psychologists could participate in abusive interrogation practices.

Reporter James Risen, who has been chasing the story for some time, revealed the damning report and its conclusions in an article for The New York Times, but the text of the 524-page report more than speaks for itself. From page 9:

Our investigation determined that key APA officials, principally the APA Ethics Director joined and supported at times by other APA officials, colluded with important DoD [Department of Defense] officials to have APA issue loose, high-level ethical guidelines that did not constrain DoD in any greater fashion than existing DoD interrogation guidelines. We concluded that APA’s principal motive in doing so was to align APA and curry favor with DoD. There were two other important motives: to create a good public-relations response, and to keep the growth of psychology unrestrained in this area.

We also found that in the three years following the adoption of the 2005 PENS [Psychological Ethics and National Security] Task Force report as APA policy, APA officials engaged in a pattern of secret collaboration with DoD officials to defeat efforts by the APA Council of Representatives to introduce and pass resolutions that would have definitively prohibited psychologists from participating in interrogations at Guantanamo Bay and other U.S. detention centers abroad. The principal APA official involved in these efforts was once again the APA Ethics Director, who effectively formed an undisclosed joint venture with a small number of DoD officials to ensure that APA’s statements and actions fell squarely in line with DoD’s goals and preferences. In numerous confidential email exchanges and conversations, the APA Ethics Director regularly sought and received pre-clearance from an influential, senior psychology leader in the U.S. Army Special Operations Command before determining what APA’s position should be, what its public statements should say, and what strategy to pursue on this issue.

The report is vindication for the long-time critics of the APA who have accused the organisation of a deliberate cover-up of its role in the CIA’s torture programme.

Nevertheless, even critics might be surprised at the level of collusion which was more direct and explicit than many had suspected. Or perhaps, suspected would ever be revealed.

The APA has released a statement saying “Our internal checks and balances failed to detect the collusion, or properly acknowledge a significant conflict of interest, nor did they provide meaningful field guidance for psychologists”, and pledges a number of significant reforms to prevent psychologists from being involved in abusive practices, including the vetting of all changes to ethics guidance.

The repercussions are likely to be significant and long-lasting, not least as the full contents of the report’s 524 pages are digested.

Link to article in The New York Times.
Link to full text of report from the APA.

CBT is becoming less effective, like everything else

‘Researchers have found that Cognitive Behavioural Therapy is roughly half as effective in treating depression as it used to be’ writes Oliver Burkeman in The Guardian, arguing that this is why CBT is ‘falling out of favour’. It’s worth saying that CBT seems as popular as ever, but even if it was in decline, it probably wouldn’t be due to diminishing effectiveness – because this sort of reduction in effect is common across a range of treatments.

Burkeman is commenting on a new meta-analysis that reports that more recent trials of CBT for depression find it to be less effective than older trials, but this pattern is common as treatments are more thoroughly tested. It has been reported for antipsychotics, antidepressants and treatments for OCD, to name but a few.

Interestingly, one commonly cited reason treatments become less effective in trials is because response to placebo is increasing, meaning many treatments seem to lose their relative potency over time.

Counter-intuitively, for something considered to be ‘an inert control condition’ the placebo response is very sensitive to the design of the trial, so even comparing placebo against several rather than one active treatment can affect placebo response.

This has led people to suggest lots of ‘placebo’ hacks. “In clinical trials,” noted one 2013 paper in Drug Discovery, “the placebo effect should be minimized to optimize drug–placebo difference”.

It’s interesting that it is still not entirely clear whether this approach is ‘revealing’ the true effects of the treatment or just another way of ‘spinning’ trials for the increasingly worried pharmaceutical and therapy industries.

The reasons for the declining treatment effects over time are also likely to include different types of patients being selected into trials; more methodologically sound research practices, meaning less chance of optimistic measuring and reporting; the fact that if chance gives you a falsely inflated treatment effect the first time round, the treatment is more likely to be re-tested than one with an initially less impressive first trial; and the fact that older, known treatments might bring a whole load of expectations with them that brand new treatments don’t.

The bottom line is that lots of our treatments, across medicine as a whole, have quite modest effects when compared to placebo. But since the placebo response itself reflects an attempt to address the problem, being treated at all provides quite a boost to the more moderate effects that the specific treatment brings.

So the reports of the death of CBT have been greatly exaggerated but this is mostly due to the fact that lots of treatments start to look less impressive when they’ve been around for a while. This is less due to them ‘losing’ their effect and more likely due to us more accurately measuring their true but more modest effect over time.

Computation is a lens

CC Licensed Photo from Flickr user Jared Tarbell. Click for source.

“Face It,” says psychologist Gary Marcus in The New York Times, “Your Brain is a Computer”. The op-ed argues for understanding the brain in terms of computation, which opens up the interesting question – what does it mean for a brain to compute?

Marcus makes a clear distinction between thinking that the brain is built along the same lines as modern computer hardware, which is clearly false, and arguing that its purpose is to calculate and compute. “The sooner we can figure out what kind of computer the brain is,” he says, “the better.”

In this line of thinking, the mind is considered to be the brain’s computations at work and should be able to be described in terms of formal mathematics.

The idea that the mind and brain can be described in terms of information processing is the main contention of cognitive science but this raises a key but little asked question – is the brain a computer or is computation just a convenient way of describing its function?

Here’s an example if the distinction isn’t clear. If you throw a stone you can describe its trajectory using calculus. Here we could ask a similar question: is the stone ‘computing’ the answer to a calculus equation that describes its flight, or is calculus just a convenient way of describing its trajectory?

In one sense the stone is ‘computing’. The physical properties of the stone and its interaction with gravity produce the same outcome as the equation. But in another sense, it isn’t, because we don’t really see the stone as inherently ‘computing’ anything.
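As a toy illustration of that ‘same outcome’ point (made-up numbers, simple projectile motion with no air resistance): the closed-form equation and a step-by-step simulation of the physics land on essentially the same answer.

```python
# Closed-form "calculus" answer vs. letting the "physics" run step by step.
g = 9.81      # m/s^2
v0 = 12.0     # initial upward velocity, m/s (made-up figure)
T = 1.5       # time of interest, s

closed_form = v0 * T - 0.5 * g * T**2        # y(T) from the equation of motion

y, v, dt = 0.0, v0, 0.001                    # crude Euler integration of the same dynamics
for _ in range(int(T / dt)):
    y += v * dt
    v -= g * dt

print(f"equation: {closed_form:.3f} m   simulation: {y:.3f} m")  # agree to ~1 cm
```

Whether you call the stone itself a ‘computer’ for arriving at the same number is exactly the question at issue.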

This may seem like a trivial example but there are in fact a whole series of analog computers that use the physical properties of one system to give the answer to an entirely different problem. If analog computers are ‘really’ computing, why not our stone?

If this is the case, what makes brains any more or less of a computer than flying rocks, chemical reactions, or the path of radio waves? Here the question just dissolves into dust. Brains may be computers but then so is everything, so asking the question doesn’t tell us anything specific about the nature of brains.

One counter-point to this is to say that brains need to algorithmically adjust to a changing environment to aid survival, which is why neurons encode properties (such as patterns of light stimulation) in another form (such as neuronal firing) – and this perhaps makes them a computer in a way that flying stones aren’t.

But this definition would also include plants, which also encode physical properties through chemical signalling to allow them to adapt to their environment.

It is worth noting that there are other philosophical objections to the idea that brains are computers, largely based on the hard problem of consciousness (in brief – could maths ever feel?).

And then there are arguments based on the boundaries of computation. If the brain is a computer based on its physical properties and the blood is part of that system, does the blood also compute? Does the body compute? Does the ecosystem?

Psychologists drawing on the tradition of ecological psychology and JJ Gibson suggest that much of what is thought of as ‘information processing’ is actually done through the evolutionary adaptation of the body to the environment.

So are brains computers? They can be if you want them to be. The concept of computation is a tool. Probably the most useful one we have, but if you say the brain is a computer and nothing else, you may be limiting the way you can understand it.

Link to ‘Face It, Your Brain Is a Computer’ in The NYT.

