Towards a nuanced view of mental distress

In the latest edition of The Psychologist I’m involved in a debate with John Cromby about whether our understanding of mental illness is mired in the past.

He thinks it is, I think it isn’t, and we kick off from there.

The article is readable online with a free registration but I’ve put the unrestricted version online as a pdf if you want to read it straight away.

Much of the debate is over the role of biological explanations in understanding mental distress, which I think is widely misunderstood.

Hopefully, amid the knockabout, the debate gets to clarify some of that.

Either way, I hope it raises a few useful reflections.

Link to ‘Are understandings of mental illness mired in the past?’ (free reg).
pdf of full debate.

The wrong sort of discussion

The Times Higher Education has an article on post-publication peer review, and whether it will survive legal challenges.

The legal action launched by a US scientist who claims that anonymous comments questioning his science cost him a lucrative job offer has raised further questions about the potential for post-publication peer review to replace pre-publication review.

The article chimes with comments made by several prominent psychologists who have been at the centre of controversies and have decried the way their work has been discussed outside of the normal channels of the academic journals.

Earlier this year the head of a clinical trial of Tamiflu wrote to the British Medical Journal to protest that a BMJ journalist had solicited independent critique of the stats used in his work – “going beyond the reasonable response to a press release”.

John Bargh (Yale University), in his now infamous ‘nothing in their heads’ blogpost, accused the open access journal PLoS of lacking “the usual high scientific journal standards of peer-review scrutiny”, and accused Ed Yong – laughably – of “superficial online science journalism”. He concluded:

“I am not so much worried about the impact on science of essentially self-published failures to replicate as much as I’m worried about your ability to trust supposedly reputable online media sources for accurate information on psychological science.”

Simone Schnall (University of Cambridge) is a social psychologist whose work has also been at the centre of the discussion about replication (backstory, independent replication of her work recently reported). She has recently written that ‘no critical discussion is possible’ on social media, where ‘judgments are made quickly nowadays in social psychology and definitively’.

See also this comment from a scientist, made when a controversial paper suggesting that many correlations in fMRI studies of social psychological constructs were impossibly high was widely discussed before publication: “I was shocked, this is not the way that scientific discourse should take place.”

The common theme is a lack of faith in the uncontrolled scientific discussion that now happens in public, before and after publication in the journal-sanctioned official record. Coupled, perhaps, with a lack of faith in other people to understand – let alone run – psychological research.

Scientific discussion has always been uncontrolled, of course; the differences now are in how open the discussion is, and who takes part. Before social media, ‘insider’ discussions of specialist topics took place inside psychology departments, and at conference dinners and other social gatherings of researchers.

My optimistic take is that social media allows access to people who would not normally have it due to constraints of geography, finance or privilege. Social media means that if you’re in the wrong institution, aren’t funded, or have someone to look after at home who stops you flying to the conference, you can still experience and contribute to specialist discussions. That’s a massive and positive change, and one we should protect as we work out how scientific discussion should take place in the 21st century.

Link: Simone Schnall’s comments in full: blog, video

Previously: Stafford, T., & Bell, V. (2012). Brain network: social media and the cognitive scientist. Trends in Cognitive Sciences, 16(10), 489–490. doi:10.1016/j.tics.2012.08.001

Previously: What Jason Mitchell’s ‘On the emptiness of failed replications’ gets right, which includes some less optimistic notes on the current digital disruption of scholarly ways of working.

Distraction effects

I’ve been puzzling over this tweet from Jeff Rouder:


Surely, I thought, psychology is built out of effects. What could be wrong with focussing on testing which ones are reliable?

But I think I’ve got it now. The thing about effects is that they show you – an experimental psychologist – can construct a situation where some factor you are interested in is important, relative to all the other factors (which you have managed to hold constant).

To see why this might be a problem, consider this paper by Tsay (2013): “Sight over sound in the judgment of music performance”. The study asked people to select the winners of a classical music competition from six-second clips of the performances. Some participants got the audio, so they could only hear the performance; others got the video, so they could only see it; and some got both audio and video. Only those participants who watched the video, without sound, could select the actual competition winners at an above-chance level. This demonstrates a significant bias effect of sight in judgements of music performance.

To understand the limited importance of this effect, contrast it with the overclaims made in the paper: “people actually depend primarily on visual information when making judgments about music performance” (in the abstract) and “[Musicians] relegate the sound of music to the role of noise” (the concluding line). Contrary to these claims, the study doesn’t show that looks dominate sound in how we assess music. It isn’t the case that our musical taste is mostly determined by how musicians look.

The Tsay studies took the three finalists from classical music competitions – the best of the best of expert musicians – and used brief clips of their performances as stimuli. By my reckoning, this scenario removes almost all differences in the quality of the musical performances. Evidence in support of this is that Tsay didn’t find any difference in performance between non-expert participants and professional musicians, which strongly suggests that she designed a task in which it is impossible to bring any musical knowledge to bear – a task in which musical knowledge isn’t an important factor.

This is why it isn’t reasonable to conclude that people are making judgments about musical performance in general. The clips don’t let you judge relative musical quality, but – for these almost equally matched performances – they do let you reflect the same biases as the judges, biases which include an influence of appearance as well as sound. The bias matters, not least because it obviously affects who won, but proving it exists is completely separate from the question of whether overall judgements of music are affected more by sight or by sound.

Further, there’s every reason to think that the conclusion from the study of the bias effect gives the opposite conclusion to the study of overall importance. In these experiments sight dominates sound, because differences due to sound have been controlled out. In most situations where we decide our music preferences, sounds is obviously massively more important.

Many psychological effects are an impressive tribute to the skill of experimenters in designing situations where most factors are held constant, allowing us to highlight the role of subtle psychological factors. But we shouldn’t let this blind us to the fact that the existence of an effect due to a psychological factor isn’t the same as showing how important that factor is relative to all others, nor is it the same as showing that the effect will hold when all those other factors start varying.
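To see how this works statistically, here is a toy simulation (my own illustrative sketch, with invented weights; nothing in it comes from Tsay’s data). Judgements are modelled as a weighted sum in which sound quality genuinely matters four times as much as looks. When quality varies naturally, it dominates judgements; restrict the stimuli to nearly matched quality, as with competition finalists, and looks become the best predictor, even though the underlying weights never change.

```python
import random

random.seed(1)

def correlation(xs, ys):
    # Pearson correlation, computed by hand to keep the sketch dependency-free
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def simulate(quality_sd, n=1000):
    # judgement = 0.8 * sound quality + 0.2 * looks + noise
    # (invented weights: sound 'really' matters more than looks)
    quality = [random.gauss(0, quality_sd) for _ in range(n)]
    looks = [random.gauss(0, 1) for _ in range(n)]
    judged = [0.8 * q + 0.2 * l + random.gauss(0, 0.1)
              for q, l in zip(quality, looks)]
    return correlation(quality, judged), correlation(looks, judged)

# Quality varies naturally: judgements track sound quality
print(simulate(quality_sd=1.0))
# Finalists only, quality nearly matched: judgements track looks
print(simulate(quality_sd=0.05))
```

The point of the sketch is that the correlation an experiment reveals depends on which sources of variation the design leaves in, not just on the weights in people’s heads.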

Link: Are classical music competitions judged on looks? – critique of Tsay (2013) written for The Conversation

Link: A good twitter thread on the related issue of effect size – and yah-boo to anyone who says you can’t have a substantive discussion on social media

UPDATE: The paper does give evidence that the sound stimuli used do influence people’s judgements systematically – it was incorrect of me to say that differences due to sound have been removed. I have corrected the post to reflect what I believe the study shows: that differences due to sound have been minimised, so that differences in looks are emphasised.

Social psychology has lost its balance

The New Yorker has an interesting article about a lack of political diversity in social psychology and how that may be leading to a climate of bias against conservative researchers, ideas and the evidence that might support them.

Some of the evidence for a bias against conservative thinking in social psychology goes back some years, and the article gives a good account of the empirical work as well as the debate.

However, the issue was recently raised again by morality researcher Jonathan Haidt leading to a renewed reflection on the extent of the problem.

There is a case to be made that, despite the imbalance, no formal changes need to be made, and that, on the whole, despite its problems, social psychology continues to function remarkably well and regularly produces high-quality research. Controversial work gets done. Even studies that directly challenge the field—like Haidt’s—are publicized and inspire healthy debate…

And yet the evidence for more substantial bias, against both individuals and research topics and directions, is hard to dismiss—and the hostility that some social psychologists have expressed toward the data suggests that self-correction may not be an adequate remedy.

A timely reminder of the eternal truth that bias is entirely non-partisan, and if you’ve not heard it before, a pointer to a great BBC Radio documentary that outlines how it works equally across people of every political stripe.

Link to ‘Is Social Psychology Biased Against Republicans?’

Problems with Bargh’s definition of unconscious

I have a new paper out in Frontiers in Psychology: The perspectival shift: how experiments on unconscious processing don’t justify the claims made for them. There has been ongoing consternation about the reliability of some psychology research, particularly studies which make claims about unconscious (social) priming. However, even if we assume that the empirical results are reliable, the question remains whether the claims made for the power of the unconscious make any sense. I argue that they often don’t.

Here’s something from the intro:

In this commentary I draw attention to certain limitations on the inferences which can be drawn about participants’ awareness from the experimental methods which are routine in social priming research. Specifically, I argue that (1) a widely employed definition of unconscious processing, promoted by John Bargh, is incoherent, and (2) many experiments involve a perspectival sleight of hand, taking factors identified from comparison of average group performance and inappropriately ascribing them to the reasoning of individual participants.

The problem, I claim, is that many studies on ‘unconscious processing’ follow John Bargh in defining unconscious as meaning “not reported at the time”. This means that experimenters over-diagnose unconscious influence when the possibility remains that participants were completely conscious of the influence of the stimuli, but are not reporting it because they have forgotten, worry about sounding silly, or because the importance of the stimuli is genuinely trivial compared to other factors.

It is this last point which makes up the ‘perspectival shift’ of the title. Experiments on social priming usually work by comparing some measure (e.g. walking speed or reaction time) across two groups. My argument is that the factors which make up the total behaviour for each individual will be many and various. The single factor which the experimenter is interested in may have a non-zero effect, yet can still justifiably escape report by the majority of participants. To make this point concrete: if I ask you to judge how likeable someone is on the 1 to 7 scale, your judgement will be influenced by many factors, such as if they are like you, if you are in a good mood, the content of your interaction with the person, if they really are likeable and so on. Can we really expect participants to report an effect due to something that only the experimenter sees variation in, such as whether they are holding a hot drink or a cold drink at the time of judgement? We might as well expect them to report the effect due to them growing up in Europe rather than Asia, or being born in 1988 not 1938 (both surely non-zero effects in my hypothetical experiment).
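A toy simulation of this point (entirely hypothetical numbers, not taken from any priming study): give the experimenter’s manipulation a genuine non-zero effect on a 1 to 7 likeability rating, and it can shift the group averages reliably while remaining a small fraction of the variation any single participant experiences.

```python
import random

random.seed(2)

N = 200
EFFECT = 0.3  # hypothetical shift from the manipulation (e.g. warm vs cold drink)

def likeability(primed):
    # Each judgement is dominated by idiosyncratic factors: mood, similarity,
    # how likeable the person actually is, and so on (sd = 1.2 here).
    other_factors = random.gauss(4.0, 1.2)
    score = other_factors + (EFFECT if primed else 0.0)
    return min(7.0, max(1.0, score))  # clamp to the 1-7 rating scale

primed = [likeability(True) for _ in range(N)]
control = [likeability(False) for _ in range(N)]

group_difference = sum(primed) / N - sum(control) / N
print(f"difference in group means: {group_difference:.2f}")
# The group-level effect is real, but it is a fraction of the judgement-to-
# judgement variation a single participant experiences, so truthfully failing
# to report it is not, by itself, evidence that the influence was 'unconscious'.
```

In this sketch the manipulated factor is visible only from the experimenter’s perspective, across groups; from any one participant’s perspective it is lost among larger sources of variation.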

More on this argument, and what I think it means, in the paper:

Stafford, T. (2014) The perspectival shift: how experiments on unconscious processing don’t justify the claims made for them. Frontiers in Psychology, 5, 1067. doi:10.3389/fpsyg.2014.01067

I originally started writing this commentary as a response to this paper by Julie Huang and John Bargh, which I believe is severely careless with the language it uses to discuss unconscious processing (and so a good example of the conceptual trouble you can get into if you start believing the hype around social priming).

Full disclosure: I am funded by the Leverhulme Trust to work on a project looking at the philosophy and psychology of implicit bias. This post is cross-posted on the project blog.

Seeing ourselves through the eyes of the machine

I’ve got an article in The Observer about how our inventions have profoundly shaped how we view ourselves because we’ve traditionally looked to technology for metaphors of human nature.

We tend to think that we understand ourselves and then create technologies to take advantage of that new knowledge but it usually happens the other way round – we invent something new and then use that as a metaphor to explain the mind and brain.

As history has moved on, the mind has been variously explained in terms of wax tablets, a house with many rooms, pressures and fluids, phonograph recordings, telegraph signalling, and computing.

The idea that these are metaphors sometimes gets lost which, in some ways, is quite worrying.

It could be that we’ve reached “the end of history” as far as neuroscience goes and that everything we’ll ever say about the brain will be based on our current “brain as calculation” metaphors. But if this is not the case, there is a danger that we’ll sideline aspects of human nature that don’t easily fit the concept. Our subjective experience, emotions and the constantly varying awareness of our own minds have traditionally been much harder to understand as forms of “information processing”. Importantly, these aspects of mental life are exactly where things tend to go awry in mental illness, and it may be that our main approach for understanding the mind and brain is insufficient for tackling problems such as depression and psychosis. It could be we simply need more time with our current concepts, but history might show us that our destiny lies in another metaphor, perhaps from a future technology.

I mention Douwe Draaisma’s book Metaphors of Memory in the article but I also really recommend Alison Winter’s book Memory: Fragments of a Modern History which also covers the fascinating interaction between technological developments and how we understand ourselves.

You can read my full article at the link below.

Link to article in The Observer.

Awaiting a theory of neural weather

In a recent New York Times editorial, psychologist Gary Marcus noted that neuroscience is still awaiting a ‘bridging’ theory that elegantly connects neuroscience with psychology.

This reflects a common belief in cognitive science that there is a ‘missing law’ to be discovered that will tell us how mind and brain are linked – but it is quite possible there just isn’t one to be discovered.

Marcus, not arguing for the theory himself, describes it when he writes:

What we are really looking for is a bridge, some way of connecting two separate scientific languages — those of neuroscience and psychology.

Such bridges don’t come easily or often, maybe once in a generation, but when they do arrive, they can change everything. An example is the discovery of DNA, which allowed us to understand how genetic information could be represented and replicated in a physical structure. In one stroke, this bridge transformed biology from a mystery — in which the physical basis of life was almost entirely unknown — into a tractable if challenging set of problems, such as sequencing genes, working out the proteins that they encode and discerning the circumstances that govern their distribution in the body.

Neuroscience awaits a similar breakthrough. We know that there must be some lawful relation between assemblies of neurons and the elements of thought, but we are currently at a loss to describe those laws.

The idea of a DNA-like missing component that will allow us to connect theories of psychology and neuroscience is an attractive one, but it is equally likely that the connection between mind and brain is more like the relationship between molecular interactions and the weather.

In this case, there is no ‘special theory’ that connects weather to molecules, because different atmospheric phenomena are understood through multiple models – as fluid flows, as statistical patterns, as atomic interactions and so on – each of which has a differing relationship to the scale at which the physical data is understood.

In explanatory terms, ‘psychology’ is probably a lot like the weather. The idea of there being a ‘psychological level’ is a human concept, and its conceptual components won’t neatly relate to neural function in a uniform way.

Some functions will have much more direct relationships – like basic sensory information and its representation in the brain’s ‘sensotopic maps’. A good example might be how visual information in space is represented in an equivalent retinotopic map in the brain.

Other functions will have more indirect relationships, in great part because of how we define ‘functions’. Some have very empirical definitions – take iconic memory – whereas others will be cultural or folk concepts – think vicarious embarrassment or nostalgia.

So it’s unlikely we’re going to find an all-purpose theoretical bridge to connect psychology and neuroscience. Instead, we’ll probably end up with what Kenneth Kendler calls ‘patchy reductionism’ – making pragmatic links between mind and brain where possible using a variety of theories and descriptions.

A search for a general ‘bridging theory’ may be a fruitless one.

Link to NYT piece ‘The Trouble With Brain Science’.