Oliver Sacks has left the building

Neurologist and author Oliver Sacks has died at the age of 82.

It’s hard to fully comprehend the enormous impact of Oliver Sacks on the public’s understanding of the brain, its disorders and our diversity as humans.

Sacks wrote what he called ‘romantic science’. Not romantic in the sense of romantic love, but romantic in the sense of the Romantic poets, who used narrative to describe the subtleties of human nature, often in contrast to the Enlightenment values of quantification and rationalism.

In this light, romantic science would seem to be a contradiction, but Sacks used narrative and science not as opponents, but as complementary partners to illustrate new forms of human nature that many found hard to see: in people with brain injury, in alterations or differences in experience and behaviour, or in seemingly minor changes in perception that had striking implications.

Sacks was not the originator of this form of writing, nor did he claim to be. He drew his inspiration from the great neuropsychologist Alexander Luria, but while Luria’s cases were known to a select group of specialists, Sacks wrote for the general public, and opened up neurology to the everyday world.

Despite Sacks’s popularity now, he had a slow start, with his first book Migraine raising little interest among either his medical colleagues or the reading public. Not least, perhaps, because compared to his later works, it struggled to throw off some of the technical writing habits of academic medicine.

It wasn’t until his 1973 book Awakenings that he became recognised both as a remarkable writer and a remarkable neurologist, as the book recounted his experience with seemingly paralysed patients from the 1920s encephalitis lethargica epidemic and their astonishing awakening and gradual decline during a period of treatment with L-DOPA.

The book was scientifically important, humanely written, but most importantly, beautiful, as he captured his relationship with the many patients who experienced both a physical and a psychological awakening after being neurologically trapped for decades.

The book was first made into a now rarely seen documentary for Yorkshire Television, and the story was eventually picked up by Hollywood for the movie starring Robin Williams and Robert De Niro.

But it was The Man Who Mistook His Wife for a Hat that became his signature book. It was a series of case studies that wouldn’t seem particularly unusual to most neurologists, but which astounded the general public.

A sailor whose amnesia leads him to think he is constantly living in 1945, a woman who loses her ability to know where her limbs are, and a man with agnosia who, despite normal vision, can’t recognise objects and so mistakes his wife’s head for a hat.

His follow-up book An Anthropologist on Mars continued in a similar vein and made for equally gripping reading.

Not all his books were great writing, however. The Island of the Colorblind was slow and technical, while Sacks’s account of his own damaged leg, A Leg to Stand On, included conclusions about the nature of illness that were more abstract than most could relate to.

But his later books saw a remarkable flowering of diverse interest and mature writing. Music, imagery, hallucinations and their astounding relationship with the brain and experience were the basis of three books that showed Sacks at his best.

And slowly during these later books, we got glimpses of the man himself. He revealed in Hallucinations that he had taken hallucinogens in his younger years and that the case of medical student Stephen D in The Man Who Mistook His Wife for a Hat – who developed a remarkable sense of smell after a night on speed, cocaine, and PCP – was, in fact, an autobiographical account.

His final book, On the Move, was the most honest, as he revealed he was gay, shy, and in his younger years, devastatingly handsome but somewhat troubled. A long way from the typical portrayal of the grey-bearded, kind but eccentric neurologist.

On a personal note, I have a particular debt of thanks to Dr Sacks. When I was an uninspired psychology undergraduate, I was handed a copy of The Man Who Mistook His Wife for a Hat which immediately convinced me to become a neuropsychologist.

Years later, I went to see him talk in London following the publication of Musicophilia. I took along my original copy of The Man Who Mistook His Wife for a Hat, hoping to surprise him with the news that he was responsible for my career in brain science.

As the talk started, the host mentioned that ‘it was likely that many of us became neuroscientists because we read Oliver Sacks when we started out’. To my secret disappointment, about half the lecture hall vigorously nodded in response.

The reality is that Sacks’s role in my career was neither surprising nor particularly special. He inspired a generation of neuroscientists to see brain science as a gateway to our common humanity and humanity as central to the scientific study of the brain.
 

Link to The New York Times obituary for Oliver Sacks.

Spike activity 28-08-2015

Quick links from the past week in mind and brain news:

Vice has an excellent documentary about how skater Paul Alexander was affected by mental illness as he was turning pro.

The US Navy is working on AI that can predict pirate attacks, reports Science News. Apparently it uses Arrrrgh-tificial intelligence. I’m here all week folks.

The New York Times has a good piece on the case for teaching ignorance to help frame our understanding of science.

Yes, Men’s and Women’s Brains Do Function Differently — But The Difference is Small. Interesting piece on Science of Us.

Lots of junk reporting on the Reproducibility Project, but here are some of the best pieces we’ve not mentioned so far:
* Neuropsychologist Dorothy Bishop gives her take in The Guardian.
* The BPS Research Digest gives a good run-down of the results.

Good video interview with philosopher Patricia Churchland on neuroscience for SeriousScience.

Don’t call it a comeback

The Reproducibility Project, the giant study to re-run experiments reported in three top psychology journals, has just published its results and it’s either a disaster, a triumph or both for psychology.

You can’t do better than the coverage in The Atlantic, not least as it’s written by Ed Yong, the science journalist who has been key in reporting on, and occasionally appearing in, psychology’s great replication debates.

Two important things have come out of the Reproducibility Project. The first is that psychologist, project leader and now experienced cat-herder Brian Nosek deserves some sort of medal, and his 270-odd collaborators should be given shoulder massages by grateful colleagues.

It’s been psychology’s equivalent of the Large Hadron Collider but without the need to dig up half of Switzerland.

The second is that no-one quite knows what it means for psychology. 36% of the replications had statistically significant results, and 47% of the original effect sizes were within the 95% confidence interval of the replication effect size, although replication effect sizes were typically 50% smaller than the originals.

When looking at replication by subject area, studies on cognitive psychology were more likely to reproduce than studies from social psychology.

Is this good? Is this bad? What would be a reasonable number to expect? No one’s really sure, because there are perfectly acceptable reasons why more positive results would be published in top journals but not replicate as well, alongside lots of not so acceptable reasons.

The not-so-acceptable reasons have been well-publicised: p-hacking, publication bias and at the darker end of the spectrum, fraud.

But on the flip side, effects like regression to the mean and ‘surprisingness’ are just part of the normal routine of science.

‘Regression to the mean’ is an effect where, if the first measurement of an effect is large, it is likely to be closer to the average on subsequent measurements or replications, simply because an extreme first measurement is partly down to chance, and chance doesn’t repeat itself to order. This is not a psychological effect, it happens everywhere.

Imagine you record a high level of cosmic rays from an area of space during an experiment and you publish the results. These results are more likely to merit your attention and the attention of journals because they are surprising.

But subsequent experiments, even if they back up the general effect of high readings, are less likely to find such extreme recordings, because by definition, it was their statistically surprising nature that got them published in the first place.
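You can see the logic of this in a toy simulation (a sketch with made-up numbers, nothing to do with the actual Reproducibility Project data): give every study the same modest true effect, add sampling noise, ‘publish’ only the estimates that clear a significance-style threshold, then replicate each published study once. The replications regress back towards the true effect, making the published literature look inflated even though no one did anything wrong.

```python
import numpy as np

rng = np.random.default_rng(42)

true_effect = 0.2   # every study measures the same modest true effect
se = 0.15           # standard error of each study's estimate
n_studies = 10_000

# Original studies: the true effect plus sampling noise
original = true_effect + rng.normal(0, se, n_studies)

# 'Publish' only the surprising ones: estimates beyond ~1.96 standard errors
published = original[original > 1.96 * se]

# Replicate each published study once, with fresh sampling noise
replication = true_effect + rng.normal(0, se, published.size)

print(f"true effect:               {true_effect:.2f}")
print(f"mean published estimate:   {published.mean():.2f}")   # inflated by selection
print(f"mean replication estimate: {replication.mean():.2f}") # regresses to the mean
```

With these invented numbers the replications come out at roughly half the size of the published estimates – no fraud, no p-hacking, just selection plus regression to the mean.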

The same may well be happening here. Top psychology journals currently specialise in surprising findings. The editors have shaped these journals by making a trade-off between surprisingness and stability of the findings, and currently they are tipped far more towards surprisingness. Probably unhealthily so.

This is exactly what the Reproducibility Project found. More initially surprising results were less likely to replicate.

But it’s an open question as to what’s the “right balance” of surprisingness to reliability for any particular journal or, indeed, field.

There’s also a question about reliability versus boundedness. Just because you don’t replicate the results of a particular experiment it doesn’t necessarily mean the originally reported effect was a false positive. It may mean the effect is sensitive to a particular context that isn’t clear yet. Working this out is basically the grunt work of science.

Some news outlets have wrongly reported that this study shows that ‘about two thirds of studies in psychology are not reliable’ but the Reproducibility Project didn’t sample widely enough across publications to be able to say this.

Similarly, it only looked at initially positive findings. You could easily imagine a ‘Reverse Reproducibility Project’ where a whole load of original studies that found no effect are replicated to see which subsequently do show an effect.

We know publication bias tends to favour positive results but that doesn’t mean that all negative findings should be automatically accepted as the final answer either.

The main take home messages are that findings published in leading journals are not a good guide to invariant aspects of human nature. And stop with the journal worship. And let’s get more pre-registration on the go. Plus science is hard.

What is also clear, however, is that the folks from the Reproducibility Project deserve our thanks. And if you find one who still needs that shoulder massage, limber up your hands and make a start.
 

Link to full text of scientific paper in Science.
Link to coverage in The Atlantic.

The reproducibility of psychological science

The results of the Reproducibility Project – a massive, collaborative, ‘Open Science’ attempt to replicate 100 psychology experiments published in leading psychology journals – have just been published in Science. The results are sure to be widely debated, the biggest being that many published results were not replicated. There’s an article in the New York Times about the study here: Many Psychology Findings Not as Strong as Claimed, Study Says

This is a landmark in meta-science: researchers collaborating to inspect how psychological science is carried out, how reliable it is, and what that means for how we should change what we do in the future. But it is also an illustration of the process of Open Science. All the materials from the project, including the raw data and analysis code, can be downloaded from the OSF webpage. That means that if you have a question about the results, you can check it for yourself. So, by way of example, here’s a quick analysis I ran this morning: does the number of citations of a paper predict the effect size of its replication in the Reproducibility Project? Answer: not so much.

[Figure: citations of the original paper plotted against replication effect size]

That horizontal string of dots along the bottom is replications with close to zero-effect size, and high citations for the original paper (nearly all of which reported non-zero and statistically significant effects). Draw your own conclusions!
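If you want to poke at the data yourself, the analysis is only a few lines. This is just a sketch of the approach – the real spreadsheet is on the OSF page linked below and my actual code is linked at the end of this post; the file and column names here are hypothetical stand-ins for whatever the project’s data file actually uses.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and column names: substitute the real ones from the
# master data file on the Reproducibility Project OSF page
df = pd.read_csv("rpp_data.csv")
cites = df["citations_original"]
effect = df["effect_size_replication"]

plt.scatter(cites, effect, alpha=0.5)
plt.xlabel("Citations of original paper")
plt.ylabel("Replication effect size")
plt.show()

# Correlation as a rough summary of the (lack of) relationship
print(cites.corr(effect))
```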

Link: Reproducibility OSF project page

Link: my code for making this graph (in python)

A Million Core Silicon Brain

For those of you who like to get your geek on (and rumour has it, they can be found reading this blog) the Computerphile channel just had a video interview with Steve Furber of the Human Brain Project who talks about the custom hardware that’s going to run their neural net simulations.

Furber is better known as one of the designers of the BBC Micro and the ARM microprocessor but has more recently been involved in the SpiNNaker project which is the basis of the Neuromorphic Computing Platform for the Human Brain Project.

Fascinating interview with a man who clearly likes the word toroid.

Spike activity 21-08-2015

Quick links from the past week in mind and brain news:

Be wary of studies that link mental illness with creativity or high IQ. Good piece in The Guardian.

Nautilus has a piece on the lost dream journal of neuroscientist Santiago Ramon y Cajal.

Video games are tackling mental health with mixed results. Great piece in Engadget.

The Globe and Mail asks how we spot the next ‘lone wolf’ terrorist and looks at some of the latest research which has changed what people look for.

A third of young Americans say they aren’t 100% heterosexual according to a YouGov survey. 4% class themselves as ‘completely homosexual’, a further 3% as ‘predominantly homosexual’.

National Geographic reports on a study suggesting that three-quarters of handprints in ancient cave art were left by women.

Psychiatry is reinventing itself thanks to advances in biology says NIMH Chief Thomas Insel in New Scientist. Presumably a very slow reinvention that doesn’t seem to change treatment very much.

Wired report that IBM have a close-to-production neuromorphic chip. Big news.

Most people are resilient after trauma. Good piece in BBC Future.

Psychological science in intelligence service operations

I’ve got an article in today’s Observer about how British intelligence services are applying psychological science in their deception and infiltration operations.

Unfortunately, the online version has been given a headline which is both frivolous and wrong (“Britain’s ‘Twitter troops’ have ways of making you think…”). The ‘Twitter troops’ name was given to the UK Army’s ‘influence operations specialists’, the 77th Brigade, which the article does not focus on and which I mention only to note their frivolous nickname.

Actually, the piece focuses on GCHQ’s Joint Threat Research Intelligence Group, or JTRIG, whose job it is to “discredit, disrupt, delay, deny, degrade, and deter” opponents, mainly through online deception operations.

Some of the Snowden leaks have specifically focused on the psychological theory and evidence-base behind their operations which is exactly what I discuss in the article.

Controversially, not only were terrorists and hostile states listed as opponents who could pose a national security threat, but also domestic criminals and activist groups. JTRIG’s work seems primarily to involve electronic communications, and can include practical measures such as hacking computers and flooding phones with junk messages. But it also attempts to influence people socially through deception, infiltration, mass persuasion and, occasionally, it seems, sexual “honeypot” stings. The Human Science Operations Cell appears to be a specialist section of JTRIG dedicated to providing psychological support for this work.

It’s a fascinating story and there’s more at the link below.
 

Link to article on psychological science in intelligence service ops.

Spike activity 14-08-2015

Quick links from the past week in mind and brain news:

Trends and fashions in the science of neurotransmitters. Neuroskeptic looks at this season’s hottest brain chemicals.

MIT Tech Review has an interesting piece on the new wave of hearing aids that enhance normal hearing.

Sorry Paleo diet aficionados, carbs were probably essential to our evolving brains in early human history. Good piece in The New York Times.

National Geographic has a piece on how some isolated tribes in the Amazon are initiating contact and how it’s causing a rethink of existing policies.

Brain imaging research is often wrong. This researcher wants to change that. Great interview with Russ Poldrack in Vox.

Neurocritic asks: Will machine learning create new diagnostic categories, or just refine the ones we already have?

The Obscure Neuroscience Problem That’s Plaguing VR. Interesting Wired piece on the physiological challenges of virtual reality.

The Atlantic has a thought-provoking article on ‘Learning Empathy From the Dead’ – the effects of corpse dissection on medical students’ empathy.

The amygdala is NOT the brain’s fear center. Joseph LeDoux sings it from his new blog I Got a Mind to Tell You.

Good edition of ABC Radio’s Philosopher’s Zone on dreaming.

Postmortemgirl has a great guide to postmortem brain studies in mental health.

Digital tech, the BMJ, and The Baroness

The British Medical Journal just published an editorial by me, Dorothy Bishop and Andrew Przybylski about the debate over digital technology and young people that focuses on Susan Greenfield’s mostly, it has to be said, unhelpful contributions.

Through appearances, interviews, and a recent book Susan Greenfield, a senior research fellow at Lincoln College, Oxford, has promoted the idea that internet use and computer games can have harmful effects on the brain, emotions, and behaviour, and she draws a parallel between the effects of digital technology and climate change. Despite repeated calls for her to publish these claims in the peer reviewed scientific literature, where clinical researchers can check how well they are supported by evidence, this has not happened, and the claims have largely been aired in the media. As scientists working in mental health, developmental neuropsychology, and the psychological impact of digital technology, we are concerned that Greenfield’s claims are not based on a fair scientific appraisal of the evidence, often confuse correlation for causation, give undue weight to anecdote and poor quality studies, and are misleading to parents and the public at large.

It continues from there.

I was also on Channel 4 News last night, debating The Baroness, and they seem to put some of their programmes online as YouTube clips, so if our section turns up online, I’ll post it here.

UPDATE: It disappeared on the Channel 4 site but it seems to be archived on Yahoo of all places. Either way you can now view it here.

Greenfield was lovely, as on the previous occasion we met. Actually, she didn’t remember meeting me before, despite the fact she specifically invited me to debate her on this topic at an All-Party Parliamentary Group in 2010, but I suspect what was a markedly atypical experience for me was probably pretty humdrum for her.

Either way, she trotted out the same justifications. ‘I’ve written a book.’ ‘It contains 250 references.’ ‘The internet could trigger autistic-like traits.’

Dorothy Bishop has had a look at those 250 references and they’re not very convincing, but actually our main message is shared by pretty much everyone who’s debated Greenfield over the years: describe your claims in a scientific paper and submit them to a peer-reviewed journal so they can be examined through the rigour of the scientific process.

Oddly, Greenfield continues to publish peer-reviewed papers from her work on the neuroscience of Alzheimer’s disease but refuses to do so for her claims on digital technology and the brain.

It’s a remarkable case of scientific double standards and the public really deserves better.
 

Link to ‘The debate over digital technology and young people’ in the BMJ.


So the video of my debate with Greenfield is up online but it seems like you can’t embed it so you’ll have to follow this link to watch it.

Watching it back, one thing really stands out: Greenfield’s bizarre and continuing insistence that using the internet could ‘trigger’ autistic-like symptoms in young people, saying that most kids with autism are not diagnosed until age five and many use computers before then.

This shows a fundamental misunderstanding of what autism is, and how diagnosis is done. Autism is diagnosed both on presentation (how you are at the time) and history (how you have been throughout your life) and to get a diagnosis of autism spectrum disorder you have to demonstrate both. So by definition, being ‘turned autistic’ at age 4 or 5 doesn’t even make sense diagnostically, let alone scientifically, as we know autism is a life-long neurodevelopmental condition.

Intuitions about free will and the brain

Libet’s classic experiment on the neuroscience of free will tells us more about our intuition than about our actual freedom

It is perhaps the most famous experiment in neuroscience. In 1983, Benjamin Libet sparked controversy with his demonstration that our sense of free will may be an illusion, a controversy that has only increased ever since.

Libet’s experiment has three vital components: a choice, a measure of brain activity and a clock.

The choice is to move either your left or right arm. In the original version of the experiment this is by flicking your wrist; in some versions of the experiment it is to raise your left or right finger. Libet’s participants were instructed to “let the urge [to move] appear on its own at any time without any pre-planning or concentration on when to act”. The precise time at which you move is recorded from the muscles of your arm.

The measure of brain activity is taken via electrodes on the scalp. When the electrodes are placed over the motor cortex (roughly along the middle of the head), the electrical signal differs between right and left as you plan and execute a movement on either the left or right side.

The clock is specially designed to allow participants to discern sub-second changes. This clock has a single dot, which travels around the face of the clock every 2.56 seconds. This means that by reporting position you are reporting time. If we assume you can report position accurately to a 5 degree angle, that means you can use this clock to report time to within 36 milliseconds – that’s 36 thousandths of a second.
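If you want to check that arithmetic for yourself, it’s one line: the dot covers 360 degrees every 2.56 seconds, so a 5 degree reporting accuracy corresponds to 5/360 of 2,560 milliseconds.

```python
# One revolution = 2.56 s = 2560 ms spread over 360 degrees
ms_per_degree = 2560 / 360
print(5 * ms_per_degree)  # ~35.6 ms, i.e. roughly 36 thousandths of a second
```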

Putting these ingredients together, Libet took one extra vital measurement. He asked participants to report, using the clock, exactly the point when they made the decision to move.

Physiologists had known for decades that a fraction of a second before you actually move the electrical signals in your brain change. So it was in Libet’s experiment: a fraction of a second before participants moved, a reliable change could be recorded using the electrodes. But the explosive result was when participants reported deciding to move. This occurred between the electrical change in the brain and the actual movement. This means, as sure as cause follows effect, that the feeling of deciding couldn’t be a timely report of whatever was causing the movement. The electrode recording showed that the decision had – in some sense – already been made before the participants were aware of having taken it. The brain signals were changing before the subjective experience of taking a decision occurred.

Had participants’ brains already made the decision? Was the feeling of choosing just an illusion? Controversy has raged ever since. There is far more to the discussion about neuroscience and free will than this one experiment, but its simplicity has allowed it to capture the imagination of many who think our status as biological creatures limits our free will, as well as those who argue that free will survives the challenge of our minds being firmly grounded in our biological brains.

Part of the appeal of the Libet experiment is due to two pervasive intuitions we have about the mind. Without these intuitions the experiment doesn’t seem so surprising.

The first intuition is the feeling that our minds are a separate thing from our physical selves – a natural dualism that pushes us to believe that the mind is a pure, abstract place, free from biological constraints. A moment’s thought about the last time you were grumpy because you were hungry shatters this illusion, but I’d argue that it is still a persistent theme in our thinking. Why else would we be at all surprised that it is possible to find neural correlates of mental events? If we really believed, in our heart of hearts, that the mind is based in the brain, then we would know that every mental change must have a corresponding change in the brain.

The second pervasive intuition, which makes us surprised by the Libet experiment, is the belief that we know our own minds. This is the belief that our subjective experience of making decisions is an accurate report of how that decision is made. The mind is like a machine – as long as it runs right, we are happily ignorant of how it works. It is only when mistakes or contradictions arise that we’re drawn to look under the hood: Why didn’t I notice that exit? How could I forget that person’s name? Why does the feeling of deciding come after the brain changes associated with decision making?

There’s no reason to think that we are reliable reporters of every aspect of our minds. Psychology, in fact, gives us lots of examples of where we often get things wrong. The feeling of deciding in the Libet experiment may be a complete illusion – maybe the real decision really is made ‘by our brains’ somehow – or maybe it is just that the feeling of deciding is delayed from our actual deciding. Just because we erroneously report the timing of the decision doesn’t mean we weren’t intimately involved in it, in whatever meaningful sense that can be.

More is written about the Libet experiment every year. It has spawned an academic industry investigating the neuroscience of free will. There are many criticisms and rebuttals, with debate raging about how and if the experiment is relevant to the freedom of our everyday choices. Even supporters of Libet have to admit that the situation used in the experiment may be too artificial to be a direct model of real everyday choices. But the basic experiment continues to inspire discussion and provoke new thoughts about the way our freedom is rooted in our brains. And that, I’d argue, is due to the way it helps us confront our intuitions about the way the mind works, and to see that things are more complex than we instinctively imagine.

This is my latest column for BBC Future. The original is here. You may also enjoy this recent post on mindhacks.com Critical strategies for free will experiments

Critical strategies for free will experiments

Benjamin Libet’s experiment on the neuroscience of free will needs little introduction. (If you do need an introduction, it’s the topic of my latest column for BBC Future). His reports that the subjective feeling of making a choice only comes after the brain signals indicating a choice has been made are famous, and have produced controversy ever since they were published in the 1980s.

For a simple experiment, Libet’s paradigm admits of a large number of interpretations, which I think is an important lesson. Here are some common, and less common, critiques of the experiment:

The Disconnect Criticism

The choice required from Libet’s participants was trivial and inconsequential. Moreover, they were specifically told to make the choice without any reason (“let the urge [to move] appear on its own at any time without any pre-planning or concentration on when to act”). A common criticism is that this kind of choice has little to tell us about everyday choices which are considered, consequential, or which we are actively trying to involve ourselves in.

The timing criticism(s)

Dennett discusses how the original interpretation of the experiment assumes that the choosing self exists at a particular point and a particular time – so, for example, perhaps in some central ‘Cartesian Theatre’ in which information from the motor cortex and visual cortex comes together, but which, crucially, does not have a direct report of (say) the information about timing gathered by the visual cortex. Even in a freely choosing self, there will be timing delays as information on the clock time is ‘connected up’ with information on when the movement decision was made. These, Dennett argues, could produce the result Libet saw without indicating a fatal compromise for free choice.

My spin on this is that the Libet result shows, minimally, that we don’t accurately know the timing of our decisions, but inaccurate judgements about the timing of our decisions don’t mean that we don’t actually make the decisions themselves – including the ones that are consequential.

Spontaneous activity

Aaron Schurger and colleagues have a nice paper in which they argue that Libet’s results can be explained by variations in spontaneous activity before actions are taken. They argue that the movement system is constantly experiencing sub-threshold variation in activity, so that at any particular point in time you are more or less close to performing any particular act. Participants in the Libet paradigm, asked to make a spontaneous act, take advantage of this variability – effectively lowering their threshold for action and waiting until the covert fluctuations are large enough to trigger a movement. Importantly, this reading weakens the link between the ‘onset’ of movements and the delayed subjective experience of making a movement. If the movement is triggered by random fluctuations (observable in the rise of the electrode signal) then there isn’t a distinct ‘decision to act’ in the motor system, so we can’t say that the subjective decision to act reliably comes afterwards.
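The flavour of the argument is easy to capture in a toy simulation (my own caricature, not Schurger and colleagues’ actual model, which is a leaky stochastic accumulator fitted to data): let motor activity drift randomly, call it a ‘movement’ whenever the drift happens to cross a threshold, then average the activity backwards from each crossing. A smooth build-up appears before every ‘action’ even though there was never a decision signal anywhere in the system.

```python
import numpy as np

rng = np.random.default_rng(0)

leak, noise_sd, threshold = 0.01, 0.1, 1.5
n_steps, window = 200_000, 1000

# Leaky accumulation of pure noise: no decision signal anywhere
x = np.zeros(n_steps)
for t in range(1, n_steps):
    x[t] = x[t - 1] * (1 - leak) + rng.normal(0, noise_sd)

# A 'movement' is triggered wherever the noise happens to cross threshold
up = (x[1:] >= threshold) & (x[:-1] < threshold)
crossings = np.where(up)[0] + 1
crossings = crossings[crossings > window]

# Average activity time-locked to each crossing: a smooth RP-like build-up
# emerges purely from selecting on threshold crossings
epochs = np.stack([x[c - window:c] for c in crossings])
print(f"{len(crossings)} 'movements'")
print("mean activity leading up to 'movement':", epochs.mean(axis=0)[::200].round(2))
```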

The ‘only deterministic on average’ criticism

The specific electrode signal which is used to time the decision to move in the brain is called the readiness potential (RP). Electrode readings are highly variable, so the onset of the RP is a statistical artefact, produced by averaging over many trials (40 in Libet’s case). This means we lose the ability to detect, trial-by-trial, the relation between the brain activity related to movement and the subjective experience. Libet reports this in his original paper [1] (‘only the average RP for the whole series could be meaningfully recorded’, p634). On occasion the subjective decision time (which Libet calls W) comes before the time of even the average RP, not after (p635: “instances in which individual W time preceded onset time of averaged RP numbered zero in 26 series [out of 36]” – which means that 28% of series saw at least one instance of W occurring before the RP).
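The averaging point is easy to demonstrate (a toy illustration with invented numbers, not Libet’s data): bury the same small ramp in forty trials of much larger noise and the onset is invisible in any single trial but emerges clearly in the average.

```python
import numpy as np

rng = np.random.default_rng(1)

n_trials, n_samples = 40, 500  # 40 trials, as in Libet's series
ramp = np.concatenate([np.zeros(300), np.linspace(0, 1, 200)])  # 'RP' onset at sample 300
noise_sd = 3.0                 # single-trial noise dwarfs the signal

trials = ramp + rng.normal(0, noise_sd, (n_trials, n_samples))

# The ramp only reaches 1.0, but single-trial noise has sd 3.0: invisible.
# Averaging 40 trials cuts the noise by sqrt(40), so the onset appears.
print("noise sd in one trial:  ", trials[0, :300].std().round(2))
print("noise sd in the average:", trials.mean(axis=0)[:300].std().round(2))
```

Which is exactly why the onset of the RP is a property of the averaged series rather than of any individual trial.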

The experiment showed strong reliability, but not complete reliability (the difference is described by Libet as ‘generally’ occurring and as being ‘fairly consistent’, p636). What happened next to Libet’s result is a common trick of psychologists. A statistical pattern is discovered and then reality is described as if the pattern is the complete description: “The brain change occurs before the choice”.

Although such generalities are very useful, they are misleading if we forget that they are only averagely true, not always true. I don’t think Libet’s experiment would have the imaginative hold if the result was summarised as “The brain change usually occurs before the choice”.

A consistent, but not universal, pattern in the brain before a choice has the flavour of a prediction, rather than a compulsion. Sure, before we make a choice there are antecedents in the brain – it would be weird if there weren’t – but since these don’t have any necessary consequence for what we choose, so what?

To my mind, the demonstration that you can use fMRI to reproduce the Libet effect but with brain signals changing up to 10 seconds before the movement (and an above-chance accuracy at predicting the movement made) only reinforces this point. We all believe that the mind has something to do with the brain, so finding patterns in the brain at one point which predict actions in the mind at a later point isn’t surprising. The fMRI result, and perhaps Libet’s experiment, rely as much on our false intuition about dualism as conclusively demonstrating anything new about free will.

Link: my column Why do we intuitively believe we have free will?

Fifty psychological terms to just, well, be aware of

Frontiers in Psychology has just published an article on ‘Fifty psychological and psychiatric terms to avoid’. These sorts of “here’s how to talk about” articles are popular but can themselves often be misleading, and the same applies to this one.

The article supposedly contains 50 “inaccurate, misleading, misused, ambiguous, and logically confused words and phrases”.

The first thing to say is that by recommending that people avoid certain words or phrases, the article is violating its own recommendations. That may seem like a trivial point but it isn’t when you’re giving advice about how to use language in scientific discussion.

It’s fine to use even plainly wrong terms to discuss how they’re used and the multiple meanings and misconceptions behind them. In fact, a lot of scientific writing does exactly this. When there are misconceptions that may cloud people’s understanding, it’s best to address them head on rather than avoid them.

Sometimes following the recommendations for ‘phrases to avoid’ would actually hinder this process.

For example, the piece recommends you avoid the term ‘autism epidemic’ as there is no good evidence that there is an actual epidemic. But this is not advice about language, it’s just an empirical point. According to this list, all the research that has used the term to discuss the actual evidence, contrary to the popular idea, should have avoided the term and presumably referred to it as ‘the concept that shall not be named’.

The article also recommends against using ‘ambiguous’ words but this recommendation would basically kill the English language as many words have multiple meanings – like the word ‘meaning’ for example – but that doesn’t mean you should avoid them.

If you’re a fan of pedantry you may want to go through the article and highlight where the authors have used other ambiguous psychological phrases (starter for 10, “memory”) and post it to some obscure corner of the internet.

Many of the recommendations also rely on you agreeing with the narrow definition and limits of use that the authors premise their argument on. Do you agree that “antidepressant medication” means that the medication has a selective and specific effect on depression and no other conditions – as the authors suggest? Or do you think this just describes a property of the medication? This is exactly how medication description works throughout medicine. Aspirin is an analgesic medication and an anti-inflammatory medication, as well as having other properties. No banning needed here.

And in fact, this sort of naming is just a property of language. If you talk about an ‘off-road vehicle’, and someone pipes up to tell you “actually, off-road vehicles can also go on-road so I recommend you avoid that description” I recommend you ignore them.

The same applies to many of the definitions in this list. The ‘chemical imbalance’ theory of depression is not empirically supported, so don’t claim it is, but feel free to use the phrase if you want to discuss this misconception. Some conditions genuinely do involve a chemical imbalance though – like the accumulation of copper in Wilson’s disease – so you can use the phrase accurately in this case, being aware of how it’s misused in other contexts. Don’t avoid it, just use it clearly.

With ‘Lie detector test’, no accurate test has ever been devised to detect lies. But you may be writing about research which is trying to develop one, or research that has tested the idea. ‘No difference between groups’ is fine if there is genuinely no difference in your measure between the groups (i.e. they both score exactly the same).

Some of the recommendations are essentially based on the premise that you ‘shouldn’t use the term except for how it was first defined or defined where we think is the authoritative source’. This is just daft advice. Terms evolve over time. Definitions shift and change. The article recommends against using ‘Fetish’ except for in its DSM-5 definition, despite the fact this is different to how it’s used commonly and how it’s widely used in other academic literature. ‘Splitting’ is widely used in a form to mean ‘team splitting’ which the article says is ‘wrong’. It isn’t wrong – the term has just evolved.

I think philosophers would be surprised to hear ‘reductionism’ is a term to be avoided – given the massive literature on reductionism. Similarly, sociologists might be a bit baffled by ‘medical model’ being a banned phrase, given the debates over it and, unsurprisingly, its meaning.

Some of the advice is just plain wrong. Don’t use “Prevalence of trait X”, says the article, because apparently prevalence only applies to things that are either present or absent and “not dimensionally distributed in the population, such as personality traits and intelligence”. But many traits are defined by cut-off scores along dimensionally defined constructs, making them categorical. If you couldn’t talk about prevalence in this way, we’d be unable to talk about the prevalence of intellectual disability (widely defined as involving an IQ of less than 70) or dementia – which is diagnosed by a cut-off score on dimensionally varying neuropsychological test performance.
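To make the intellectual disability example concrete (assuming the textbook convention that IQ is standardised to a mean of 100 and a standard deviation of 15): a cut-off of IQ below 70 sits two standard deviations below the mean, which defines a prevalence of about 2.3%.

```python
from scipy.stats import norm

# IQ is standardised to mean 100, sd 15; intellectual disability is
# widely defined by the cut-off IQ < 70, two sds below the mean
prevalence = norm.cdf(70, loc=100, scale=15)
print(f"{prevalence:.1%}")  # ~2.3%
```

A prevalence defined by a cut-off on a dimensional score, exactly the thing the article says you shouldn’t talk about.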

Some of the recommended terms to avoid are probably best avoided in most contexts (“hard-wired”, “love molecule”) and some are inherently self-contradictory (“Observable symptom”, “Hierarchical stepwise regression”) but again, use them if you want to discuss how they’re used.

I have to say, the piece reminds me of Steven Pinker’s criticism of ‘language mavens’ who have come up with rules for their particular version of English which they decide others must follow.

To be honest, I think the Frontiers in Psychology article is well worth reading. It’s a great guide to how some concepts are used in different ways, but it’s not good advice on what to avoid.

The best advice is probably: communicate clearly, bearing in mind that terms and concepts can have multiple meanings and your audience may not be aware of which you want to communicate, so make an effort to clarify where needed.
 

Link to Frontiers in Psychology article.