Oliver Sacks has left the building

Neurologist and author Oliver Sacks has died at the age of 82.

It’s hard to fully comprehend the enormous impact of Oliver Sacks on the public’s understanding of the brain, its disorders and our diversity as humans.

Sacks wrote what he called ‘romantic science’. Not romantic in the sense of romantic love, but romantic in the sense of the romantic poets, who used narrative to describe the subtleties of human nature, often in contrast to the enlightenment values of quantification and rationalism.

In this light, romantic science would seem to be a contradiction, but Sacks used narrative and science not as opponents, but as complementary partners to illustrate new forms of human nature that many found hard to see: in people with brain injury, in alterations or differences in experience and behaviour, or in seemingly minor changes in perception that had striking implications.

Sacks was not the originator of this form of writing, nor did he claim to be. He drew his inspiration from the great neuropsychologist Alexander Luria but while Luria’s cases were known to a select group of specialists, Sacks wrote for the general public, and opened up neurology to the everyday world.

Despite Sacks’s popularity now, he had a slow start, with his first book Migraine not raising much interest either with his medical colleagues or the reading public. Not least, perhaps, because compared to his later works, it struggled to throw off some of the technical writing habits of academic medicine.

It wasn’t until his 1973 book Awakenings that he became recognised both as a remarkable writer and a remarkable neurologist, as the book recounted his experience with seemingly paralysed patients from the 1920s encephalitis lethargica epidemic and their extraordinary awakening, and gradual decline, during a period of treatment with L-DOPA.

The book was scientifically important, humanely written, but most importantly, beautiful, as he captured his relationship with the many patients who experienced both a physical and a psychological awakening after being neurologically trapped for decades.

It was first made into a now rarely seen documentary for Yorkshire Television, which was eventually picked up by Hollywood and became the 1990 film starring Robin Williams and Robert De Niro.

But it was The Man Who Mistook His Wife for a Hat that became his signature book. It was a series of case studies that wouldn’t seem particularly unusual to most neurologists, but which astounded the general public.

A sailor whose amnesia leads him to think he is constantly living in 1945, a woman who loses her ability to know where her limbs are, and a man with agnosia who, despite normal vision, can’t recognise objects and so mistakes his wife’s head for a hat.

His follow-up book An Anthropologist on Mars continued in a similar vein and made for equally gripping reading.

Not all his books were great writing, however. The Island of the Colorblind was slow and technical, while A Leg to Stand On, Sacks’s account of his own damaged leg, included conclusions about the nature of illness that were more abstract than most readers could relate to.

But his later books saw a remarkable flowering of diverse interest and mature writing. Music, imagery, hallucinations and their astounding relationship with the brain and experience were the basis of three books that showed Sacks at his best.

And slowly during these later books, we got glimpses of the man himself. He revealed in Hallucinations that he had taken hallucinogens in his younger years and that the case of medical student Stephen D in The Man Who Mistook His Wife for a Hat – who developed a remarkable sense of smell after a night on speed, cocaine, and PCP – was, in fact, an autobiographical account.

His final book, On the Move, was the most honest, as he revealed he was gay, shy, and in his younger years, devastatingly handsome but somewhat troubled. A long way from the typical portrayal of the grey-bearded, kind but eccentric neurologist.

On a personal note, I have a particular debt of thanks to Dr Sacks. When I was an uninspired psychology undergraduate, I was handed a copy of The Man Who Mistook His Wife for a Hat which immediately convinced me to become a neuropsychologist.

Years later, I went to see him talk in London following the publication of Musicophilia. I took along my original copy of The Man Who Mistook His Wife for a Hat, hoping to surprise him with the news that he was responsible for my career in brain science.

As the talk started, the host mentioned that ‘it was likely that many of us became neuroscientists because we read Oliver Sacks when we started out’. To my secret disappointment, about half the lecture hall vigorously nodded in response.

The reality is that Sacks’s role in my career was neither surprising nor particularly special. He inspired a generation of neuroscientists to see brain science as a gateway to our common humanity and humanity as central to the scientific study of the brain.
 

Link to The New York Times obituary for Oliver Sacks.

Spike activity 28-08-2015

Quick links from the past week in mind and brain news:

Vice has an excellent documentary about how skater Paul Alexander was affected by mental illness as he was turning pro.

The US Navy is working on AI that can predict pirate attacks, reports Science News. Apparently it uses Arrrrgh-tificial intelligence. I’m here all week folks.

The New York Times has a good piece on the case for teaching ignorance to help frame our understanding of science.

Yes, Men’s and Women’s Brains Do Function Differently — But The Difference is Small. Interesting piece on Science of Us.

Lots of junk reporting on the Reproducibility Project but these are some of the best we’ve not mentioned so far:
* Neuropsychologist Dorothy Bishop gives her take in The Guardian.
* The BPS Research Digest gives a good run-down of the results.

Good video interview with philosopher Patricia Churchland on neuroscience for Serious Science.

Don’t call it a comeback

The Reproducibility Project, the giant study to re-run experiments reported in three top psychology journals, has just published its results and it’s either a disaster, a triumph or both for psychology.

You can’t do better than the coverage in The Atlantic, not least as it’s written by Ed Yong, the science journalist who has been key in reporting on, and occasionally appearing in, psychology’s great replication debates.

Two important things have come out of the Reproducibility Project. The first is that psychologist, project leader and now experienced cat-herder Brian Nosek deserves some sort of medal, and his 270-odd collaborators should be given shoulder massages by grateful colleagues.

It’s been psychology’s equivalent of the Large Hadron Collider, but without the need to dig up half of Switzerland.

The second is that no-one quite knows what it means for psychology. Only 36% of the replications had statistically significant results, and 47% of the original effect sizes fell within the 95% confidence interval of the replication effect, although replication effect sizes were typically about half the size of the originals.

When looking at replication by subject area, studies on cognitive psychology were more likely to reproduce than studies from social psychology.

Is this good? Is this bad? What would be a reasonable number to expect? No one’s really sure, because there are perfectly acceptable reasons why more positive results would be published in top journals but not replicate as well, alongside lots of not so acceptable reasons.

The not-so-acceptable reasons have been well-publicised: p-hacking, publication bias and at the darker end of the spectrum, fraud.

But on the flip side, effects like regression to the mean and ‘surprisingness’ are just part of the normal routine of science.

‘Regression to the mean’ is an effect where, if the first measurement of an effect is large, it is likely to be closer to the average on subsequent measurements or replications, because an unusually large first result typically reflects some chance variation that won’t recur. This is not a psychological effect; it happens in any noisy measurement.

Imagine you record a high level of cosmic rays from an area of space during an experiment and you publish the results. These results are more likely to merit your attention and the attention of journals because they are surprising.

But subsequent experiments, even if they back up the general effect of high readings, are less likely to find such extreme recordings because, by definition, it was the statistically surprising size of the original readings that got them published in the first place.

The same may well be happening here. Top psychology journals currently specialise in surprising findings. The editors have shaped these journals by making a trade-off between surprisingness and stability of the findings, and currently the balance is tipped far more towards surprisingness. Probably unhealthily so.

This is exactly what the Reproducibility Project found. More initially surprising results were less likely to replicate.
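
If you want to convince yourself that selection on surprisingness alone can produce this pattern, here’s a minimal simulation sketch in Python. The numbers are purely illustrative, not taken from the paper: every study measures the same modest true effect, only the estimates that come out looking large get ‘published’, and each published study is then replicated once.

```python
import numpy as np

rng = np.random.default_rng(42)

n_studies = 10_000   # a large pool of hypothetical original studies
true_effect = 0.2    # every study measures the same modest true effect
se = 0.15            # sampling noise on each study's estimate

# Original studies: noisy estimates of the same underlying effect
original = rng.normal(true_effect, se, n_studies)

# 'Publication': journals keep only the surprising-looking results,
# here crudely modelled as estimates more than two standard errors above zero
published = original[original > 2 * se]

# One independent replication of each published study, same truth, same noise
replication = rng.normal(true_effect, se, published.size)

print(f"true effect:                       {true_effect:.2f}")
print(f"mean published original effect:    {published.mean():.2f}")
print(f"mean replication effect:           {replication.mean():.2f}")
print(f"replications passing the same bar: {(replication > 2 * se).mean():.0%}")
```

With these made-up numbers, the published originals average roughly twice the true effect while the replications sit close to it, and only around a quarter of replications cross the same significance bar, despite there being no p-hacking, bias or fraud anywhere in the simulation.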

But it’s an open question as to what’s the “right balance” of surprisingness to reliability for any particular journal or, indeed, field.

There’s also a question about reliability versus boundedness. Just because you don’t replicate the results of a particular experiment it doesn’t necessarily mean the originally reported effect was a false positive. It may mean the effect is sensitive to a particular context that isn’t clear yet. Working this out is basically the grunt work of science.

Some news outlets have wrongly reported that this study shows that ‘about two thirds of studies in psychology are not reliable’ but the Reproducibility Project didn’t sample widely enough across publications to be able to say this.

Similarly, it only looked at initially positive findings. You could easily imagine a ‘Reverse Reproducibility Project’ where a whole load of original studies that found no effect are replicated to see which subsequently do show an effect.

We know study bias tends to favour positive results but that doesn’t mean that all negative findings should be automatically accepted as the final answer either.

The main take home messages are that findings published in leading journals are not a good guide to invariant aspects of human nature. And stop with the journal worship. And let’s get more pre-registration on the go. Plus science is hard.

What is also clear, however, is that the folks from the Reproducibility Project deserve our thanks. And if you find one who still needs that shoulder massage, limber up your hands and make a start.
 

Link to full text of scientific paper in Science.
Link to coverage in The Atlantic.

The reproducibility of psychological science

The Reproducibility Project results have just been published in Science, a massive, collaborative, ‘Open Science’ attempt to replicate 100 psychology experiments published in leading psychology journals. The results are sure to be widely debated – the biggest result being that many published results were not replicated. There’s an article in the New York Times about the study here: Many Psychology Findings Not as Strong as Claimed, Study Says

This is a landmark in meta-science: researchers collaborating to inspect how psychological science is carried out, how reliable it is, and what that means for how we should change what we do in the future. But it is also an illustration of the process of Open Science. All the materials from the project, including the raw data and analysis code, can be downloaded from the OSF webpage. That means that if you have a question about the results, you can check it for yourself. So, by way of example, here’s a quick analysis I ran this morning: does the number of citations of a paper predict how large the effect size of its replication will be in the Reproducibility Project? Answer: not so much

[Figure: citation count of the original paper plotted against the replication effect size]

That horizontal string of dots along the bottom is replications with effect sizes close to zero despite high citation counts for the original papers (nearly all of which reported non-zero and statistically significant effects). Draw your own conclusions!
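
If you fancy running a similar check yourself, the sketch below is roughly what that analysis looks like in Python. To be clear, the file name rpp_data.csv and the column names citations_original and effect_size_replication are placeholders invented for illustration; the real spreadsheet on the OSF page uses its own labels, so substitute whichever columns hold the original paper’s citation count and the replication effect size (my actual plotting code is linked below).

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder file and column names -- swap in the labels used by the
# spreadsheet you download from the Reproducibility Project's OSF page
df = pd.read_csv("rpp_data.csv")

plt.scatter(df["citations_original"], df["effect_size_replication"], alpha=0.6)
plt.xlabel("Citations of the original paper")
plt.ylabel("Replication effect size")
plt.title("Citations vs replication effect size")
plt.tight_layout()
plt.savefig("cites_vs_effect.png", dpi=150)
```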

Link: Reproducibility OSF project page

Link: my code for making this graph (in python)

A Million Core Silicon Brain

For those of you who like to get your geek on (and rumour has it, you can be found reading this blog), the Computerphile channel just had a video interview with Steve Furber of the Human Brain Project, who talks about the custom hardware that’s going to run their neural net simulations.

Furber is better known as one of the designers of the BBC Micro and the ARM microprocessor, but has more recently been involved in the SpiNNaker project, which is the basis of the Neuromorphic Computing Platform for the Human Brain Project.

Fascinating interview with a man who clearly likes the word toroid.

Spike activity 21-08-2015

Quick links from the past week in mind and brain news:

Be wary of studies that link mental illness with creativity or high IQ. Good piece in The Guardian.

Nautilus has a piece on the lost dream journal of neuroscientist Santiago Ramon y Cajal.

Video games are tackling mental health with mixed results. Great piece in Engadget.

The Globe and Mail asks how we spot the next ‘lone wolf’ terrorist and looks at some of the latest research which has changed what people look for.

A third of young Americans say they aren’t 100% heterosexual according to a YouGov survey. 4% class themselves as ‘completely homosexual’, a further 3% as ‘predominantly homosexual’.

National Geographic reports on a study suggesting that three-quarters of handprints in ancient cave art were left by women.

Psychiatry is reinventing itself thanks to advances in biology says NIMH Chief Thomas Insel in New Scientist. Presumably a very slow reinvention that doesn’t seem to change treatment very much.

Wired report that IBM have a close-to-production neuromorphic chip. Big news.

Most people are resilient after trauma. Good piece in BBC Future.

Psychological science in intelligence service operations

I’ve got an article in today’s Observer about how British intelligence services are applying psychological science in their deception and infiltration operations.

Unfortunately, the online version has been given a headline which is both frivolous and wrong (“Britain’s ‘Twitter troops’ have ways of making you think…”). The ‘Twitter troops’ name was given to the UK Army’s ‘influence operations specialists’, the 77th Brigade, which the article does not focus on and which I mention only to note the frivolous nickname.

Actually, the piece focuses on GCHQ’s Joint Threat Research Intelligence Group, or JTRIG, whose job is to “discredit, disrupt, delay, deny, degrade, and deter” opponents, mainly through online deception operations.

Some of the Snowden leaks have specifically focused on the psychological theory and evidence base behind their operations, which is exactly what I discuss in the article.

Controversially, not only were terrorists and hostile states listed as opponents who could pose a national security threat, but also domestic criminals and activist groups. JTRIG’s work seems primarily to involve electronic communications, and can include practical measures such as hacking computers and flooding phones with junk messages. But it also attempts to influence people socially through deception, infiltration, mass persuasion and, occasionally, it seems, sexual “honeypot” stings. The Human Science Operations Cell appears to be a specialist section of JTRIG dedicated to providing psychological support for this work.

It’s a fascinating story and there’s more at the link below.
 

Link to article on psychological science in intelligence service ops.