Drug control through fantasy neuroscience

I’ve got an article in today’s Observer about the disastrous Psychoactive Substances Bill, a proposed law designed to outlaw all psychoactive substances based on a fantasy land version of neuroscience.

“The bottom line is, the only way of knowing whether a mystery substance alters the mind is to take it. You simply can’t tell by chemical tests, because there is no direct mapping between molecular structure and mental experience. If you could solve the problem of working out whether a substance would affect the conscious mind purely from its chemistry, you would have done Nobel prize-winning work on the problem of consciousness. A second-rank approach is just to see whether a new substance is similar to a known family of mind-altering drugs, but even here there are no guarantees. A slight tweak can make a similar drug completely inactive and about as much fun as Theresa May at a techno night.”
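As a toy illustration of that ‘second-rank’ similarity approach, here’s a minimal sketch in Python, assuming the open-source RDKit chemistry library: it compares molecular fingerprints with a Tanimoto score. The example molecules are illustrative only and, as the article argues, a high similarity score guarantees nothing about mental effects.

```python
# Toy structure-similarity screen (a sketch, assuming RDKit is installed).
# High fingerprint similarity suggests, but never guarantees, similar activity.
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.DataStructs import TanimotoSimilarity

smiles = {
    'amphetamine': 'CC(N)Cc1ccccc1',
    'methamphetamine': 'CNC(C)Cc1ccccc1',
    'caffeine': 'Cn1cnc2c1c(=O)n(C)c(=O)n2C',
}

# Morgan (circular) fingerprints encode local structure as fixed-length bit vectors.
fps = {name: AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
       for name, s in smiles.items()}

# Tanimoto similarity: 1.0 means identical fingerprints, 0.0 means nothing shared.
print(TanimotoSimilarity(fps['amphetamine'], fps['methamphetamine']))  # structurally close
print(TanimotoSimilarity(fps['amphetamine'], fps['caffeine']))         # structurally distant
```

The ‘slight tweak’ problem is exactly what this kind of screen misses: two molecules can score as near-identical while one is potent and the other inert.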

Although I talk about the scientific problems of the Psychoactive Substances Bill, the whole process has been a farce.

From the minister in charge clearly not understanding his own legislation to the Government having to reassure churches that incense won’t be banned.

It’s been criticised by everyone from the Royal Society of Chemistry to traditional Tory stalwarts like The Spectator.

The Medical Research Council have expressed concerns that it could “inhibit worthwhile research and/or potential new therapeutics”.

Just as the rest of the world is turning away from the failed ‘war on drugs’ approach to drug legislation, the UK has decided to make up its own scientific impossibilities to support it.

Normally, scientific impossibilities would be the death knell for proposed regulation, but when it comes to drug laws I have long since stopped believing that scientific incompetence is any barrier to enacting legislation.
 

Link to article ‘Theresa May’s futile war on psychoactive drugs’

A temporary blindness during a wrongful conviction

I’m just reading Clinical Psychology in Britain: Historical Perspectives, which is a wonderful book if you are a clinical psychologist but probably about as exciting to non-clinical psychologists as you might expect. However, it does contain a few gems of wider interest.

This is a remarkable story from the chapter on the history of forensic clinical psychology, which concerns the case of Barry George during the original trial that wrongfully convicted him of the murder of television journalist Jill Dando.

On 26 April 1999, Jill Dando, the presenter of BBC programme Crimewatch, was shot dead outside her home in Fulham, London. On 2 July 2001 Barry George, who lived nearby, was convicted of her murder. Prior to the trial, three defence experts, Gisli Gudjonsson, Susan Young and Michael Kopelman, had reported that Mr George’s fitness to stand trial was contingent on his receiving clinical psychological support in court throughout the trial, which lasted from 23 April to 2 July 2001.

Mr George had a complex presentation, including a long history of primary generalised epilepsy (first identified at age two or three), severely abnormal EEG, intellectual deterioration, significant cognitive and executive deficits, rigid and obsessive personality structure, hypochondriacal preoccupations, and an extreme reaction to stress in the form of anxiety and panic attacks, which increased the frequency of absence epileptic seizures. The court appointed Susan Young, a forensic clinical psychologist, who initially sat in the dock with Mr George and provided him with the required assistance. On 26 April 2001, on the fourth day of the legal arguments and prior to swearing in the jury, Mr George turned to Susan Young and declared, ‘I can’t see’. Prior to this Mr George had been observed having difficulties concentrating on the legal arguments and he claimed to be experiencing petit mal epileptic seizures in the dock.

The trial before the jury was due to commence on 2 May, but the court determined that the trial could only proceed if Mr George’s eyesight could be restored. On the morning of 1 May, all three defence experts were asked to meet Mr George and try to restore his eyesight by 2pm (when the court commenced that day). Michael Kopelman conducted a medical examination and informed Mr George that there was no physical explanation for his blindness. All attempts to persuade Mr George that it was in his interest to regain his eyesight proved fruitless; he simply kept saying ‘I can’t see’.

At 12.30pm Gisli Gudjonsson, who was trained in hypnosis techniques, suggested that hypnosis might prove successful in bringing back his sight. Mr George agreed to this approach. After an initial induction to the process, Mr George was asked to imagine that he was being taken through a tunnel, accompanied by suggestions that his eyesight would gradually return during the journey and improve further during the lunchbreak (i.e. posthypnotic suggestion). After being brought out of the hypnosis, Mr George said he could see but his eyesight was blurred. He was reassured that it would continue to improve, and by 2.00pm his eyesight had fully recovered. After the final legal arguments that afternoon, the trial commenced before a jury.

The defence experts construed Mr George’s blindness as psychogenic in origin, caused by his inability to cope with the stress generated by the legal arguments (i.e. putting a physical barrier between himself and the court), and unlocked by the process of hypnosis. This was not the first time Mr George had presented with psychogenic symptoms: he had presented with functional aphonia (i.e. nonorganic loss of speech) following a stressful environmental event in 1994. Psychogenic blindness and psychogenic aphonia are both forms of ‘conversion disorder’, often caused by stress that manifests itself as physical symptoms.

Gisli Gudjonsson was originally a policeman in his native Iceland but became interested in the psychological aspects of the crimes he was investigating, moved to the UK to study psychology, and has been massively influential in the development of forensic psychology.

He has been involved in some of the most high-profile cases in the country and, TV producers, is the likely subject of your next Nordic detective drama.
 

Link to details of Clinical Psychology in Britain: Historical Perspectives.

Spike activity 04-12-2015

Quick links from the past week in mind and brain news:

Sleep Paralysis’ Demons: Influenced by Culture and Fed by Our Fears. Interesting piece at Brain Decoder.

The Telegraph has an excellent piece on artist Alice Evans, her work and her experience of schizophrenia.

What we can learn about the latest epidemic of opioid drug abuse from the opium wave of 100 years ago. Good piece in the New England Journal of Medicine.

Aeon has a good piece on the possibilities of stem cell therapy for fixing neurodegeneration in dementia.

Beard-envy, Freud and the gentleman’s excuse-me. Amusing look at facial furniture by neuroscientist Sophie Scott in Standard Issue.

Neuroskeptic has a fascinating piece on whether bilingual people have a cognitive advantage.

Felton et al. ranked the relative hotness quotients of professors in 36 different fields. The Monkey Cage has the data.

The New Yorker has a typically brilliant piece from Rachel Aviv on war, refugees and mental health. One of the best writers on mental health anywhere.

Was the counterculture’s favourite psychiatrist a dangerous renegade or a true visionary? The Independent has an extended piece on R.D. Laing.

TechCrunch has an excellent piece on decision science – an increasingly important area in cognitive science.

Neuroimaging in 20 minutes

Neuroscientist Matt Wall did a fascinating talk on all things neuroimaging at a recent TEDx Vienna event. It’s a gently funny, engrossing talk that both introduces brain imaging and discusses some of the cutting-edge developments.

He particularly talks about some of the recent fMRI ‘mind reading’ studies – which are more complex, limited, and interesting than many people assume.
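For readers wondering what ‘mind reading’ means in practice, here’s a minimal, hypothetical sketch of the multivariate pattern analysis such studies rely on, with synthetic data standing in for real voxel measurements: a classifier is trained to distinguish the activity patterns evoked by two stimulus categories and tested on held-out trials.

```python
# Toy fMRI 'decoding' example with synthetic data (not a real analysis pipeline).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500

# Each trial is labelled with one of two stimulus categories (e.g. faces vs. houses).
labels = rng.integers(0, 2, size=n_trials)

# Voxel patterns: mostly noise, with a weak category signal in the first 20 voxels.
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[:, :20] += 0.5 * labels[:, np.newaxis]

# Decode the category from the pattern, with 5-fold cross-validation.
scores = cross_val_score(LinearSVC(), patterns, labels, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

Above-chance accuracy on held-out trials is what gets reported as ‘mind reading’: a statistical separation of coarse categories, a long way from reading thoughts.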

Recommended
 

Link to Matt Wall’s TEDx Vienna talk on neuroimaging.

Spike activity 20-11-2015

Quick links from the past week in mind and brain news:

Wired has a good brief piece on the history of biodigital brain implants.

Why are conspiracy theories so attractive? Good discussion on the Science Weekly podcast.

The Wilson Quarterly has a piece on the mystery behind Japan’s high child suicide rate.

The Dream Life of Driverless Cars. Wonderful piece in The New York Times. Don’t miss the video.

The New Yorker has an extended profile on the people who run the legendary Erowid website on psychedelic drugs.

Allen Institute scientists identify human brain’s most common genetic patterns. Story in Geekwire.

BoingBoing covers a fascinating game where you play a blind girl: the game world is dynamically constructed through other senses and memory, and shifts with new sensory information.

Excellent article on the real science behind the hype of neuroplasticity in Mosaic Science. Not to be missed.

Spike activity 13-11-2015

Quick links from the past week in mind and brain news:

The Weak Science Behind the Wrongly Named Moral Molecule. The Atlantic has some home truths about oxytocin.

Neurophilosophy reports on some half-billion-year-old brains found preserved in fool’s gold.

An Illuminated, 5,000-Pound Neuron Sculpture Is Coming to Boston. Boston magazine has some pictures.

Guardian Science Weekly podcast has neuroscientist David Eagleman discussing his new book.

A neurologist frustrated by the obstacles to his work on brain-machine interfaces paid a surgeon in Central America $25,000 to implant electrodes into his brain. MIT Tech Review reports.

Business Insider reports on Google’s troubled robotics division. It’s called Replicant, so I’m guessing incept dates may be a point of contention.

The real history of the ‘safe space’

There’s much debate in the media about a culture of demanding ‘safe spaces’ on university campuses in the US, a culture which has been accused of restricting free speech by defining contrary opinions as harmful.

The history of safe spaces is an interesting one and a recent article in Fusion cited the concept as originating in the feminist and gay liberation movements of the 1960s.

But the concept of the ‘safe space’ didn’t start with these movements, it started in a much more unlikely place – corporate America – largely thanks to the work of psychologist Kurt Lewin.

Like so many great psychologists of the early 20th Century, Lewin was a Jewish academic who left Europe after the rise of Nazism and moved to the United States.

Although originally a behaviourist, he became deeply involved in social psychology at the level of small group interactions and eventually became director of the Center for Group Dynamics at MIT.

Lewin’s work was massively influential and lots of our everyday phrases come from his ideas. The fact we talk about ‘social dynamics’ at all is due to him, and the fact we give ‘feedback’ to our colleagues is because Lewin took the term from engineering and applied it to social situations.

In the late 1940s, Lewin was asked to help develop leadership training for corporate bosses. Out of this work came the foundation of the National Training Laboratories and the invention of sensitivity training: a form of group discussion in which members could give honest feedback to each other, allowing people to become aware of the unhelpful assumptions, implicit biases, and behaviours that were holding them back as effective leaders.

Lewin drew on ideas from group psychotherapy that had been around for years but formalised them into a specific and brief focused group activity.

One of the ideas behind sensitivity training was that honesty and change would only occur if people could be frank and challenge others in an environment of psychological safety. In other words, without judgement.

Practically, this means there is an explicit rule that everyone agrees to at the start of the group. A ‘safe space’ is created, confidential and free of judgement, precisely to allow people to mention concerns without fear of being condemned for them, on the understanding that they’re hoping to change.

It could be anything related to being an effective leader but, if we’re thinking about race, participants might discuss how, even though they try to be non-racist, they tend to feel fearful when they see a group of black youths, or how they often think white people are stuck up, and other group members, perhaps those affected by these fears, could give alternative angles.

The use of sensitivity groups began to gain currency in corporate America and the idea was taken up by psychologists such as the humanistic therapist Carl Rogers who, by the 1960s, had developed the idea into encounter groups, which were aimed more at self-actualisation and social change, in line with the spirit of the times, but based on the same ‘safe space’ environment. As you can imagine, they were popular in California.

It’s worth saying that although the ideal was non-judgement, the reality could be a fairly rocky emotional experience, as described by a famous 1971 study on ‘encounter group casualties’.

From here, the idea of safe space was taken up by feminist and gay liberation groups, but with a slightly different slant, in that sexist or homophobic behaviour was banned by mutual agreement but individuals could be pulled up if it occurred, with the understanding that people would make an honest attempt to recognise it and change.

And finally we get to the recent campus movements, where the safe space has become a public political act. Rather than individuals opting in, it is championed or imposed (depending on which side you take) as something that should define acceptable public behaviour.

In other words, creating a safe space is considered to be a social responsibility and you can opt out, but only by leaving.

Extremes of self-experimentation with brain electrodes

MIT Technology Review has a jaw-dropping article about brain-computer interface researcher Phil Kennedy. In the face of diminishing funding and increasing regulation he “paid a surgeon in Central America $25,000 to implant electrodes into his brain in order to establish a connection between his motor cortex and a computer”.

The move was both ethically dubious and interesting, and the article discusses what led Kennedy to this rather drastic decision:

Kennedy’s scientific aim has been to build a speech decoder—software that can translate the neuronal signals produced by imagined speech into words coming out of a speech synthesizer. But this work, carried out by his small Georgia company Neural Signals, had stalled, Kennedy says. He could no longer find research subjects, had little funding, and had lost the support of the U.S. Food and Drug Administration.

That is why in June 2014, he found himself sitting in a distant hospital contemplating the image of his own shaved scalp in a mirror. “This whole research effort of 29 years so far was going to die if I didn’t do something,” he says. “I didn’t want it to die on the vine. That is why I took the risk.”

 

Link to MIT Tech Review article.

A medieval attitude to suicide

I had always thought that suicide was made illegal in medieval times due to religious disapproval, and that suicidal people were only freed from the risk of prosecution by the 1961 Suicide Act.

It turns out the history is a little more nuanced, as noted in this 1904 article from the Columbia Law Review entitled “Is Suicide Murder?” that explores the rather convoluted legal approach to suicide in centuries past.

In the UK, the legal status of suicide was first mentioned in a landmark 13th Century legal document attributed to Henry de Bracton.

But contrary to popular belief about medieval attitudes, suicide by ‘insane’ people was not considered a crime and was entirely blame free. Suicide by people who were motivated by “weariness of life or impatience of pain” received only a light punishment (their goods were forfeited but their family could still inherit their lands).

The most serious punishment of forfeiting everything to the Crown was restricted to those who were thought to have killed themselves “without any cause, through anger or ill will, as when he wished to hurt another”.

There are some examples of exactly these sorts of considerations in a British Journal of Psychiatry article that looks at these cases in the Middle Ages. This is a 1292 case from Hereford:

William la Emeyse of this vill, suffering from an acute fever which took away his senses, got up at night, entered the water of Kentford and drowned himself. The jury was asked if he did this feloniously and said no, he did it through his illness. The verdict was an accident.

We tend to think that the medieval world had a very simplistic view of the experiences and behaviour that we might now classify as mental illness but this often wasn’t the case.

Even the common assumption that all these experiences were put down to ‘demonic possession’ turns out to be a myth, as possession was considered to be a possible but rare explanation and was only accepted after psychological and physical disturbances were ruled out.

Spike activity 06-11-2015

Quick links from the past week in mind and brain news:

If you only read one thing this week, make it the excellent critical piece on the concept of an ‘autism spectrum’ in The Atlantic.

Nature reports that the controversial big bucks Human Brain Project has secured another three years’ funding. Giant all-knowing neurotron brain simulation coming “any day now”.

The psychological power of narrative. Good piece in Nautilus.

There’s an excellent in-depth piece on London’s BabyLab – a research centre for baby cognitive neuroscience – in Nature.

New Scientist has a fascinating piece on how a leading theory of consciousness has been rocked by an oddball study.

Human language may be shaped by climate and terrain. Fascinating study covered in the newsy bit of Science.

Brain Flapping has a great piece on Robin Williams and Lewy-body dementia.

When it comes to the brain, blood also seems to be more than a travelling storyteller. In some cases, the blood may be writing the script. Interesting piece in Science News.

The Atlantic has a wonderful piece on why most languages have so few words for smells and why two hunter-gatherer groups have lots.

What is your mind doing during resting state fMRI scans? Interesting study covered by Neuroskeptic.

What do children know of their own mortality?

We are born immortal, as far as we know at the time, and slowly we learn that we are going to die. For most children, death is not fully understood until after the first decade of life – a remarkable amount of time to comprehend the most basic truth of our existence.

There are poetic ways of making sense of this difficulty: perhaps an understanding of our limited time on Earth is too difficult for the fragile infant mind to handle, maybe it’s evolution’s way of instilling us with hope; but these seductive theories tend to forget that death is more complex than we often assume.

To completely understand the significance of death, researchers – mortality psychologists if you will – have identified four primary concepts we need to grasp: universality (all living things die), irreversibility (once dead, dead forever), nonfunctionality (all functions of the body stop) and causality (what causes death).

In a recent review of studies on children’s understanding of death, medics Alan Bates and Julia Kearney describe how:

Partial understanding of universality, irreversibility, and nonfunctionality usually develops between the ages of 5 and 7 years, but a more complete understanding of death concepts, including causality, is not generally seen until around age 10. Prior to understanding nonfunctionality, children may have concrete questions such as how a dead person is going to breathe underground. Less frequently studied is the concept of personal mortality, which most children have some understanding of by age 6 with more complete understanding around age 8–11.

But this is a general guide, rather than a life plan. We know that children vary a great deal in their understanding of death and they tend to acquire these concepts at different times.

Although interesting from a developmental perspective, these studies also have clear practical implications.

Most children will know someone who dies and helping children deal with these situations often involves explaining death and dying in a way they can understand while addressing any frightening misconceptions they might have. No, your grandparent hasn’t abandoned you. Don’t worry, they won’t get lonely.

But there is a starker situation which brings the emerging ability to understand mortality into very sharp relief. Children who are themselves dying.

The understanding of death by terminally ill children has been studied by a small but dedicated research community, largely motivated by the needs of child cancer services.

One of the most remarkable studies, perhaps one of the most remarkable in the whole of palliative care, was completed by the anthropologist Myra Bluebond-Langner and published as the book The Private Worlds of Dying Children.

Bluebond-Langner spent the mid-1970s in an American child cancer ward and began to look at what the children knew about their own terminal prognosis, how this knowledge affected social interactions, and how social interactions were conducted to manage public awareness of this knowledge.

Her findings were nothing short of stunning: although adults, parents, and medical professionals, regularly talked in a way to deliberately obscure knowledge of the child’s forthcoming death, children often knew they were dying. But despite knowing they were dying, children often talked in a way to avoid revealing their awareness of this fact to the adults around them.

Bluebond-Langner describes how this mutual pretence allowed everyone to support each other through their typical roles and interactions despite knowing that they were redundant. Adults could ask children what they wanted for Christmas, knowing that they would never see it. Children could discuss what they wanted to be when they grew up, knowing that they would never get the chance. Those same conversations, through which compassion flows in everyday life, could continue.

This form of emotional support was built on fragile foundations, however, as it depended on actively ignoring the inevitable. When cracks sometimes appeared during social situations they had to be quickly and painfully papered over.

When children’s hospices first began to appear, one of their innovations was to provide a space where emotional support did not depend on mutual pretence.

Instead, dying can be discussed with children, alongside their families, in a way that makes sense to them. Studying what children understand about death is a way of helping this take place. It is knowledge in the service of compassion.

Jeb Bush has misthought

According to the Washington Examiner, Republican presidential candidate Jeb Bush has said that doing a psychology major will mean “you’re going to be working at Chick-fil-A” and has encouraged students to choose college degrees with better employment prospects.

If you’re not American, Chick-fil-A turns out to be a fast food restaurant, presumably of dubious quality.

Bush continued:

“The number one degree program for students in this country … is psychology,” Bush said. “I don’t think we should dictate majors. But I just don’t think people are getting jobs as psych majors.”

Firstly, he’s wrong about psychology being the most popular degree in the US. The official statistics show it’s actually business-related subjects that are the most studied, with psychology coming in fifth.

He’s also wrong about the employment prospects of psych majors. I initially mused on Twitter as to why US psych majors have such poor employment prospects when, in the UK, psychology graduates are typically the most likely to be employed.

But I was wrong about US job prospects for psych majors, because I was misled by lots of US media articles suggesting exactly this.

There is actually decent research on this, and it says something quite different. Georgetown University’s Center on Education and the Workforce published reports in 2010 and 2013, both called ‘Hard Times: College Majors, Unemployment and Earnings’, which looked at exactly this issue.

They found on both occasions that doing a psych major gives you employment prospects that are about mid-table in comparison to other degrees.

Below is the graph from the 2013 report.

Essentially psychology is slightly below average in terms of employability: tenth out of sixteen, but still a college major where more than 9 out of 10 recent graduates (91.2%) find jobs.

If you look at median income, the picture is much the same: somewhat below average but clearly not in the Chick-fil-A range.

What’s not factored into these reports, however, is gender difference. According to the statistics, almost 80% of psychology degrees in the US are earned by women.

Women earn less than men on average, are more likely to take voluntary career breaks, are more likely to suspend work to have children, and so on. So it’s worth remembering that these figures don’t control for gender effects.

So when Bush says “I just don’t think people are getting jobs as psych majors” it seems he misthought.

Specifically, it looks like his thinking was biased by the availability heuristic which, if you know about it, can help you avoid embarrassing errors when making factual claims.

I’ll leave that irony for Jeb Bush to ponder, along with Allie Brandenburger, Kaitlin Zurdowsky and Josh Venable – three psychology majors he employed as senior members of his campaign team.

Spike activity 23-10-2015

Quick links from the past week in mind and brain news:

MP tricked into condemning a fake drug called ‘Cake’ on Brass Eye has been put in charge of scrutinising drugs policy in the UK Parliament, reports The Independent. What starts as satire is so often reborn as policy.

Narratively takes a look at the human stories behind the alarming rates of prescription opioid addiction in Appalachia.

Mental health research makes good economic sense, argues The Economist.

The American Civil Liberties Union is suing the psychologists who developed the CIA torture programme.

Before 6 months, babies don’t relate touch to an event outside of themselves. We’re calling this “tactile solipsism”. Interesting Brain Decoder piece.

Mashable reports that Sesame Street debuts its first autistic Muppet. And try watching that What My Family Wants You to Know About Autism video without welling up.

‘Mental patient’ Halloween costumes: a scientific guide to dressing accurately. Important evidence-based Halloween advice on Brain Flapping.

The Scientist looks back at Camillo Golgi’s first drawings of neurons from the 1870s.

A social vanishing

A fantastic eight-part podcast series called Missing has just concluded and it’s a brilliant look at the psychology and forensic science of missing people.

It’s been put together by the novelist Tim Weaver who is renowned for his crime thrillers that feature missing persons investigator David Raker.

He uses the series to investigate the phenomenon of missing people and the result is a wonderfully engrossing, diverse documentary series that talks to everyone from forensic psychiatrists, to homicide investigators, to commercial companies that help you disappear without trace.

Missing people, by their absence, turn out to reveal a lot about the tension between social structures and individual behaviour in modern society. Highly recommended.
 

Link to Missing podcast series with iTunes / direct download links.

From school shootings to everyday counter-terrorism

Mother Jones has a fascinating article on how America is attempting to stop school shootings by using community detection and behavioural intervention programmes for people identified as potential killers – before a crime has ever been committed.

It is a gripping read in itself but it is also interesting because it describes an approach that has now been rolled out to millions as part of community counter-terrorism strategies across the world, one which puts a psychological model of mass-violence perpetration at its core.

The Mother Jones article describes a threat assessment model for school shootings that sits at an evolutionary mid-point: first developed to protect the US President, then applied to preventing school shootings, and now deployed in mass domestic counter-terrorism programmes.

You can see exactly this in the UK Government’s Prevent programme (part of the wider CONTEST counter-terrorism strategy). Many people will recognise this in the UK because if you work for a public body, like a school or the health service, you will have been trained in it.

The idea behind Prevent is that workers are trained to be alert to signs of radicalisation and extremism and can pass on potential cases to a multi-disciplinary panel, made up of social workers, mental health specialists, staff members and the police, who analyse the case in more detail and get more information as it’s needed.

If they decide the person is vulnerable to becoming dangerously radicalised or violent, they refer the case on to the Channel programme, which aims to manage the risk through a combination of support from social services and heightened monitoring by security services.

A central concept is that a person may be made vulnerable to extremism by unmet needs (poor mental health, housing, lack of opportunity, poor social support, spiritual emptiness, social conflict), which may convert into real-world violence when mixed with certain ideologies or beliefs about the world that they are recruited into, or persuaded by. Violence prevention therefore includes both a needs-based and a threat-based approach.

This approach came from work by the US Secret Service in the 1990s, who were mainly concerned with protecting key government officials, and it was a radical departure from the idea that threat management was about physical security.

They began to try and understand why people might want to attempt to kill important officials and worked on figuring out how to identify risks and intervene before violence was ever used.

The Mother Jones article also mentions the LAPD Threat Management Unit (LAPDTMU) which was formed to deal with cases of violent stalking of celebrities, and the FBI had been developing a data-driven approach since the National Center for the Analysis of Violent Crime (NCAVC) launched in 1985.

By the time the Secret Service founded the National Threat Assessment Center in 1998, the approach was well established. When the Columbine massacre occurred the following year, the same thinking was applied to school shootings.

After Columbine, reports were produced by both the FBI (pdf) and the Secret Service (pdf) which outline some of the evolution of this approach and how it applies to preventing school shootings. The Mother Jones article illustrates what this looks like, more than 15 years later, as shootings are now more common and often directly inspired by Columbine or other more recent attacks.

It’s harder to find anything written on the formal design of the UK Government’s Prevent and Channel programmes but the approach is clearly taken from the work in the United States.

The difference is that it has been deployed on a mass scale. Literally, millions of public workers have been trained in Prevent, and Channel programmes exist all over the country to receive and evaluate referrals.

It may be one of the largest psychological interventions ever deployed.
 

Link to Mother Jones article on preventing the next mass shooting.

The echoes of the Prozac revolution

The Lancet Psychiatry has a fantastic article giving a much needed cultural retrospective on the wave of antidepressants like Prozac – drugs which first made us worry that ‘cosmetic pharmacology’ would mean we were no longer our true selves, before the dawning realisation that they are unreliably useful but side-effect-ridden tools that can help manage difficult moods.

From their first appearance in the late 1980s until recently, SSRIs were an A-list topic of debate in the culture wars, and the rhetoric, whether pro or con, was red hot. Antidepressants were going to heal, or destroy, the world as we knew it.

Those discussions now feel dated. While antidepressants themselves are here to stay, they just don’t pulse with meaning the way they once did. Like the automobile or the telephone before them, SSRIs are a one-time miracle technology that have since become a familiar—even frumpy—part of the furniture of modern life.

At some point recently, they’ve slid into the final act of Mickey Smith’s wonder-drug drama. And in the aftermath of that change, many of the things that people used to say about them have come to sound completely absurd.

It’s a wonderful piece that perfectly captures the current place of antidepressants in modern society.

It’s by author Katherine Sharpe who wrote the highly acclaimed book Coming of Age on Zoloft which I haven’t read but have just ordered.
 

Link to ‘The silence of prozac’ in The Lancet Psychiatry.