Annette Karmiloff-Smith has left the building

The brilliant developmental neuropsychologist Annette Karmiloff-Smith has passed away, and one of the brightest lights in the psychology of children’s development has been dimmed.

She actually started her professional life as a simultaneous interpreter for the UN, and then went on to study psychology, training with Jean Piaget.

Karmiloff-Smith went into neuropsychology and started rethinking some of the assumptions about how cognition is organised in the brain, which, until then, had been based almost entirely on studies of adults with brain injury.

These studies showed that some mental abilities could be independently impaired after brain damage suggesting that there was a degree of ‘modularity’ in the organisation of cognitive functions.

But Karmiloff-Smith investigated children with developmental disorders, like autism or Williams syndrome, and showed that what seemed to be the ‘natural’ organisation of the brain in adults was actually a result of development itself – an approach she called neuroconstructivism.

In other words, developmental disorders were not ‘knocking out’ specific abilities but affecting the dynamics of neurodevelopment as the child interacted with the world.

If you want to hear more of Karmiloff-Smith’s life and work, her interview on BBC Radio 4’s The Life Scientific is well worth a listen.
 

Link to page of remembrance for Annette Karmiloff-Smith.

The hidden history of war on terror torture

The Hidden Persuaders project has interviewed neuropsychologist Tim Shallice about his opposition to the British government’s use of ‘enhanced interrogation’ in the Northern Ireland conflict of the 1970s – a practice eventually abandoned as torture.

Shallice is little known to the wider public but is one of the most important and influential neuropsychologists of his generation, having pioneered the systematic study of neurological problems as a window on typical cognitive function.

One of his first papers was not on brain injury, however; it was an article titled ‘Ulster depth interrogation techniques and their relation to sensory deprivation research’, in which he set out a cognitive basis for why the ‘five techniques’ – wall-standing, hooding, white noise, sleep deprivation, and deprivation of food and drink – amounted to torture.

Shallice traces a link between the use of these techniques and research on sensory deprivation – which was investigated both by regular scientists out of scientific curiosity and, as we later learned, by intelligence services trying to understand ‘brainwashing’.

The use of these techniques in Northern Ireland was subject to an official investigation, and Shallice and other researchers testified to the Parker Committee, which led Prime Minister Edward Heath to ban the practice.

If those techniques sound eerily familiar, it is because they formed the basis of interrogation practices at Guantanamo Bay and other notorious sites in the ‘war on terror’.

The Hidden Persuaders is a research project at Birkbeck, University of London, which is investigating the history of ‘brainwashing’. It traces the practice back to its use by the British during the colonisation of Yemen; they seem to have borrowed it from the KGB.

And if you want to read about the modern-day effects of these abusive techniques, The New York Times has just published a disturbing feature article about the long-term consequences of being tortured in Guantanamo and other ‘black sites’, following up many of the people subjected to the brutal techniques.
 

Link to Hidden Persuaders interview with Tim Shallice.
Link to NYT on long-term legacy of war on terror torture.

The cognitive science of how to study

CC Licensed image from Flickr user Moyan Brenn. Click for source.

Researchers from the Bjork Learning and Forgetting Lab at UCLA have created a fantastic video on the cognitive science of how to study.

Despite the fact that we now know loads about what makes for optimal learning, that knowledge is rarely applied by students who are trying to learn a subject or ace a test.

This is a short, clear, helpful video on exactly that.

It looks like the video is set so it can’t be embedded but you can watch it at the link below.

Happy studying.
 

Link to Pro Tips: How to Study on Vimeo.

Suzanne Corkin has left the building

Neuropsychologist Suzanne Corkin, best known for her work with the profoundly amnesic patient HM, has passed away, and The New York Times has a fitting obituary and tribute.

Although Corkin did a range of work on memory, including testing various medications to treat Alzheimer’s disease, she is in many ways synonymous with the amnesic Patient HM, later revealed to be Henry Molaison, whom she studied and worked with for most of both of their lives.

Corkin not only took a scientific interest in HM, she also ensured his well-being and appropriate care.

HM had perhaps one of the most profound amnesias reported in the scientific literature, but there is a lovely description in The New York Times obituary of how he formed an emotional memory of Corkin, even though a conscious memory wasn’t present.

But it was her relationship with H.M. that was defining. His profound deficits made their relationship anything but normal — every time she walked in the room, she had to reintroduce herself — but that repetition bred a curious bond over time.

“He thought he knew me from high school,” Dr. Corkin said in an interview with The New York Times in 2008.

 

Link to Suzanne Corkin obituary in The NYT.

Why you forget what you came for when you enter the room

Forgetting why you entered a room is called the “Doorway Effect”, and it may reveal as much about the strengths of human memory as it does the weaknesses, says psychologist Tom Stafford.

We’ve all done it. Run upstairs to get your keys, but forget that it is them you’re looking for once you get to the bedroom. Open the fridge door and reach for the middle shelf, only to realise that you can’t remember why you opened the fridge in the first place. Or wait for a moment to interrupt a friend, only to find that the burning issue that made you want to interrupt has now vanished from your mind just as you come to speak: “What did I want to say again?” you ask a confused audience, who all think “how should we know?!”

Although these errors can be embarrassing, they are also common. It’s known as the “Doorway Effect”, and it reveals some important features of how our minds are organised. Understanding this might help us appreciate those temporary moments of forgetfulness as more than just an annoyance (although they will still be annoying).

These features of our minds are perhaps best illustrated by a story about a woman who meets three builders on their lunch break. “What are you doing today?” she asks the first. “I’m putting brick after sodding brick on top of another,” sighs the first. “What are you doing today?” she asks the second. “I’m building a wall,” is the simple reply. But the third builder swells with pride when asked, and replies: “I’m building a cathedral!”

Maybe you heard that story as encouragement to think of the big picture, but to the psychologist in you the important moral is that any action has to be thought of at multiple levels if you are going to carry it out successfully. The third builder might have the most inspiring view of their day-job, but nobody can build a cathedral without figuring out how to successfully put one brick on top of another like the first builder.

As we move through our days our attention shifts between these levels – from our goals and ambitions, to plans and strategies, and to the lowest levels, our concrete actions. When things are going well, often in familiar situations, we keep our attention on what we want, and how we do it seems to take care of itself. If you’re a skilled driver then you manage the gears, indicators and wheel automatically, and your attention is probably caught up in the less routine business of navigating the traffic or talking to your passengers. When things are less routine we have to shift our attention to the details of what we’re doing, taking our minds off the bigger picture for a moment. Hence the pause in conversation as the driver gets to a tricky junction, or the engine starts to make a funny sound.

The way our attention moves up and down the hierarchy of action is what allows us to carry out complex behaviours, stitching together a coherent plan over multiple moments, in multiple places or requiring multiple actions.

The Doorway Effect occurs when our attention moves between levels, and it reflects the reliance of our memories – even memories for what we were about to do – on the environment we’re in.

Imagine that we’re going upstairs to get our keys and forget that it is the keys we came for as soon as we enter the bedroom. Psychologically, what has happened is that the plan (“Keys!”) has been forgotten even in the middle of implementing a necessary part of the strategy (“Go to bedroom!”). Probably the plan itself is part of a larger plan (“Get ready to leave the house!”), which is part of plans on a wider and wider scale (“Go to work!”, “Keep my job!”, “Be a productive and responsible citizen”, or whatever). Each scale requires attention at some point. Somewhere in navigating this complex hierarchy the need for keys popped into mind, and like a circus performer setting plates spinning on poles, your attention focussed on it long enough to construct a plan, but then moved on to the next plate (this time, either walking to the bedroom, or wondering who left their clothes on the stairs again, or what you’re going to do when you get to work or one of a million other things that it takes to build a life).

And sometimes spinning plates fall. Our memories, even for our goals, are embedded in webs of associations. That can be the physical environment in which we form them, which is why revisiting our childhood home can bring back a flood of previously forgotten memories, or it can be the mental environment – the set of things we were just thinking about when that thing popped into mind.

The Doorway Effect occurs because we change both the physical and mental environments, moving to a different room and thinking about different things. That hastily thought up goal, which was probably only one plate among the many we’re trying to spin, gets forgotten when the context changes.
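To make that concrete, here is a toy sketch in Python – my own illustration rather than anything from the column, with all the goals and contexts made up – in which an intention is stored along with the context in which it was formed and only comes back to mind when enough of that context is reinstated.

```python
# Toy model of context-dependent remembering (illustrative only).
# An intention is stored with the physical and mental context that was
# active when it was formed.
intentions = [
    {"goal": "pick up keys",
     "context": {"room": "hallway", "thinking_about": "leaving the house"}},
]

def recall(current_context, memory, threshold=2):
    """Return goals whose stored context overlaps enough with the
    current context; too little overlap and nothing comes to mind."""
    recalled = []
    for item in memory:
        overlap = sum(
            1 for key, value in item["context"].items()
            if current_context.get(key) == value
        )
        if overlap >= threshold:
            recalled.append(item["goal"])
    return recalled

# Still in the hallway, still thinking about leaving: the goal is available.
print(recall({"room": "hallway", "thinking_about": "leaving the house"},
             intentions))   # ['pick up keys']

# Through the doorway, now wondering about the clothes on the stairs:
# both the physical and the mental context have changed, so nothing is recalled.
print(recall({"room": "bedroom", "thinking_about": "clothes on the stairs"},
             intentions))   # []
```

In this little cartoon, walking into the bedroom while thinking about something else drops the overlap below threshold, which is roughly the spirit of the Doorway Effect described above.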

It’s a window into how we manage to coordinate complex actions, matching plans with actions in a way that – most of the time – allows us to put the right bricks in the right place to build the cathedral of our lives.

This is my BBC Future column from Tuesday. The original is here.

Where Are We Now? – David Bowie and Psychosis

The mercurial David Bowie has left the capsule and the world is a poorer place. His circuit is dead, and there definitely is something wrong, at least for those of us still on Planet Earth.

There have been many tributes, noting Bowie’s impact on music, art and cinema, and the extent of his eclectic tastes. But one significant part of Bowie’s life has barely merited a mention – his experiences with psychosis – despite the fact that it had a major impact on his life and featured in some of his most important work.

Bowie was familiar with psychosis from an early age, not least because it affected his close family. Two of his aunts were reportedly diagnosed with schizophrenia and a third was confined to an asylum.

One of Bowie’s most influential early role models, his half-brother Terry, was diagnosed with schizophrenia and reportedly had marked periods of psychosis.

Here is Bowie, discussing one of his brother’s psychotic episodes, in a 1998 documentary for VH1:

Bowie’s brother was admitted to the now defunct Cane Hill psychiatric hospital in South London, and the experience heavily influenced the 1970 album The Man Who Sold the World, with a drawing of the hospital appearing on the original sleeve art.

One of the songs on that album, All the Madmen, vividly describes madness and treatment in the old asylums, and was discussed in a 2010 article for the British Medical Journal:

“All the Madmen” was inspired by the mental health problems of David Bowie’s brother and was released 39 years ago (before Bowie achieved major fame), on the album The Man Who Sold the World. It recognises the separation from society of mentally ill people, who are sent to “mansions cold and grey.” In a lucid interval, spoken instead of sung, the national shame of mental illness and policies of alienation and institution are questioned with sadness: “Where can the horizon lie / When a nation hides / Its organic minds in a cellar.”

Faced with the prospect of discharge, the patient protagonist recognises his comfort in Librium, considers his ability to cope outside, and pushes the risk buttons with, “I can fly, I will scream, I will break my arm / I will do me harm.” He adopts a catatonic posture, standing with a foot in his hand, talking to the wall. He is accepting of electric shock treatment. When he asks, “I’m not quite right at all . . . am I?” is this a cryptic taunt that he knows he is putting it on, pushing the psychiatrist to keep his place in the institution? Or, more worryingly, is he questioning his own sanity and certainty?

Perhaps unsurprisingly, the themes of madness pervade Bowie’s work. The title track for the Aladdin Sane album (a play on “A lad insane”) was inspired by his brother, as was the song Jump They Say. Some other references are more obvious, such as in the song I’m Deranged, while some only allude to altered states and psychological alienation, as in The Man Who Sold the World.

Less well known is that his most famous character, Ziggy Stardust, was based on someone who experienced striking periods of psychosis. In a 1996 interview, Bowie recounted how Ziggy was based on the obscure rock star Vince Taylor, whom Bowie met several times, presumably between the periods Taylor spent in psychiatric hospital.

Bowie was widely thought to have experienced an episode of psychosis himself some years later, largely due to a period when he was taking very large amounts of cocaine while working on the album Station to Station.

Several biographies describe how he feared evil entities floating past his window, thought The Rolling Stones were sending messages to him through their music and believed witches were stealing his semen.

But the semantic traffic between madness and Bowie’s work was not solely one way. The medical literature has reports of Bowie featuring in the delusions of people with psychosis. One case report described a “32-year-old divorced white female with a long history of affective and behavioral problems”:

She believed she was secretly married to the rock star, David Bowie, after supposedly meeting in a church camp several years previously. She described seeing him “wait for her” outside her hospital window. The onset of this delusion coincided with a local tour by Bowie.

As Bowie was the master of looping cultural expression, making his art reference himself reacting to cultural responses to his work, it’s a return acknowledgement he may have appreciated.

Why do we forget names?

A reader, Dan, asks “Why do we forget people’s names when we first meet them? I can remember all kinds of other details about a person but completely forget their name. Even after a lengthy, in-depth conversation. It’s really embarrassing.”

Fortunately the answer involves learning something fundamental about the nature of memory. It also provides a solution that can help you to avoid the embarrassing social situation of having spoken to someone for an hour, only to have forgotten their name.

To know why this happens you have to recognise that our memories aren’t a simple filing system, with separate folders for each kind of information and a really brightly coloured folder labelled “Names”.

Rather, our minds are associative. They are built out of patterns of interconnected information. This is why we daydream: you notice that the book you’re reading was printed in Paris, and that Paris is home to the Eiffel Tower, that your cousin Mary visited last summer, and Mary loves pistachio ice-cream. Say, I wonder if she ate a pistachio ice cream while up the Tower? It goes on and on like that, each item connected to every other, not by logic but by coincidence of time, place, how you learnt the information and what it means.

The same associative network means you can guess a question from the answer. Answer: “Eiffel Tower?” Question: “Paris’s most famous landmark.” This makes memory useful, because you can often go as easily from the content to the label as vice versa: “what is in the top drawer?” isn’t a very interesting question, but it becomes so when you want the answer “where are my keys?”.

So memory is built like this on purpose, and now we can see why we forget names. Our memories are amazing, but they respond to how many associations we make with new information, not to how badly we want to remember it.

When you meet someone for the first time you learn their name, but for your memory it is probably an arbitrary piece of information unconnected to anything else you know, and unconnected to all the other things you later learn about them. After your conversation, in which you probably learn about their job, and their hobbies, and their family or whatever, all this information becomes linked in your memory. Imagine you are talking to a guy with a blue shirt who likes fishing and works selling cars, but would rather give it up to sell fishing gear. Now if you can remember one bit of information (“sell cars”) you can follow the chain to the others (“sells cars but wants to give it up”, “wants to give it up to sell fishing gear”, “loves fishing” and so on). The trouble is that your new friend’s name doesn’t get a look in because it is simply a piece of arbitrary information you didn’t connect to anything else about the conversation.
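As a rough sketch of that web of associations – again a toy illustration of my own in Python, not anything from the column, using the made-up facts from the example above – recalling any one linked fact lets you chase the chain to the others, while the isolated name has no links and so never gets retrieved.

```python
# Toy model of associative memory (illustrative only).
from collections import defaultdict

links = defaultdict(set)

def associate(a, b):
    """Store a two-way association between two pieces of information."""
    links[a].add(b)
    links[b].add(a)

# Facts from the conversation get linked to one another...
associate("blue shirt", "likes fishing")
associate("likes fishing", "sells cars")
associate("sells cars", "wants to sell fishing gear")
# ...but the name was heard once, in isolation, and linked to nothing.

def recall_from(cue):
    """Follow the chain of associations outward from a single cue."""
    found, frontier = set(), [cue]
    while frontier:
        item = frontier.pop()
        if item not in found:
            found.add(item)
            frontier.extend(links[item])
    return found

print(recall_from("sells cars"))
# All four linked facts come back (in some order), but 'James' never
# appears, because there is no route to him from anything else you know.
```

The fixes that follow work precisely by adding edges to this web, so that the name becomes reachable from the things you will actually remember.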

Fortunately, there are ways to strengthen those links so the name does become entrenched with the other memories. Here’s how to remember a name, using some basic principles of memory.

First, you should repeat any name said to you. Practice is one of the golden rules of learning: more practice makes stronger memories. In addition, when you use someone’s name you are linking it to yourself, in the physical act of saying it, but also to the current topic of the conversation in your memory (“So, James, just what is it about fishing that makes you love it so much?”).

Second, you should try to link the name you have just learnt to something you already know. It doesn’t matter if the link is completely silly, it is just important that you find some connection to help the name stick in memory. For example, maybe the guy is called James, and your high school buddy was called James, and although this guy is wearing a blue shirt, high school James only ever wore black, so he’d never wear blue. It’s a silly made up association, but it can help you remember.

Finally, you need to try to link their name to something else about them. If it was me I’d grab the first thing to come to mind to bridge between the name and something I’ve learnt about them. For example, James is a sort of biblical name, you get the King James bible after all, and James begins with J, just like Jonah in the bible who was swallowed by the whale, and this James likes fishing, but I bet he prefers catching them to being caught by them.

It doesn’t matter if the links you make are outlandish or weird. You don’t have to tell anyone. In fact, probably it is best if you don’t tell anyone, especially your new friend! But the links will help create a web of association in your memory, and that web will stop their name falling out of your mind when it is time to introduce them to someone else.

And if you’re sceptical, try this quick test. I’ve mentioned three names during this article. I bet you can remember James, who isn’t Jonah. And probably you can remember cousin Mary (or at least what kind of ice cream she likes). But can you remember the name of the reader who asked the question? That’s the only one I introduced without elaborating some connections around the name, and that’s why I’ll bet it is the only one you’ve forgotten.

This is my BBC Future column from last week. The original is here.

The real history of the ‘safe space’

There’s much debate in the media about a culture of demanding ‘safe spaces’ at university campuses in the US, a culture which has been accused of restricting free speech by defining contrary opinions as harmful.

The history of safe spaces is an interesting one and a recent article in Fusion cited the concept as originating in the feminist and gay liberation movements of the 1960s.

But the concept of the ‘safe space’ didn’t start with these movements, it started in a much more unlikely place – corporate America – largely thanks to the work of psychologist Kurt Lewin.

Like so many great psychologists of the early 20th Century, Lewin was a Jewish academic who left Europe after the rise of Nazism and moved to the United States.

Although originally a behaviourist, he became deeply involved in social psychology at the level of small group interactions and eventually became director of the Center for Group Dynamics at MIT.

Lewin’s work was massively influential and lots of our everyday phrases come from his ideas. The fact that we talk about ‘social dynamics’ at all is due to him, and the fact that we give ‘feedback’ to our colleagues is because Lewin took the term from engineering and applied it to social situations.

In the late 1940s, Lewin was asked to help develop leadership training for corporate bosses, and out of this work came the foundation of the National Training Laboratories and the invention of sensitivity training: a form of group discussion in which members could give honest feedback to each other, allowing people to become aware of the unhelpful assumptions, implicit biases and behaviours that were holding them back as effective leaders.

Lewin drew on ideas from group psychotherapy that had been around for years but formalised them into a specific and brief focused group activity.

One of the ideas behind sensitivity training was that honesty and change would only occur if people could be frank and challenge others in an environment of psychological safety. In other words, without judgement.

Practically, this means that there is an explicit rule that everyone agrees to at the start of the group. A ‘safe space’ is created, confidential and free of judgement, precisely to allow people to mention concerns without fear of being condemned for them, on the understanding that they’re hoping to change.

It could be anything related to being an effective leader, but if we’re thinking about race, participants might discuss how, even though they try to be non-racist, they tend to feel fearful when they see a group of black youths, or that they often think white people are stuck up, and other group members, perhaps those affected by these fears, could give alternative angles.

The use of sensitivity groups began to gain currency in corporate America and the idea was taken up by psychologists such as the humanistic therapist Carl Rogers who, by the 1960s, developed the idea into encounter groups which were more aimed at self-actualisation and social change, in line with the spirit of the times, but based on the same ‘safe space’ environment. As you can imagine, they were popular in California.

It’s worth saying that although the ideal was non-judgement, the reality could be a fairly rocky emotional experience, as described by a famous 1971 study on ‘encounter group casualties’.

From here, the idea of safe space was taken up by feminist and gay liberation groups, but with a slightly different slant, in that sexist or homophobic behaviour was banned by mutual agreement but individuals could be pulled up if it occurred, with the understanding that people would make an honest attempt to recognise it and change.

And finally we get to the recent campus movements, where the safe space has become a public political act. Rather than individuals opting in, it is championed or imposed (depending on which side you take) as something that should define acceptable public behaviour.

In other words, creating a safe space is considered to be a social responsibility and you can opt out, but only by leaving.

A medieval attitude to suicide

I had always thought that suicide was made illegal in medieval times due to religious disapproval, and that suicidal people were only finally freed from the risk of prosecution by the 1961 Suicide Act.

It turns out the history is a little more nuanced, as noted in this 1904 article from the Columbia Law Review entitled “Is Suicide Murder?”, which explores the rather convoluted legal approach to suicide in centuries past.

In the UK, the legal status of suicide was first mentioned in a landmark 13th Century legal document attributed to Henry de Bracton.

But contrary to popular belief about medieval attitudes, suicide by ‘insane’ people was not considered a crime and was entirely blame free. Suicide by people who were motivated by “weariness of life or impatience of pain” received only a light punishment (their goods were forfeited but their family could still inherit their lands).

The most serious punishment of forfeiting everything to the Crown was restricted to those who were thought to have killed themselves “without any cause, through anger or ill will, as when he wished to hurt another”.

There are some examples of exactly these sorts of considerations in a British Journal of Psychiatry article that looks at these cases in the Middle Ages. This is a 1292 case from Hereford:

William la Emeyse of this vill, suffering from an acute fever which took away his senses, got up at night, entered the water of Kentford and drowned himself. The jury was asked if he did this feloniously and said no, he did it through his illness. The verdict was an accident.

We tend to think that the medieval world had a very simplistic view of the experiences and behaviour that we might now classify as mental illness, but this often wasn’t the case.

Even the common assumption that all these experiences were put down to ‘demonic possession’ turns out to be a myth, as possession was considered to be a possible but rare explanation and was only accepted after psychological and physical disturbances were ruled out.

The echoes of the Prozac revolution

The Lancet Psychiatry has a fantastic article giving a much needed cultural retrospective on the wave of antidepressants like Prozac – from the initial worry that ‘cosmetic pharmacology’ would mean we were no longer our true selves, to the dawning realisation that they are unreliably useful but side-effect-ridden tools that can help manage difficult moods.

From their first appearance in the late 1980s until recently, SSRIs were an A-list topic of debate in the culture wars, and the rhetoric, whether pro or con, was red hot. Antidepressants were going to heal, or destroy, the world as we knew it.

Those discussions now feel dated. While antidepressants themselves are here to stay, they just don’t pulse with meaning the way they once did. Like the automobile or the telephone before them, SSRIs are a one-time miracle technology that have since become a familiar—even frumpy—part of the furniture of modern life.

At some point recently, they’ve slid into the final act of Mickey Smith’s wonder-drug drama. And in the aftermath of that change, many of the things that people used to say about them have come to sound completely absurd.

It’s a wonderful piece that perfectly captures the current place of antidepressants in modern society.

It’s by the author Katherine Sharpe, who wrote the highly acclaimed book Coming of Age on Zoloft, which I haven’t read but have just ordered.
 

Link to ‘The silence of prozac’ in The Lancet Psychiatry.

Oliver Sacks has left the building

CC Licensed Photo from Wikipedia. Click for source.

Neurologist and author Oliver Sacks has died at the age of 82.

It’s hard to fully comprehend the enormous impact of Oliver Sacks on the public’s understanding of the brain, its disorders and our diversity as humans.

Sacks wrote what he called ‘romantic science’. Not romantic in the sense of romantic love, but romantic in the sense of the romantic poets, who used narrative to describe the subtleties of human nature, often in contrast to the enlightenment values of quantification and rationalism.

In this light, romantic science would seem to be a contradiction, but Sacks used narrative and science not as opponents, but as complementary partners to illustrate new forms of human nature that many found hard to see: in people with brain injury, in alterations or differences in experience and behaviour, or in seemingly minor changes in perception that had striking implications.

Sacks was not the originator of this form of writing, nor did he claim to be. He drew his inspiration from the great neuropsychologist Alexander Luria but while Luria’s cases were known to a select group of specialists, Sacks wrote for the general public, and opened up neurology to the everyday world.

Despite Sacks’s popularity now, he had a slow start, with his first book Migraine not raising much interest either with his medical colleagues or the reading public – not least, perhaps, because, compared to his later works, it struggled to throw off some of the technical writing habits of academic medicine.

It wasn’t until his 1973 book Awakenings that he became recognised both as a remarkable writer and a remarkable neurologist, as the book recounted his experience with seemingly paralysed patients from the 1920s encephalitis lethargica epidemic and their extraordinary awakening and gradual decline during a period of treatment with L-DOPA.

The book was scientifically important, humanely written, but most importantly, beautiful, as he captured his relationship with the many patients who experienced both a physical and a psychological awakening after being neurologically trapped for decades.

It was made into a now rarely seen documentary for Yorkshire Television, and the story was eventually picked up by Hollywood and made into the movie starring Robin Williams and Robert De Niro.

But it was The Man Who Mistook His Wife for a Hat that became his signature book. It was a series of case studies that wouldn’t seem particularly unusual to most neurologists, but which astounded the general public.

A sailor whose amnesia leads him to think he is constantly living in 1945, a woman who loses her ability to know where her limbs are, and a man with agnosia who, despite normal vision, can’t recognise objects and so mistakes his wife’s head for a hat.

His follow-up book An Anthropologist on Mars continued in a similar vein and made for equally gripping reading.

Not all his books were great writing, however. The Island of the Colorblind was slow and technical, while Sacks’s account of his own damaged leg, A Leg to Stand On, included conclusions about the nature of illness that were more abstract than most could relate to.

But his later books saw a remarkable flowering of diverse interest and mature writing. Music, imagery, hallucinations and their astounding relationship with the brain and experience were the basis of three books that showed Sacks at his best.

And slowly during these later books, we got glimpses of the man himself. He revealed in Hallucinations that he had taken hallucinogens in his younger years and that the case of medical student Stephen D in The Man Who Mistook His Wife for a Hat – who developed a remarkable sense of smell after a night on speed, cocaine, and PCP – was, in fact, an autobiographical account.

His final book, On the Move, was the most honest, as he revealed he was gay, shy, and in his younger years, devastatingly handsome but somewhat troubled. A long way from the typical portrayal of the grey-bearded, kind but eccentric neurologist.

On a personal note, I have a particular debt of thanks to Dr Sacks. When I was an uninspired psychology undergraduate, I was handed a copy of The Man Who Mistook His Wife for a Hat which immediately convinced me to become a neuropsychologist.

Years later, I went to see him talk in London following the publication of Musicophilia. I took along my original copy of The Man Who Mistook His Wife for a Hat, hoping to surprise him with the news that he was responsible for my career in brain science.

As the talk started, the host mentioned that ‘it was likely that many of us became neuroscientists because we read Oliver Sacks when we started out’. To my secret disappointment, about half the lecture hall vigorously nodded in response.

The reality is that Sacks’s role in my career was neither surprising nor particularly special. He inspired a generation of neuroscientists to see brain science as a gateway to our common humanity and humanity as central to the scientific study of the brain.
 

Link to The New York Times obituary for Oliver Sacks.

Pope returns to cocaine

Image from Wikipedia. Click for source.

According to a report from BBC News, the Pope ‘plans to chew coca leaves’ during his visit to Bolivia. Although portrayed as a radical encounter, this is really a return to cocaine use after a long period of abstinence in the papal office.

Although the leaves are a traditional, mild stimulant that have been used for thousands of years, they are controversial as they’re the raw material for synthesising powder cocaine.

The leaves themselves actually contain cocaine in its final form, but they only produce a mild stimulant effect because the dose is low and is released relatively gently when chewed.

The lab process that produces the powder is largely concerned with concentrating and refining the drug, which means it can be taken in ways that give the cocaine high.

The Pope likely wants to chew coca leaves to show support for the traditional uses of the plant, which include, among other things, helping with altitude sickness, but which have become politicised due to the ‘war on drugs’.

Because of this, recent decades have seen pressure to outlaw or destroy coca plants, despite them being little more problematic than coffee when used in traditional ways, and consequently a push-back campaign from Latin Americans has become increasingly influential.

However, two previous Popes were cocaine users. Pope Leo XIII and Pope Pius X were drinkers of Vin Mariani, which was essentially cocaine dissolved in alcohol for its, er, tonic effect.

Pope Leo XIII even went as far as appearing in an advert for Vin Mariani, which you can see in the image above.

The advert says that “His Holiness THE POPE writes that he has fully appreciated the beneficient effects of this Tonic Wine and has forwarded to Mr. Mariani as a token of his gratitude a gold medal bearing his august effigy.”

But being a Latin American, the new Pope seems to have a much more sensible view of the drug and values it in its traditional form, and so probably won’t be giving away some of the papal gold after having a blast on the liquid snow.

 
Link to BBC News story.
And thanks to @MikeJayNet for reminding me of the historical connection.

Context Is the New Black

The New Yorker has one of the best articles I’ve ever read on the Stanford prison experiment – the notorious and mythologised study that probably doesn’t tell us that we ‘all have the potential to be monsters’.

It’s a study that’s often taught as one of the cornerstones of psychology and like many foundational stories, it has come to serve a purpose beyond what we can confidently conclude from it.

Was the study about our individual fallibility, or about broken institutions? Were its findings about prisons, specifically, or about life in general? What did the Stanford Prison Experiment really show?

The appeal of the experiment has a lot to do with its apparently simple setup: prisoners, guards, a fake jail, and some ground rules. But, in reality, the Stanford County Prison was a heavily manipulated environment, and the guards and prisoners acted in ways that were largely predetermined by how their roles were presented. To understand the meaning of the experiment, you have to understand that it wasn’t a blank slate; from the start, its goal was to evoke the experience of working and living in a brutal jail.

It’s a great piece that I can probably do little to add to here, so you’re best off reading it in full.
 

Link to The Real Lesson of the Stanford Prison Experiment.

An alternative history of the human mind

Nautilus has an excellent article on a theory of consciousness that is very likely wrong but so startlingly original it is widely admired: Julian Jaynes’ theory of the bicameral mind.

Based on the fact that there is virtually no description of mental states in the Ancient Greek classic The Iliad, where the protagonists are largely spoken to by gods, Jaynes speculated that consciousness as we know it didn’t exist at that point in time and that people experienced their thoughts as instructions from external voices, which they interpreted as gods.

His 1976 book is a tour de force of interdisciplinary scholarship and, although the idea that humans became conscious only 3,000 years ago is extremely unlikely, it has been hugely influential even among people who think Jaynes was wrong, largely because he was such a massively creative thinker.

Consciousness, Jaynes tells readers, in a passage that can be seen as a challenge to future students of philosophy and cognitive science, “is a much smaller part of our mental life than we are conscious of, because we cannot be conscious of what we are not conscious of.” His illustration of his point is quite wonderful. “It is like asking a flashlight in a dark room to search around for something that does not have any light shining upon it. The flashlight, since there is light in whatever direction it turns, would have to conclude that there is light everywhere. And so consciousness can seem to pervade all mentality when actually it does not.”

The Nautilus article is a brilliant retrospective on both Jaynes as a person and the theory, talking to some leading cognitive scientists who are admirers.

A wonderful piece on a delightful chapter in the history of psychology.
 

Link to Nautilus article on Julian Jaynes.

A visual history of madness

The Paris Review has an extended and richly illustrated piece by historian Andrew Scull, who tracks how madness has been visually depicted through the ages.

Scull is probably the most thorough and readable historian of madness since the late, great Roy Porter, and this article is no exception.

Modern psychiatry seems determined to rob madness of its meanings, insisting that its depredations can be reduced to biology and nothing but biology. One must doubt it. The social and cultural dimensions of mental disorders, so indispensable a part of the story of madness and civilization over the centuries, are unlikely to melt away, or to prove no more than an epiphenomenal feature of so universal a feature of human existence. Madness indeed has its meanings, elusive and evanescent as our attempts to capture them have been.

By the way, most of the illustrations in the web article seem to be clickable for high resolution full screen versions, so you can see them in full detail.
 

Link to Madness and Meaning in Paris Review.

Half a century of neuroscience

The Lancet has a good retrospective looking back on the last 50 years of neuroscience – the period in which, in some ways, the field was born.

Of course, the brain and nervous system have been the subject of study for hundreds, if not thousands, of years, but the concept of a dedicated ‘neuroscience’ is relatively new.

The term ‘neuroscience’ was first used in 1962 by the biologist Francis Schmitt, who had previously referred to the integrated study of mind, brain and behaviour by the somewhat less catchy title ‘biophysics of the mind’. The first undergraduate degree in neuroscience was offered by Amherst College only in 1973.

The Lancet article, by one of the first generation of ‘neuroscientists’, Steven Rose, looks back at how the discipline began in the UK (in a pub, as most things do) and then widens its scope to review how neuroscience has transformed over the last 50 years.

But many of the problems that had beset the early days remain unresolved. Neuroscience may be a singular label, but it embraces a plurality of disciplines. Molecular and cognitive neuroscientists still scarcely speak a common language, and for all the outpouring of data from the huge industry that neuroscience has become, Schmitt’s hoped for bridging theories are still in short supply. At what biological level are mental functions to be understood? For many of the former, reductionism rules and the collapse of mind into brain is rarely challenged—there is even a society for “molecular and cellular cognition”—an elision hardly likely to appeal to the cognitivists who regard higher order mental functions as emergent properties of the brain as a system.

It’s an interesting reflection on how neuroscience has changed over its brief lifespan from one of the people who were there at the start.
 

Link to ’50 years of neuroscience’ in The Lancet.