Round trip ticket to the science of psychedelics

The latest edition of The Psychologist is a special open-access issue on the science and social impact of hallucinogenic drugs.

There’s an article by me on culture and hallucinogens that discusses the role of hallucinogenic drugs in diverse cultures and which also covers how cultural expectations shape the hallucinogenic experience – from traditional Kitanemuk society to YouTube trip videos.

The other articles cover some fascinating topics.

Neuroscientists Robin Carhart-Harris, Mendel Kaelen and David Nutt have a great article on the neuroscience of hallucinogens, and Henry David Abraham discusses hallucinogen persisting perception disorder, or post-trip flashbacks. There's also a piece that talks to a researcher, a participant and a clinician about the use of psilocybin to alleviate cancer anxiety, while Keith Laws discusses an intense painting and its psychedelic aspects.

There’s also an excellent piece on the influence of psychedelic drugs on literature from Dirk Hanson – long-time writer of the essential drug blog Addiction Inbox, and Mo Costandi (who you may know from the Neurophilosophy blog) has written a fantastic retrospective of the use of psychedelics in psychiatry.

Overall, a fascinating read and well worth checking out.
 

Link to special issue of The Psychologist on hallucinogens.

Disco biscuits

This is a video of Professor Stephen Stahl, author of Stahl’s Essential Psychopharmacology, doing a DSM-5-themed version of Stayin’ Alive by the Bee Gees.
 

After working out that, no, no-one has dropped acid in your morning Red Bull, you may notice that the professor busts some pretty respectable moves.
 

Link to video on YouTube (via @AllenFrancesMD)

How to speak the language of thought

We are now beginning to crack the brain’s code, which allows us to answer such bizarre questions as “what is the speed of thought?”

When he was asked, as a joke, to explain how the mind works in five words, cognitive scientist Steven Pinker didn’t hesitate. “Brain cells fire in patterns”, he replied. It’s a good effort, but all it really does is replace one mystery with another.

It’s long been known that brain cells communicate by firing electrical signals to each other, and we now have myriad technologies for recording their patterns of activity – from electrodes in the brain or on the scalp, to functional magnetic resonance scanners that can detect changes in blood oxygenation. But, having gathered these data, the meaning of these patterns is still an enduring mystery. They seem to dance to a tune we can’t hear, led by rules we don’t know.

Neuroscientists speak of the neural code, and have made some progress in cracking it. They are figuring out some basic rules, such as when cells in specific parts of the brain are likely to light up depending on the task at hand. Progress has been slow, but in the last decade various research teams around the world have been pursuing a far more ambitious project. We may never be able to see the complete code book, they realised, but by trying to write our own entries we can begin to pick apart the ways that different patterns correspond to different actions.

Albert Lee and Matthew Wilson, at the Massachusetts Institute of Technology (MIT), first helped to set out the principles in 2002. The method goes like this. First, we record from the brain of a rat – one of our closer relatives in the grand tree of life – as it runs a maze. Studying the whole brain would be too ambitious, so we focus our recording on an area called the hippocampus, which is known to be important for navigation and memory. If you’ve heard of this area before, it is probably because of a famous result showing that London taxi drivers developed larger hippocampi the longer they had spent navigating the streets of England’s sprawling capital.

While the rat runs the maze we record where it is and, simultaneously, how the cells in its hippocampus are firing. The firing patterns are fed into a mathematical algorithm which finds the pattern that best matches each bit of the maze. The language of the cells is no less complex than before, but now we have a Rosetta Stone against which to decode it. We then test the algorithm by feeding it freshly recorded patterns, to see if it correctly predicts where the rat was at the moment each pattern was recorded.

It doesn’t allow us to crack the code completely, because we still don’t know all the rules, and it can’t help us read patterns that come from other parts of the brain or that aren’t about maze running, but it is still a powerful tool. For instance, using this technique, the team was able to show that the specific sequence of cell firing repeated in the brain of the rat when it slept after running the maze (and, as a crucial comparison, not in the sleep it had enjoyed before it had run the maze).
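To make that recipe a little more concrete, here is a toy sketch of the “train, then test” logic in Python. It is only an illustration: the cell counts, maze bins and data are invented, and it uses a simple nearest-template decoder rather than the probabilistic spike-train decoders used in the actual experiments.

# A toy position decoder in the spirit of the maze experiments described above.
# Everything here is invented for illustration: real analyses use probabilistic
# (Bayesian) decoders fitted to recorded spike trains, not this nearest-template trick.
import numpy as np

def fit_templates(rates, positions, n_bins):
    # Average firing-rate vector ("template") for each maze position bin.
    return np.array([rates[positions == b].mean(axis=0) for b in range(n_bins)])

def decode(rates, templates):
    # For each new firing-rate vector, report the position whose template is closest.
    dists = np.linalg.norm(rates[:, None, :] - templates[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Simulated session: 1,000 time windows, 30 hippocampal cells, 10 maze position bins.
rng = np.random.default_rng(0)
true_pos = rng.integers(0, 10, size=1000)
tuning = rng.poisson(5, size=(10, 30))                  # each bin excites a different mix of cells
rates = rng.poisson(2, size=(1000, 30)) + tuning[true_pos]

templates = fit_templates(rates[:800], true_pos[:800], n_bins=10)   # the "Rosetta Stone"
predicted = decode(rates[800:], templates)                          # freshly recorded patterns
print("decoding accuracy:", (predicted == true_pos[800:]).mean())

Feeding the decoder patterns it was never trained on, and checking its guesses against where the rat actually was, is the same logic that lets researchers ask whether activity recorded later – during sleep, say – retraces the maze.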

Fascinatingly, the replayed sequence ran around 20 times faster during sleep than during waking. This meant that the rat could run the maze in its sleeping mind in a fraction of the time it took in real life. This could be related to the mnemonic function of sleep; by replaying the memory, it might have helped the rat to consolidate its learning. And the fact that the replay was accelerated might give us a glimpse of the activity that lies behind sudden insights, or experiences where our life “flashes before our eyes”; when not restrained, our thoughts really can retrace familiar paths in “fast forward”. Subsequent work has shown that these maze patterns can run backwards as well as forwards – suggesting that the rats can imagine a goal, like the end of the maze, and work their way back from it to where they are.

One application of techniques like these, which are equal parts highly specialised measurement systems and fiercely complicated algorithms, has been to decode brain activity in patients who are locked in or in a vegetative state. These patients can’t move any of their muscles, and yet they may still be mentally aware and able to hear people talking to them in the same room. First, the doctors ask the patients to imagine activities which are known to activate specific brain regions – such as the hippocampus. The data are then decoded, so that we know which pattern of brain activity corresponds to which imagined activity. During future brain scans, the patients can then re-imagine the same activities to answer basic questions. For instance, they might be told to imagine playing tennis to answer yes and walking around their house to answer no – the first form of communication since their injury.

There are other applications, both in theoretical science, to probe the inner workings of our minds, and in practical domains such as brain-computer interfaces. If, in the future, a paraplegic person wants to control a robot arm, or even another person, via a brain interface, the system will rely on the same techniques to decode information and translate it into action. Now that the principles have been shown to work, the potential is staggering.

If you have an everyday psychological phenomenon you’d like to see written about in these columns, please get in touch: @tomstafford or ideas@idiolect.org.uk

This is my BBC Future column from Monday. The original is here.

Brain scanning the deceased

I’ve got an article in The Observer about how, a little surprisingly, the dead are becoming an increasing focus for brain scanning studies.

I first discussed this curious corner of neuroscience back in 2007 but a recent Neuroskeptic post reminded me of the area and I decided to check in on how it’s progressing.

It turns out that brain scanning the dead is becoming increasingly common in research and medicine and the article looks at how the science is progressing. Crucially, it’s helping us better understand ourselves in both life and death.

For thousands of years, direct studies of the human brain required the dead. The main method of study was dissection, which needed, rather inconveniently for the owner, physical access to their brain. Despite occasional unfortunate cases where the living brain was exposed on the battlefield or the surgeon’s table, corpses and preserved brains were the source of most of our knowledge.

When brain scanning technologies were invented in the 20th century they allowed the structure and function of the brain to be shown in living humans for the first time. This was as important for neuroscientists as the invention of the telescope, and the cadaver slowly faded into the background of brain research. But recently, scrutiny of the post-mortem brain has seen something of a revival, a resurrection you might say, as modern researchers have become increasingly interested in applying their new scanning technologies to the brains of the deceased.

It’s a fascinating area and you can read the full article at the link below.

UPDATE: I’ve just noticed two of the links to studies have gone AWOL from the online article. The study that looked for the source of a mysterious signal by scanning people, cadavers and dummies and found it was a scanner problem was this one and the study that used corpses to test in-scanner motion correction was this one.

 

Link to Observer article on brain scanning the dead.

Spike activity 15-08-2014

Quick links from the past week in mind and brain news:

An important editorial in Nature describes the pressing problem of how research is not being turned into practice for treating children with mental health problems caused by armed conflict.

Not Exactly Rocket Science covers a swarm of self-organising autonomous robots that have the potential to rise up, rise up and threaten humanity with their evil buzzing. To the bunkers!

A Malaysian language names odors as precisely as English does colors. Interesting finding covered by Discover Magazine.

New York Magazine has a piece on the social psychology of how the presence of militarised police can increase aggression.

The Demographics of Genocide: Who Commits Mass Murder? Interesting piece in The Atlantic.

The Neurocritic has a fascinating interview with Jan Kalbitzer, the man behind the ‘Twitter psychosis’ case study, who discusses the media reverberations of the article.

Excellent Wired profile of Yann LeCun, the AI guru behind Facebook’s deep learning revolution.

Science News has an interesting piece on how the explosion of baby monitoring technology feeds ‘paranoia parenting’.

The new president of the Royal College of Psychiatrists gives his first interview in The Guardian and lays down some hard truths about mental health treatment.

One death too many

One of the first things I do in the morning is check the front pages of the daily papers, and on the day following Robin Williams’ death I have rarely been so disappointed in the British press.

Over the years, we have gathered a lot of evidence from reliable studies showing that how suicide is reported in the mass media affects the rate of suicide in the population – most likely through its effect on vulnerable people.

In other words, sensationalist and simplistic coverage of suicides, particularly celebrity suicides, regularly leads to more deaths.

It seems counter-intuitive to many that a media description of suicide could actually increase the risk of suicide, but the effect is genuine and people die through what is sometimes called suicide contagion or copycat suicide.

For this reason, organisations from the Samaritans to the Centers for Disease Control to an international panel of media organisations have created explicit suicide reporting guidelines to ensure that no one dies or is harmed unnecessarily because of how suicide is reported.

The guidelines include sensible advice like not focusing on the methods people use to harm themselves, not oversimplifying the causes, not overly focusing on celebrity suicide, avoiding sensationalist coverage and not presenting suicide as a tool for accomplishing certain ends.

This advice keeps people safe. Today’s coverage does exactly the opposite, and many of the worst examples of dangerous reporting have been put directly on the front pages.

It is entirely possible to report on suicide and self-harm in a way that informs us, communicates the tragedy of the situation, and leaves us better off as a result of making these events more comprehensible.

This is not about freedom of the press. The press can report on what they want, how they want. There are no laws against bad reporting, and neither would I want there to be, but you do have a personal and professional responsibility to ensure that you are not putting people at risk through your need to sell copies.

You also have to look at yourself in the mirror every morning, and judging by the front pages of many of today’s daily papers, I’m sure there are more than a few editors who had to avert their gaze while standing, momentarily shamed, in front of their own reflections.

Drugs in space and sleepless in the shuttle

A fascinating study published in today’s Lancet Neurology reports on sleep deprivation in astronauts but also describes the drugs shuttle crew members use to keep themselves awake and help them fall asleep.

The study looked at sleep data from 64 astronauts on 80 space shuttle missions along with 21 astronauts on 13 International Space Station missions, and compared it to their sleep on the ground and in the days before space flight.

Essentially, in-flight astronauts don’t get a great deal of shut-eye, but what’s surprising is the range and extent of drugs they use to manipulate sleep.

Mostly these are from the z-drug class of sleep medications (of which the best known is zolpidem, brand name Ambien), but they also include benzodiazepines, melatonin and an antipsychotic called quetiapine.

Here are the sleep-inducing drugs with my comments in square brackets:

Zolpidem and zolpidem controlled release were the most frequently used drugs on shuttle missions, accounting for 301 (73%) and 49 (12%) of the 413 nights, respectively, when one dose of drug was reported. Zaleplon use was reported on 45 (11%) of 413 nights.

Other sleep-promoting drugs reported by shuttle crew members during the 413 nights included temazepam [sedative anti-anxiety benzodiazepine - similar to Valium] on 8 (2%) nights, eszopiclone on 2 (<1%) nights, melatonin [hormone that regulates circadian rhythms] on 7 (2%) nights, and quetiapine fumarate [antipsychotic] on 1 (<1%) night.

The paper also notes concerns about the astronauts’ use of zolpidem and similar z-drug medications because they can affect mental sharpness and coordination, and can lead to unusual and complex ‘sleep behaviours’.

Interestingly, it seems astronauts tend to use these drugs in a rather ad-hoc manner and the consequences of this have clearly not been well thought through.

As the Lancet Neurology paper notes:

This consideration is especially important because all crew members on a given mission might be taking a sleep-promoting drug at the same time…. crew members reported taking a second dose of hypnotic drugs—most commonly zolpidem—often only a few hours before awakening. Although crew members are encouraged to try such drugs on the ground at home at least once before their use in flight, such preparations probably do not involve multiple dosing or dosing with two different drugs on the same night.

Furthermore, such tests do not include any measure of objective effectiveness or safety, such as what would happen in the case of abrupt awakening during an in-flight night-time emergency… sleep-related-eating, sleep-walking, and sleep-driving events have been reported with zolpidem use, leading the FDA to require a so-called black-box warning on all hypnotic drugs stating that driving and performance of other tasks might be impaired in the morning after use of such drugs:

“A variety of abnormal thinking and behavior changes have been reported to occur in association with the use of sedative/hypnotics…. Complex behaviors such as ‘sleep-driving’…have been reported. Amnesia, anxiety, and other neuropsychiatric symptoms may occur unpredictably.”

However, use of sleep drugs was reported on more than half the nights before extravehicular activities were undertaken.

Information on stimulant use by astronauts is hidden in the appendix: caffeine was widely used in space, although less than on the ground – possibly due to coffee shortages – and modafinil was used occasionally.

Caffeine was widely used throughout all data collection intervals by both shuttle and ISS crewmembers, though supply shortages sometimes led to coffee rationing and reduced consumption aboard ISS. All but eight shuttle mission crewmembers (72/80, 90%) and all but one ISS crewmember (20/21, 95%) reported using caffeine at least once during the study…

Given the 3-7 hour half-life of caffeine and the sleep disturbances associated with its use, caffeine may have contributed to or enabled the sleep curtailment observed in this population. However, there is no evidence that caffeine accounts for the reduced sleep duration observed during spaceflight, as caffeine consumption was, if anything, reduced during spaceflight.

The wakefulness-promoting medication, modafinil, was reportedly used on both shuttle (10 reported uses) and ISS missions (2 reported uses). The use of this wakefulness-promoting medication was reported more frequently in post-flight debriefs.
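As a rough back-of-the-envelope illustration of what the 3-7 hour half-life quoted above means in practice (my numbers, not the paper’s):

# How much of an afternoon coffee is still circulating at bedtime?
# Simple exponential elimination using the half-life range quoted above;
# the dose and timing are invented for illustration.
dose_mg = 100            # roughly one cup of coffee
hours_before_bed = 6

for half_life in (3, 5, 7):
    remaining = dose_mg * 0.5 ** (hours_before_bed / half_life)
    print(f"half-life {half_life} h: about {remaining:.0f} mg still on board at bedtime")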

There’s also an interesting snippet that gives the most common reason for sleep disturbance in space:

Nocturnal micturition is common in this age group and was the most reported reason for disruptive sleep both on Earth and inflight

Not stress, not being surrounded by equipment, not a lack of home comforts, but ‘Nocturnal micturition’ or wetting yourself in your sleep.

This is possibly more likely in space because bodily cues for a full bladder work less effectively in zero gravity, but one major factor in astronauts wetting themselves was that it was a better alternative to waking sleeping colleagues by going to the toilet.

The paper notes that this is why many astronauts wear ‘maximum absorbency garments’ – essentially giant nappies – while they sleep.
 

Link to locked Lancet study on sleep in astronauts.

Hallucinating in the deep waters of consciousness

On Saturday I curated a series of short films about other inner worlds, altered states and the extremes of mental health at London’s Shuffle Festival. I discovered one of the films literally a couple of days before the event, and it completely blew me away.

Narcose is a French documentary about a dive by world champion free diver Guillaume Néry. It documents, in real time, a five-minute dive on a single breath and the hallucinations he experiences due to carbon dioxide narcosis.
 

 

Firstly, the film is visually stunning. A masterpiece of composition, light and framing.

Secondly, it’s technically brilliant. The director presumably thought ‘what can we do when we have access to a community of free divers, who can hold their breath under water for minutes at a time?’ It turns out, you can create stunning underwater scenes with a cast of apparently water-dwelling humans.

But most importantly it is a sublime depiction of Néry’s enchanted world where the boundaries between inner and outer perception become entirely porous. It is perhaps the greatest depiction of hallucinations I’ve seen on film.

Darken the room, watch it on as big a screen as possible and immerse yourself.
 

Link to Narcose on Vimeo.

Why bad news dominates the headlines

Why are newspapers and TV broadcasts filled with disaster, corruption and incompetence? It may be because we’re drawn to depressing stories without realising, says psychologist Tom Stafford.

When you read the news, sometimes it can feel like the only things reported are terrible, depressing events. Why does the media concentrate on the bad things in life, rather than the good? And what might this depressing slant say about us, the audience?

It isn’t that these are the only things that happen. Perhaps journalists are drawn to reporting bad news because sudden disaster is more compelling than slow improvements. Or it could be that newsgatherers believe that cynical reports of corrupt politicians or unfortunate events make for simpler stories. But another strong possibility is that we, the readers or viewers, have trained journalists to focus on these things. People often say that they would prefer good news: but is that actually true?

To explore this possibility, researchers Marc Trussler and Stuart Soroka set up an experiment, run at McGill University in Canada. They were dissatisfied with previous research on how people relate to the news – either the studies were uncontrolled (letting people browse news at home, for example, where you can’t even tell who is using the computer), or they were unrealistic (inviting them to select stories in the lab, where every participant knew their choices would be closely watched by the experimenter). So, the team decided to try a new strategy: deception.

 

Trick question

Trussler and Soroka invited participants from their university to come to the lab for “a study of eye tracking”. The volunteers were first asked to select some stories about politics to read from a news website so that a camera could make some baseline eye-tracking measures. It was important, they were told, that they actually read the articles, so the right measurements could be prepared, but it didn’t matter what they read.

After this ‘preparation’ phase, they watched a short video (the main purpose of the experiment as far as the subjects were concerned, but it was in fact just a filler task), and then they answered questions on the kind of political news they would like to read.

The results of the experiment, as well as the stories that were read most, were somewhat depressing. Participants often chose stories with a negative tone – corruption, set-backs, hypocrisy and so on – rather than neutral or positive stories. People who were more interested in current affairs and politics were particularly likely to choose the bad news.

And yet when asked, these people said they preferred good news. On average, they said that the media was too focussed on negative stories.

 

Danger reaction

The researchers present their experiment as solid evidence of a so-called “negativity bias”, psychologists’ term for our collective hunger to hear, and remember, bad news.

It isn’t just schadenfreude, the theory goes, but that we’ve evolved to react quickly to potential threats. Bad news could be a signal that we need to change what we’re doing to avoid danger.

As you’d expect from this theory, there’s some evidence that people respond quicker to negative words. In lab experiments, flash the word “cancer”, “bomb” or “war” up at someone and they can hit a button in response quicker than if that word is “baby”, “smile” or “fun” (despite these pleasant words being slightly more common). We are also able to recognise negative words faster than positive words, and even tell that a word is going to be unpleasant before we can tell exactly what the word is going to be.
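If you want a feel for how such a comparison works, here is a bare-bones sketch of a reaction-time test you could run in a terminal. The word lists and procedure are simplified for illustration; real studies use calibrated presentation software, frequency-matched words and many more trials.

# Flash a word, time the key press, compare negative versus positive words.
import random
import time

NEGATIVE = ["cancer", "bomb", "war"]
POSITIVE = ["baby", "smile", "fun"]

def mean_reaction_time(words):
    times = []
    for word in random.sample(words, len(words)):
        input("Press Enter when ready for the next word...")
        start = time.perf_counter()
        input(f"   {word.upper()}   <- press Enter as soon as you have read it")
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

if __name__ == "__main__":
    neg = mean_reaction_time(NEGATIVE)
    pos = mean_reaction_time(POSITIVE)
    print(f"mean reaction time - negative words: {neg:.3f} s, positive words: {pos:.3f} s")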

So is our vigilance for threats the only way to explain our predilection for bad news? Perhaps not.

There’s another interpretation that Trussler and Soroka put on their evidence: we pay attention to bad news because, on the whole, we think the world is rosier than it actually is. When it comes to our own lives, most of us believe we’re better than average and, like the clichés say, we expect things to turn out all right in the end. This pleasant view of the world makes bad news all the more surprising and salient. It is only against a light background that the dark spots are highlighted.

So our attraction to bad news may be more complex than just journalistic cynicism or a hunger springing from the darkness within.

And that, on another bad news day, gives me a little bit of hope for humanity.

Are women and men forever destined to think differently?

By Tom Stafford, University of Sheffield

The headlines

The Australian: Male and female brains still unequal

The International Institute for Applied Systems Analysis: Gender disparities in cognition will not diminish

The Economist: A variation in the cognitive abilities of the two sexes may be more about social development than gender stereotypes

The story

Everybody has an opinion on men, women and the difference (or not) between them. Now a new study has used a massive and long-running European survey to investigate how differences in cognitive ability are changing. This is super smart, because it offers us an escape from arguing about whether men and women are different in how they think, allowing us some insight into how any such differences might develop.

What they actually did

Researchers led by Daniela Weber at Austria’s International Institute for Applied Systems Analysis analysed data collected as part of the European Survey of Health, Ageing and Retirement. The data analysed in this study came from approximately 31,000 adults, men and women, all aged over 50. As well as answering demographic questions, the survey participants took short quizzes testing their memory, numeracy and verbal fluency (this last involved a classic test which asks people to name as many animals as they can in 60 seconds). Alongside each test score, we have the year the participant was born, as well as measures of gender equality and economic development for the country where they grew up.

What they found

The results show that as a country develops economically, the differences in cognitive ability between men and women change. But the pattern isn’t straightforward. Differences in verbal fluency disappear (so that the advantage men born in the 1920s had over women on this test is not found for those born in the 1950s). Differences in numeracy diminish (so the male advantage is less) and differences in memory actually increase (so that a female advantage is accentuated).

Further analysis looked at how these differences in cognitive performance related to the amount of education men and women got. In all regions women tended to have fewer years of education, on average, than men. But, importantly, the size of this difference varied. This allowed the researchers to gauge how differences in education affected cognitive performance.

For all three abilities tested, there was a relationship between the size of the difference in the amount of education and the size of the difference in cognitive performance: fewer years of education for women were associated with worse scores for women, as you’d expect.

What varied across the three abilities was the researchers’ prediction for the situation in which men and women spent an equal amount of time in education: for memory this scenario was associated with a distinct female advantage, for numeracy a male advantage, and for verbal fluency no difference at all.
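The logic of that prediction can be sketched very simply (a hedged illustration with invented numbers, and a much cruder model than the one in the paper): regress the gender gap in test scores on the gender gap in years of education across cohorts, then read off the predicted score gap where the education gap is zero.

# Invented per-cohort figures; positive values mean a male advantage.
import numpy as np

education_gap = np.array([2.1, 1.5, 1.0, 0.6, 0.3, 0.1])        # extra years of schooling for men
memory_gap    = np.array([0.1, -0.1, -0.3, -0.5, -0.6, -0.7])   # test-score gap (men minus women)

slope, intercept = np.polyfit(education_gap, memory_gap, deg=1)
print(f"predicted memory gap at equal education: {intercept:.2f} (negative = female advantage)")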

What this means

The thing that dogs studies on gender differences in cognition is the question of why these differences exist. People have such strong expectations that they often leap to the assumption that any observed difference must reflect something fundamental about men versus women. Consider the example of the Australian newspaper which headlined its take on this story as telling us something about “male and female brains”, the implication being that the inequality was a fundamental, biological difference. In fact, research often shows that gender differences in cognitive performance are small, and even then we don’t know why they exist.

The great thing about this study is that by looking at how gender differences evolve over time it promises insight into what drives those differences in the first place. The fact that the female memory advantage increases as women are allowed more access to education is, on the face of it, suggestive evidence that at least one cognitive difference between men and women may be unleashed by more equal societies, rather than removed by them.

Tom’s take

The most important thing to take from this research is – as the authors report – that increasing gender equality disproportionately benefits women. This is because – no surprise! – gender inequality disproportionately disadvantages women. Even in the area of cognitive performance, this historical denial of opportunities, health and education to women means, at a population level, they have more potential to increase their scores on these tests.

In line with other research on things like IQ, this study found systematic improvements in cognitive performance over time for both men and women – as everyone’s opportunities and health improve, so does their cognitive function.

But the provocative suggestion of this study is that as societies develop we won’t necessarily see all gender differences go away. Some cognitive differences may actually increase when women are at less of a disadvantage.

You shouldn’t leap to conclusions based on one study, but this is a neat contribution. One caveat is that even though indices such as “years in education” show diminished gender inequality in Europe, you’d be a fool to think that societies which educated men and women for an equal number of years treated them equally and placed equal expectations on them.

Even if you thought this was true for 2014, you wouldn’t think this was true for European societies of the 1950s (when the youngest of these study participants were growing up). There could be very strong societal influences on cognitive ability – such as expecting women to be good with words and bad with numbers – that simply aren’t captured by the data analysed here.

Personally, I find it interesting to observe how keen people are to seize on such evidence that “essential” gender differences definitely do exist (despite the known confounds of living in a sexist society). My preferred strategy would be to withhold judgement and focus on remaking the definitely sexist society. For certain, we’ll only get the truth when we have an account of how cognitive abilities develop within both biological and social contexts. Studies like this point the way, and suggest that whatever the truth is, it should have some surprises for everyone.

Read more

The original research: The changing face of cognitive gender differences in Europe

My previous column on gender differences: Are men better wired to read maps or is it a tired cliché?

Cordelia Fine’s book, Delusions of gender: how our minds, society, and neuro-sexism create difference


This article was originally published on The Conversation.
Read the original article.

Shuffle Your Mind: Short Film Screenings

If you’re around in London on Saturday 2nd August, I’m curating a showing of short films about psychosis, hallucinations and mental health as part of the fantastic Shuffle Festival.

The films include everything from a first-person view of voice hearing, to out-of-step behaviour in the urban sprawl, to a free-diver’s deep sea hallucinations.

There will be a discussion after the showing with film-makers and first-person visionaries about the challenges of depicting altered minds, other inner worlds and the limits of mental health.

Tickets are free but you have to book as there are only 40 seats.

If you want to join us, find the event on this page (which doesn’t list all the films, so prepare for some surprises) and click to book.

Seeing ourselves through the eyes of the machine

I’ve got an article in The Observer about how our inventions have profoundly shaped how we view ourselves because we’ve traditionally looked to technology for metaphors of human nature.

We tend to think that we understand ourselves and then create technologies to take advantage of that new knowledge but it usually happens the other way round – we invent something new and then use that as a metaphor to explain the mind and brain.

As history has moved on, the mind has been variously explained in terms of wax tablets, a house with many rooms, pressures and fluids, phonograph recordings, telegraph signalling, and computing.

The idea that these are metaphors sometimes gets lost which, in some ways, is quite worrying.

It could be that we’ve reached “the end of history” as far as neuroscience goes and that everything we’ll ever say about the brain will be based on our current “brain as calculation” metaphors. But if this is not the case, there is a danger that we’ll sideline aspects of human nature that don’t easily fit the concept. Our subjective experience, emotions and the constantly varying awareness of our own minds have traditionally been much harder to understand as forms of “information processing”. Importantly, these aspects of mental life are exactly where things tend to go awry in mental illness, and it may be that our main approach for understanding the mind and brain is insufficient for tackling problems such as depression and psychosis. It could be we simply need more time with our current concepts, but history might show us that our destiny lies in another metaphor, perhaps from a future technology.

I mention Douwe Draaisma’s book Metaphors of Memory in the article but I also really recommend Alison Winter’s book Memory: Fragments of a Modern History which also covers the fascinating interaction between technological developments and how we understand ourselves.

You can read my full article at the link below.
 

Link to article in The Observer.

Awaiting a theory of neural weather

In a recent New York Times editorial, psychologist Gary Marcus noted that neuroscience is still awaiting a ‘bridging’ theory that elegantly connects neuroscience with psychology.

This reflects a common belief in cognitive science that there is a ‘missing law’ to be discovered that will tell us how mind and brain are linked – but it is quite possible there just isn’t one to be discovered.

Marcus, not arguing for the theory himself, describes it when he writes:

What we are really looking for is a bridge, some way of connecting two separate scientific languages — those of neuroscience and psychology.

Such bridges don’t come easily or often, maybe once in a generation, but when they do arrive, they can change everything. An example is the discovery of DNA, which allowed us to understand how genetic information could be represented and replicated in a physical structure. In one stroke, this bridge transformed biology from a mystery — in which the physical basis of life was almost entirely unknown — into a tractable if challenging set of problems, such as sequencing genes, working out the proteins that they encode and discerning the circumstances that govern their distribution in the body.

Neuroscience awaits a similar breakthrough. We know that there must be some lawful relation between assemblies of neurons and the elements of thought, but we are currently at a loss to describe those laws.

The idea of a DNA-like missing component that will allow us to connect theories of psychology and neuroscience is an attractive one, but it is equally likely that the connection between mind and brain is more like the relationship between molecular interactions and the weather.

In this case, there is no ‘special theory’ that connects weather to molecules, because different atmospheric phenomena are understood in multiple ways and across multiple models, each of which has a differing relationship to the scale at which the physical data is understood – as fluid flows, statistical models, atomic interactions and so on.

In explanatory terms, ‘psychology’ is probably a lot like the weather. The idea of there being a ‘psychological level’ is a human concept, and its conceptual components won’t neatly relate to neural function in a uniform way.

Some functions will have much more direct relationships – like basic sensory information and its representation in the brain’s ‘sensotopic maps’. A good example might be how visual information in space is represented in an equivalent retinotopic map in the brain.

Other functions will have much more indirect relationships, in great part because of how we define ‘functions’. Some have very empirical definitions – take iconic memory – whereas others are cultural or folk concepts – think vicarious embarrassment or nostalgia.

So it’s unlikely we’re going to find an all-purpose theoretical bridge to connect psychology and neuroscience. Instead, we’ll probably end up with what Kenneth Kendler calls ‘patchy reductionism’ – making pragmatic links between mind and brain where possible using a variety of theories and descriptions.

A search for a general ‘bridging theory’ may be a fruitless one.
 

Link to NYT piece ‘The Trouble With Brain Science’.

Out on a limb too many

Two neuropsychologists have written a fascinating review article about the desire to amputate a perfectly healthy limb, known variously as apotemnophilia, xenomelia or body integrity identity disorder.

The article is published in the journal Neuropsychiatric Disease and Treatment, although some who have these desires would probably disagree that it is a disease or disorder and are more likely to compare it to something akin to being transgender.

The article also discusses the two main themes in the research literature: an association with a sexual fetish for limb amputation (most associated with the use of the name apotemnophilia) and an alteration in body image linked to differences in the function of the parietal lobe in the brain (most associated with the use of the name xenomelia).

It’s a fascinating review of what we know about this under-recognised form of human experience but it also has an interesting snippet about how this desire first came to light not in the scientific literature, but in the letters page of Penthouse magazine:

A first description of this condition traces back to a series of letters published in 1972 in the magazine Penthouse. These letters were from erotically-obsessed persons who wanted to become amputees themselves. However, the first scientific report of this desire only appeared in 1977: Money et al described two cases who had intense desire toward amputation of a healthy limb. Another milestone was a 2005 study by Michael First, an American psychiatrist, who published the first systematic attempt to describe individuals who desire amputation of a healthy limb. Thanks to this survey, which included 52 volunteers, a number of key features of the condition are identified: gender prevalence (most individuals are men), side preference (left-sided amputations are most frequently desired), and finally, a preference toward amputation of the leg versus the arm.

The review also discusses a potentially related experience which has recently been reported – the desire to be paralysed.

If you want a more journalistic account, Matter published an extensive piece on the condition last year.
 

Link to scientific review article on apotemnophilia / xenomelia.
Link to Matter article.

Towards a scientifically unified therapy

Today’s edition of Nature has an excellent article on the need to apply cognitive science to understanding how psychological therapies work.

Psychological therapies are often called ‘talking treatments’, but this is a somewhat misleading name. Talking is essential, but it’s not where most of the change happens.

Like seeing a personal trainer in the gym, communication is key, but it’s the exercise which accounts for the changes.

In the same way, psychological therapy is only as effective as the experience of putting changes into practice, but we still know relatively little about the cognitive science behind this process.

Unfortunately, there is a traditional but unhelpful divide in psychology, where some refuse to see any sort of emotional problem as biological in any way, and a contrasting divide in psychiatry, where biology is considered the only explanation in town.

The article in Nature argues that this is pointless and counter-productive:

It is time to use science to advance the psychological, not just the pharmaceutical, treatment of those with mental-health problems. Great strides can and must be made by focusing on concerns that are common to fields from psychology, psychiatry and pharmacology to genetics and molecular biology, neurology, neuroscience, cognitive and social sciences, computer science, and mathematics. Molecular and theoretical scientists need to engage with the challenges that face the clinical scientists who develop and deliver psychological treatments, and who evaluate their outcomes. And clinicians need to get involved in experimental science. Patients, mental-health-care providers and researchers of all stripes stand to benefit.

The piece tackles many good examples of why this is the case and sets out three steps for bridging the divide.

Essential reading.
 

Link to ‘Psychological treatments: A call for mental-health science’.

Why do we bite our nails?

It can ruin the appearance of your hands, could be unhygienic and can hurt if you take it too far. So why do people do it? Biter Tom Stafford investigates

What do former British prime minister Gordon Brown, Jackie Onassis, Britney Spears and I all have in common? We all are (or were) nail biters.

It’s not a habit I’m proud of. It’s pretty disgusting for other people to watch, ruins the appearance of my hands, is probably unhygienic and sometimes hurts if I take it too far. I’ve tried to quit many times, but have never managed to keep it up.

Lately I’ve been wondering what makes someone an inveterate nail-biter like me. Are we weaker willed? More neurotic? Hungrier? Perhaps, somewhere in the annals of psychological research there could be an answer to my question, and maybe even hints about how to cure myself of this unsavoury habit.

My first dip into the literature turns up the medical name for excessive nail biting: ‘onychophagia’. Psychiatrists classify it as an impulse control problem, alongside things like obsessive compulsive disorder. But this is for extreme cases, where psychiatric help is beneficial, as with other excessive grooming habits like skin picking or hair pulling. I’m not at that stage, falling instead among the majority of nail biters who carry on the habit without serious side effects. Up to 45% of teenagers bite their nails, for example; teenagers may be a handful but you wouldn’t argue that nearly half of them need medical intervention. I want to understand the ‘subclinical’ side of the phenomenon – nail biting that isn’t a major problem, but still enough of an issue for me to want to be rid of it.

It’s mother’s fault

Psychotherapists have had some theories about nail biting, of course. Sigmund Freud blamed it on arrested psycho-sexual development, at the oral stage (of course). Typically for Freudian theories, oral fixation is linked to myriad causes, such as under-feeding or over-feeding, breast-feeding too long, or a problematic relationship with your mother. It also has a grab-bag of resulting symptoms: nail biting, of course, but also a sarcastic personality, smoking, alcoholism and a love of oral sex. Other therapists have suggested nail-biting may be due to inward hostility – it is a form of self-mutilation, after all – or nervous anxiety.

Like most psychodynamic theories these explanations could be true, but there’s no particular reason to believe they should be true. Most importantly for me, they don’t offer any strong suggestions on how to cure myself of the habit. I’ve kind of missed the boat as far as the extent of breast-feeding goes, and I bite my nails even when I’m at my most relaxed, so there doesn’t seem to be an easy fix there either. Needless to say, there’s no evidence that treatments based on these theories have any special success.

Unfortunately, after these speculations, the trail goes cold. A search of the scientific literature reveals only a handful of studies on the treatment of nail-biting. One reports that any treatment which made people more aware of the habit seemed to help, but beyond that there is little evidence to report. Indeed, several of the few articles on nail-biting open by commenting on the surprising lack of literature on the topic.

Creature of habit

Given this lack of prior scientific treatment, I feel free to speculate for myself. So, here is my theory on why people bite their nails, and how to treat it.

Let’s call it the ‘anti-theory’ theory. I propose that there is no special cause of nail biting – not breastfeeding, chronic anxiety or a lack of motherly love. The advantage of this move is that we don’t need to find a particular connection between me, Gordon, Jackie and Britney. Rather, I suggest, nail biting is just the result of a number of factors which – due to random variation – combine in some people to create a bad habit.

First off, there is the fact that putting your fingers in your mouth is an easy thing to do. It is one of the basic functions for feeding and grooming, and so it is controlled by some pretty fundamental brain circuitry, meaning it can quickly develop into an automatic reaction. Added to this, there is a ‘tidying up’ element to nail biting – keeping them short – which means in the short term at least it can be pleasurable, even if the bigger picture is that you end up tearing your fingers to shreds. This reward element, combined with the ease with which the behaviour can be carried out, means that it is easy for a habit to develop; apart from touching yourself in the genitals it is hard to think of a more immediate way to give yourself a small moment of pleasure, and biting your nails has the advantage of being OK at school. Once established, the habit can become routine – there are many situations in everyone’s daily life where you have both your hands and your mouth available to use.

Understanding nail-biting as a habit has a bleak message for a cure, unfortunately, since we know how hard bad habits can be to break. Most people, at least once per day, will lose concentration on not biting their nails.

Nail-biting, in my view, isn’t some revealing personality characteristic, nor a maladaptive echo of some useful evolutionary behaviour. It is the product of the shape of our bodies, how hand-to-mouth behaviour is built into (and rewarded in) our brains and the psychology of habit.

And, yes, I did bite my nails while writing this column. Sometimes even a good theory doesn’t help.

 

This was my BBC Future column from last week
