BBC Future column: Hypnic Jerks

Here’s my column at BBC Future from last week. You can see the original here. The full list of my columns is here and there is now an RSS feed, should you need it.

As we give up our bodies to sleep, sudden twitches escape our brains, causing our arms and legs to jerk. Some people are startled by them, others are embarrassed. Me, I am fascinated by these twitches, known as hypnic jerks. Nobody knows for sure what causes them, but to me they represent the side effects of a hidden battle for control in the brain that happens each night on the cusp between wakefulness and dreams.

Normally we are paralysed while we sleep. Even during the most vivid dreams our muscles stay relaxed and still, showing little sign of our internal excitement. Events in the outside world usually get ignored: not that I’d recommend trying this, but experiments have shown that even if you sleep with your eyes taped open and someone flashes a light at you, it is unlikely to affect your dreams.

But the door between the dreamer and the outside world is not completely closed. Two kinds of movements escape the dreaming brain, and they each have a different story to tell.

Brain battle

The most common movements we make while asleep are rapid eye movements. When we dream, our eyes move according to what we are dreaming about. If, for example, we dream we are watching a game of tennis, our eyes will move from left to right with each volley. These movements, generated in the dream world, escape the normal sleep paralysis and leak into the real world. Seeing a sleeping person’s eyes move is the strongest sign that they are dreaming.

Hypnic jerks aren’t like this. They are most common in children, when our dreams are most simple, and they do not reflect what is happening in the dream world – if you dream of riding a bike, you do not move your legs in circles. Instead, hypnic jerks seem to be a sign that the motor system can still exert some control over the body as sleep paralysis begins to take over. Rather than having a single “sleep-wake” switch in the brain for controlling our sleep (i.e. ON at night, OFF during the day), we have two opposing systems balanced against each other that go through a daily dance, where each has to wrest control from the other.

Deep in the brain, below the cortex (the most evolved part of the human brain) lies one of them: a network of nerve cells called the reticular activating system. This is nestled among the parts of the brain that govern basic physiological processes, such as breathing. When the reticular activating system is in full force we feel alert and restless – that is, we are awake.

Opposing this system is the ventrolateral preoptic nucleus: ‘ventrolateral’ means it is on the underside and towards the edge of the brain, ‘preoptic’ means it is just before the point where the nerves from the eyes cross. We call it the VLPO. The VLPO drives sleepiness, and its location near the optic nerve is presumably so that it can collect information about the beginning and end of daylight hours, and so influence our sleep cycles. As the mind gives up its normal task of interpreting the external world, and starts to generate its own entertainment, the struggle between the reticular activating system and the VLPO tilts in favour of the latter. Sleep paralysis sets in.

What happens next is not fully clear, but it seems that part of the story is that the struggle for control of the motor system is not quite over yet. Few battles are won completely in a single moment. As sleep paralysis sets in, remaining daytime energy kindles and bursts out in seemingly random movements. In other words, hypnic jerks are the last gasps of normal daytime motor control.

Dream triggers

Some people report that hypnic jerks happen as they dream they are falling or tripping up. This is an example of the rare phenomenon known as dream incorporation, where something external, such as an alarm clock, is built into your dreams. When this does happen, it illustrates our mind’s amazing capacity to generate plausible stories. In dreams, the planning and foresight areas of the brain are suppressed, allowing the mind to react creatively to wherever it wanders – much like a jazz improviser responds to fellow musicians to inspire what they play.

As hypnic jerks escape during the struggle between wake and sleep, the mind is undergoing its own transition. In the waking world we must make sense of external events; in sleep the mind tries to make sense of its own internal activity, resulting in dreams. Whilst a veil is drawn over most of the external world as we fall asleep, hypnic jerks are obviously close enough to home – being movements of our own bodies – to attract the attention of sleeping consciousness. Along with the hallucinated night-time world, they get incorporated into our dreams.

So there is a pleasing symmetry between the two kinds of movements we make when asleep. Rapid eye movements are the traces of dreams that can be seen in the waking world. Hypnic jerks seem to be the traces of waking life that intrude on the dream world.

A bridge over troubled waters for fMRI?

Yesterday’s ‘troubles with fMRI’ article has caused lots of debate so I thought I’d post the original answers given to me by neuroimagers Russ Poldrack and Tal Yarkoni from which I quoted.

Poldrack and Yarkoni have been at the forefront of finding, fixing and fine-tuning fMRI and its difficulties. I asked them about current challenges but could only include small quotes in The Observer article. Their full answers, included below with their permission, are important and revealing, so well worth checking out.

First, however, a quick note about the reactions the piece has received from the neuroimaging community. They tend to be split into “well said” and “why are you saying fMRI is flawed?”

Because of this, it’s worth saying that I don’t think fMRI or other imaging methods are flawed in themselves. However, it is true that we have discovered that a significant proportion of past research has been based on potentially misleading methods.

Although it is true that these methods have largely been abandoned there still remain some important and ongoing uncertainties around how we should interpret neuroimaging data.

As a result of these issues – and partly, I suspect, because brain scans are often enchantingly beautiful – I think neuroimaging results are currently given too much weight in our attempts to understand the brain. But that doesn’t mean we should undervalue neuroimaging as a science.

Although our confidence in past studies has been shaken, neuroimaging will clearly come out better and stronger as a result of the current debates about problems with analysis and interpretation.

At the moment, the science is at a fascinating point of transition, so it’s a great time to be interested in cognitive neuroscience and I think this is made crystal clear from Russ and Tal’s answers below.

Russ Poldrack from the University of Texas at Austin

What’s the most pressing problem fMRI research needs to address at the moment?

I think that the biggest fundamental problem is the great flexibility of analytic methods that one can bring to bear on any particular dataset; the ironic thing is that this is also one of fMRI’s greatest strengths, i.e., that it allows us to ask so many different questions in many different ways. The problem comes about when researchers search across many different analysis approaches for a result, without the realization that this induces an increase in the ultimate likelihood of finding a false positive. I think that another problem that interacts with this is the prevalence of relatively underpowered studies, which are often analyzed using methods that are not stringent enough to control the level of false positives. The flexibility that I mentioned above also includes methods that are known by experts to be invalid, but unfortunately these still get into top journals, which only helps perpetuate them further.
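Poldrack’s point about analytic flexibility can be made concrete with a little arithmetic. The sketch below assumes, purely for illustration, that each analysis pipeline amounts to an independent statistical test at alpha = 0.05 – real pipelines are correlated, so the true inflation is smaller, but the trend is the same:

```python
# Chance of at least one false positive when trying k independent
# analysis pipelines, each tested at alpha = 0.05.
# Illustrative assumption only: real pipelines are correlated.
alpha = 0.05

for k in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} pipelines tried -> P(at least one false positive) = {p_any:.2f}")
```

Under this toy assumption, trying twenty plausible pipelines gives better-than-even odds (about 64%) of at least one spurious “finding” before any real effect enters the picture.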

Someone online asked the question “How Much of the Neuroimaging Literature Should We Discard?” How do you think we should consider past fMRI studies that used problematic methodology?

I think that replication is the ultimate answer. For example, the methods that we used in our 1999 Neuroimage paper that examined semantic versus phonological processing seem pretty abominable by today’s standards, but the general finding of that paper has been replicated many times since then. There are many other findings from the early days that have stood the test of time, while others have failed to replicate. So I would say that if a published study used problematic methods, then one really wants to see some kind of replication before buying the result.

Tal Yarkoni from the University of Colorado at Boulder

What’s the most pressing problem fMRI research needs to address at the moment?

My own feeling (which I’m sure many people would disagree with) is that the biggest problem isn’t methodological laxness so much as skewed incentives. As in most areas of science, researchers have a big incentive to come up with exciting new findings that make a splash. What’s particularly problematic about fMRI research–as opposed to, say, cognitive psychology–is the amount of flexibility researchers have when performing their analyses. There simply isn’t any single standard way of analyzing fMRI data (and it’s not clear that there should be); as a result, it’s virtually impossible to assess the plausibility of many if not most fMRI findings simply because you have no idea how many things the researchers tried before they got something to work.

The other very serious and closely related problem is what I’ve talked about in my critique of Friston’s paper [on methods in fMRI analysis] as well as other papers (e.g., I wrote a commentary on the Vul et al “voodoo correlations” paper to the same effect): in the real world, most effects are weak and diffuse. In other words, we expect complicated psychological states or processes–e.g., decoding speech, experiencing love, or maintaining multiple pieces of information in mind–to depend on neural circuitry widely distributed throughout the brain, most of which are probably going to play a relatively minor role. The problem is that when we conduct fMRI studies with small samples at very stringent statistical thresholds, we’re strongly biased to detect only a small fraction of the ‘true’ effects, and because of the bias, the effects we do detect will seem much stronger than they actually are in the real world. The result is that fMRI studies will paradoxically tend to produce *less* interesting results as the sample size gets bigger. Which means your odds of getting a paper into a journal like Science or Nature are, in many cases, much higher if you only collect data from 20 subjects than if you collect data from 200.

The net result is that we have hundreds of very small studies in the literature that report very exciting results but are unlikely to ever be directly replicated, because researchers don’t have much of an incentive to collect the large samples needed to get a really good picture of what’s going on.
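The “small samples plus stringent thresholds” bias Yarkoni describes is easy to demonstrate with a toy simulation. Everything below – the true effect size, the sample size, the threshold – is an illustrative assumption of mine, not a figure from any real fMRI study:

```python
import random

# Toy simulation: when only "significant" results from small studies
# get published, the published effect sizes are inflated well beyond
# the true effect. All numbers here are illustrative assumptions.
random.seed(1)

TRUE_EFFECT = 0.2    # true effect, in standard-deviation units
N = 20               # subjects per study
THRESHOLD = 2.5      # stringent cutoff on the z-like statistic

published = []
for _ in range(20000):                        # simulate many small studies
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    mean = sum(sample) / N
    se = 1.0 / N ** 0.5                       # known-variance standard error
    if mean / se > THRESHOLD:                 # only big effects "get published"
        published.append(mean)

avg = sum(published) / len(published)
print(f"true effect: {TRUE_EFFECT:.2f}; average published effect: {avg:.2f}")
```

With these made-up numbers, the average “published” effect comes out at roughly three times the true effect – the funhouse-mirror distortion in miniature.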

Someone online asked the question “How Much of the Neuroimaging Literature Should We Discard?” How do you think we should consider past fMRI studies that used problematic methodology?

This is a very difficult question to answer in a paragraph or two. I guess my most general feeling is that our default attitude to any new and interesting fMRI finding should be skepticism–instead of accepting findings at face value until we discover a good reason to discount them, we should incline toward disbelief until a finding has been replicated and extended. Personally I’d say I don’t really believe about 95% of what gets published. That’s not to say I think 95% of the literature is flat-out wrong; I think there’s probably a kernel of truth to most findings that get published. But the real problem in my view is a disconnect between what we should really conclude from any given finding and what researchers take license to say in their papers. To take just one example, I think claims of “selective” activation are almost without exception completely baseless (because very few studies really have the statistical power to confidently claim that absence of evidence is evidence of absence).

For example, suppose someone publishes a paper reporting that romantic love selectively activates region X, and that activation in that region explains a very large proportion of the variance in some behavior (this kind of thing happens all the time). My view is that the appropriate response is to say, “well, look, there probably is a real effect in region X, but if you had had a much larger sample, you would realize that the effect in region X is much smaller than you think it is, and moreover, there are literally dozens of other regions that show similarly-sized effects.” The argument is basically that much of the novelty of fMRI findings stems directly from the fact that most studies are grossly underpowered. So really I think the root problem is not that researchers aren’t careful to guard against methodological problems X, Y, and Z when doing their analyses; it’s that our mental model of what most fMRI studies can tell us is fundamentally wrong in most cases. A statistical map of brain activity is *not* in any sense an accurate window into how the brain supports cognition; it’s more like a funhouse mirror that heavily distorts the true image, and to understand the underlying reality, you also have to take into account the distortion introduced by the measurement. The latter part is where I think we have a systemic problem in fMRI research.

The trouble with fMRI

I’ve written a piece for The Observer about ‘the trouble with brain scans’ that discusses how past fMRI studies may have been based on problematic assumptions.

For years the media has misrepresented brain scan studies (“Brain centre for liking cheese discovered!”) but we are now at an interesting point where neuroscientists are starting to seriously look for problems in their own methods of analysis.

In fact, many of these problems have now been corrected, but we still have hundreds or thousands of previous studies based on methods that have since been abandoned.

In part, the piece was inspired by a post on the Neurocritic blog entitled “How Much of the Neuroimaging Literature Should We Discard?” that was prompted by growing concerns among neuroscientists.

The fact is, fMRI is a relatively new science – it has just celebrated its 20th birthday – and it is still evolving.

I suspect it will be revised and reconsidered many times yet.

Link to Observer article ‘The Trouble With Brain Scans’

What is the DSM supposed to do?

I’ve written an article for the Discover Magazine’s blog The Crux on what the DSM diagnostic manual is supposed to do.

This is quite an interesting question when you think about it. In other words, it asks: how do we define mental illness, both in theory and in practice?

The article tackles how you decide what a mental illness is in the first place and then how you go about classifying mental states that, by definition, can only be experienced by one person. It turns out, classifying mental illness is a lot like classifying literature.

It also discusses the old and possibly futile quest for ‘biological tests for mental illness’ as if there is a perfect mapping between how we classify mental states and how the brain actually works at the neurobiological level.

So if you want to know the thinking and, indeed, problems behind one of the central and often unquestioned assumptions of psychiatry, this should be a good place to start.

Link to ‘What Is the “Bible of Psychiatry” Supposed to Do?’

Sigman and the skewed screen of death

The media is buzzing this morning with the shocking news that children spend ‘more than six hours in front of screens’. The news is shocking, however, because it’s wrong.

The sound bite stems from an upcoming talk on ‘Alcohol and electronic media: units of consumption’ by evidence-ambivalent psychologist Aric Sigman who is doing a guest lecture at a special interest group meeting at the Royal College of Paediatrics and Child Health annual conference.

Sigman has a track record of being economical with evidence for the purpose of promoting his ‘traditional family values’ and this is another classic example.

The ‘six hour a day in front of the screen’ figure comes from a commercial research organisation called Childwise. It was the headline finding that made all the papers, which is quite convenient if you’re selling the report for £1800 a copy.

But why would you rely on a commercial report when you have so many non-commercial scientific studies to choose from?

A 2006 meta-analysis looked at 90, yes 90, studies on media use in young people from Europe and North America and here’s what it found.

Youth watch an average of 1.8–2.8 h of TV a day. This has not changed for 50 years. Boys and girls spend approximately 60 and 23 min/day respectively on computer games. Computers account for an additional 30 min/day. TV viewing tends to decrease during adolescence.

Now, that’s not to say that there aren’t risks to children if they spend large amounts of their time sat on their arse. Time spent watching television has genuinely been linked to poor health. However, it’s better to inform people of the details rather than the panic-inducing headlines.

For example, talking about ‘screen time’ as a single category is probably not helpful: TV viewing seems to increase the risk of obesity more than video games do.

It’s also worth noting that researchers are now making a distinction between ‘passive screen time’ (i.e. being sat on your arse) and ‘active screen time’ (i.e. body-movement-based video games), with the latter looking like a promising intervention for obesity.

The devil is in the details, rather than behind the screen.

Legal highs making the drug war obsolete

If you want any evidence that drugs have won the drug war, you just need to read the scientific studies on legal highs.

If you’re not keeping track of the ‘legal high’ scene it’s important to remember that the first examples, synthetic cannabinoids sold as ‘Spice’ and ‘K2’ incense, were only detected in 2009.

Shortly afterwards, amphetamine-like stimulant drugs, largely based on variations of pipradrol and the cathinones, appeared, and now ketamine-like drugs such as methoxetamine have become widespread.

Since 1997, 150 new psychoactive substances have been reported. Almost a third of those appeared in 2010.

Last year, the US government banned several of these drugs, although the effect has been minimal as the legal high laboratories have overrun the trenches of the drug warriors.

A new study just published in the Journal of Analytical Toxicology tracked the chemical composition of legal highs as the bans were introduced.

A key question was whether the legal high firms would just try and use the same banned chemicals and sell them under a different name.

The research team found that, since the ban, only 4.9% of the products contained any trace of the recently banned drugs. The remaining 95.1% contained drugs not covered by the law.

The chemicals in legal highs have fundamentally changed since the 2011 ban and the labs have outrun the authorities in less than a year.

Another new study has looked at legal highs derived from pipradrol – a drug developed in the 1940s for treating obesity, depression, ADHD and narcolepsy.

It was made illegal in many countries during the 70s due to its potential for abuse because it gives an amphetamine-like high.

The study found that legal high labs have just been running through variations of the banned drug using simple modifications of the original molecule to make new unregulated versions.

The following paragraph is from this study, and even if you’re not a chemist, you can get an impression of how the drug is being tweaked in the most minor ways to create new legal versions.

Modifications include: addition of halogen, alkyl or alkoxy groups on one or both of the phenyl rings or addition of alkyl, alkenyl, haloalkyl and hydroxyalkyl groups on the nitrogen atom. Other modifications that have been reported include the substitution of a piperidine ring with an azepane ring (7-membered ring), a morpholine ring or a pyridine ring or the fusion of a piperidine ring with a benzene ring. These molecules, producing amphetamine-like effects, increase the choice of new stimulants to be used as legal highs in the coming years.

New, unknown and poorly understood psychoactive chemicals are appearing faster than they can be regulated.

The market is being driven by a demand for drugs that have the same effects as existing legal highs but won’t get you thrown in prison.

The drug war isn’t only being lost, it’s being made obsolete.

Uploaded to the Life network

A fantastic short film about what you might see when your mind is uploaded to an online storage cloud in 2052. It’s subtitled “the Singularity, ruined by lawyers”.

The piece is by futurist Tom Scott who obviously sees the consciousness uploading business far more pessimistically than me.

Personally, I’m going to get uploaded to a Linux server. It’ll be completely free but won’t support all my mental states.

Yes, I’ll be doing software jokes in the afterlife. No, you won’t have to humour me.

Link to fantastic video ‘Welcome to Life’ (via @SebastianSeung)

BBC Future column: why your brain loves to tune out

My column for BBC Future from last week. The original is here. Thanks to Martin Thirkettle for telling me about the demo that leads the column.

Our brains are programmed to cancel out all manner of constants in our everyday lives. If you don’t believe it, try a simple, but startling experiment.

The constant whir of a fan. The sensation of the clothes against your skin. The chair pressing against your legs. Chances are that you were not acutely aware of these until I pointed them out. The reason you had somehow forgotten about their existence? A fundamental brain process that we call adaptation.

Our brains are remarkably good at cancelling out all sorts of constants in our everyday lives. The brain is interested in changes that it needs to react or respond to, and so brain cells are charged with looking for any of these differences, no matter how minute. This makes it a waste of time registering things that are not changing, like the sensation of clothes or a chair against your body, so the brain uses adaptation to tune this background out, allowing you to focus on what is new.

If you don’t believe me, try this simple, but startling demonstration. First, hold your eyeball perfectly still. You could use calipers to do this, or a drug that paralyses the eye muscles, but my favourite method is to use my thumb and index finger. Using the sides of your thumb and finger, press on the bone of the eye socket, through your upper and lower eyelids. Do this gently. Try it with one eye first, closing the other eye or covering it with your hand.

With your eye fixed in position, keep your head still and soon you will experience the strangest thing. (You will have to stop reading at this point. I don’t mind. We will pick up when you have finished). After a few seconds the world in front of you will fade away. As long as you are holding your eyeball perfectly still, you will very quickly discover that you can see nothing at all. Blink, or move your head, let go of your eye and the world will come back. What’s going on?!

Now you see it…

For all of our senses, when a certain input is constant we gradually get used to it. As you are holding your eye still, exactly the same pattern of light is falling on each brain cell that makes up the receptors in the back of your eye. Adaptation cancels out this constant stimulation, fading out the visual world. The receptors in your eye are still processing information. They have not gone to sleep. They simply stop firing as much, reducing the messages they pass on about incoming sensations – in effect the message passed on to the rest of the brain is “nothing new… nothing new… nothing new…”. You can make your brain cells spring into action by moving your eye, or by waving your hand in front of your face. Your hand, or anything moving in the visual world, is enough of a change to counteract the adaptation.

This sounds like it could go badly wrong. What if I am watching something, or someone, thinking hard about it, and forget to move my eyes for a few seconds? Will adaptation mean that thing disappears? Well, yes, it could in principle. But the reason it does not happen in practice is due to an ingenious work-around that evolution has built into the design of the eyes – they constantly jiggle in their sockets. As well as the large rapid eye movements we make several times a second, there is also a constant, almost unnoticeable twitching of the eye muscles that means your eyes are never absolutely still, even when you are fixing your gaze on one point. This prevents any fading out due to adaptation.


You can see this twitching when you look at a single point of light against a dark background (such as a single star in the sky, or a glowing cigarette end in a totally dark room). Without a frame of reference your brain will be unable to infer a stable position of the point of light. Every twitch of your eye muscles will seem like a movement of the point of light (a phenomenon called the autokinetic effect).

Adaptation is so useful for the brain’s processing of information that it has been kept by evolution, even in basic visual processing, and this extra muscle twitching has been added in to prevent too much adaptation causing problems for us. But the basic mechanism is still there, as my eye experiment revealed.

Once you understand adaptation, you discover that it is all around us. It is the reason people shout when they come out of nightclubs (they have got used to the constant high volume, so it does not seem as loud to them as it does to the people they wake up on the way home). It is why a smell that might have hit you as overpowering when you first entered a room can be ignored once you’ve got used to it. And it is related to the phenomenon of word alienation, whereby you repeat a word so often it loses its meaning. But most of the time it operates quietly, in the background, helping to filter out the things that do not change, so that we can concentrate on the things that do.


A history of human sacrifice

A video on the history of human sacrifice is available from Science magazine as part of their special issue on human conflict.

Sadly, all the articles are locked behind a paywall but the video is free to view and has science writer Ann Gibbons discussing how the practice evolved through the ages and how archaeologists have been uncovering the evidence.

If you can’t stump up the cash for what looks like a genuinely fascinating issue, there’s more discussion from the latest edition on the podcast, where the science of racism and prejudice is explored.

Link to locked special issue.
Link to video.
Link to podcast.

Psychology and the one-hit wonder

Don’t miss an important article in this week’s Nature about how psychologists are facing up to problems with unreplicated studies in the wake of several high-profile controversies.

Positive results in psychology can behave like rumours: easy to release but hard to dispel. They dominate most journals, which strive to present new, exciting research. Meanwhile, attempts to replicate those studies, especially when the findings are negative, go unpublished, languishing in personal file drawers or circulating in conversations around the water cooler…

One reason for the excess in positive results for psychology is an emphasis on “slightly freak-show-ish” results, says Chris Chambers, an experimental psychologist at Cardiff University, UK. “High-impact journals often regard psychology as a sort of parlour-trick area,” he says. Results need to be exciting, eye-catching, even implausible. Simmons says that the blame lies partly in the review process. “When we review papers, we’re often making authors prove that their findings are novel or interesting,” he says. “We’re not often making them prove that their findings are true.”

It’s perhaps worth noting that clinical psychology suffers somewhat less from this problem, as treatment studies tend to get replicated by competing groups and negative studies are valued just as highly.

However, it would be interesting to see whether the “freak-show-ish” performing pony studies are less likely to replicate than specialist and not very catchy cognitive science (dual-process theory of recognition, I’m looking at you).

As a great complement to the Nature article, this month’s The Psychologist has an extended look at the problem of replication [pdf] and talks to a whole range of people affected by the problem, from journalists to research experts.

But I honestly don’t know where this ‘conceptual replication’ thing came from – where you test the general conclusion of a study in another form – as this just seems to be a test of the theory with another study.

It’s like saying your kebab is a ‘conceptual replication’ of the pizza you made last night. Close, but no napoletana.

Link to Nature article on psychology and replication.
pdf of Psychologist article ‘Replication, replication, replication’

She’s lost control

An article in Slate claims to have detected a ‘logic hole’ in how much sympathy we feel for people with mental illness: both psychopathy and autism are ‘biological disorders’ that people ‘can’t help’, yet we feel quite differently about people affected by each.

The ‘logic hole’, however, doesn’t exist, because it is based on a misunderstanding of the role of neuroscience in understanding behaviour and a caricature of what it means to have ‘no control’ over a condition.

Here’s what the article claims:

In the piece [recently published in The New York Times], Kahn compares psychopathy to autism, not because the two disorders are similar in their manifestation, but because psychologists believe they’re both neurological disorders, i.e. based in the brain and really something that the sufferer can’t help.

This caused me to note on Twitter that even though the conditions are similar in this way, autism garners sympathy and psychopathy doesn’t. In fact, most social discourse around psychopathy is still demonizing and utterly unsympathetic to the parents, who are often blamed for the condition. It struck me as an interesting logic hole in our cultural narrative around mental illness, since the usual assumption is that sympathy for mental illness is directly correlated with inability to control your problems.

Clearly the author has good intentions and aims to reduce the stigma associated with mental illness but in terms of behavioural problems, everything is a ‘biological disorder’ because all your behaviour originates in the brain.

The idea that because a disorder is ‘based in the brain’ it therefore follows that it is ‘really something that the sufferer can’t help’ is a complete fallacy.

Psychopathy, autism, depression, over-eating, persistently losing your keys and constantly getting annoyed at X Factor are all ‘based in the brain’ and this fact has nothing to do with how much control you have over the behaviour.

Putting this misunderstanding aside, however, there is also the unhelpful implication that someone ‘has’ or ‘has not’ control over their thoughts, behaviour, emotions and propensities, especially if they have a psychiatric diagnosis.

Conscious control varies between individuals, is affected by genetics, is amenable to change and training, and depends on the specific task, situation or action.

This does not mean that everyone with autism, psychopathy or any other diagnosis can just decide not to react in a certain way, but it would be equally stigmatising and simply wrong to assume that current difficulties are forever ‘fixed’.

The article finishes with: “I was just interested in the fact that there’s no relationship between how much we care about those with a mental disorder and how much those with it can help having it.”

In reality, sympathy for people with disorders is a complex phenomenon, and the perception of ‘how much control the person has’ over the condition is only one of the factors. The (often equally bogus) moral associations also play a part, as does the seriousness of the condition and the medical speciality that treats it.

Nevertheless, we need to get away from the idea that ‘biology means poor control’ because it is both a fallacy, and, ironically, known to be particularly stigmatising in itself.
 

Link to somewhat confused Slate article (via @ejwillingham)

A look inside digital humanity

BBC Radio 4 has just started an excellent series called The Digital Human that looks at how we use technology and how it affects our relationship to the social world.

It’s written and presented by psychologist Aleks Krotoski and the first two episodes are already online.

The first discusses the tendency to capture and display personal media through sites like Flickr and YouTube but, so far, the stand-out episode has been the second, which discusses the presentation of self online and how much control we have over it.

I think it’s going to be a six-part series so there should be plenty more great stuff on the way.
 

Link to podcasts of Digital Human series.

Sex survey a let down in bed

A ‘saucy sex survey’ has been doing the rounds in the media that claims to be one of the largest studies on the sex lives of UK citizens. Unfortunately, it seems to be a bit of a let down in bed.

The study has been carried out by an unholy alliance between one of the country’s most respected relationship counselling charities, Relate, and the Ann Summers chain of sex shops but, sadly, it seems the commercial fluff has won out over the genuine insight.

I’m a big fan of Relate. They provide sex and relationship counselling regardless of status, sexuality or income and do an important and often thankless task.

In fact, my mum was a counsellor for them, years ago, when they were still called ‘Marriage Guidance’, and it was one of the things that got me interested in psychology.

The charity also runs a training and research institute for psychologists, psychotherapists and the like, and has built up a reputation for an evidence-based, down-to-earth approach.

Which makes it all the more surprising that they’d get involved with a survey that is clearly designed as a marketing gimmick rather than genuinely useful research.

How do I know it was a marketing gimmick? Because it was discussed in Marketing Week magazine as an example of Ann Summers’ ongoing ‘brand overhaul’ aimed at appealing to ‘a more mature audience’.

“Both parties”, says the article, “hope to make the dual branded survey an annual census”. Lovely.

Now, I’m not necessarily against commercial-academic double teaming, if you’ll excuse the turn of phrase, but you’d better produce something of quality if you want to keep your head held high.

But in this case, the whole thing looks dodgy. The full report, available online as a pdf, is just a bunch of good typesetting, poor graphics and lists of percentages.

What’s more worrying is that Relate won’t release their questions or how they went about asking them. Sex ninja Dr Petra Boynton [not quite her official title] has been trying to get hold of them, in part because the way questions about a sensitive subject like sex are asked can greatly affect the answers you get.

And of course, which questions you ask is also key. A critical article in today’s Guardian raises some uncomfortable issues about the survey, noting that “It sets up a model of the normal libido as frisky and adventurous, looking to try threesomes, bondage and toys – and those things are normal, but so too is not wanting to try them”.

Except, of course, if you’re a massive retailer with an interest in selling people ‘frisky and adventurous’ accessories.
 

Link to article ‘Ann Summers and Relate ought to be unlikely bedfellows’

How the British missed a trip

The first ever medical report on the effects of magic mushrooms is featured in an article in Current Biology. The excerpt is from a 1799 report entitled ‘On A Poisonous Species of Agaric’ from an issue of The London Medical and Physical Journal.

The psychological effects of hallucinogenic, or ‘magic’, mushrooms were first documented in the medical literature in 1799: a forty-year-old father of four, JS, collected wild mushrooms in London’s Green Park and cooked them as a breakfast stew for himself and his four young children. The apothecary Everard Brande described what happened next:

“Edward, one of the children (eight years old), who had eaten a large proportion of the mushrooms, as they thought them, was attacked with fits of immoderate laughter, nor could the threats of his father or mother refrain him. To this succeeded vertigo, and a great deal of stupor, from which he was roused by being called or shaken, but immediately relapsed. […] he sometimes pressed his hands on different parts of his abdomen, as if in pain, but when roused and interrogated as to it, he answered indifferently, yes, or no, as he did to every other question, evidently without any relation to what was asked. About the same time the father, aged forty, was attacked with vertigo, and complained that everything appeared black, then wholly disappeared”

The report is curious for two reasons. The first is that, contrary to the title, the mushroom wasn’t a ‘species of Agaric’.

Agaric here refers to the fly agaric, a red and white spotted toadstool that has long been known to have deliriant properties due to its effect on the acetylcholine receptors in the brain.

But the report clearly describes the classic ‘magic mushroom’ found in the UK, Psilocybe semilanceata, a small brown fungus that produces its hallucinogenic effects through the serotonin system – as do most recreational psychedelic drugs.

The other curious thing is that this hallucinogenic mushroom is common in the UK but seemingly lay undiscovered until 1799.

In contrast, mushrooms from the same genus that are equally common in South America were first recorded some 2,000 years ago and became a central part of indigenous spirituality. The Aztecs called these mushrooms teonanacatl – the God mushroom – and considered them a way of accessing the divine.

The British, it seemed, either missed or ignored the fungus, and considered it nothing more than an inedible brown pest.
 

Link to 1799 report on the effects of magic mushrooms.

As addictive as cupcakes

If I read the phrase “as addictive as cocaine” one more time I’m going to hit the bottle.

Anything that is overused, pleasurable, or has become vaguely associated with the dopamine system gets compared to cocaine.

In fact, here is a list of things claimed to be as addictive as the illegal nose powder in the popular press:

World of Warcraft
Power
Nicotine
Junk food
High-Fructose Corn Syrup
Ice cream
Cannabis
Love
Gambling
Fatty foods
Porn
Facebook
Sugar
Cupcakes
Running
Stories

And here is a scientifically verified list of things genuinely as addictive as cocaine:

Cocaine

In fact, the concept of ‘as addictive as cocaine’ makes very little sense. Even among drugs, cocaine has a unique chemical profile and social context, and these are the main things that determine its ‘addictiveness’.

Even if you wanted to make the vague analogy that rates of problematic use are similar you’d need to do a decent epidemiological study.

The classic research from the US reports that about 5% of users become dependent on cocaine within two years of starting the drug.

We are still waiting for a similar epidemiological study on the use of World of Warcraft or the consumption of cupcakes.
 

Link to cocaine entry on Wikipedia