BBC Future column: Hypnic Jerks

Here’s my column at BBC Future from last week. You can see the original here. The full list of my columns is here, and there is now an RSS feed, should you need it.

As we give up our bodies to sleep, sudden twitches escape our brains, causing our arms and legs to jerk. Some people are startled by them, others are embarrassed. Me, I am fascinated by these twitches, known as hypnic jerks. Nobody knows for sure what causes them, but to me they represent the side effects of a hidden battle for control in the brain that happens each night on the cusp between wakefulness and dreams.

Normally we are paralysed while we sleep. Even during the most vivid dreams our muscles stay relaxed and still, showing little sign of our internal excitement. Events in the outside world usually get ignored: not that I’d recommend trying this, but experiments have shown that even if you sleep with your eyes taped open and someone flashes a light at you, it is unlikely to affect your dreams.

But the door between the dreamer and the outside world is not completely closed. Two kinds of movements escape the dreaming brain, and they each have a different story to tell.

Brain battle

The most common movements we make while asleep are rapid eye movements. When we dream, our eyes move according to what we are dreaming about. If, for example, we dream we are watching a game of tennis, our eyes will move from left to right with each volley. These movements, generated in the dream world, escape from normal sleep paralysis and leak into the real world. Seeing a sleeping person’s eyes move is the strongest sign that they are dreaming.

Hypnic jerks aren’t like this. They are most common in children, when dreams are at their simplest, and they do not reflect what is happening in the dream world – if you dream of riding a bike, you do not move your legs in circles. Instead, hypnic jerks seem to be a sign that the motor system can still exert some control over the body as sleep paralysis begins to take over. Rather than having a single “sleep-wake” switch in the brain for controlling our sleep (i.e. ON at night, OFF during the day), we have two opposing systems balanced against each other that go through a daily dance, where each has to wrest control from the other.

Deep in the brain, below the cortex (the most evolved part of the human brain), lies one of them: a network of nerve cells called the reticular activating system. This is nestled among the parts of the brain that govern basic physiological processes, such as breathing. When the reticular activating system is in full force we feel alert and restless – that is, we are awake.

Opposing this system is the ventrolateral preoptic nucleus: ‘ventrolateral’ means it is on the underside and towards the edge in the brain, ‘preoptic’ means it is just before the point where the nerves from the eyes cross. We call it the VLPO. The VLPO drives sleepiness, and its location near the optic nerve is presumably so that it can collect information about the beginning and end of daylight hours, and so influence our sleep cycles. As the mind gives up its normal task of interpreting the external world, and starts to generate its own entertainment, the struggle between the reticular activating system and the VLPO tilts in favour of the latter. Sleep paralysis sets in.

What happens next is not fully clear, but it seems that part of the story is that the struggle for control of the motor system is not quite over yet. Few battles are won completely in a single moment. As sleep paralysis sets in, remaining daytime energy kindles and bursts out in seemingly random movements. In other words, hypnic jerks are the last gasps of normal daytime motor control.

Dream triggers

Some people report that hypnic jerks happen as they dream they are falling or tripping up. This is an example of the rare phenomenon known as dream incorporation, where something external, such as an alarm clock, is built into your dreams. When this does happen, it illustrates our mind’s amazing capacity to generate plausible stories. In dreams, the planning and foresight areas of the brain are suppressed, allowing the mind to react creatively to wherever it wanders – much like a jazz improviser responds to fellow musicians to inspire what they play.

As hypnic jerks escape during the struggle between wake and sleep, the mind is undergoing its own transition. In the waking world we must make sense of external events. During sleep the mind tries to make sense of its own activity, resulting in dreams. Whilst a veil is drawn over most of the external world as we fall asleep, hypnic jerks are obviously close enough to home – being movements of our own bodies – to attract the attention of sleeping consciousness. Along with the hallucinated night-time world, they get incorporated into our dreams.

So there is a pleasing symmetry between the two kinds of movements we make when asleep. Rapid eye movements are the traces of dreams that can be seen in the waking world. Hypnic jerks seem to be the traces of waking life that intrude on the dream world.

A bridge over troubled waters for fMRI?

Yesterday’s ‘troubles with fMRI’ article has caused lots of debate so I thought I’d post the original answers given to me by neuroimagers Russ Poldrack and Tal Yarkoni from which I quoted.

Poldrack and Yarkoni have been at the forefront of finding, fixing and fine-tuning fMRI and its difficulties. I asked them about current challenges but could only include small quotes in The Observer article. Their full answers, included below with their permission, are important and revealing, so well worth checking out.

First, however, a quick note about the reactions the piece has received from the neuroimaging community. They tend to be split into “well said” and “why are you saying fMRI is flawed?”

Because of this, it’s worth saying that I don’t think fMRI or other imaging methods are flawed in themselves. However, it is true that a significant proportion of past research has been based on potentially misleading methods.

Although it is true that these methods have largely been abandoned, there still remain some important and ongoing uncertainties about how we should interpret neuroimaging data.

As a result of these issues – and, genuinely, because brain scans are often enchantingly beautiful – I think neuroimaging results are currently given too much weight as we try to understand the brain, but we shouldn’t undervalue neuroimaging as a science.

Despite the shaken confidence in past studies, neuroimaging will clearly come out better and stronger as a result of current debates about problems with analysis and interpretation.

At the moment, the science is at a fascinating point of transition, so it’s a great time to be interested in cognitive neuroscience, and I think this is made crystal clear by Russ and Tal’s answers below.

Russ Poldrack from the University of Texas Austin

What’s the most pressing problem fMRI research needs to address at the moment?

I think that the biggest fundamental problem is the great flexibility of analytic methods that one can bring to bear on any particular dataset; the ironic thing is that this is also one of fMRI’s greatest strengths, i.e., that it allows us to ask so many different questions in many different ways. The problem comes about when researchers search across many different analysis approaches for a result, without the realization that this induces an increase in the ultimate likelihood of finding a false positive. I think that another problem that interacts with this is the prevalence of relatively underpowered studies, which are often analyzed using methods that are not stringent enough to control the level of false positives. The flexibility that I mentioned above also includes methods that are known by experts to be invalid, but unfortunately these still get into top journals, which only helps perpetuate them further.
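
Poldrack’s point about analytic flexibility can be illustrated with a toy simulation (my own sketch, nothing to do with a real fMRI pipeline): test pure noise under enough analysis variants, count any significant result as a “finding”, and the false-positive rate climbs well above the nominal 5%. Here the variants are modelled as independent looks at null data, which overstates the inflation a little, since real pipelines are correlated.

```python
import random
import statistics

random.seed(1)

def looks_significant(sample):
    # crude t-like statistic against a true mean of zero;
    # |t| > 2 approximates the usual p < 0.05 cutoff
    n = len(sample)
    m = statistics.mean(sample)
    s = statistics.stdev(sample)
    return abs(m / (s / n ** 0.5)) > 2.0

def any_variant_significant(n_variants=10, n_subjects=20):
    # each "variant" stands in for one analytic choice,
    # modelled here as an independent look at null data
    return any(
        looks_significant([random.gauss(0, 1) for _ in range(n_subjects)])
        for _ in range(n_variants)
    )

trials = 2000
rate = sum(any_variant_significant() for _ in range(trials)) / trials
print(f"chance of at least one 'finding' in pure noise: {rate:.2f}")
```

With ten independent variants the chance of at least one false positive is roughly 1 − 0.95¹⁰ ≈ 40%, which is why searching across analyses without correction is so dangerous.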

Someone online asked the question “How Much of the Neuroimaging Literature Should We Discard?” How do you think we should consider past fMRI studies that used problematic methodology?

I think that replication is the ultimate answer. For example, the methods that we used in our 1999 Neuroimage paper that examined semantic versus phonological processing seem pretty abominable by today’s standards, but the general finding of that paper has been replicated many times since then. There are many other findings from the early days that have stood the test of time, while others have failed to replicate. So I would say that if a published study used problematic methods, then one really wants to see some kind of replication before buying the result.

Tal Yarkoni from the University of Colorado at Boulder

What’s the most pressing problem fMRI research needs to address at the moment?

My own feeling (which I’m sure many people would disagree with) is that the biggest problem isn’t methodological laxness so much as skewed incentives. As in most areas of science, researchers have a big incentive to come up with exciting new findings that make a splash. What’s particularly problematic about fMRI research–as opposed to, say, cognitive psychology–is the amount of flexibility researchers have when performing their analyses. There simply isn’t any single standard way of analyzing fMRI data (and it’s not clear that there should be); as a result, it’s virtually impossible to assess the plausibility of many if not most fMRI findings simply because you have no idea how many things the researchers tried before they got something to work.

The other very serious and closely related problem is what I’ve talked about in my critique of Friston’s paper [on methods in fMRI analysis] as well as other papers (e.g., I wrote a commentary on the Vul et al “voodoo correlations” paper to the same effect): in the real world, most effects are weak and diffuse. In other words, we expect complicated psychological states or processes–e.g., decoding speech, experiencing love, or maintaining multiple pieces of information in mind–to depend on neural circuitry widely distributed throughout the brain, most of which are probably going to play a relatively minor role. The problem is that when we conduct fMRI studies with small samples at very stringent statistical thresholds, we’re strongly biased to detect only a small fraction of the ‘true’ effects, and because of the bias, the effects we do detect will seem much stronger than they actually are in the real world. The result is that fMRI studies will paradoxically tend to produce *less* interesting results as the sample size gets bigger. Which means your odds of getting a paper into a journal like Science or Nature are, in many cases, much higher if you only collect data from 20 subjects than if you collect data from 200.

The net result is that we have hundreds of very small studies in the literature that report very exciting results but are unlikely to ever be directly replicated, because researchers don’t have much of an incentive to collect the large samples needed to get a really good picture of what’s going on.
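
Yarkoni’s argument about small samples can also be sketched in a few lines of simulation (again a toy example of mine, not real data): give every study a weak true effect, a small sample and a stringent threshold, and the studies that reach significance will systematically overestimate the effect.

```python
import random
import statistics

random.seed(1)

TRUE_D = 0.3   # weak true effect size, in standard-deviation units

def significant_effect(n=20, cutoff=3.0):
    # one small "study": returns the observed effect size if it passes
    # a stringent t-like threshold, otherwise None
    sample = [random.gauss(TRUE_D, 1) for _ in range(n)]
    m = statistics.mean(sample)
    s = statistics.stdev(sample)
    if m / (s / n ** 0.5) > cutoff:
        return m / s
    return None

survivors = [d for d in (significant_effect() for _ in range(20000)) if d is not None]
print(f"true effect size:                 {TRUE_D:.2f}")
print(f"mean effect size among 'winners': {statistics.mean(survivors):.2f}")
```

Only a small minority of these simulated studies reach significance, and those that do report an effect more than twice its true size – the “funhouse mirror” in miniature.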

Someone online asked the question “How Much of the Neuroimaging Literature Should We Discard?” How do you think we should consider past fMRI studies that used problematic methodology?

This is a very difficult question to answer in a paragraph or two. I guess my most general feeling is that our default attitude to any new and interesting fMRI finding should be skepticism–instead of accepting findings at face value until we discover a good reason to discount them, we should incline toward disbelief until a finding has been replicated and extended. Personally I’d say I don’t really believe about 95% of what gets published. That’s not to say I think 95% of the literature is flat-out wrong; I think there’s probably a kernel of truth to most findings that get published. But the real problem in my view is a disconnect between what we should really conclude from any given finding and what researchers take license to say in their papers. To take just one example, I think claims of “selective” activation are almost without exception completely baseless (because very few studies really have the statistical power to confidently claim that absence of evidence is evidence of absence).

For example, suppose someone publishes a paper reporting that romantic love selectively activates region X, and that activation in that region explains a very large proportion of the variance in some behavior (this kind of thing happens all the time). My view is that the appropriate response is to say, “well, look, there probably is a real effect in region X, but if you had had a much larger sample, you would realize that the effect in region X is much smaller than you think it is, and moreover, there are literally dozens of other regions that show similarly-sized effects.” The argument is basically that much of the novelty of fMRI findings stems directly from the fact that most studies are grossly underpowered. So really I think the root problem is not that researchers aren’t careful to guard against methodological problems X, Y, and Z when doing their analyses; it’s that our mental model of what most fMRI studies can tell us is fundamentally wrong in most cases. A statistical map of brain activity is *not* in any sense an accurate window into how the brain supports cognition; it’s more like a funhouse mirror that heavily distorts the true image, and to understand the underlying reality, you also have to take into account the distortion introduced by the measurement. The latter part is where I think we have a systemic problem in fMRI research.

The trouble with fMRI

I’ve written a piece for The Observer about ‘the trouble with brain scans’ that discusses how past fMRI studies may have been based on problematic assumptions.

For years the media has misrepresented brain scan studies (“Brain centre for liking cheese discovered!”) but we are now at an interesting point where neuroscientists are starting to seriously look for problems in their own methods of analysis.

In fact, many of these problems have now been corrected, but we still have hundreds or thousands of previous studies based on methods that have since been abandoned.

In part, the piece was inspired by a post on the Neurocritic blog entitled “How Much of the Neuroimaging Literature Should We Discard?” that was prompted by growing concerns among neuroscientists.

The fact is, fMRI is a relatively new science – it has just celebrated its 20th birthday – and it is still evolving.

I suspect it will be revised and reconsidered many times yet.

Link to Observer article ‘The Trouble With Brain Scans’

What is the DSM supposed to do?

I’ve written an article for the Discover Magazine’s blog The Crux on what the DSM diagnostic manual is supposed to do.

This is quite an interesting question when you think about it. In other words: how do we define mental illness, both in theory and in practice?

The article tackles how you decide what a mental illness is in the first place and then how you go about classifying mental states that, by definition, can only be experienced by one person. It turns out, classifying mental illness is a lot like classifying literature.

It also discusses the old and possibly futile quest for ‘biological tests for mental illness’ as if there is a perfect mapping between how we classify mental states and how the brain actually works at the neurobiological level.

So if you want to know the thinking and, indeed, problems behind one of the central and often unquestioned assumptions of psychiatry, this should be a good place to start.

Link to ‘What Is the “Bible of Psychiatry” Supposed to Do?’

Sigman and the skewed screen of death

The media is buzzing this morning with the shocking news that children spend ‘more than six hours in front of screens’. The news is shocking, however, because it’s wrong.

The sound bite stems from an upcoming talk on ‘Alcohol and electronic media: units of consumption’ by evidence-ambivalent psychologist Aric Sigman who is doing a guest lecture at a special interest group meeting at the Royal College of Paediatrics and Child Health annual conference.

Sigman has a track record of being economical with evidence for the purpose of promoting his ‘traditional family values’ and this is another classic example.

The ‘six hours a day in front of the screen’ figure comes from a commercial research organisation called Childwise. It was the headline finding that made all the papers, which is quite convenient if you’re selling the report for £1800 a copy.

But why would you rely on a commercial report when you have so many non-commercial scientific studies to choose from?

A 2006 meta-analysis looked at 90, yes 90, studies on media use in young people from Europe and North America and here’s what it found.

Youth watch an average of 1.8–2.8 hours of TV a day. This has not changed for 50 years. Boys and girls spend approximately 60 and 23 minutes a day respectively on computer games. Computers account for an additional 30 minutes a day. TV viewing tends to decrease during adolescence.
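
Doing the arithmetic on those figures (a back-of-the-envelope sketch, deliberately using the higher boys’ gaming number) shows how far short of six hours they fall:

```python
# figures from the 2006 meta-analysis quoted above
tv_hours_low, tv_hours_high = 1.8, 2.8   # TV per day, in hours
games_minutes = 60                        # computer games (boys' figure)
computer_minutes = 30                     # other computer use

extra = (games_minutes + computer_minutes) / 60
print(f"estimated total screen time: "
      f"{tv_hours_low + extra:.1f}-{tv_hours_high + extra:.1f} hours a day")
```

Even taking the most generous figures, total screen time comes out at roughly 3.3–4.3 hours a day, well under the headline claim.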

Now, that’s not to say that there aren’t risks to children if they spend large amounts of their time sat on their arse. Time spent watching television has genuinely been linked to poor health. However, it’s better to inform people of the details than to push panic-inducing headlines.

Talking about ‘screen time’ as a single category is probably not helpful. For example, TV viewing seems to increase the risk of obesity more than video games do.

It’s also worth noting that researchers are now making a distinction between ‘passive screen time’ (i.e. being sat on your arse) and ‘active screen time’ (i.e. body movement-based video games), with the latter showing promise as an intervention for obesity.

The devil is in the details, rather than behind the screen.

Legal highs making the drug war obsolete

If you want any evidence that drugs have won the drug war, you just need to read the scientific studies on legal highs.

If you’re not keeping track of the ‘legal high’ scene it’s important to remember that the first examples, synthetic cannabinoids sold as ‘Spice’ and ‘K2’ incense, were only detected in 2009.

Shortly afterwards, amphetamine-like stimulant drugs appeared, largely based on variations on pipradrol and the cathinones, and now ketamine-like drugs such as methoxetamine have become widespread.

Since 1997, 150 new psychoactive substances have been reported. Almost a third of those appeared in 2010.

Last year, the US government banned several of these drugs, although the effect has been minimal as the legal high laboratories have overrun the trenches of the drug warriors.

A new study just published in the Journal of Analytical Toxicology tracked the chemical composition of legal highs as the bans were introduced.

A key question was whether the legal high firms would just try and use the same banned chemicals and sell them under a different name.

The research team found that since the ban only 4.9% of the products contained any trace of the recently banned drugs. The remaining 95.1% of products contained drugs not covered by the law.

The chemicals in legal highs have fundamentally changed since the 2011 ban and the labs have outrun the authorities in less than a year.

Another new study has looked at legal highs derived from pipradrol – a drug developed in the 1940s for treating obesity, depression, ADHD and narcolepsy.

It was made illegal in many countries during the 1970s due to its potential for abuse, because it gives an amphetamine-like high.

The study found that legal high labs have just been running through variations of the banned drug using simple modifications of the original molecule to make new unregulated versions.

The following paragraph is from this study, and even if you’re not a chemist, you can get an impression of how the drug is being tweaked in the most minor ways to create new legal versions.

Modifications include: addition of halogen, alkyl or alkoxy groups on one or both of the phenyl rings or addition of alkyl, alkenyl, haloalkyl and hydroxyalkyl groups on the nitrogen atom. Other modifications that have been reported include the substitution of a piperidine ring with an azepane ring (7-membered ring), a morpholine ring or a pyridine ring or the fusion of a piperidine ring with a benzene ring. These molecules, producing amphetamine-like effects, increase the choice of new stimulants to be used as legal highs in the coming years.

New, unknown and poorly understood psychoactive chemicals are appearing faster than they can be regulated.

The market is being driven by a demand for drugs that have the same effects as existing legal highs but won’t get you thrown in prison.

The drug war isn’t only being lost, it’s being made obsolete.