A brief hallucinatory twilight

I’ve got an article in The Atlantic on the hypnagogic state – the brief hallucinatory period between wakefulness and sleep – and how it is increasingly being used as a tool to make sense of consciousness.

There is a brief time, between waking and sleep, when reality begins to warp. Rigid conscious thought starts to dissolve into the gently lapping waves of early stage dreaming and the world becomes a little more hallucinatory, your thoughts a little more untethered. Known as the hypnagogic state, it has received only erratic attention from researchers over the years, but a recent series of studies have renewed interest in this twilight period, with the hope it can reveal something fundamental about consciousness itself.

The hypnagogic state has been better dealt with by artists and writers over the years – Coleridge’s poem Kubla Khan apparently emerged out of hypnagogic reverie, albeit one fuelled by opium.

It has received only occasional attention from scientists, however. More recently, a spate of studies has shown genuine mainstream interest in hypnagogia as a rich source of information about how consciousness is deconstructed as we enter sleep.

 

Link to article in The Atlantic on the hypnagogic state.

Genetics is rarely just about genes

If you want a crystal clear introduction to the role genetics can play in human nature, you can’t do much better than an article in The Guardian’s Sifting the Evidence blog by epidemiologist Marcus Munafo.

It’s been given a slightly distracting title – but ignore that and just read the main text.

Are we shaped more by our genes or our environment – the age-old question of nature and nurture? This is really a false dichotomy; few, if any, scientists working in the area of human behaviour would adhere to either an extreme nature or extreme nurture position. But what do we mean when we say that our behaviours are influenced by genetic factors? And how do we know?

It will be one of the most useful 20 minutes you’ll spend this week.
 

Link to excellent introduction to genetics and human behaviour.

3 salvoes in the reproducibility crisis

The reproducibility crisis in psychology rumbles on. For the uninitiated, this is the general brouhaha we’re having over how reliable published psychological research is. I wrote a piece on this in 2013, which now sounds a little complacent, and unnecessarily focussed on just one area of psychology, given the extent of the problems since uncovered in the way research is manufactured (or maybe not, see below). Anyway, in the last week or so there have been three interesting developments.

Despair

Michael Inzlicht blogged his ruminations on the state of the field of social psychology, and they’re not rosy: “We erred, and we erred badly“, he writes. It is a profound testament to the depth of current concerns about the reliability of psychology when such a senior scientist begins to doubt the reality of some of the phenomena he has built his career investigating.

As someone who has been doing research for nearly twenty years, I now can’t help but wonder if the topics I chose to study are in fact real and robust. Have I been chasing puffs of smoke for all these years?

Don’t panic!

But not everyone is worried. A team of Harvard A-listers, including Timothy Wilson and Daniel Gilbert, have released a press release announcing a commentary on the “Reproducibility Project: Psychology”. This was an attempt to estimate the reliability of a large sample of phenomena from the psychology literature (short introduction in Nature here). The paper from this project was picked as one of the most important of 2015 by the journal Science.

The project is a huge effort, which is open to multiple interpretations. The Harvard team’s press release is headlined “No evidence of a replicability crisis in psychological science” and claims that the “reproducibility of psychological science is indistinguishable from 100%”, as well as calling for effort to be put into repairing the damage done to the reputation of psychological research. I’d link to the press release, but it looks like between me learning of it yesterday and coming to write about it today the material has been pulled from the internet. The commentary it announced was due to be released on March the 4th, so we wait with bated breath for the good news about why we don’t need to worry about the reliability of psychology research. Come on boys, we need some good news.

UPDATE 3rd March: The website is back! No Evidence for a Replicability Crisis in Psychological Science. Commentary here, and response.

…But whatever you do, optimally weight evidence

Speaking of the Reproducibility Project, Alexander Etz produced a great Bayesian reanalysis of the data from that project (possible because it is all open access, via the Open Science Framework). This take on the project is a prime example of how open science allows people to more easily build on your results, as well as being a vital complement to the original report – not least because it stops you naively accepting any simple statistical summary of what the reproducibility project ‘means’ (e.g. “30% of studies do not replicate”). Etz and Joachim Vandekerckhove have now upgraded the analysis to a paper, which is available (open access, natch) in PLoS ONE: “A Bayesian Perspective on the Reproducibility Project: Psychology“. And their interpretation of the reliability of psychology, as informed by the reproducibility project?

Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak… The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication… We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature.
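Their final point – that small samples plus publication bias inflate published effect sizes – is easy to demonstrate for yourself. Here’s a minimal simulation sketch (mine, not from their paper; the true effect, sample size and significance filter are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

true_d = 0.2        # a modest true effect (Cohen's d)
n = 20              # small per-group sample size
n_studies = 10_000  # size of the simulated literature

published_d = []
for _ in range(n_studies):
    treatment = rng.normal(true_d, 1, n)
    control = rng.normal(0.0, 1, n)
    # observed effect size: pooled-SD Cohen's d
    sp = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
    d = (treatment.mean() - control.mean()) / sp
    # crude publication filter: only 'significant' results get published
    # (two-sample t-test, n=20 per group, needs roughly |t| > 2.02)
    t = d * np.sqrt(n / 2)
    if abs(t) > 2.02:
        published_d.append(d)

print(f"true effect:           {true_d}")
print(f"mean published effect: {np.mean(published_d):.2f}")
# The published record averages an effect several times larger than the truth.
```

The filter only lets through studies that, by chance, observed a large effect, so the published literature systematically overestimates the real one – exactly the mechanism Etz and Vandekerckhove invoke.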

Psychotherapies and the space between us

There’s an in-depth article at The Guardian revisiting an old debate about cognitive behavioural therapy (CBT) versus psychoanalysis that falls into the trap of asking some rather clichéd questions.

For those not familiar with the world of psychotherapy: CBT is a time-limited treatment based on understanding how interpretations, behaviour and emotions become unhelpfully connected to maintain psychological problems, while psychoanalysis is a Freudian psychotherapy based on the exploration and interpretation of unhelpful processes in the unconscious mind that remain from unresolved conflicts in earlier life.

I won’t go into the comparisons the article makes about the evidence for CBT vs psychoanalysis, except to say that when comparing the impact of treatments, both the amount and quality of evidence are key. As with comparing football teams, pointing to individual ‘wins’ tells us little. In terms of randomised controlled trials, or RCTs, psychoanalysis has simply played far fewer matches at the highest level of competition.

The treatments are often compared because they aim to treat some of the same problems, but the comparison is usually unhelpfully shallow.

Here’s how the cliché goes: CBT is evidence-based but superficial, the scientific method applied for a quick fix that promises happiness but brings only light relief. The flip-side of this cliché says that psychoanalysis is based on apprenticeship and practice, handed down through generations. It lacks a scientific seal of approval but examines the root of life’s struggles through a form of deep artisanal self-examination.

Pitching these two clichés against each other, and suggesting that the old-style craftsmanship is now being recognised as superior, is one of the great tropes in mental health – and, as it happens, in 21st-century consumerism – and there is more than a touch of marketing about this debate.

Which do you think is portrayed as commercial, mass produced, and popular, and which is expensive, individually tailored, and only available to an exclusive clientèle? Even mental health has its luxury goods.

Less widely discussed (or perhaps, admitted to) are the differing models of the mind that each therapy is based on. But even here, simple comparisons fall flat because many of the concepts don’t easily translate.

One of the central tropes is that psychoanalysis deals with the ‘root’ of the psychological problem while CBT only deals with its surface effects. The problem with this contrast is that psychoanalysis can only be seen to deal with the ‘root of the problem’ if you buy into the psychoanalytic view of where problems are rooted.

Is your social anxiety caused by the projection of unacceptable feelings of hatred based in unresolved conflicts from your earliest childhood relationships – as psychoanalysis might claim? Or is your social anxiety caused by the continuation of a normal fear response to a difficult situation that has been maintained due to maladaptive coping – as CBT might posit?

These views of the internal world are, in many ways, the non-overlapping magisteria of psychology.

Another common claim is that psychoanalysis assumes an unconscious whereas CBT does not. This assertion collapses on simple examination, but the two models of the unconscious are so radically different that it is hard to see how they could easily translate.

Psychoanalysis suggests that the unconscious can be understood in terms of objects, drives, conflicts and defence mechanisms that, despite being masked in symbolism, can ultimately be understood at the level of personal meaning. In contrast, CBT draws on its endowment from cognitive psychology and claims that the unconscious can often only be understood at the sub-personal level because meaning as we would understand it consciously is unevenly distributed across actions, reactions and interpretations rather than being embedded within them.

But despite this, there are also some areas of common ground that most critics miss. CBT equally cites deep structures of meaning, acquired through early experience, that lie below the surface and influence conscious experience – but calls them core beliefs or schemas rather than complexes.

Perhaps the most annoying aspect of the CBT vs psychoanalysis debate is that it tends to ask ‘which is best?’ in a general, over-vague manner rather than examining the strengths and weaknesses of each approach for specific problems.

For example, one of the areas in which psychoanalysis excels is conceptualising the therapeutic relationship as a dynamic interplay between the perceptions and emotions of therapist and patient – something that can be a source of insight and change in itself.

Notably, this is the core aspect that’s maintained in its less purist and, quite frankly, more sensible version, psychodynamic psychotherapy.

CBT’s approach to the therapeutic relationship is essentially ‘be friendly and aim for cooperation’ – the civil service model of psychotherapy if you will – which works wonderfully except for people whose central problem is itself cooperation and the management of personal interactions.

It’s no accident that most extensions of CBT (schema therapy, DBT and so on) add value by paying additional attention to the therapeutic relationship as a tool for change for people with complex interpersonal difficulties.

Because each therapy assumes a slightly different model of the mind, it’s easy to think that they are somehow battling over what it means to be human, and this is where the dramatic tension in most of these debates comes from.

Mostly though, models of the mind are just maps that help us get places. All are necessarily stylised in some way to accentuate different aspects of human nature. As long as they sufficiently reflect the territory, this highlighting helps us focus on what we most need to change.

No more Type I/II error confusion

Type I and Type II errors are, respectively, when you allow a statistical test to convince you of a false effect, and when you allow a statistical test to convince you to dismiss a true effect. Despite being fundamentally important concepts, they are terribly named. Who can ever remember which way around the two errors go? Well, now I can, thanks to a comment from a friend that I thought so useful I made it into a picture:

[Image: the Boy Who Cried Wolf as a mnemonic for Type I and Type II errors]
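And if the picture doesn’t stick, the definitions are easy to verify by simulation. A minimal sketch using scipy’s one-sample t-test (the sample size, effect size and alpha are arbitrary choices for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_sims, alpha = 30, 5_000, 0.05

# Type I error: the test cries wolf – 'significant' when there is no effect.
false_pos = sum(
    stats.ttest_1samp(rng.normal(0.0, 1, n), 0).pvalue < alpha
    for _ in range(n_sims)
)

# Type II error: the test misses the wolf – 'not significant' despite a real effect.
false_neg = sum(
    stats.ttest_1samp(rng.normal(0.4, 1, n), 0).pvalue >= alpha
    for _ in range(n_sims)
)

print(f"Type I rate (no effect, test says effect):  {false_pos / n_sims:.2f}")  # close to alpha
print(f"Type II rate (real effect, test says none): {false_neg / n_sims:.2f}")
```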

Twelve minutes of consciousness

The Economist has an excellent video on consciousness: what it is, and why and how it evolved.

The science section of The Economist has long had some of the best science reporting in the mainstream press and this video is a fantastic introduction to the science of consciousness.

It’s 12 minutes long and it’s worth every second of your time.

The reproducibility of psychological science

The results of the Reproducibility Project have just been published in Science – a massive, collaborative, ‘Open Science’ attempt to replicate 100 psychology experiments published in leading psychology journals. The results are sure to be widely debated, the headline finding being that many published results were not replicated. There’s an article in the New York Times about the study here: Many Psychology Findings Not as Strong as Claimed, Study Says.

This is a landmark in meta-science: researchers collaborating to inspect how psychological science is carried out, how reliable it is, and what that means for how we should change what we do in the future. But it is also an illustration of the process of Open Science. All the materials from the project, including the raw data and analysis code, can be downloaded from the OSF webpage. That means that if you have a question about the results, you can check it for yourself. So, by way of example, here’s a quick analysis I ran this morning: does the number of citations of a paper predict the effect size of its replication in the Reproducibility Project? Answer: not so much.

[Plot: citations of the original paper against replication effect size]

That horizontal string of dots along the bottom is replications with close-to-zero effect sizes but high citations for the original paper (nearly all of which reported non-zero, statistically significant effects). Draw your own conclusions!
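My actual code is linked below, but as a flavour of how little open data demands of you, here’s a minimal sketch of the same analysis. The file name and column names are hypothetical placeholders – check the OSF download for the real labels before running.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file/column names – substitute the real ones from the OSF data.
df = pd.read_csv("rpp_data.csv")

plt.scatter(df["citations_original"], df["effect_size_replication"])
plt.xlabel("Citations of original paper")
plt.ylabel("Replication effect size")
plt.title("Reproducibility Project: citations vs replication effect")
plt.savefig("cites_vs_effect.png")
```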

Link: Reproducibility OSF project page

Link: my code for making this graph (in python)