Psychotherapies and the space between us

There’s an in-depth article at The Guardian revisiting an old debate about cognitive behavioural therapy (CBT) versus psychoanalysis that falls into the trap of asking some rather clichéd questions.

For those not familiar with the world of psychotherapy: CBT is a time-limited treatment based on understanding how interpretations, behaviour and emotions become unhelpfully connected to maintain psychological problems, while psychoanalysis is a Freudian psychotherapy based on exploring and interpreting unhelpful processes in the unconscious mind left over from unresolved conflicts earlier in life.

I won’t go into the comparisons the article makes about the evidence for CBT vs psychoanalysis, except to say that when comparing the impact of treatments, both the amount and the quality of evidence are key. As with comparing football teams, pointing to individual ‘wins’ tells us little. In terms of randomised controlled trials, or RCTs, psychoanalysis has simply played far fewer matches at the highest level of competition.

The treatments are often compared because they aim to treat some of the same problems, but the comparison is usually unhelpfully shallow.

Here’s how the cliché goes: CBT is evidence-based but superficial, the scientific method applied for a quick fix that promises happiness but brings only light relief. The flip-side of this cliché says that psychoanalysis is based on apprenticeship and practice, handed down through generations. It lacks a scientific seal of approval but examines the root of life’s struggles through a form of deep artisanal self-examination.

Pitching these two clichés against each other, and suggesting that ‘old-style craftsmanship is now being recognised as superior’, is one of the great tropes in mental health – and, as it happens, in 21st-century consumerism – and there is more than a touch of marketing about this debate.

Which do you think is portrayed as commercial, mass produced, and popular, and which is expensive, individually tailored, and only available to an exclusive clientèle? Even mental health has its luxury goods.

More widely discussed (or perhaps admitted to) are the differing models of the mind that each therapy is based on. But even here, simple comparisons fall flat because many of the concepts don’t translate easily.

One of the central tropes is that psychoanalysis deals with the ‘root’ of the psychological problem while CBT only deals with its surface effects. The problem with this contrast is that psychoanalysis can only be seen to deal with the ‘root of the problem’ if you buy into the psychoanalytic view of where problems are rooted.

Is your social anxiety caused by the projection of unacceptable feelings of hatred based in unresolved conflicts from your earliest childhood relationships – as psychoanalysis might claim? Or is your social anxiety caused by the continuation of a normal fear response to a difficult situation that has been maintained due to maladaptive coping – as CBT might posit?

These views of the internal world are, in many ways, the non-overlapping magisteria of psychology.

Another common claim is that psychoanalysis assumes an unconscious whereas CBT does not. This assertion collapses on simple examination, but the two models of the unconscious are so radically different that it is hard to see how one translates into the other.

Psychoanalysis suggests that the unconscious can be understood in terms of objects, drives, conflicts and defence mechanisms that, despite being masked in symbolism, can ultimately be understood at the level of personal meaning. In contrast, CBT draws on its endowment from cognitive psychology and claims that the unconscious can often only be understood at the sub-personal level because meaning as we would understand it consciously is unevenly distributed across actions, reactions and interpretations rather than being embedded within them.

But despite this, there are also some areas of common ground that most critics miss. CBT equally cites deep structures of meaning, acquired through early experience, that lie below the surface and influence conscious experience – it just calls them core beliefs or schemas rather than complexes.

Perhaps the most annoying aspect of the CBT vs psychoanalysis debate is that it tends to ask ‘which is best’ in a general and over-vague manner rather than examining the strengths and weaknesses of each approach for specific problems.

For example, one of the areas where psychoanalysis excels is in conceptualising the therapeutic relationship as a dynamic interplay between the perceptions and emotions of therapist and patient – something that can be a source of insight and change in itself.

Notably, this is the core aspect that’s maintained in its less purist and, quite frankly, more sensible version, psychodynamic psychotherapy.

CBT’s approach to the therapeutic relationship is essentially ‘be friendly and aim for cooperation’ – the civil service model of psychotherapy if you will – which works wonderfully except for people whose central problem is itself cooperation and the management of personal interactions.

It’s no accident that most extensions of CBT (schema therapy, DBT and so on) add value by paying additional attention to the therapeutic relationship as a tool for change for people with complex interpersonal difficulties.

Because each therapy assumes a slightly different model of the mind, it’s easy to think that they are somehow battling over what it means to be human, and this is where the dramatic tension in most of these debates comes from.

Mostly though, models of the mind are just maps that help us get places. All are necessarily stylised in some way to accentuate different aspects of human nature. As long as they sufficiently reflect the territory, this highlighting helps us focus on what we most need to change.

No more Type I/II error confusion

Type I and Type II errors are, respectively, when you allow a statistical test to convince you of a false effect, and when you allow a statistical test to convince you to dismiss a true effect. Despite being fundamentally important concepts, they are terribly named. Who can ever remember which way around the two errors go? Well, now I can, thanks to a comment from a friend that I thought so useful I made it into a picture:

[Image: the boy who cried wolf – the villagers first believe in a wolf that isn’t there (Type I error), then dismiss the wolf that is (Type II error).]
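If a picture isn’t enough, the two error types are also easy to see in simulation. Here’s a minimal sketch (my own toy example, not from the original post – all names and numbers are mine) that runs many t-tests with and without a real effect and counts the false alarms and misses:

```python
# Toy simulation of both error types: run many t-tests when there is
# no real effect (Type I errors: false alarms) and when there is one
# (Type II errors: misses). Illustrative only; parameters are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, n, n_sims = 0.05, 30, 10_000

false_alarms = misses = 0
for _ in range(n_sims):
    # No true effect: both groups drawn from the same distribution.
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_alarms += 1  # Type I: believing the wolf that isn't there

    # True effect: group means differ by half a standard deviation.
    a, b = rng.normal(0, 1, n), rng.normal(0.5, 1, n)
    if stats.ttest_ind(a, b).pvalue >= alpha:
        misses += 1  # Type II: dismissing the wolf that is there

print(f"Type I rate:  {false_alarms / n_sims:.3f} (should sit near alpha = {alpha})")
print(f"Type II rate: {misses / n_sims:.3f} (1 - power for this design)")
```

With these settings the Type I rate hovers around the 5% you chose as alpha, while the Type II rate is whatever your statistical power leaves on the table.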

Twelve minutes of consciousness

The Economist has an excellent video on consciousness: what it is, and why and how it evolved.

The science section of The Economist has long had some of the best science reporting in the mainstream press and this video is a fantastic introduction to the science of consciousness.

It’s 12 minutes long and it’s worth every second of your time.

The reproducibility of psychological science

The Reproducibility Project results have just been published in Science – a massive, collaborative, ‘Open Science’ attempt to replicate 100 psychology experiments published in leading psychology journals. The results are sure to be widely debated, the headline finding being that many published effects could not be replicated. There’s an article about the study in the New York Times: Many Psychology Findings Not as Strong as Claimed, Study Says

This is a landmark in meta-science: researchers collaborating to inspect how psychological science is carried out, how reliable it is, and what that means for how we should change what we do in the future. But it is also an illustration of the process of Open Science. All the materials from the project, including the raw data and analysis code, can be downloaded from the OSF webpage. That means that if you have a question about the results, you can check it for yourself. So, by way of example, here’s a quick analysis I ran this morning: does the number of citations of the original paper predict the effect size of its replication in the Reproducibility Project? Answer: not so much.
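If you want to run a check like this yourself, the core is only a few lines. Below is a minimal sketch of the idea – the file name and column names are stand-ins, since the actual OSF dataset uses its own labels (my real analysis code is linked at the end of this post):

```python
# Sketch: does citation count of the original paper predict the
# replication effect size? 'rpp_data.csv' and the column names are
# placeholders - check the OSF files for the real ones.
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

df = pd.read_csv("rpp_data.csv")
df = df.dropna(subset=["citations_original", "effect_size_replication"])

# Spearman correlation: robust to the skew in citation counts
rho, p = stats.spearmanr(df["citations_original"],
                         df["effect_size_replication"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# Scatterplot along the lines of the figure below
df.plot.scatter(x="citations_original", y="effect_size_replication")
plt.show()
```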

[Figure: scatterplot of citations of the original paper against replication effect size]

That horizontal string of dots along the bottom is replications with close-to-zero effect sizes despite high citation counts for the original papers (nearly all of which reported non-zero, statistically significant effects). Draw your own conclusions!

Link: Reproducibility OSF project page

Link: my code for making this graph (in python)

Fifty psychological terms to just, well, be aware of

Frontiers in Psychology has just published an article on ‘Fifty psychological and psychiatric terms to avoid’. These sorts of “here’s how to talk about it” articles are popular, but can themselves be misleading, and the same applies to this one.

The article claims to identify 50 “inaccurate, misleading, misused, ambiguous, and logically confused words and phrases”.

The first thing to say is that by recommending that people avoid certain words or phrases, the article is violating its own recommendations. That may seem like a trivial point but it isn’t when you’re giving advice about how to use language in scientific discussion.

It’s fine to use even plainly wrong terms to discuss how they’re used and the multiple meanings and misconceptions behind them. In fact, a lot of scientific writing does exactly this. When there are misconceptions that may cloud people’s understanding, it’s best to address them head on rather than avoid them.

Sometimes following the recommendations for ‘phrases to avoid’ would actually hinder this process.

For example, the piece recommends you avoid the term ‘autism epidemic’ as there is no good evidence that there is an actual epidemic. But this is not advice about language, it’s just an empirical point. According to this list, all the research that has used the term to discuss the actual evidence (which runs contrary to the popular idea) should have avoided it and presumably referred to ‘the concept that shall not be named’.

The article also recommends against using ‘ambiguous’ words, but this recommendation would basically kill the English language, as many words have multiple meanings – like the word ‘meaning’, for example – and that doesn’t mean you should avoid them.

If you’re a fan of pedantry you may want to go through the article and highlight where the authors have used other ambiguous psychological phrases (starter for 10, “memory”) and post it to some obscure corner of the internet.

Many of the recommendations also rely on you agreeing with the narrow definition and limits of use that the authors premise their argument on. Do you agree that “antidepressant medication” means that the medication has a selective and specific effect on depression and no other conditions – as the authors suggest? Or do you think this just describes a property of the medication? This is exactly how medication description works throughout medicine. Aspirin is an analgesic medication and an anti-inflammatory medication, as well as having other properties. No banning needed here.

And in fact, this sort of naming is just a property of language. If you talk about an ‘off-road vehicle’, and someone pipes up to tell you “actually, off-road vehicles can also go on-road so I recommend you avoid that description” I recommend you ignore them.

The same applies to many of the definitions in this list. The ‘chemical imbalance’ theory of depression is not empirically supported, so don’t claim it is, but feel free to use the phrase if you want to discuss this misconception. Some conditions genuinely do involve a chemical imbalance – like the accumulation of copper in Wilson’s disease – so you can use the phrase accurately in this case, being aware of how it’s misused in other contexts. Don’t avoid it, just use it clearly.

With ‘lie detector test’, it’s true that no accurate test has ever been devised to detect lies – but you may be writing about research that is trying to develop one, or research that has tested the idea. ‘No difference between groups’ is fine if there is genuinely no difference in your measure between the groups (i.e. they both score exactly the same).

Some of the recommendations are essentially based on the premise that you ‘shouldn’t use the term except as it was first defined, or as defined by what we think is the authoritative source’. This is just daft advice. Terms evolve over time. Definitions shift and change. The article recommends against using ‘fetish’ except in its DSM-5 definition, despite the fact that this differs from how it’s commonly used and how it’s widely used in other academic literature. ‘Splitting’ is now widely used to mean ‘team splitting’, which the article says is ‘wrong’. It isn’t wrong – the term has just evolved.

I think philosophers would be surprised to hear ‘reductionism’ is a term to be avoided – given the massive literature on reductionism. Similarly, sociologists might be a bit baffled by ‘medical model’ being a banned phrase, given the debates over it and, unsurprisingly, its meaning.

Some of the advice is just plain wrong. Don’t use “prevalence of trait X”, says the article, because apparently prevalence only applies to things that are either present or absent and “not dimensionally distributed in the population, such as personality traits and intelligence”. But many traits are defined by cut-off scores along dimensionally defined constructs, making them categorical. If you couldn’t talk about prevalence in this way, we’d be unable to talk about the prevalence of intellectual disability (widely defined as involving an IQ of less than 70) or of dementia – which is diagnosed by cut-off scores on dimensionally varying neuropsychological test performance.
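The arithmetic is trivial once you dichotomise. A toy sketch (simulated scores, not real epidemiology) makes the point:

```python
# Prevalence from a dimensional trait: dichotomise at a cut-off.
# IQ is standardised to mean 100, SD 15; the figures are illustrative.
import numpy as np

rng = np.random.default_rng(0)
iq = rng.normal(100, 15, 1_000_000)  # simulated dimensional scores

cutoff = 70  # conventional threshold for intellectual disability
prevalence = (iq < cutoff).mean()
print(f"Prevalence below IQ {cutoff}: {prevalence:.1%}")  # ~2.3% in theory
```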

Some of the recommended terms to avoid are probably best avoided in most contexts (“hard-wired”, “love molecule”) and some are inherently self-contradictory (“Observable symptom”, “Hierarchical stepwise regression”) but again, use them if you want to discuss how they’re used.

I have to say, the piece reminds me of Steven Pinker’s criticism of ‘language mavens’ who have come up with rules for their particular version of English which they decide others must follow.

To be honest, I think the Frontiers in Psychology article is well worth reading. It’s a great guide to how some concepts are used in different ways, but it’s not good advice for what to avoid.

The best advice is probably: communicate clearly, bearing in mind that terms and concepts can have multiple meanings and your audience may not be aware of which you want to communicate, so make an effort to clarify where needed.

Link to Frontiers in Psychology article.

Are online experiment participants paying attention?

Online testing is sure to play a large part in the future of psychology. Using Mechanical Turk or other crowdsourcing sites for research, psychologists can quickly and easily gather data for any study where the responses can be provided online. One concern, however, is that online samples may be less motivated to pay attention to the tasks they are participating in. Not only is nobody watching how they do these online experiments, the whole experience is framed as a work-for-cash gig, so there is pressure to complete any activity as quickly and with as little effort as possible. To the extent that online participants are satisficing, or skimping on their attention, can we trust the data?

A newly submitted paper uses data from the Many Labs 3 project, which recruited over 3,000 participants from both online and university campus samples, to test the idea that online samples are different from the traditional offline samples used by academic psychologists.

The findings strike a note of optimism, if you’re into online testing (perhaps less so if you use traditional university samples):

Mechanical Turk workers report paying more attention and exerting more effort than undergraduate students. Mechanical Turk workers were also more likely to pass an instructional manipulation check than undergraduate students. Based on these results, it appears that concerns over participant inattentiveness may be more applicable to samples recruited from traditional university participant pools than from Mechanical Turk

This fits with previous reports showing high consistency when classic effects are tested online, and with reports that satisficing may have been very high in offline samples – we just weren’t testing for it.

However, an issue I haven’t seen discussed is whether, because of the relatively small pool of participants taking experiments on MTurk, online participants have the opportunity to become familiar with typical instructional manipulation checks (AKA ‘catch questions’, which are designed to check whether you are paying attention). If online participants adapt to our manipulation checks, then the very experiments which set out to test whether they are paying attention may not be reliable.
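For readers who haven’t met one, an instructional manipulation check is usually something as simple as this (a toy example of my own, not an item from the paper):

```python
# A toy instructional manipulation check: the instructions quietly tell
# the participant to ignore the apparent question. Anyone answering the
# surface question has probably skimmed. Entirely my own example.
imc = {
    "instructions": (
        "We care about attention. To show you have read these "
        "instructions, please ignore the question below and "
        "select 'None of the above'."
    ),
    "question": "Which sports do you play regularly?",
    "options": ["Football", "Tennis", "Swimming", "None of the above"],
    "passing_response": "None of the above",
}

def passed_check(response: str) -> bool:
    """Did the participant follow the hidden instruction?"""
    return response == imc["passing_response"]

print(passed_check("Tennis"))             # False - likely skimming
print(passed_check("None of the above"))  # True
```

You can see why familiarity would be a problem: once a worker has seen a handful of items in this format, the “hidden” instruction stops being hidden.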

Link: new paper – Graduating from Undergrads: Are Mechanical Turk Workers More Attentive than Undergraduate Participants?

This paper provides a useful overview: Conducting perception research over the internet: a tutorial review

Computation is a lens

“Face It,” says psychologist Gary Marcus in The New York Times, “Your Brain is a Computer”. The op-ed argues for understanding the brain in terms of computation, which opens up the interesting question: what does it mean for a brain to compute?

Marcus is careful to distinguish the idea that the brain is built along the same lines as modern computer hardware, which is clearly false, from the idea that its purpose is to calculate and compute. “The sooner we can figure out what kind of computer the brain is,” he says, “the better.”

In this line of thinking, the mind is the brain’s computations at work, and should be describable in terms of formal mathematics.

The idea that the mind and brain can be described in terms of information processing is the main contention of cognitive science, but this raises a key and little-asked question: is the brain a computer, or is computation just a convenient way of describing its function?

Here’s an example if the distinction isn’t clear. If you throw a stone you can describe its trajectory using calculus. Here we could ask a similar question: is the stone ‘computing’ the answer to a calculus equation that describes its flight, or is calculus just a convenient way of describing its trajectory?

In one sense the stone is ‘computing’. The physical properties of the stone and its interaction with gravity produce the same outcome as the equation. But in another sense, it isn’t, because we don’t really see the stone as inherently ‘computing’ anything.
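To make that concrete, here’s a toy sketch (arbitrary numbers, plain Python) getting the stone’s height two ways: once from the closed-form calculus solution, and once by stepping a simulated ‘physical system’ forward in time – the part the real stone gets for free:

```python
# The same answer, 'computed' two ways: the closed-form solution from
# calculus, and a step-by-step simulation standing in for the physics
# the stone does 'for free'. Parameters are arbitrary.
g, v0, dt = 9.81, 20.0, 0.001  # gravity, launch speed (m/s), time step

# Calculus: height after t seconds, thrown straight up
t = 1.5
analytic = v0 * t - 0.5 * g * t**2

# 'Physics': integrate the motion forward, tiny step by tiny step
y, v = 0.0, v0
for _ in range(round(t / dt)):
    y += v * dt
    v -= g * dt

print(f"calculus:   {analytic:.3f} m")
print(f"simulation: {y:.3f} m")  # agrees to within ~the step size
```

Both routes land on essentially the same number; the question is whether you want to say the stepping loop – or the stone itself – was ‘computing’ it.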

This may seem like a trivial example but there are in fact a whole series of analog computers that use the physical properties of one system to give the answer to an entirely different problem. If analog computers are ‘really’ computing, why not our stone?

If this is the case, what makes brains any more or less of a computer than flying rocks, chemical reactions, or the path of radio waves? Here the question just dissolves into dust. Brains may be computers but then so is everything, so asking the question doesn’t tell us anything specific about the nature of brains.

One counter-point to this is to say that brains need to algorithmically adjust to a changing environment to aid survival which is why neurons encode properties (such as patterns of light stimulation) in another form (such as neuronal firing) which perhaps makes them a computer in a way that flying stones aren’t.

But this definition would also include plants, which encode physical properties through chemical signalling to allow them to adapt to their environment.

It is worth noting that there are other philosophical objections to the idea that brains are computers, largely based on the hard problem of consciousness (in brief: could maths ever feel?).

And then there are arguments based on the boundaries of computation. If the brain is a computer based on its physical properties and the blood is part of that system, does the blood also compute? Does the body compute? Does the ecosystem?

Psychologists drawing on the tradition of ecological psychology and JJ Gibson suggest that much of what is thought of as ‘information processing’ is actually done through the evolutionary adaptation of the body to the environment.

So are brains computers? They can be if you want them to be. The concept of computation is a tool. Probably the most useful one we have, but if you say the brain is a computer and nothing else, you may be limiting the way you can understand it.

Link to ‘Face It, Your Brain Is a Computer’ in The NYT.