Rational judges, not extraneous factors in decisions

The graph tells a dramatic story of irrationality, presented in the 2011 paper Extraneous factors in judicial decisions. It shows the outcome of parole board decisions, as ruled by judges, plotted against the order in which those decisions were made. The circles show the meal breaks taken by the judges.

As you can see, the decisions change the further the judge gets from his or her last meal, dropping dramatically from around a 65% chance of a favourable decision if you are the first case after a meal break to close to 0% if you are the last case in a long series before a break.

In their paper, the original authors argue that this effect of order really is due to the judges’ hunger, and not a confound introduced by some other factor which affects both the order of cases and their chances of success. For example, the lawyers sit outside the closed doors of the court, so they can’t time their best cases to come just after a break – they don’t know when the judge is taking a meal. The effect also survives additional analyses in which the severity of the prisoner’s crime and the length of the sentence are factored in. The interpretation is that, as the judges tire, they fall back more and more on a simple heuristic: playing safe and refusing parole.

This seeming evidence of the irrationality of judges has been cited hundreds of times in economics, psychology and legal scholarship. Now a new analysis by Andreas Glöckner in the journal Judgment and Decision Making questions these conclusions.

Glöckner’s analysis doesn’t prove that extraneous factors weren’t influencing the judges, but he shows how the same effect could be produced by entirely rational judges interacting with the protocols required by the legal system.

The main analysis works like this: we know that favourable rulings take longer than unfavourable ones (~7 mins vs ~5 mins), and we assume that judges are able to guess how long a case will take to rule on before they begin it (from clues like the thickness of the file, the types of request made, the representation the prisoner has and so on). Finally, we assume judges have a time limit in mind for each of the three sessions of the day, and will avoid starting cases which they estimate will overrun the time limit for the current session.

It turns out that this kind of rational time management is sufficient to generate the drops in favourable outcomes. How this occurs isn’t straightforward, and it interacts with a quirk of the original authors’ data presentation. Specifically, their graph plots cases by order number even though the number of cases in each session varied from day to day. So, for example, it shows that the 12th case after a break is the least likely to be judged favourably, but there wasn’t always a 12th case in each session – and the sessions that lasted that long tended to be those packed with shorter, unfavourable cases, which were therefore more likely to contribute to that data point.
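This mechanism is easy to sketch in code. The simulation below is not Glöckner’s actual model: the ~7 and ~5 minute ruling times come from the original paper, but the 60-minute session budget, the 50/50 case merits and the simple stopping rule are illustrative assumptions of mine.

```python
import random

FAV_MIN, UNFAV_MIN = 7.0, 5.0  # approximate mean ruling times from the paper
SESSION_LIMIT = 60.0           # assumed session budget (illustrative, not from the paper)

def simulate_session():
    """Rule on a queue of 50/50 cases, stopping before a case would overrun the session."""
    elapsed, outcomes = 0.0, []
    while True:
        favourable = random.random() < 0.5           # case merits are independent of order
        duration = FAV_MIN if favourable else UNFAV_MIN
        if elapsed + duration > SESSION_LIMIT:
            break                                    # the judge defers this case to the next session
        elapsed += duration
        outcomes.append(favourable)
    return outcomes

random.seed(1)
by_position = {}
for _ in range(20000):
    for pos, fav in enumerate(simulate_session()):
        by_position.setdefault(pos, []).append(fav)

# Proportion favourable by ordinal position: starts near 50% and falls at late
# positions, because only sessions full of short (unfavourable) cases last that long.
for pos in sorted(by_position):
    favs = by_position[pos]
    print(pos + 1, round(sum(favs) / len(favs), 2))
```

Even though every case here has a 50% chance on its merits, aggregating by ordinal position reproduces a decline – no hunger required.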

This story of claim and counter-claim shows why psychologists prefer experiments, since only then can you truly isolate causal explanations (if you are a judge and willing to go without lunch, please get in touch). It also shows the benefit of simulations for extending the horizons of our intuition. Glöckner’s achievement is to show in detail how some reasonable assumptions – including that of a rational judge – can generate a pattern which hitherto seemed explainable only by the influence of an irrelevant factor on the judges’ decisions. This doesn’t settle the matter, but it does mean we can’t be so confident that this graph shows what it is often claimed to show. The judges’ decisions may not be irrational after all, and the timing of the judges’ meal breaks may not be influencing parole outcomes.

Original finding: Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011). Extraneous factors in judicial decisions. Proceedings of the National Academy of Sciences, 108(17), 6889-6892.

New analysis: Glöckner, A. (2016). The irrational hungry judge effect revisited: Simulations reveal that the magnitude of the effect is overestimated. Judgment and Decision Making, 11(6), 601-610.

Elsewhere I have written about how evidence of human irrationality is often over-egged: For argument’s sake: evidence that reason can change minds


Serendipity in psychological research

Dorothy Bishop has an excellent post, ‘Ten serendipitous findings in psychology’, in which she lists ten celebrated discoveries that occurred by happy accident.

Each discovery is interesting in itself, but Prof Bishop puts the discoveries in the context of the recent discussion about preregistration (declaring in advance what you are looking for and how you’ll look). Does preregistration hinder serendipity? Absolutely not, says Bishop, not least because the context of ‘discovery’ is never a one-off experiment.

Note that, in all cases, having made the initial unexpected observation – either from unstructured exploratory research, or in the course of investigating something else – the researchers went on to shore up the findings with further, hypothesis-driven experiments. What they did not do is to report just the initial observation, embellished with statistics, and then move on, as if the presence of a low p-value guaranteed the truth of the result.

(It’s hard not to read into these comments a criticism of some academic journals which seem happy to publish single experiments reporting surprising findings.)

Bishop’s list contains three findings from electrophysiology (recording brain cell activity directly with electrodes), which I think is notable. In these cases neural recording acts in the place of a microscope, allowing fairly direct observation of the system the scientist is investigating at a level of detail hitherto unavailable. It isn’t surprising to me that, given a new tool of observation, the prepared minds of scientists will make serendipitous discoveries. The catch is whether, for the rest of psychology, such observational tools exist. Many psychologists use their intuition to decide where to look, and experiments to test whether their intuition is correct. The important serendipitous discoveries from electrophysiology suggest that measures which are new ways of observing, rather than merely tests of ideas, must also be important for psychological discoveries. Do such observational measures exist?

Images of ultra-thin models need your attention to make you feel bad

I have a guest post over at the BPS Research Digest, covering research on the psychological effects of pictures of ultra-thin fashion models.

A crucial question is whether the effect of these thin-ideal images is automatic. Does the comparison to the models, which is thought to be the key driver in their negative effects, happen without our intention, attention or both? Knowing the answer will tell us just how much power these images have, and also how best we might protect ourselves from them.

It’s a great study from the lab of Stephen Want (Ryerson University). For the full details of the research, head over: Images of ultra-thin models need your attention to make you feel bad

Update: Download the preprint of the paper, and the original data here

CBT is becoming less effective, like everything else

‘Researchers have found that Cognitive Behavioural Therapy is roughly half as effective in treating depression as it used to be’ writes Oliver Burkeman in The Guardian, arguing that this is why CBT is ‘falling out of favour’. It’s worth saying that CBT seems as popular as ever, but even if it were in decline, that probably wouldn’t be due to diminishing effectiveness – because this sort of reduction in effect is common across a range of treatments.

Burkeman is commenting on a new meta-analysis reporting that more recent trials of CBT for depression find it to be less effective than older trials did. But this pattern is common as treatments are more thoroughly tested: it has been reported for antipsychotics, antidepressants and treatments for OCD, to name but a few.

Interestingly, one commonly cited reason treatments become less effective in trials is that the placebo response is increasing, meaning many treatments seem to lose their relative potency over time.

Counter-intuitively for something considered to be ‘an inert control condition’, the placebo response is very sensitive to the design of the trial: even comparing placebo against several active treatments rather than one can affect the placebo response.

This has led people to suggest lots of ‘placebo’ hacks. “In clinical trials,” noted one 2013 paper in Drug Discovery, “the placebo effect should be minimized to optimize drug–placebo difference”.

It’s still not entirely clear whether this approach ‘reveals’ the true effects of the treatment or is just another way of ‘spinning’ trials for the increasingly worried pharmaceutical and therapy industries.

The declining treatment effects over time are also likely to have several other causes: different types of patients being selected into trials; more methodologically sound research practices, meaning less chance of optimistic measurement and reporting; regression to the mean, in that a trial which by chance produces a falsely inflated treatment effect is more likely to prompt re-testing than a less impressive first trial; and the fact that older, well-known treatments may carry a load of expectations with them that brand-new treatments don’t.
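The ‘chance inflation’ mechanism in particular is easy to demonstrate. The sketch below is purely illustrative – the true effect size, noise level and follow-up threshold are made-up numbers, not from any study – but it shows how selecting impressive first trials for re-testing guarantees that replications look weaker on average.

```python
import random
import statistics

random.seed(0)
TRUE_EFFECT, NOISE, THRESHOLD = 0.3, 0.5, 0.6  # illustrative values only

# Every trial measures the same true effect, plus noise.
first_trials = [TRUE_EFFECT + random.gauss(0, NOISE) for _ in range(100000)]

# Only impressive-looking first trials get followed up...
followed_up = [e for e in first_trials if e > THRESHOLD]

# ...and their replications regress back toward the true effect.
replications = [TRUE_EFFECT + random.gauss(0, NOISE) for _ in followed_up]

print(round(statistics.mean(followed_up), 2))   # inflated, well above the true 0.3
print(round(statistics.mean(replications), 2))  # close to the true 0.3
```

Nothing about the treatment changed between the two rounds; the ‘decline’ is produced entirely by the selection step.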

The bottom line is that lots of our treatments, across medicine as a whole, have quite modest effects when compared with placebo. But since in practice the placebo response comes bundled with the treatment, it provides quite a boost to the moderate effects that the treatment itself brings.

So reports of the death of CBT have been greatly exaggerated, but this is mostly because lots of treatments start to look less impressive once they’ve been around for a while. That is less because they ‘lose’ their effect and more because, over time, we measure their true but more modest effect more accurately.

Phantasmagoric neural net visions

A startling gallery of phantasmagoric images generated by a neural network technique has been released. The images were made by computer scientists associated with Google who had been using neural networks to classify objects in images. They discovered that by using the neural networks “in reverse” they could elicit visualisations of the representations that the networks had developed over training.

These pictures are freaky because they look sort of like the things the network had been trained to classify, but without the coherence of real-world scenes. In fact, the researchers impose a local coherence on the images (so that neighbouring pixels do similar work in the image) but place no constraint on what is globally represented.
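The “in reverse” trick is, at heart, activation maximisation: gradient ascent on the input rather than on the weights. The toy sketch below is not Google’s method – it uses a single random-weight layer and a 64-“pixel” input with no image prior, all assumptions of mine – but it shows the core move of nudging an input toward whatever a unit represents.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy layer standing in for a trained network: 10 units reading a 64-"pixel" input.
W = rng.normal(size=(10, 64))

def activation(x, unit):
    """Softplus response of one unit to input x (numerically stable form)."""
    return np.logaddexp(0.0, W[unit] @ x)

def grad_wrt_input(x, unit):
    """Gradient of that unit's activation with respect to the *input*."""
    w = W[unit]
    return (1.0 / (1.0 + np.exp(-(w @ x)))) * w  # sigmoid(w.x) * w

# Start from near-noise and climb: the input drifts toward the pattern the
# unit "prefers" - the essence of visualising a learned representation.
x = rng.normal(scale=0.01, size=64)
before = activation(x, unit=3)
for _ in range(200):
    g = grad_wrt_input(x, unit=3)
    x += 0.1 * g / (np.linalg.norm(g) + 1e-8)

print(activation(x, unit=3) > before)  # the unit now responds far more strongly
```

In the real system the same ascent runs over a deep convolutional network, with the local-coherence regulariser mentioned above keeping neighbouring pixels in step.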

The obvious parallel is to images from dreams or other altered states – situations where ‘low level’ constraints in our vision are obviously still operating, but the high-level constraints – the kind of thing that tries to impose an abstract and unitary coherence on what we see – are loosened. In these situations we get to observe something that reflects our own processes as much as what is out there in the world.

Link: The researchers talk about their ‘dreaming neural networks’
Gallery: Inceptionism: Going deeper into Neural Networks

Explore our back pages

At our birthday party on Thursday I told people how I’d crunched the stats for the 10 years of mindhacks.com posts. Nearly 5000 posts, and over 2 million words – an incredible achievement (for which 96% of the credit should go to Vaughan).

In 2010 we had an overhaul (thanks JD for this, and Matt for his continued support of the tech side of the site). I had a look at the stats, which only date back to then, and pulled out our all-time most popular posts. Here they are:


Something about the enthusiasm of last Thursday inspired me to put links to the top ten posts on a wiki. Since it is a wiki, anyone can jump in and edit, so if there are any bits of the mindhacks.com back catalogue that you think are worth leaving a placeholder for, feel free to add them. Vaughan and I will add links to a few of our favourite posts, so check back and see how it is coming along.

Link: Mind Hacks wiki