Laughter as a window on the infant mind

What makes a baby laugh? The answer might reveal a lot about the making of our minds, says Tom Stafford.

What makes babies laugh? It sounds like one of the most fun questions a researcher could investigate, but there’s a serious scientific reason why Caspar Addyman wants to find out.

He’s not the first to ask this question. Darwin studied laughter in his infant son, and Freud formed a theory that our tendency to laugh originates in a sense of superiority. So we take pleasure at seeing another’s suffering – slapstick style pratfalls and accidents being good examples – because it isn’t us.

The great psychologist of human development, Jean Piaget, thought that babies’ laughter could be used to see into their minds. If you laugh, you must ‘get the joke’ to some degree – a good joke is balanced between being completely unexpected and confusing, and being predictable and boring. Studying when babies laugh might therefore be a great way of gaining insight into how they understand the world, he reasoned. But although Piaget proposed this in the 1940s, the idea remains to be properly tested. Although some very famous investigators have studied the topic, it has been neglected by modern psychology.

Addyman, of Birkbeck, University of London, is out to change that. He believes we can use laughter to get at exactly how infants understand the world. He’s completed the world’s largest and most comprehensive survey of what makes babies laugh, presenting his initial results at the International Conference on Infant Studies, Berlin, last year. Via his website he surveyed more than 1000 parents from around the world, asking them questions about when, where and why their babies laugh.

The results are – like the research topic – heart-warming. A baby’s first smile comes at about six weeks, their first laugh at about three and a half months (although some took three times as long to laugh, so don’t worry if your baby hasn’t cracked its first cackle just yet). Peekaboo is a sure-fire favourite for making babies laugh (for a variety of reasons I’ve written about here), but tickling is the single most reported reason that babies laugh.

Importantly, from the very first chuckle, the survey responses show that babies are laughing with other people, and at what they do. The mere physical sensation of something being ticklish isn’t enough. Nor is it enough to see something disappear or appear suddenly. It’s only funny when an adult makes these things happen for the baby. This shows that way before babies walk, or talk, they – and their laughter – are social. If you tickle a baby they apparently laugh because you are tickling them, not just because they are being tickled.

What’s more, babies don’t tend to laugh at people falling over. They are far more likely to laugh when they fall over themselves than when someone else does, or when other people are happy rather than sad or unpleasantly surprised. From these results, Freud’s theory (which, in any case, was developed from clinical interviews with adults rather than from any rigorous formal study of actual children) looks dead wrong.

Although parents report that boy babies laugh slightly more than girl babies, both genders find mummy and daddy equally funny.

Addyman continues to collect data, and hopes that as the results become clearer he’ll be able to use his analysis to show how laughter tracks babies’ developing understanding of the world – how surprise gives way to anticipation, for example, as their ability to remember objects comes online.

Despite the scientific potential, baby laughter is, as a research topic, “strangely neglected”, according to Addyman. Part of the reason is the difficulty of making babies laugh reliably in the lab, although he plans to tackle this in the next phase of the project. But partly the topic has been neglected, he says, because it isn’t viewed as a subject for ‘proper’ science to look into. This is a prejudice Addyman hopes to overturn – for him, the study of laughter is certainly no joke.

This is my BBC Future column from Tuesday. The original is here. If you are a parent you can contribute to the science of how babies develop at Dr Addyman’s babylaughter.net (specialising in laughter) or at babylovesscience.com (which covers humour as well as other topics).

Spike activity 24-07-2015

Quick links from the past week in mind and brain news:

Why does the concept of ‘schizophrenia’ still persist? Great post from Psychodiagnosticator.

Nature reviews two new movies on notorious psychology experiments: the Stanford Prison Experiment and Milgram’s obedience experiments.

Can the thought of money make people more conservative? Another social priming effect bites the dust. Neuroskeptic with a great analysis.

The Psychologist has a transcript of a recent ‘teenagers debunked’ talk at the Latitude Festival.

Oliver Sacks’s excellent biography On The Move serialised on BBC Radio 4. Streaming only, and online for a month only, but definitely worth it.

Science reports a new study finding that the ‘rise in autism’ is likely due to diagnostic substitution as intellectual disability diagnoses have fallen by the same amount.

Great piece in the New England Journal of Medicine on placebo effects in medicine.

The New York Times has an op-ed on ‘Psychiatry’s Identity Crisis’.

Brain Crash is an innovative online documentary from the BBC where you have to piece together a car crash and brain injury from other people’s memories.

Gamasutra has an absolutely fascinating piece on innovative behavioural approaches to abusive gamers.

Are online experiment participants paying attention?

Online testing is sure to play a large part in the future of psychology. Using Mechanical Turk or other crowdsourcing sites for research, psychologists can quickly and easily gather data for any study where the responses can be provided online. One concern, however, is that online samples may be less motivated to pay attention to the tasks they are participating in. Not only is nobody watching how they do these online experiments, the whole experience is framed as a work-for-cash gig, so there is pressure to complete any activity as quickly and with as little effort as possible. To the extent that online participants are satisficing or skimping on their attention, can we trust the data?

A newly submitted paper uses data from the Many Labs 3 project, which recruited over 3,000 participants from both online and university campus samples, to test the idea that online samples differ from the traditional offline samples used by academic psychologists:

The findings strike a note of optimism, if you’re into online testing (perhaps less so if you use traditional university samples):

Mechanical Turk workers report paying more attention and exerting more effort than undergraduate students. Mechanical Turk workers were also more likely to pass an instructional manipulation check than undergraduate students. Based on these results, it appears that concerns over participant inattentiveness may be more applicable to samples recruited from traditional university participant pools than from Mechanical Turk

This fits with previous reports showing high consistency when classic effects are tested online, and with reports that satisficing may have been very high in offline samples all along; we just weren’t testing for it.

However, an issue I haven’t seen discussed is whether, because of the relatively small pool of participants taking experiments on MTurk, online participants have an opportunity to get familiar with typical instructional manipulation checks (AKA ‘catch questions’, which are designed to check if you are paying attention). If online participants adapt to our manipulation checks, then the very experiments which set out to test if they are paying more attention may not be reliable.
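For readers unfamiliar with instructional manipulation checks, the basic logic is easy to sketch: the item looks like a normal survey question, but the preamble instructs attentive readers to ignore it and give a specific answer. The wording, options and pass criterion below are invented for illustration, not taken from any particular study:

```python
# Minimal sketch of an instructional manipulation check (IMC).
# The item looks like a normal survey question, but the preamble
# instructs attentive readers to ignore it and pick a specific answer.
IMC_PROMPT = (
    "Research shows that people differ widely in their preferences. "
    "To demonstrate that you have read these instructions, please "
    "ignore the question below and select 'Other'.\n\n"
    "Which sport do you most enjoy watching?"
)
OPTIONS = ["Football", "Tennis", "Basketball", "Other"]
PASSING_ANSWER = "Other"

def passes_imc(response: str) -> bool:
    """A participant passes only by following the buried instruction."""
    return response == PASSING_ANSWER

# Scoring a batch of (hypothetical) responses:
responses = ["Football", "Other", "Tennis", "Other"]
pass_rate = sum(passes_imc(r) for r in responses) / len(responses)
print(pass_rate)  # 2 of 4 followed the instruction: 0.5
```

The worry raised above is precisely that a check this formulaic becomes recognisable: a participant who has seen dozens of similar items can pass it on pattern-matching alone, without reading carefully.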

Link: new paper, Graduating from Undergrads: Are Mechanical Turk Workers More Attentive than Undergraduate Participants?

This paper provides a useful overview: Conducting perception research over the internet: a tutorial review

Conspiracy theory as character flaw

Philosophy professor Quassim Cassam has a piece in Aeon arguing that conspiracy theorists should be understood in terms of the intellectual vices. It is a dead end, he says, to try to understand the reasons someone gives for believing a conspiracy theory. Consider someone called Oliver who believes that 9/11 was an inside job:

Usually, when philosophers try to explain why someone believes things (weird or otherwise), they focus on that person’s reasons rather than their character traits. On this view, the way to explain why Oliver believes that 9/11 was an inside job is to identify his reasons for believing this, and the person who is in the best position to tell you his reasons is Oliver. When you explain Oliver’s belief by giving his reasons, you are giving a ‘rationalising explanation’ of his belief.

The problem with this is that rationalising explanations take you only so far. If you ask Oliver why he believes 9/11 was an inside job he will, of course, be only too pleased to give you his reasons: it had to be an inside job, he insists, because aircraft impacts couldn’t have brought down the towers. He is wrong about that, but at any rate that’s his story and he is sticking to it. What he has done, in effect, is to explain one of his questionable beliefs by reference to another no less questionable belief.

So the problem is not their beliefs as such, but why the person came to have the whole set of (misguided) beliefs in the first place. The way to understand conspiracists is in terms of their intellectual character, Cassam argues: the vices and virtues that guide us as thinking beings.

A problem with this account is that – looking at the current evidence – character flaws don’t seem that strong a predictor of conspiracist beliefs. The contrast is with the factors that have demonstrable influence on people’s unusual beliefs. For example, we know that social influence and common cognitive biases have a large, and measurable, effect on what we believe. The evidence isn’t so good on how intellectual character traits such as closed/open-mindedness, skepticism/gullibility are constituted and might affect conspiracist beliefs. That could be because the personality/character trait approach is inherently limited, or just that there is more work to do. One thing is certain, whatever the intellectual vices are that lead to conspiracy theory beliefs, they are not uncommon. One study suggested that 50% of the public endorse at least one conspiracy theory.

Link: Bad Thinkers by Quassim Cassam

Paper on personality and conspiracy theories: Unanswered questions: A preliminary investigation of personality and individual difference predictors of 9/11 conspiracist beliefs

Paper on widespread endorsement of conspiracy theories: Conspiracy Theories and the Paranoid Style(s) of Mass Opinion

Previously on Mindhacks.com That’s what they want you to believe

And as a side note, this view that the problem with conspiracy theorists isn’t the beliefs themselves helps explain why throwing facts at them doesn’t help; it may be better to highlight the fallacies in how they are thinking.

Spike activity 13-07-2015

A slightly belated Spike Activity to capture some of the responses to the APA report plus quick links from the past week in mind and brain news:

APA makes a non-apology on Twitter and gets panned in response.

“the organization’s long-standing ethics director, Stephen Behnke, had been removed from his position as a result of the report and signaled that other firings or sanctions could follow” according to the Washington Post.

Psychologist accused of enabling US torture backed by former FBI chief, reports The Guardian. The wrangling begins.

PsychCentral editor John Grohol resigns from the APA in protest at the ethical failings.

Remarkable comments from long-time anti-torture campaigners Stephen Soldz and Steven Reisner made to a board meeting of the APA: “I see that some of the people who need to go are in this room. That in itself tells me that you don’t really yet understand the seriousness of your situation.”

European Federation of Psychology Associations releases statement on APA revelations: “Interrogations are a NO-GO zone for psychologists” – which seems to confuse interrogations, which can be done ethically and benefit from psychological input, with torture, which cannot.

Jean Maria Arrigo, the psychologist who warned of torture collusion and was subjected to a smear campaign, is vindicated by the report, reports The Guardian.

And now on to more pleasant, non-torture, non-complete institutional breakdown in ethical responsibility news…

What It’s Like to Be Profoundly ‘Face-Blind’. Interesting piece from the Science of Us.

Wired reports that Bitcoins can be ‘stolen from your brain’. A bit of an exaggeration but a fascinating story nonetheless.

Could Travelling Waves Upset Cognitive Neuroscience? asks Neuroskeptic.

The New Yorker has a great three-part series on sleep and sleeplessness.

Robotic shelves! MIT Tech Review has the video. To the bunkers!

APA facilitated CIA torture programme at highest levels

The long-awaited independent report into the organisation’s role in the CIA’s torture programme, commissioned by the American Psychological Association itself, has found direct collusion at the highest levels of the APA to ensure psychologists could participate in abusive interrogation practices.

Reporter James Risen, who has been chasing the story for some time, revealed the damning report and its conclusions in an article for The New York Times, but the text of the 524-page report more than speaks for itself. From page 9:

Our investigation determined that key APA officials, principally the APA Ethics Director joined and supported at times by other APA officials, colluded with important DoD [Department of Defense] officials to have APA issue loose, high-level ethical guidelines that did not constrain DoD in any greater fashion than existing DoD interrogation guidelines. We concluded that APA’s principal motive in doing so was to align APA and curry favor with DoD. There were two other important motives: to create a good public-relations response, and to keep the growth of psychology unrestrained in this area.

We also found that in the three years following the adoption of the 2005 PENS [Psychological Ethics and National Security] Task Force report as APA policy, APA officials engaged in a pattern of secret collaboration with DoD officials to defeat efforts by the APA Council of Representatives to introduce and pass resolutions that would have definitively prohibited psychologists from participating in interrogations at Guantanamo Bay and other U.S. detention centers abroad. The principal APA official involved in these efforts was once again the APA Ethics Director, who effectively formed an undisclosed joint venture with a small number of DoD officials to ensure that APA’s statements and actions fell squarely in line with DoD’s goals and preferences. In numerous confidential email exchanges and conversations, the APA Ethics Director regularly sought and received pre-clearance from an influential, senior psychology leader in the U.S. Army Special Operations Command before determining what APA’s position should be, what its public statements should say, and what strategy to pursue on this issue.

The report is vindication for the long-time critics of the APA who have accused the organisation of a deliberate cover-up of its role in the CIA’s torture programme.

Nevertheless, even critics might be surprised at the level of collusion, which was more direct and explicit than many had suspected. Or perhaps, than many suspected would ever be revealed.

The APA has released a statement saying “Our internal checks and balances failed to detect the collusion, or properly acknowledge a significant conflict of interest, nor did they provide meaningful field guidance for psychologists” and pledging a number of significant reforms to prevent psychologists from being involved in abusive practices, including the vetting of all changes to ethics guidance.

The repercussions are likely to be significant and long-lasting, not least as the full contents of the report’s 524 pages are digested.

Link to article in The New York Times.
Link to full text of report from the APA.

CBT is becoming less effective, like everything else

‘Researchers have found that Cognitive Behavioural Therapy is roughly half as effective in treating depression as it used to be’ writes Oliver Burkeman in The Guardian, arguing that this is why CBT is ‘falling out of favour’. It’s worth saying that CBT seems as popular as ever, but even if it were in decline, that probably wouldn’t be due to diminishing effectiveness – this sort of reduction in effect is common across a range of treatments.

Burkeman is commenting on a new meta-analysis reporting that more recent trials of CBT for depression find it to be less effective than older trials did. But this pattern is common as treatments are more thoroughly tested: it has been reported for antipsychotics, antidepressants and treatments for OCD, to name but a few.

Interestingly, one commonly cited reason that treatments become less effective in trials is that the response to placebo is increasing, meaning many treatments seem to lose their relative potency over time.

Counter-intuitively for something considered ‘an inert control condition’, the placebo response is very sensitive to the design of the trial, so even comparing placebo against several active treatments rather than one can affect the placebo response.

This has led people to suggest lots of ‘placebo’ hacks. “In clinical trials,” noted one 2013 paper in Drug Discovery, “the placebo effect should be minimized to optimize drug–placebo difference”.

Interestingly, it is still not entirely clear whether this approach is ‘revealing’ the true effects of the treatment or just another way of ‘spinning’ trials for the increasingly worried pharmaceutical and therapy industries.

The reasons for declining treatment effects over time are also likely to include: different types of patients being selected into trials; more methodologically sound research practices, meaning less chance of optimistic measuring and reporting; the fact that a trial which by chance gives a falsely inflated treatment effect is more likely to be re-tested than an initially less impressive one; and the fact that older, known treatments may carry a whole load of expectations that brand new treatments don’t.
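The third of these mechanisms, regression to the mean when only impressive first results get followed up, can be sketched in a few lines of simulation. The effect size and noise values below are invented purely for illustration:

```python
import random

# Toy illustration of one decline-effect mechanism: when only
# impressive first results get followed up, replications regress
# toward the true (more modest) effect. All numbers are invented.
random.seed(1)
TRUE_EFFECT = 0.3   # assumed real effect size
NOISE = 0.2         # sampling noise per trial

def run_trial():
    """One trial's observed effect: truth plus sampling noise."""
    return TRUE_EFFECT + random.gauss(0, NOISE)

first_round = [run_trial() for _ in range(1000)]
# Suppose only "impressive" first results (effect > 0.5) get replicated:
followed_up = [e for e in first_round if e > 0.5]
replications = [run_trial() for _ in followed_up]

print(sum(followed_up) / len(followed_up))    # inflated: above 0.5
print(sum(replications) / len(replications))  # back near the true 0.3
```

Nothing about the treatment has changed between rounds; the apparent decline comes entirely from which results get selected for re-testing.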

The bottom line is that lots of our treatments, across medicine as a whole, have quite modest effects when compared to placebo. But the placebo response, the effect of simply being treated, provides quite a boost to the modest effects that the treatment itself brings.

So the reports of the death of CBT have been greatly exaggerated, but this is mostly because lots of treatments start to look less impressive once they’ve been around for a while. This is less about them ‘losing’ their effect and more about us measuring their true, more modest, effect more accurately over time.

Computation is a lens

“Face It,” says psychologist Gary Marcus in The New York Times, “Your Brain is a Computer”. The op-ed argues for understanding the brain in terms of computation, which opens up an interesting question: what does it mean for a brain to compute?

Marcus is careful to distinguish his claim from the clearly false idea that the brain is built along the same lines as modern computer hardware, while still arguing that its purpose is to calculate and compute. “The sooner we can figure out what kind of computer the brain is,” he says, “the better.”

In this line of thinking, the mind is the brain’s computations at work, and should be describable in terms of formal mathematics.

The idea that the mind and brain can be described in terms of information processing is the central contention of cognitive science, but it raises a key and little-asked question: is the brain a computer, or is computation just a convenient way of describing its function?

Here’s an example if the distinction isn’t clear. If you throw a stone you can describe its trajectory using calculus. Here we could ask a similar question: is the stone ‘computing’ the answer to a calculus equation that describes its flight, or is calculus just a convenient way of describing its trajectory?

In one sense the stone is ‘computing’. The physical properties of the stone and its interaction with gravity produce the same outcome as the equation. But in another sense, it isn’t, because we don’t really see the stone as inherently ‘computing’ anything.
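The stone example can be made concrete in a few lines. Here is a minimal sketch, with arbitrary values, showing that the closed-form equation from calculus and a step-by-step ‘computation’ of the flight arrive at the same answer:

```python
# A thrown stone's height can be described by calculus (closed form)
# or "computed" step by step; both give the same trajectory.
G = 9.81    # gravitational acceleration, m/s^2
V0 = 20.0   # initial upward velocity, m/s (arbitrary)

def height_closed_form(t):
    """Height from the integrated equation of motion: v0*t - g*t^2/2."""
    return V0 * t - 0.5 * G * t ** 2

def height_simulated(t, dt=1e-5):
    """The same answer, 'computed' by stepping through time."""
    h, v, elapsed = 0.0, V0, 0.0
    while elapsed < t:
        h += v * dt   # move by current velocity
        v -= G * dt   # gravity slows the stone
        elapsed += dt
    return h

# The two descriptions agree to within numerical error:
print(height_closed_form(1.0))  # ~15.095 metres after one second
print(abs(height_closed_form(1.0) - height_simulated(1.0)) < 0.01)  # True
```

The equation and the simulation are two descriptions of the same physical process, which is exactly the ambiguity at issue: neither makes the stone itself a computer.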

This may seem like a trivial example but there are in fact a whole series of analog computers that use the physical properties of one system to give the answer to an entirely different problem. If analog computers are ‘really’ computing, why not our stone?

If this is the case, what makes brains any more or less of a computer than flying rocks, chemical reactions, or the path of radio waves? Here the question just dissolves into dust. Brains may be computers but then so is everything, so asking the question doesn’t tell us anything specific about the nature of brains.

One counter-point to this is to say that brains need to algorithmically adjust to a changing environment to aid survival which is why neurons encode properties (such as patterns of light stimulation) in another form (such as neuronal firing) which perhaps makes them a computer in a way that flying stones aren’t.

But this definition would also include plants that also encode physical properties through chemical signalling to allow them to adapt to their environment.

It is worth noting that there are other philosophical objections to the idea that brains are computers, largely based on the hard problem of consciousness (in brief: could maths ever feel?).

And then there are arguments based on the boundaries of computation. If the brain is a computer based on its physical properties and the blood is part of that system, does the blood also compute? Does the body compute? Does the ecosystem?

Psychologists drawing on the tradition of ecological psychology and JJ Gibson suggest that much of what is thought of as ‘information processing’ is actually done through the evolutionary adaptation of the body to the environment.

So are brains computers? They can be if you want them to be. The concept of computation is a tool. Probably the most useful one we have, but if you say the brain is a computer and nothing else, you may be limiting the way you can understand it.

Link to ‘Face It, Your Brain Is a Computer’ in The NYT.

Spike activity 03-07-2015

Quick links from the past week in mind and brain news:

It is Time to Temper Our Artificial Intelligence Hysteria says PSFK

Oxford academic warns humanity runs the risk of creating super intelligent computers that eventually destroy us all in The Telegraph.

Fusion reports on how artificial intelligence is evolving to recognise porn.

BBC Radio 4’s The Life Scientific featured neurosurgeon Henry Marsh.

Counterpunch has an extended, detailed piece on ‘The Rise and Fall of the Human Terrain System’ – the US Army’s group of ‘war on terror’ weaponised anthropologists.

What kind of a person volunteers for a free brain scan? asks BPS Research Digest.

Neurocritic has an interesting ethical angle on the BRAIN Initiative’s aim to develop brain implants. Do we have the funding or expertise to actually use the medical technology if it is developed?

BBC Radio 4’s The Report has a documentary on chemsex (extended shagging while high) in London’s gay scene.

Mosaic has an interesting piece on being homesick in the modern world.

Wrinkled brain mimics crumpled paper. I know the feeling. Science News with the story.

For argument’s sake

I have (self-)published an ebook, For argument’s sake: evidence that reason can change minds. It is a collection of two essays that were originally published on Contributoria and The Conversation. I have revised and expanded these, and added a guide to further reading on the topic. There are bespoke illustrations (of owls) inspired by Goya, and I’ve added an introduction about why I think psychologists and journalists both love stories that say we’re irrational creatures incapable of responding to reasoned argument. Here’s something from the book description:

Are we irrational creatures, swayed by emotion and entrenched biases? Modern psychology and neuroscience are often reported as showing that we can’t overcome our prejudices and selfish motivations. Challenging this view, cognitive scientist Tom Stafford looks at the actual evidence. Re-analysing classic experiments on persuasion, as well as summarising more recent research into how arguments change minds, he shows why persuasion by reason alone can be a powerful force.

All in, it’s close to 7,000 words and available from Amazon and Smashwords now.