Spike activity 28-05-2016

Quick links from the past week in mind and brain news:

One of the earliest hominin constructions ever found, hundreds of metres deep inside a cave. Fascinating piece in The Atlantic.

Aeon has a fascinating piece on how we come to have knowledge of our own minds.

PET brain metabolism linked to return of consciousness in vegetative state patients. The ‘predict’ headline on the article is a bit misleading in everyday terms – it’s only one study so not good enough evidence to make clinical predictions – but fascinating work covered by Stat.

The Guardian has a piece on psychology’s study pre-registration revolution.

ABC Radio’s The Science Show has an excellent hour-long tribute to Oliver Sacks – in his own words.

How do we choose a romantic partner? Interesting review of studies from The Conversation.

Social Minds has a fascinating post arguing that it’s about time we identified cognitive phenotypes for the social deficits in autism.

The science of the Psychoactive Substances Act

The world’s stupidest drugs law, the Psychoactive Substances Act, came into effect in the UK last week. It claims to prohibit the creation and supply of all psychoactive substances not already covered by existing drugs laws.

Apart from taking us further down the futile road of prohibition, it is premised on something that’s scientifically impossible – testing whether a seized drug is psychoactive by examining its chemical structure.

The government claimed to have ‘solved’ this problem, and it has just released its forensic strategy document which, unsurprisingly, doesn’t actually solve it.

What it does do, however, is worthy of attention as it likely raises a whole new set of problems.

We learn from the forensic strategy that the test for ‘psychoactivity’ is to submit mystery substances to receptor binding assays – lab tests in which the substance is added to cells ‘in a dish’ that carry receptors for certain neurotransmitters, to see whether it binds to and activates those receptors.

Your brain has many, many different types of receptor, so the government has defined a shortlist: a substance supposedly counts as ‘psychoactive’ if it binds to and activates one of the following:

  • CB1 (targeted by cannabis and synthetic cannabinoid type drugs)
  • GABAA (targeted by benzodiazepine type drugs)
  • 5HT2A (targeted by hallucinogenic drugs, which come from a number of different chemical classes)
  • NMDA (targeted by dissociative/hallucinogenic drugs e.g. ketamine)
  • µ-opioid (targeted by opioid drugs e.g. heroin) and
  • monoamine transporters (targeted by stimulant drugs e.g. MDMA, cocaine).

These are indeed receptors that mediate the effects of some of the major recreational drug groups, but this is not an adequate definition of ‘psychoactivity’, not least because several psychoactive substances don’t affect any of these receptors.

Most notable is the long-running ‘legal high’ Salvia divinorum, which is wildly hallucinogenic but has its effect through the non-listed κ-opioid receptor.

So produce a lab-based tweak on the salvinorin A molecule, the ‘active ingredient’ in Salvia, and you have something that won’t be picked up by government tests.

The main problem, though, is likely to be that these tests will be over-inclusive: lots of substances will activate these receptors without having a psychoactive effect.

For example, epinastine, a drug used in eye drops, strongly activates the 5HT2A receptor in the lab but has no psychoactive effect because it doesn’t cross the blood-brain barrier.

Acamprosate, a drug used to treat alcoholism and not typically considered psychoactive, nevertheless activates GABAA receptors.

There are many more examples, and they’re not hard to track down – mainly because we now have several open databases of drug–receptor interactions, so you can easily find psychoactive drugs that will screen negative, or non-psychoactive ones that will be falsely flagged as mind-altering.

In practice, what this means is that lots of substances – chemicals from the home, the workshop, the lab and the pharmacy – may screen positive for ‘psychoactivity’ without being psychoactive. False positives, in other words.
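To make the logic concrete, here’s a minimal sketch in Python of this style of screening rule. The substances and their binding profiles are invented for illustration, and ‘actually psychoactive’ is crudely approximated as receptor activity plus brain penetration – this is not the Home Office’s actual procedure:

```python
# Receptor targets named in the forensic strategy
LISTED_TARGETS = {
    "CB1", "GABAA", "5HT2A", "NMDA", "mu-opioid", "monoamine transporters",
}

# Hypothetical substances: which receptors they activate, and whether
# they cross the blood-brain barrier (all values invented for illustration)
SUBSTANCES = {
    "salvinorin A analogue":  {"activates": {"kappa-opioid"}, "crosses_bbb": True},
    "eye-drop antihistamine": {"activates": {"5HT2A"},        "crosses_bbb": False},
}

def screens_positive(profile):
    """The Act-style test: does the substance activate any listed receptor?"""
    return bool(profile["activates"] & LISTED_TARGETS)

def actually_psychoactive(profile):
    """Crude stand-in for reality: receptor activity that reaches the brain."""
    return profile["crosses_bbb"] and bool(profile["activates"])

for name, profile in SUBSTANCES.items():
    print(f"{name}: screens {screens_positive(profile)}, "
          f"psychoactive {actually_psychoactive(profile)}")
# salvinorin A analogue: screens False, psychoactive True   (a false negative)
# eye-drop antihistamine: screens True, psychoactive False  (a false positive)
```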

But this approach also shows that the Psychoactive Substances Act fails to solve the problem it is meant to overcome: underground labs producing new substances faster than they can be added to a list of banned drugs.

The Act just complements a fixed list of banned drugs with a fixed list of banned drug effects – making it just another target for grey-market labs to innovate around.

What’s also interesting about the list is which drug effects are not proscribed – we can probably expect underground innovation in pure D2 dopamine agonists that don’t touch monoamine transporters as uppers, and in antihistamines as downers, among others. Although, to be honest, most people will probably just keep using the same substances.

But considering that the biggest take-home from ‘legal highs’ is that they were much worse for your health than ‘illegal highs’, perhaps the best public health result we can hope for is that the Psychoactive Substances Act pushes recreational drug users back to the less harmful classics – speed, MDMA, weed and so on.

And when that’s the best you can hope for, you really know that your drug laws are in a dismal state.

Serendipity in psychological research

Dorothy Bishop has an excellent post, ‘Ten serendipitous findings in psychology’, in which she lists ten celebrated discoveries that occurred by happy accident.

Each discovery is interesting in itself, but Prof Bishop puts the discoveries in the context of the recent discussion about preregistration (declaring in advance what you are looking for and how you’ll look). Does preregistration hinder serendipity? Absolutely not, says Bishop, not least because the context of ‘discovery’ is never a one-off experiment.

Note that, in all cases, having made the initial unexpected observation – either from unstructured exploratory research, or in the course of investigating something else – the researchers went on to shore up the findings with further, hypothesis-driven experiments. What they did not do is to report just the initial observation, embellished with statistics, and then move on, as if the presence of a low p-value guaranteed the truth of the result.

(It’s hard not to read into these comments a criticism of some academic journals which seem happy to publish single experiments reporting surprising findings.)

Bishop’s list contains three findings from electrophysiology (recording brain cell activity directly with electrodes), which I think is notable. In these cases neural recording acts in the place of a microscope, allowing fairly direct observation of the system the scientist is investigating at a level of detail hitherto unavailable. It isn’t surprising to me that, given a new tool of observation, the prepared mind of the scientist will make serendipitous discoveries. The catch is whether such observational tools exist for the rest of psychology. Many psychologists use their intuition to decide where to look, and experiments to test whether their intuition is correct. The important serendipitous discoveries from electrophysiology suggest that measures which are new ways of observing, rather than merely tests of ideas, must also be important for psychological discovery. Do such observational measures exist?

A new wave of interrogation

Wired has an excellent article that tracks the development of police interrogation techniques from the dark days of physical violence, to the largely hand-me-down techniques depicted in classic cop shows, to a new era of interrogation developed and researched in secret.

It’s probably one of the best pieces you’ll read on interrogation psychology for, well, a very long time, because they don’t come around very often. This one is brilliantly written.

One key part tracks the influence of still-secret interrogation techniques from the US government’s High-Value Detainee Interrogation Group, or HIG, as they have filtered through from the ‘war on terror’ to civilian law enforcement.

In 2010, to make good on a campaign promise that he would end the use of torture in US terror investigations, President Obama announced the formation of the High-Value Detainee Interrogation Group, a joint effort of the FBI, the CIA, and the Pentagon. In place of the waterboarding and coercion that took place at facilities like Abu Ghraib during the Bush years, the HIG was created to conduct noncoercive interrogations. Much of that work is top secret. HIG-trained interrogators, for instance, are said to have questioned would-be Times Square bomber Faisal Shahzad and convicted Boston Marathon bomber Dzhokhar Tsarnaev. The public knows nothing about how those interrogations, or the dozen or so others the HIG is said to have conducted, unfolded. Even the specific training methods the HIG employs—and that it has introduced to investigators in the Air Force, Navy, and elsewhere—have never been divulged.

At the same time, however, the HIG has become one of the most powerful funders of public research on interrogations in America.

A fascinating and compelling read.

Link to Wired article on the new wave of interrogation.

Reconstructing through altered states

Yesterday, I had the pleasure of doing a post-screening Q&A with the film-makers of an amazing documentary called My Beautiful Broken Brain.

One of the many remarkable things about the documentary is that one of the film-makers is also the subject, as she began making the film a few days after her life-threatening brain injury.

The documentary follows Lotje Sodderland who experienced a major brain haemorrhage at the age of 34.

She started filming herself a few days afterwards on her iPhone, initially to make sense of her suddenly fragmented life, but soon contacted film-maker Sophie Robinson to get an external perspective.

It’s interesting both as a record of an emotional journey through recovery, but also because Lotje spent a lot of time working with a special effects designer to capture her altered experience of the world and make it available to the audience.

I also really recommend a long-form article Lotje wrote about her experience of brain injury for The Guardian.

It’s notable because it’s written so beautifully. But Lotje told me that while she has regained the ability to write and type since her injury, she has been left unable to read. So the whole article was written by typing text and getting Siri on her iPhone to read it back to her.

The documentary is available on Netflix.

Link to My Beautiful Broken Brain on Wikipedia.
Link to full documentary on Netflix.
Link to long-form article in The Guardian.

Spike activity 13-05-2016

Quick links from the past week in mind and brain news:

A new paper by AI experts explores the construction of dangerous artificial intelligence. TechRepublic covers the latest step in the inevitable march towards bunker humanity.

“Brain-dead patients have served as research subjects for decades”. Interesting piece in Discover Magazine.

Neuroskeptic has started to produce videos and this is excellent: The Myth of the Brain.

There’s a crowdfunding campaign to make episode 3 of a cyberpunk / sociology of neuroscience queer porn movie. Looping effect? No, I just lost my concentration for a second.

BBC Future has an excellent piece on the hearing voices movement approach to living with hallucinated voices.

There’s an insightful piece on the changing history of names and concepts of intellectual disability in The New York Times.

The Atlantic has a sensible take on the ‘genetics of staying in school’ study and what it does, and doesn’t tell us.

Somewhat awkward title, but Science News has a piece on how Bayesian approaches to cognitive science are helping us understand psychopathology.

Good tests make children fail – here’s why

Many parents and teachers are critical of the Standardised Assessment Tests (SATs) that have recently been taken by primary school children. One common complaint is that they are too hard. Teachers at my son’s school sent children home with example questions to quiz their parents on, hoping to show that getting full marks is next to impossible.

Invariably, when parents try out these tests, they focus on the most difficult or confusing items. Some parents and teachers can be heard complaining on social media that if they get questions wrong, surely the tests are too hard for ten-year-olds.

But how hard should tests for children be?

As a psychologist, I know we have some well-developed principles that can help us address the question. If we look at the SATs as measures of some kind of underlying ability, then we can turn to one of the oldest branches of psychology – “psychometrics” – for some guidance.

Getting it just right

A good test shouldn’t be too hard. If most people get most questions wrong, then you have what is called a “floor effect”. The result is that you can’t tell any difference in ability between the people taking the test.

If we started the school sports day high jump with the bar at two metres high (close to the world record), then we’d finish sports day with everybody getting the same – zero successful jumps – and no information about how good anyone is at the high jump.

But at the same time, a good test shouldn’t be too easy. If most people get everything right, then the effect is, as you might expect, called a “ceiling effect”. If everybody gets everything right then again we don’t get any information from the test.

The key idea is that tests must discriminate. In psychometric terms, the value of a test is about the match between the thing it is supposed to measure and the difficulty of the items on the test. If I wanted to gauge maths ability in six-year-olds and I gave them all an A-Level paper, we can presume that nearly everyone would score zero. Although the A-Level paper might be a good test, it is completely uninformative if it is badly matched to the ability of the people taking the test.

Here’s the rub: for a test to be sensitive to differences in ability, it must contain items which people get wrong. In fact, there’s a precise answer to the proportion you should get wrong – in the most sensitive test it is half of the items. Questions which you are 50% likely to get right are the ones which are most informative.
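For the curious, that 50% figure drops out of the maths. Under a simple item response model (the Rasch model), the information a pass/fail item provides is p(1 − p), where p is the probability of answering it correctly. A minimal sketch, assuming that model:

```python
# Information provided by a pass/fail item under the Rasch model:
# I = p * (1 - p), where p is the probability of a correct answer.
def item_information(p):
    return p * (1 - p)

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"p = {p:.1f}  information = {item_information(p):.2f}")

# The output peaks at p = 0.5 (information 0.25): questions you have a
# 50% chance of getting right tell the test the most about your ability.
```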

How we feel about measuring and labelling children according to their skill at taking these tests is a big issue, but it is important that we recognise that this is what tests do. A well designed test will make all children get some items wrong – it is inherent in their design. It is up to us how we conceptualise that: whether tests are an unnecessary distraction from true education, or a necessary challenge we all need to be exposed to.

Better tests?

If you adopt this psychometric perspective, it becomes clear that the tests we use are an inefficient way of measuring any individual child’s ability to do the test. Most children will be asked a bunch of questions which are too easy for them before they get to the informative ones at the edge of their ability, and will then go on to attempt a bunch of questions which are far too hard. And pity the people for whom the test is poorly matched to their ability and consists mostly of questions they’ll get wrong – which is both uninformative in psychometric terms and dispiriting emotionally.

A hundred years ago, when we began our modern fixation with testing and measuring, this waste – asking many uninformative and potentially depressing questions – was hard to avoid, simply because all children had to take the same exam paper.

Nowadays, however, examiners can administer tests via computer, and algorithmically identify the most informative questions for each child’s ability – making the tests shorter, more accurate, and less focused on the experience of failure. You could throw in enough easy questions that no child would ever have the experience of getting most of the questions wrong. But still there’s no getting around the fact that an informative test has to contain questions most people sitting it will get wrong.
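As a rough illustration – not any exam board’s actual algorithm – here is a toy ‘staircase’ version of an adaptive test: ask a question pitched at the current ability estimate, then move the estimate up after a correct answer and down after an error. With equal steps it homes in on the level where the child gets about half the questions right:

```python
import random

def simulate_answer(ability, difficulty):
    """Chance of a correct answer falls as difficulty exceeds ability."""
    p_correct = 1 / (1 + 2 ** (difficulty - ability))
    return random.random() < p_correct

def adaptive_test(true_ability, n_items=20, start=0.0, step=0.5):
    estimate = start
    for _ in range(n_items):
        correct = simulate_answer(true_ability, difficulty=estimate)
        estimate += step if correct else -step  # raise or lower the bar
    return estimate

random.seed(1)
print(adaptive_test(true_ability=1.5))  # oscillates towards ~1.5
```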

Even a good test can measure an educationally irrelevant ability (such as merely the ability to do the test, or memorise abstract grammar rules), or be used in ways that harm children. But having difficult items isn’t a problem with the SATs, it’s a problem with all tests.

The Conversation

This article was originally published on The Conversation. Read the original article.

information theory and psychology

I have read a good deal more about information theory and psychology than I can or care to remember. Much of it was a mere association of new terms with old and vague ideas. Presumably the hope was that a stirring in of new terms would clarify the old ideas by a sort of sympathetic magic.

From: John R. Pierce’s 1961 An Introduction to Information Theory: Symbols, Signals and Noise. Plus ça change.

Pierce’s book is really quite wonderful and contains lots of chatty asides and examples, such as:

Gottlob Burmann, a German poet who lived from 1737 to 1805, wrote 130 poems, including a total of 20,000 words, without once using the letter R. Further, during the last seventeen years of his life, Burmann even omitted the letter from his daily conversation.

The two word games that trick almost everyone

Playing two classic schoolyard games can help us understand everything from sexism to the power of advertising.

There’s a word game we used to play at my school – a sort of trick – and it works like this. You tell someone they have to answer some questions as quickly as possible, and then you fire the following at them:

“What’s one plus four?!”
“What’s five plus two?!”
“What’s seven take away three?!”
“Name a vegetable?!”

Nine times out of ten, people answer the last question with “Carrot”.

Now I don’t think the magic is in the maths questions. Probably they just warm your respondent up to answering questions rapidly. What is happening is that, for most people, most of the time, in all sorts of circumstances, carrot is simply the first vegetable that comes to mind.

This seemingly banal fact reveals something about how our minds organise information. There are dozens of vegetables, and depending on your love of fresh food you might recognise a good proportion. If you had to list them you’d probably forget a few you know, easily reaching a dozen and then slowing down. And when you’re pressured to name just one as quickly as possible, you forget even more and just reach for the most obvious vegetable you can think of – and often that’s a carrot.

In cognitive science, we say the carrot is “prototypical” – for our idea of a vegetable, it occupies the centre of the web of associations which defines the concept. You can test prototypicality directly by timing how long it takes someone to answer whether the object in question belongs to a particular category. We take longer to answer “yes” if asked “is a penguin a bird?” than if asked “is a robin a bird?”, for instance. Even when we know penguins are birds, the idea of penguins takes longer to connect to the category “bird” than more typical species.
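If you want to try the timing method yourself, here’s a toy console demo in Python (crude ‘input’ timing, nothing like the millisecond-accurate response boxes and controlled stimuli that real studies use):

```python
import time

# Time how long it takes someone to verify category membership.
questions = [
    "Is a robin a bird?",
    "Is a penguin a bird?",
]

for prompt in questions:
    start = time.perf_counter()
    reply = input(prompt + " (y/n) ")
    elapsed = time.perf_counter() - start
    print(f"  answered {reply!r} in {elapsed:.2f} s")

# Typical finding: verifying the penguin takes longer than the robin,
# because penguins are less prototypical birds.
```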

So, something about our experience of school dinners, being told they’ll help us see in the dark, the 37 million tons of carrots the world consumes each year, and cartoon characters from Bugs Bunny to Olaf the Snowman, has helped carrots work their way into our minds as the prime example of a vegetable.

The benefit to this system of mental organisation is that the ideas which are most likely to be associated are also the ones which spring to mind when you need them. If I ask you to imagine a costumed superhero, you know they have a cape, can probably fly and there’s definitely a star-shaped bubble when they punch someone. Prototypes organise our experience of the world, telling us what to expect, whether it is a superhero or a job interview. Life would be impossible without them.

The drawback is that the things which connect together because of familiarity aren’t always the ones which should connect together because of logic. Another game we used to play proves this point. You ask someone to play along again and this time you ask them to say “Milk” 20 times as fast as they can. Then you challenge them to snap-respond to the question “What do cows drink?”. The fun is in seeing how many people answer “milk”. A surprising number do, allowing you to crow “Cows drink water, stupid!”. We drink milk, and the concept is closely connected to the idea of cows, so it is natural to accidentally pull out the answer “milk” when we’re fishing for the first thing that comes to mind in response to the ideas “drink” and “cow”.

Having a mind which supplies ready answers based on association is better than a mind which never supplies ready answers, but it can also produce blunders that are much more damaging than claiming cows drink milk. Every time we assume the doctor is a man and the nurse is woman, we’re falling victim to the ready answers of our mental prototypes of those professions. Such prototypes, however mistaken, may also underlie our readiness to assume a man will be a better CEO, or a philosophy professor won’t be a woman. If you let them guide how the world should be, rather than what it might be, you get into trouble pretty quickly.

Advertisers know the power of prototypes too, of course, which is why so much advertising appears to be style over substance. Their job isn’t to deliver a persuasive message, as such. They don’t want you to actively believe anything about their product being provably fun, tasty or healthy. Instead, they just want fun, taste or health to spring to mind when you think of their product (and the reverse). Worming their way into our mental associations is worth billions of dollars to the advertising industry, and it is based on a principle no more complicated than a childhood game which tries to trick you into saying “carrots”.

This is my BBC Future column from last week. The original is here. And, yes, I know that baby cows actually do drink milk.

Is there a child mental health crisis?

It is now common for media reports to mention a ‘child mental health crisis’, with claims that anxiety and depression in children are rising to catastrophic levels. The evidence behind these claims can be hard to track down, and when you do find it there seems little support for a ‘crisis’ – but there are still reasons to be concerned.

The commonest claim is something to the effect that ‘current children show a 70% increase in rates of mental illness’, and this is usually sourced to the website of the UK child mental health charity Young Minds, which states that “Among teenagers, rates of depression and anxiety have increased by 70% in the past 25 years, particularly since the mid 1980’s”.

This is referenced to a pdf report by the Mental Health Foundation, which references a “paper presented by Dr Lynne Friedli”, which probably means this pdf report, which finally references this 2004 study by epidemiologist Stephan Collishaw.

Does this study show convincing evidence for a 70% increase in teenage mental health problems in the last 25 years? In short, no, for two important reasons.

The first is that the data is quite mixed – with both flatlines and increases at different times and in different groups – and the few statistically significant results may well be false positives, because the study doesn’t correct for running multiple analyses.

The second is that it covered a 25-year period ending in 1999 – so it is now 17 years out of date.

Lots of studies have been published since then, which we’ll look at in a minute, but these findings prompted the Nuffield Foundation to collect another phase of data in 2008 in exactly the same way as this original study, and they found that “the overall level of teenage mental health problems is no longer on the increase and may even be in decline.”

Putting both these studies together, this is typical of the sort of mixed picture that is common in these studies, making it hard to say whether there genuinely is an increase in child mental health problems or not.

This is reflected in the data reported by three recent review papers on the area: two focused on data from rating scales – questionnaires given to parents, teachers and occasionally children – and one focused on population studies that use diagnosis.

The first thing to say is that there is no stand-out clear finding that child mental health problems are increasing in general, because the results are so mixed. It’s also worth saying that even where there is evidence of an increase, the effects are small to moderate. And because there is not a lot of data, the conclusions are quite provisional.

So is there evidence for a ‘child mental health crisis’? Probably not. Are there things to be concerned about? Yes, there are.

Here’s perhaps what we can make out in terms of rough trends from the data.

It doesn’t seem there is an increase in mental health problems for young children – that is, those below about 12. If anything, their mental health has been improving since the early 2000s. Here, however, the data is scarcest.

Globally, and lumping all children together, there is no convincing evidence for an increase in child mental health problems. One review of rating scale data suggests there is an increase; the other, using the more rigorous systematic review approach, suggests not – in line with the data from the review of diagnostic studies.

However, there does seem to be a trend for an increase in anxiety and depression in teenage girls. And data from the UK particularly does seem to show a mild-moderate upward trend for mental health problems in adolescents in general, in comparison to other countries where the data is much more mixed. Again, though, the data isn’t as solid as it needs to be.

This leaves open some important questions, though. If we’re talking about a crisis, maybe the levels were already too high, so even a drop means we’re still at ‘crisis level’. So one of the most important questions is: what would be an acceptable level of mental health problems in children?

The first answer that comes to mind is ‘zero’, and not unreasonably – but considering that some mental health problems arise from largely unavoidable life stresses, bereavements, natural disasters and accidents, it would be unrealistic to expect that no child ever suffered periods of disabling anxiety or depression.

This also raises the question of where we draw the cut-off for ‘emotional problems’ or ‘emotional disorders’ as against ‘healthy emotions’. We need anxiety, sadness and anger, but they can also become disabling. Deciding where we draw the line is key to answering questions about child mental health.

So there is no way of answering the question about ‘acceptable levels of mental health problems’ without raising the question of the appropriateness of how we define problems.

Similarly, a very common finding is huge variation between countries and cultures. Concepts, reporting, and the experience of emotions can vary greatly between different cultural groups, making it difficult to make direct comparisons across the globe.

For example, the broadly Western understanding of anxiety as a distinct psychological and emotional experience which can be understood separately from its bodily effects is not one shared by many cultures.

It’s worth saying that cultural changes occur not only between peoples but also over time. Are children more likely to report emotional distress in 2016 than in 1974, even if they feel the same? We really don’t know.

All of which brings us to the question: why is there so much talk about a ‘mental health crisis’ in young people if there is no strong data that there is one?

Partly this is because the mental health of children is often a way of expressing concerns about societal changes. It’s “won’t someone think of the children” given a clinical sheen. But it is also important to realise that consultations and treatment for child mental health problems have genuinely rocketed, probably because of greater awareness and better treatment.

In the UK at least, it’s also clear that talk of a ‘child mental health crisis’ can refer to two things: concerns about rising levels of mental health problems, but also concerns about the ragged state of child mental health services in Britain. There is a crisis in the sense that more children are being referred for treatment and the underfunded services are barely keeping their heads above water.

So talk of a ‘crisis in rising levels of child mental health problems’ is, on balance, an exaggeration, but we shouldn’t dismiss the trends that the data do suggest.

One of the strongest is the rise in anxiety and depression in teenage girls. We clearly have a long way to go, but the world has never been safer, more equal and more full of opportunities for our soon-to-be-women. Yet there seems to be a growing minority of girls affected by anxiety and depression.

At the very least, it should make us think about whether the society we are building is appropriately supporting the future 50% of the adult population.

The memory trap

I had a piece in the Guardian on Saturday, ‘The way you’re revising may let you down in exams – and here’s why’. In it I talk about a pervasive feature of our memories: we tend to overestimate how much of a memory is ‘ours’, and underestimate how much is actually shared with other people, or the environment (see also the illusion of explanatory depth). This memory trap can combine with our instinct to make things easy for ourselves, and result in us thinking we are learning when really we’re just flattering our feeling of familiarity with a topic.

Here’s the start of the piece:

Even the most dedicated study plan can be undone by a failure to understand how human memory works. Only when you’re aware of the trap set for us by overconfidence, can you most effectively deploy the study skills you already know about.
… even the best [study] advice can be useless if you don’t realise why it works. Understanding one fundamental principle of human memory can help you avoid wasting time studying the wrong way.

I go on to give four evidence-based pieces of revision advice, all of which – I hope – use psychology to show that some of our intuitions about how to study can’t be trusted.

Link: The way you’re revising may let you down in exams – and here’s why

Previously at the Guardian by me:

The science of learning: five classic studies

Five secrets to revising that can improve your grades