Spike activity 28-05-2016

Quick links from the past week in mind and brain news:

One of the earliest hominin constructions ever found, hundreds of metres deep inside a cave. Fascinating piece in The Atlantic.

Aeon has a fascinating piece on how we come to have knowledge of our own minds.

PET brain metabolism linked to return of consciousness in vegetative state patients. The ‘predict’ headline on the article is a bit misleading in everyday terms – it’s only one study so not good enough evidence to make clinical predictions – but fascinating work covered by Stat.

The Guardian has a piece on psychology’s study pre-registration revolution.

ABC Radio’s The Science Show has an excellent hour-long tribute to Oliver Sacks – in his own words.

How do we choose a romantic partner? Interesting review of studies from The Conversation.

Social Minds has a fascinating post arguing that it’s about time we identified cognitive phenotypes for the social deficits in autism.

The science of the Psychoactive Substances Act

The world’s stupidest drugs law, the Psychoactive Substances Act, came into effect in the UK last week. It claims to prohibit the creation and supply of all psychoactive substances not already covered by pre-existing drugs laws.

Apart from taking us further down the futile road of prohibition, it is premised on something that’s scientifically impossible – testing whether a seized drug is psychoactive by looking at its chemical structure.

The government claimed that they had ‘solved’ this problem and they’ve just released their forensic strategy document which, unsurprisingly, doesn’t actually solve it.

What it does do, however, is worthy of attention as it likely raises a whole new set of problems.

We learn from the forensic strategy that the test for ‘psychoactivity’ is to submit mystery substances to receptor binding assays – a lab test where the substance is added to cells ‘in a dish’ which have receptors for certain neurotransmitters to see if substances bind to and activate the receptors.

Your brain has many, many different types of receptor, so the government has defined a list that will supposedly indicate whether a substance is ‘psychoactive’ based on whether it binds to and activates one of the following (sketched as code just after the list):

  • CB1 (targeted by cannabis and synthetic cannabinoid type drugs)
  • GABAA (targeted by benzodiazepine type drugs)
  • 5HT2A (targeted by hallucinogenic drugs – these come from a number of different chemical classes)
  • NMDA (targeted by dissociative/hallucinogenic drugs e.g. ketamine)
  • µ-opioid (targeted by opioid drugs e.g. heroin) and
  • monoamine transporters (targeted by stimulant drugs e.g. MDMA, cocaine).
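
To make the logic concrete, here’s a minimal sketch of that screening rule as code. Only the receptor list comes from the forensic strategy; the data structure, the function name and the ‘activated targets’ inputs are hypothetical, invented purely for illustration – this is not the government’s actual assay pipeline.

```python
# A toy model of the Act's 'psychoactivity' screen (illustration only).

PROSCRIBED_TARGETS = {
    "CB1",                     # cannabis / synthetic cannabinoids
    "GABA-A",                  # benzodiazepine-type drugs
    "5-HT2A",                  # hallucinogens
    "NMDA",                    # dissociatives, e.g. ketamine
    "mu-opioid",               # opioids, e.g. heroin
    "monoamine transporters",  # stimulants, e.g. MDMA, cocaine
}

def screens_positive(activated_targets):
    """The Act's rule: a substance counts as 'psychoactive' if it
    binds to and activates any receptor on the proscribed list."""
    return bool(set(activated_targets) & PROSCRIBED_TARGETS)
```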

These are indeed receptors that mediate the effects of some of the major recreational drug groups, but this is not an adequate definition of ‘psychoactivity’, not least because there are several psychoactive substances that don’t affect these receptors.

Most notable is the long-running ‘legal high’ Salvia divinorum, which is wildly hallucinogenic but has its effect through the non-listed κ-opioid receptor.

So produce a lab-based tweak on the salvinorin A molecule, the ‘active ingredient’ in Salvia, and you have something that won’t be picked up by government tests.

The main problem, though, is likely to be that these tests will be over-inclusive: lots of substances will activate these receptors without having a psychoactive effect.

For example, epinastine is a drug used in eye drops that strongly activates the 5HT2A receptor in the lab, but which doesn’t have a psychoactive effect because it doesn’t cross the blood-brain barrier.

Acamprosate is a drug used to treat alcoholism that is not typically considered to be psychoactive, and yet it activates GABAA receptors.

There are many more examples and they’re not hard to track down – mainly because we now have several open databases of drug–receptor interactions, so you can easily find psychoactive drugs that will screen negative, or non-psychoactive ones that will be falsely detected as mind-altering.
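
To see how easily the screen misfires, run those two examples through the toy version sketched above – the receptor profiles below are simplified stand-ins based on the descriptions in this post, not real assay data:

```python
# Hypothetical activation profiles (illustration only, not assay data).
salvinorin_a = {"kappa-opioid"}   # wildly psychoactive, but off-list
epinastine = {"5-HT2A", "H1"}     # activates 5-HT2A in vitro, but
                                  # doesn't cross the blood-brain barrier

print(screens_positive(salvinorin_a))  # False – a false negative
print(screens_positive(epinastine))    # True  – a false positive
```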

In practice, what this means is that lots of substances – chemicals from the home, the workshop, the lab, and the pharmacy – may screen positive for ‘psychoactivity’ without being psychoactive. False positives, in other words.

But this approach also shows that the Psychoactive Substances Act fails to solve the problem it is meant to overcome: underground labs producing new substances faster than they can be added to a list of banned drugs.

The Act simply complements a fixed list of banned drugs with a fixed list of banned drug effects – making it just another target for grey market labs to innovate around.

What’s also interesting from the list is what drug effects are not proscribed – and we can probably expect underground innovation in pure D2 dopamine agonists that don’t affect monoamine transporters for uppers, and antihistamines as downers, among others. Although to be honest, most will likely just keep on using the same substances.

But considering that the biggest take-home from ‘legal highs’ is that they were much worse for your health than ‘illegal highs’ – perhaps the best public health result we can hope for is that the Psychoactive Substances Act pushes recreational drug users back to the less harmful classics – speed, MDMA, weed and so on.

And when that’s the best you can hope for, you really know that your drug laws are in a dismal state.

Serendipity in psychological research

Dorothy Bishop has an excellent post, ‘Ten serendipitous findings in psychology’, in which she lists ten celebrated discoveries that occurred by happy accident.

Each discovery is interesting in itself, but Prof Bishop puts the discoveries in the context of the recent discussion about preregistration (declaring in advance what you are looking for and how you’ll look). Does preregistration hinder serendipity? Absolutely not, says Bishop – not least because the context of ‘discovery’ is never a one-off experiment.

Note that, in all cases, having made the initial unexpected observation – either from unstructured exploratory research, or in the course of investigating something else – the researchers went on to shore up the findings with further, hypothesis-driven experiments. What they did not do is to report just the initial observation, embellished with statistics, and then move on, as if the presence of a low p-value guaranteed the truth of the result.

(It’s hard not to read into these comments a criticism of some academic journals which seem happy to publish single experiments reporting surprising findings.)

Bishop’s list contains three findings from electrophysiology (recording brain cell activity directly with electrodes), which I think is notable. In these cases neural recording acts in the place of a microscope, allowing fairly direct observation of the system the scientist is investigating at a level of detail hitherto unavailable. It isn’t surprising to me that, given a new tool of observation, the prepared minds of scientists will make serendipitous discoveries. The catch is whether, for the rest of psychology, such observational tools exist. Many psychologists use their intuition to decide where to look, and experiments to test whether their intuition is correct. The important serendipitous discoveries from electrophysiology suggest that measures which are new ways of observing, rather than merely tests of ideas, must also be important for psychological discoveries. Do such observational measures exist?

A new wave of interrogation

Wired has an excellent article that tracks the development of police interrogation techniques from the dark days of physical violence, to the largely hand-me-down techniques depicted in classic cop shows, to a new era of interrogation developed and researched in secret.

It’s probably one of the best pieces you’ll read on interrogation psychology for, well, a very long time, because they don’t come around very often. This one is brilliantly written.

One key part tracks the influence of still-secret interrogation techniques from the US Government’s High-Value Detainee Interrogation Group, or HIG, as they have filtered through from the ‘war on terror’ to civilian law enforcement.

In 2010, to make good on a campaign promise that he would end the use of torture in US terror investigations, President Obama announced the formation of the High-Value Detainee Interrogation Group, a joint effort of the FBI, the CIA, and the Pentagon. In place of the waterboarding and coercion that took place at facilities like Abu Ghraib during the Bush years, the HIG was created to conduct noncoercive interrogations. Much of that work is top secret. HIG-trained interrogators, for instance, are said to have questioned would-be Times Square bomber Faisal Shahzad and convicted Boston Marathon bomber Dzhokhar Tsarnaev. The public knows nothing about how those interrogations, or the dozen or so others the HIG is said to have conducted, unfolded. Even the specific training methods the HIG employs—and that it has introduced to investigators in the Air Force, Navy, and elsewhere—have never been divulged.

At the same time, however, the HIG has become one of the most powerful funders of public research on interrogations in America.

A fascinating and compelling read.

 
Link to Wired article on the new wave of interrogation.

Reconstructing through altered states

Yesterday, I had the pleasure of doing a post-screening Q&A with the film-makers of an amazing documentary called My Beautiful Broken Brain.

One of the many remarkable things about the documentary is that one of the film-makers is also the subject, as she began making the film a few days after her life-threatening brain injury.

The documentary follows Lotje Sodderland who experienced a major brain haemorrhage at the age of 34.

She started filming herself a few days afterwards on her iPhone, initially to make sense of her suddenly fragmented life, but soon contacted film-maker Sophie Robinson to get an external perspective.

It’s interesting both as a record of an emotional journey through recovery and because Lotje spent a lot of time working with a special effects designer to capture her altered experience of the world and make it available to the audience.

I also really recommend a long-form article Lotje wrote about her experience of brain injury for The Guardian.

It’s notable because it’s written so beautifully. But Lotje told me that while she regained the ability to write and type after her injury, she has been left unable to read. So the whole article was written through a process of typing text and getting Siri on her iPhone to read it back to her.

The documentary is available on Netflix.
 

Link to My Beautiful Broken Brain on Wikipedia.
Link to full documentary on Netflix.
Link to long-form article in The Guardian.

Spike activity 13-05-2016

Quick links from the past week in mind and brain news:

A new paper by AI experts explores the construction of dangerous artificial intelligence. TechRepublic covers the latest step in the inevitable march towards bunker humanity.

“Brain-dead patients have served as research subjects for decades”. Interesting piece in Discover Magazine.

Neuroskeptic has started to produce videos and this is excellent: The Myth of the Brain.

There’s a crowdfunding campaign to make episode 3 of a cyberpunk / sociology of neuroscience queer porn movie. Looping effect? No, I just lost my concentration for a second.

BBC Future has an excellent piece on the hearing voices movement approach to living with hallucinated voices.

There’s an insightful piece on the changing history of names and concepts of intellectual disability in The New York Times.

The Atlantic has a sensible take on the ‘genetics of staying in school’ study and what it does, and doesn’t, tell us.

Somewhat awkward title but Science News has a piece on how Bayesian approaches to cognitive science are helping us understand psychopathology.

Good tests make children fail – here’s why

Many parents and teachers are critical of the Standardised Assessment Tests (SATs) that have recently been taken by primary school children. One common complaint is that they are too hard. Teachers at my son’s school sent children home with example questions to quiz their parents on, hoping to show that getting full marks is next to impossible.

Invariably, when parents try out these tests, they focus on the most difficult or confusing items. Some parents and teachers can be heard complaining on social media that if they get questions wrong, surely the tests are too hard for ten-year-olds.

But how hard should tests for children be?

As a psychologist, I know we have some well-developed principles that can help us address the question. If we look at the SATs as measures of some kind of underlying ability, then we can turn to one of the oldest branches of psychology – “psychometrics” – for some guidance.

Getting it just right

A good test shouldn’t be too hard. If most people get most questions wrong, then you have what is called a “floor effect”. The result is that you can’t tell any difference in ability between the people taking the test.

If we started the school sports day high jump with the bar at two metres high (close to the world record), then we’d finish sports day with everybody getting the same – zero successful jumps – and no information about how good anyone is at the high jump.

But at the same time, a good test shouldn’t be too easy. If most people get everything right, then the effect is, as you might expect, called a “ceiling effect”. If everybody gets everything right then again we don’t get any information from the test.

The key idea is that tests must discriminate. In psychometric terms, the value of a test is about the match between the thing it is supposed to measure and the difficulty of the items on the test. If I wanted to gauge maths ability in six-year-olds and I gave them all an A-Level paper, we can presume that nearly everyone would score zero. Although the A-Level paper might be a good test, it is completely uninformative if it is badly matched to the ability of the people taking the test.

Here’s the rub: for a test to be sensitive to differences in ability, it must contain items which people get wrong. Actually, there’s a precise answer to the proportion that you should get wrong – in the most sensitive test it should be half of the items. Questions which you are 50% likely to get right are the ones which are most informative.
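
That 50% figure isn’t arbitrary. Under the simplest psychometric models, the information a pass/fail item carries about a test-taker is proportional to p(1 − p), where p is their probability of answering correctly – and that product peaks at p = 0.5. A quick illustrative calculation (a textbook sketch, nothing to do with the actual SATs):

```python
# Item information under a simple one-parameter (Rasch-style) model:
# information = p * (1 - p), where p is the probability of a correct
# answer. Maximised when p = 0.5.

for p in [0.1, 0.3, 0.5, 0.7, 0.9]:
    print(f"p(correct) = {p:.1f}  ->  information = {p * (1 - p):.2f}")

# Items near floor (p ~ 0) or ceiling (p ~ 1) carry almost no
# information; 50/50 items carry the most (0.25).
```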

How we feel about measuring and labelling children according to their skill at taking these tests is a big issue, but it is important that we recognise that this is what tests do. A well designed test will make all children get some items wrong – it is inherent in their design. It is up to us how we conceptualise that: whether tests are an unnecessary distraction from true education, or a necessary challenge we all need to be exposed to.

Better tests?

If you adopt this psychometric perspective, it becomes clear that the tests we use are an inefficient way of measuring any individual child’s particular ability to do the test. Most children will be asked a bunch of questions which are too easy for them, before they get to the informative ones which are at the edge of their ability. Then they will go on to attempt a bunch of questions which are far too hard. And pity those for whom the test is poorly matched to their ability and consists mostly of questions they’ll get wrong – which is both uninformative in psychometric terms, and dispiriting emotionally.

A hundred years ago, when we began our modern fixation with testing and measuring, this waste was hard to avoid: many uninformative and potentially depressing questions were asked simply because all children had to take the same exam paper.

Nowadays, however, examiners can administer tests via computer, and algorithmically identify the most informative questions for each child’s ability – making the tests shorter, more accurate, and less focused on the experience of failure. You could throw in enough easy questions that no child would ever have the experience of getting most of the questions wrong. But still there’s no getting around the fact that an informative test has to contain questions most people sitting it will get wrong.
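
As a rough sketch of how that adaptive approach works – the model, numbers and update rule here are invented for illustration, not any real exam system – each answer nudges an ability estimate, and the next question is chosen to sit closest to it, keeping the child near that informative 50% zone:

```python
import math
import random

# A toy computerised adaptive test under a Rasch-style model
# (illustration only). Ability and difficulty share one scale.

def p_correct(ability, difficulty):
    # Probability of a correct answer falls as difficulty rises
    # above ability; it is exactly 0.5 when the two match.
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def adaptive_test(true_ability, item_bank, n_items):
    estimate = 0.0                 # start from an average guess
    remaining = list(item_bank)
    for _ in range(n_items):
        # Ask the unused question closest to the current estimate –
        # the one the child has roughly a 50% chance of getting right.
        item = min(remaining, key=lambda d: abs(d - estimate))
        remaining.remove(item)
        correct = random.random() < p_correct(true_ability, item)
        # Crude update: step the estimate towards the evidence.
        estimate += 0.5 if correct else -0.5
    return estimate

random.seed(1)
bank = [d / 2 for d in range(-8, 9)]   # difficulties from -4 to +4
print(adaptive_test(true_ability=1.2, item_bank=bank, n_items=10))
```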

Even a good test can measure an educationally irrelevant ability (such as merely the ability to do the test, or memorise abstract grammar rules), or be used in ways that harm children. But having difficult items isn’t a problem with the SATs, it’s a problem with all tests.


This article was originally published on The Conversation. Read the original article.