Cognitive restructuring and the fist bump terrorists

The recent satirical New Yorker cover depicting Obama and his wife as fist-bumping Islamic terrorists comes under fire in an article for The Chronicle by psychologist Mahzarin Banaji, who argues that it irresponsibly creates an implicit association between “Obama and Osama”. Banaji is almost certainly right, but she neglects the higher levels of cognition that can make this association ineffectual.

Banaji is best known for her extensive work on the implicit association test (IAT), which we discussed only the other day. What this and other work has shown is that, despite our conscious thoughts (“hair colour has no association with intelligence”), we may still hold an unconscious bias that associates certain concepts (‘blonde’ and ‘dim’).
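To make that concrete, here’s a minimal sketch of the logic behind IAT-style scoring (the reaction times are invented, and the real published ‘D’ algorithm adds error penalties and trial filtering, so treat this as illustration only): people respond faster when paired categories match their implicit associations, so the score is essentially the reaction-time gap between stereotype-congruent and incongruent blocks, scaled by variability.

```python
# Toy sketch of IAT-style scoring; not the full published algorithm.
# Responses are faster when paired categories match an implicit association,
# so the score is the mean reaction-time difference between incongruent and
# congruent blocks, scaled by the pooled standard deviation.

import statistics

def iat_score(congruent_rts, incongruent_rts):
    """Crude analogue of the IAT 'D' measure (times in milliseconds)."""
    mean_diff = statistics.mean(incongruent_rts) - statistics.mean(congruent_rts)
    pooled_sd = statistics.stdev(congruent_rts + incongruent_rts)
    return mean_diff / pooled_sd

# Invented data: sorting is quicker when 'blonde' shares a response key with
# 'dim' (stereotype-congruent block) than when it shares one with 'bright'.
congruent = [620, 650, 700, 580, 640, 660]
incongruent = [780, 820, 760, 850, 790, 810]

print(f"score: {iat_score(congruent, incongruent):.2f}")  # positive => implicit bias
```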

Along these lines, Banaji suggests that the artist Barry Blitt, who created the picture, has harmed the political debate by unintentionally strengthening an inappropriate link:

The brain, Blitt would be advised to understand, is a complex machine whose operating principles we know something about. When presented with A and B in close spatial or temporal proximity, the mind naturally and effortlessly associates the two. Obama=Osama is an easy association to produce via simple transmogrification. Flag burning=unpatriotic=un-American=un-Christian=Muslim is child’s play for the cortex. Learning by association is so basic a mechanism that living beings are jam-packed with it – ask any dog the next time you see it salivating to a tone of a bell. There is no getting around the fact that the very association Blitt helplessly confessed he didn’t intend to create was made indelibly for us, by him.

It is not unreasonable, given the inquiring minds that read The New Yorker, to expect that an obvious caricature would be viewed as such. In fact, our conscious minds can, in theory, accomplish such a feat. But that doesn’t mean that the manifest association (Obama=Osama lover) doesn’t do its share of the work. To some part of the cognitive apparatus, that association is for real. Once made, it has a life of its own because of a simple rule of much ordinary thinking: Seeing is believing. Based on the research of my colleague, the psychologist Daniel Gilbert, on mental systems, one might say that the mind first believes, and only if it is relaxing in an Adirondack chair doing nothing better, does it question and refute. There is a power to all things we see and hear – exactly as they are presented to us.

It strikes me that Banaji is perhaps being a little disingenuous here. Certainly the cover does strengthen that unconscious association but, as is the intention of most satire, it attempts to introduce another association into the mix – that of absurdity.

In other words, the idea of the cartoon is presumably to trigger the association Obama = terrorist but also to add another, so it becomes Obama = terrorist = absurd. It’s the humourist’s equivalent of the reductio ad absurdum argument.

Of course, this can rely just as much on the same implicit associations Banaji mentions, but it can also be seen to work very effectively through a process of reinterpretation – blunting the impact of automatic connections by changing their meaning.

In fact, this process can be so powerful that it is used to treat psychiatric problems.

In clinical work this is called ‘cognitive restructuring’. For example, in panic disorder, people begin to interpret normal bodily reactions (increased heart rate, raised temperature and so on) as signs of an impending heart attack or other danger, which leads to more anxiety, further catastrophic interpretations and a spiral of terrifying panic.

Cognitive restructuring teaches people that these bodily changes and worried thoughts aren’t signs of an impending heart attack but normal reactions, and that the spiral of anxiety is not a risk to their health, just a pattern they have got into. In other words, they begin to believe something different about the significance of the link.

Humour also relies on a process of reinterpretation. Most theories of humour stress that it usually requires the reframing of a previously held association.

However, the key to good satire is that this reframing should be obvious, and we might speculate that the reframing effect needs to be more powerful than the effect of simply reviving the old association.

We might wonder, then, whether the controversy over the New Yorker cover stems not from the fact that it made an association between Obama and terrorism, but from the fact that it was not effective enough in making that association obviously absurd.

I suspect one of the difficulties is that the cartoon was actually attempting to satirise not Obama but the media discussion of him. This is always a risky strategy because it requires so much cognitive abstraction that the automatic association is far more apparent than the intended irony.

Link to Banaji’s article in The Chronicle.

The theatre of hysteria

I’m currently reading Elaine Showalter’s book Hystories, a cultural history of the concept of ‘hysteria’, a term which has variously described the supposed effects of a ‘wandering womb’, unexplained neurological symptoms, panic, nervousness or just ‘making a fuss’.

She describes where medicine and media have collided, and highlights how popular interest in the condition has driven a long-standing tradition of fictional interpretations that have developed alongside medical understanding.

Showalter writes from a feminist angle, although she is generally even-handed with the evidence and is not shy about highlighting the excesses of some past feminist writing on the subject.

One particularly interesting part is where she discusses how theatre interpreted the work of 19th century French neurologist Jean-Martin Charcot as it was happening.

Charcot is perhaps most famous for his work on hysteria and held regular Tuesday lectures at the Salpêtrière hospital in Paris, where he would theatrically demonstrate the symptoms of hysteria in favourite female patients, who apparently ‘performed’ with an equal flourish.

As we mentioned previously, one of the reasons Charcot’s work was so widely known is that he used the newly developed technology of photography to create striking and sometimes pseudo-erotic portraits documenting the bodily contortions of his (largely) female patients. The picture on the right is of Augustine, one of his ‘star patients’.

These have been the inspiration for numerous contemporary plays, ballets, exhibitions and novels.

What I didn’t know is that this is not a modern phenomenon – shows based on Charcot’s work have been popular since he first began publishing and giving lectures (from p100):

As Charcot’s clinic achieved celebrity in the 1890s, images of hysteria crossed over to theatre and cabaret. At the Chat Noir and Folies Bergère, performers, singers, and mimes who called themselves the “Harengs Saurs Épileptiques” (The Epileptic Sour Herrings) or “Hydropathes” mimicked the jerky, zigzag movements of the hysterical seizure…

The poses of grande hystérie enacted at the Friday spectacles of the Salpêtrière closely resembled the stylized movements of French classical acting. Indeed, hysterical women at the clinic and fallen women in melodrama were virtually indistinguishable; the theatre critic Elin Diamond comments that both displayed “eye rolling, facial grimaces, gnashing teeth, heavy sighs, fainting, shrieking and choking; ‘hysterical laughter’ was a frequent stage direction as well as a common occurrence in medical asylums”…

Arthur Symons regarded the Moulin Rouge dancer Jane Avril as the embodiment of the age’s “pathological choreography.” These resemblances were not coincidental: writers, actresses, cabaret performers and dancers like Avril attended Charcot’s matinees and then worked the Salpêtrière style into their own performances.

An interesting twist is that Avril was actually treated by Charcot as a young girl after she ran away from an abusive mother and was admitted to the Salpêtrière for ‘insanity’.

Link to details of Showalter’s book Hystories.
Link to first chapter.

Whatever happened to symptom substitution?

Symptom substitution is at the core of Freudian psychology but, according to a new article in Clinical Psychology Review, there is virtually no evidence for its existence and the concept should be abandoned.

The idea is that if you treat a symptom, say a phobia of social situations, without addressing the underlying conflict, another symptom will just appear because the core problem is unchanged. It is based on the Freudian theory that all symptoms of mental illness are simply a reflection of an underlying unconscious conflict.

Freud was inspired by the first law of thermodynamics, which says that energy cannot be created or destroyed, only converted into another form. His psychology, and much of the Freudian-inspired psychodynamic psychotherapy that follows it, applies a similar idea to emotions.

In this model, a conflict is caused by forbidden unconscious impulses being held back by our conscious ego. Supposedly, we want to banish these impulses from our conscious mind to maintain a positive self-image, so we repress them into our unconscious. But because they can’t just disappear, they are expressed in other ways – i.e. as neurotic symptoms.

However, this model also plays an important symbolic role in the politics of mental health. It suggests that psychoanalysis is the only truly effective treatment, because it supposedly deals with the ‘root cause’, while drugs, behaviour therapy and CBT just alleviate symptoms and leave the patient open to further suffering.

Rather unusually for a Freudian idea, this leads to a directly testable hypothesis: psychoanalytic treatment should lead to a better long-term prognosis, whereas we should see other symptoms appear after treatment with other approaches.

Psychologist Warren Tryon decided to look at the medical literature to see whether other approaches were more likely to result in the appearance of other symptoms, and found no evidence from relevant empirical studies.

In fact, Tryon found only two case studies that claimed to provide direct evidence for symptom substitution, and one of them didn’t even fulfil the definition – it just reported that the same symptoms came back, describing a relapse rather than a substitution.

Despite the lack of evidence so far, he does note that not many studies have directly addressed the issue, and he proposes a direct test:

The following experimental design could identify genuine psychoanalytic symptoms. Form two groups of demographically matched patients displaying a hypothesized symptom. Provide psychoanalytic treatment to one group and symptomatic treatment to the other group. The hypothesized symptom can be considered to be a bona fide psychoanalytic symptom if patients receiving psychoanalytic therapy get better and symptom substitution occurs in patients receiving symptom oriented therapy. Helping these patients to get better by providing psychoanalytic therapy would provide additional supportive evidence and be ethically responsible. The literature review reported above indicates that the presence of bona fide psychoanalytic symptoms has yet to be demonstrated.

Link to ‘Whatever happened to symptom substitution?’ (thanks Karel!).
Link to PubMed entry for article.

Impossible experiments

Psychology Today have asked a group of leading thinkers to describe their ‘impossible experiment’ – the ultimate mind and brain study they would run if the impractical, unethical or unattainable were no obstacle.

Presumably riffing on the BPS Research Digest’s search for the ‘most important psychology experiment that’s never been done’, they’ve gathered proposals that involve everything from brain swapping to behavioural mega-economics.

My favourite is from psychologist Bella DePaulo who has come up with a cunning way of studying the psychological effects of marriage:

I’d like to take couples who are living together and randomly assign half of them to marry and the others to stay unmarried. Then we could really know something about the implications of co-habitation vs. marriage. More outrageously, take people who are not in a serious romantic relationship, and assign half of them, at random, to marry. Single people are randomly assigned to a spouse who is chosen at random, or to a spouse who fits their description of their perfect partner, or to stay single. Who do you think would end up the happiest a decade later? Same for divorce. If married parents are already at each other’s throats, is it better for the children if they divorce, or stay together? Randomly assign half of them to divorce, and half to stay together; then we’ll see. Now take married couples who say they are happy and are not considering divorce. Randomly assign half of them to divorce! Now who will be happier ten years hence?

There’s plenty more blue sky thinking, and a curious video involving a mannequin.

Link to ‘Impossible Experiments’.

Intuitions about phenomenal consciousness

Illustrating how the ‘experimental philosophy’ idea has really struck a chord, Scientific American Mind has an article on our intuitions about whether things can have mental states, whether they be animals, humans, machines or corporations.

The piece is by philosopher Joshua Knobe and contains lots of fascinating examples of how we tend to be comfortable attributing mental states like ‘beliefs’ to corporations, but not emotions.

The same goes for robots, it turns out, but one key factor seems to be not what we think about a robot’s thinking ‘machinery’ but how human its body seems.

In one of Huebner’s studies [pdf], for example, subjects were told about a robot who acted exactly like a human being and asked what mental states that robot might be capable of having. Strikingly, the study revealed exactly the same asymmetry we saw above in the case of corporations.

Subjects were willing to say:
• It believes that triangles have three sides.
But they were not willing to say:
• It feels happy when it gets what it wants.

Here again, we see a willingness to ascribe certain kinds of mental states, but not to ascribe states that require phenomenal consciousness. Interestingly enough, this tendency does not seem to be due entirely to the fact that a CPU, instead of an ordinary human brain, controls the robot. Even controlling in the experiment for whether the creature had a CPU or a brain, subjects were more likely to ascribe phenomenal consciousness when the creature had a body that made it look like a human being.

Link to ‘Can a Robot, an Insect or God Be Aware?’
pdf of draft Huebner paper.

Dan Gilbert on the importance of social psychology

Dan Gilbert has a brief interview in this month’s (paywalled) Psychologist magazine, from which comes the following nugget of wisdom:

Psychologists have a penchant for irrational exuberances, and whenever we discover something new we feel the need to discard everything old. Social psychology is the exception. We kept cognition alive during the behaviourist revolution that denied it, we kept emotion alive during the cognitive revolution that ignored it, and today we are keeping behaviour alive as the neuroscience revolution steams on and threatens to make it irrelevant. But psychological revolutions inevitably collapse under their own weight and psychologists start hunting for all the babies they tossed out with the bathwater. Social psychology is where they typically go to find them. So the challenge for social psychologists watching yet another revolution that promises to leave them in the dustbin of history is to remember that we’ve outlived every revolutionary who has ever pronounced us obsolete.

Link to the Gilbert Lab.
Link to Psychologist magazine (sorry, subscribers only, but you can browse issues older than six months for free).

The science of theory

Philosopher Eric Schwitzgebel has written an excellent piece on experimental philosophy, the practice of testing out philosophical ideas by using experiments or gathering data.

Now, the more astute of you might be thinking, “isn’t that just science?”, and you’d be right. Sorta.

Schwitzgebel makes the important point that many things taken for granted in the philosophy of mind, like what it is like to have certain conscious experiences, haven’t actually been examined to see how widely these assumptions or experiences are shared.

This is partly, he notes, because psychology is too scared of being called unscientific to return to introspection, and partly because philosophers are the ones most concerned with these issues.

In the philosophy of perception, there’s a long-standing dispute between those who think that our concepts and categories thoroughly permeate and infect even the most basic perceptual experiences and those who hold that people with very different understandings of a scene may still have exactly the same perceptual experience of it…

Such phenomenological claims have two things in common with claims about what’s intuitive that make them ripe for inclusion under the umbrella of “experimental philosophy”: First, it is mainly philosophers who make such claims; and second, there is no substantial tradition outside of philosophy dedicated to the empirical evaluation of the claims.

These facts may be mere historical accident: Back in the days of introspective psychology, psychologists loved to dispute issues of this sort. But fortunately or unfortunately, psychology still has not sufficiently rebounded from the behaviorist revolution that such general phenomenological claims are broadly discussed by mainstream psychologists.

If you consider the tradition of phenomenological philosophy, which aims to describe the subjective structure of the mind, it’s striking that it has been almost entirely based on philosophers’ own intuitions about their mental states, which they then extrapolate to everyone else.

Schwitzgebel also suggests that experimental philosophy could be used for exploring an anthropology of philosophy. In other words, how culture affects our general assumptions about how the mind works.

I have looked at the relationship between culture-specific metaphors and the prevalence of certain views about conscious experience. To highlight some of my own work: Are people (including philosophers) more likely to say that dreams rarely contain colored elements if the film media around them are predominantly black and white? Are people more likely to say that a circular object (such as a coin) viewed obliquely looks elliptical if the dominant media for describing vision are media like paintings and photographs that involve flat, projective distortions?

Of course, there’s a big overlap with psychology here, but the fact is, psychologists just aren’t that interested in gathering the data that philosophers would often find most useful, so the philosophers are setting about gathering it themselves.

The first book on experimental philosophy was recently published, and Schwitzgebel’s article is a fantastic introduction, as well as an eye-opening look at the possibilities of philosophers armed with clipboards.

Link to article ‘The Psychology of Philosophy’.

Back to the future, but this time with data

IEEE Spectrum Online magazine has a special and rather splendid feature on the ‘singularity’ – the supposed point when technology will outpace the human brain and we’ll be catapulted into a time of intelligent machines, neurologically enhanced humans and never-ending life.

If you think this sounds like science fiction, then you’re probably right. Loath as they are to admit it, transhumanists are essentially pining for the future as depicted in late 20th and early 21st century speculative fiction.

This is not necessarily such a bad thing. Like science fiction itself, some of it obviously stretches credibility to the point of self-parody, while some tackles the limits of technology and human experience in a profound and sophisticated way.

One notable difference is that some of the biggest names in science are involved in the transhumanist movement, and so despite their somewhat, let’s say, ‘ambitious’ aims, the discussions tend to start from what is already possible.

IEEE Spectrum calls the singularity the ‘technological rapture’, and it’s hard to escape the quasi-mystical aspect of some transhumanism, although it is perhaps more akin to 21st century alchemy than to any explicit belief in the tenets of mainstream religion.

Nevertheless, this new feature sticks largely to the science and contains a wealth of articles, interactive features and video interviews that focus mainly on neuroscience and artificial intelligence. Consequently, there are many highlights to absorb and enjoy.

There’s even a wall chart which tells you “who’s who” in the movement, which is handily illustrated by the disembodied (presumably cryogenically frozen) heads of some of the key thinkers in the field.

UPDATE: It wouldn’t be transhumanism without a mention of Ray Kurzweil! Never fear, for today’s New York Times fills the gap with a piece noting that, like Christmas, the singularity will be here sooner than you think.

Link to ‘The Singularity: A Special Report’.

Do Bayesian statistics rule the brain?

This week’s New Scientist has a fascinating article on a possible ‘grand theory’ of the brain, which suggests that virtually all brain functions can be modelled with Bayesian statistics – an approach named after the 18th century clergyman Thomas Bayes.

Bayesian statistics allow the degree of belief in a hypothesis to shift as new evidence is collected. This means the same evidence can have a different influence on certainty depending on how much other evidence there is.

In other words, it asks the question ‘what is the probability of the belief being true, given the data so far?’.
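As a toy illustration of that updating rule (my numbers, not the article’s), here is Bayes’ theorem applied repeatedly in a few lines of Python. Notice that the same piece of evidence moves a weak belief more than an already strong one:

```python
# Minimal sketch of Bayesian updating: P(hypothesis | data) is revised as
# each new piece of evidence arrives. All numbers are illustrative.

def bayes_update(prior, likelihood, likelihood_given_not):
    """Return P(H | E) from P(H), P(E | H) and P(E | not-H)."""
    marginal = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / marginal

# Hypothesis: 'the speaker said the word "ship"'. Each acoustic fragment
# is more likely if the hypothesis is true (0.8) than if it is false (0.3).
belief = 0.5
for i in range(3):
    belief = bayes_update(belief, likelihood=0.8, likelihood_given_not=0.3)
    print(f"after fragment {i + 1}: P(hypothesis | data) = {belief:.3f}")
```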

The NewSci article looks at the work of neuroscientist Karl Friston, who increasingly believes that, from the level of neurons to the level of circuits, the brain operates as if it uses Bayesian statistics.

The essential idea is that the brain builds models upon which it bases predictions, and both the models and the predictions are updated in a Bayesian-like way as new information becomes available.

Over the past decade, neuroscientists have found that real brains seem to work in this way. In perception and learning experiments, for example, people tend to make estimates – of the location or speed of a moving object, say – in a way that fits with Bayesian probability theory. There’s also evidence that the brain makes internal predictions and updates them in a Bayesian manner. When you listen to someone talking, for example, your brain isn’t simply receiving information, it also predicts what it expects to hear and constantly revises its predictions based on what information comes next. These predictions strongly influence what you actually hear, allowing you, for instance, to make sense of distorted or partially obscured speech.

In fact, making predictions and re-evaluating them seems to be a universal feature of the brain. At all times your brain is weighing its inputs and comparing them with internal predictions in order to make sense of the world. “It’s a general computational principle that can explain how the brain handles problems ranging from low-level perception to high-level cognition,” says Alex Pouget, a computational neuroscientist at the University of Rochester in New York.
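One textbook version of this idea from the perception literature is reliability-weighted cue combination: when two noisy senses each estimate, say, the location of a moving object, the Bayes-optimal strategy weights each cue by its inverse variance. A quick sketch with invented numbers (not taken from the article):

```python
# Sketch of reliability-weighted (Bayes-optimal) cue combination. Each cue
# is a Gaussian estimate; the optimal combination weights each cue by its
# inverse variance, and ends up more precise than either cue alone.

def combine(mu1, var1, mu2, var2):
    w1, w2 = 1 / var1, 1 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1 / (w1 + w2)
    return mu, var

# Invented estimates of an object's position: vision is precise, hearing noisy.
mu, var = combine(10.0, 1.0, 14.0, 4.0)
print(f"combined estimate: {mu:.1f}, variance: {var:.2f}")  # 10.8, 0.80
```

The combined variance (0.8) is lower than either cue’s alone, which is the signature of near-optimal integration that these perception experiments look for.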

Friston is renowned for having a solid grasp of both high-level neuroscience and statistics. In fact, he was the original creator of SPM, probably the most popular tool for statistically analysing brain scan data.

Needless to say, his ideas have been quite influential and ‘Bayesian fever’ has swept the research centre where he works.

I was interested to see that his colleague, the neuroscientist Chris Frith, has applied the idea to psychopathology and will be arguing in an upcoming lecture in London that delusions and hallucinations can both be understood as breakdowns of Bayesian inference.

This edition of NewSci also has a great article on how cosmic rays affect the brains of astronauts, so it’s well worth a look.

Link to NewSci article ‘Is this a unified theory of the brain?’.
Link to article ‘Space particles play with the mind’.

Review: “Why the mind is not a computer”

“Why the mind is not a computer: A pocket lexicon of neuromythology”
Raymond Tallis (2004, originally published 1994).

Neuromythology is the shibboleth of cognitive science that the mind is a machine, and that somehow our theories of information, complexity, patterns or representations are sufficient to explain consciousness. Tallis accuses cognitive scientists, and philosophers of cognitive science such as Chalmers, Churchland and Dennett, of the careless use of words which can apply both to thinking and to non-thinking systems (‘computing’, ‘goals’, ‘memory’, for example). This obfuscation “provides a framework within which the real problems can be by-passed and the illusion of progress maintained”.

At his best Tallis is a useful reminder that many of the features of the brain which are evoked to ‘explain’ consciousness really only serve as expressions of faith, rather than true explanations. Does the mind arise from the brain because of the complexity of all those intertwined neurons? The processes inside a cell are equally complex, so why aren’t cells conscious? Similarly for patterns, which depend on the subjective perspective (yes, the consciousness) of the observer rather than having an objective existence which is sufficient to generate consciousness; and for levels of description, which, with careless thinking, are sometimes reified so that the mind can ‘act’ on the brain, when in fact, if you are a physicalist, the mind and brain don’t have separate existences.

Moments of the argument can appear willfully obstructive. Tallis maintains that there is no meaningful sense in which information can exist without someone being informed, any more, he says, than a watch can tell the time without someone looking at it. He’s right that we should be careful with the word ‘information’, which has a very precise technical meaning as well as colloquial meanings, but if you suppose that subjective consciousness is required to make information exist (and rule-following, representation and computation, to pick a few other concepts about which he makes similar arguments), then you effectively disallow any attempt to use these concepts as part of your theory of consciousness.

The disagreement between Tallis and many philosophers of cognitive science seems to me to be somewhat axiomatic – either you believe that our current models of reality can explain how matter can produce mind, or you don’t – but Tallis is right to remind us that the things we feel might eventually provide an answer don’t in themselves constitute an answer.

In essence what this book amounts to is a vigorous restatement of the ‘hard problem’ of consciousness — the stubborn inadequacy of our physical theories when faced with explaining how phenomenal experience might arise out of ordinary matter, or even with beginning to comprehend what form such an explanation might take.

Disclaimer: I bought this book with my own money, because I needed something to read at the Hay Festival after finishing Ahdaf Soueif’s wonderful ‘Map of Love’ (200) and because Raymond Tallis’s essay here was so good. I was not paid or otherwise encouraged to review it.

Uncanny valley of the dolls

Human-computer interaction scientist Karl MacDorman has produced a fantastically illustrated video lecture on the psychology of the ‘uncanny valley’ – the effect whereby androids become creepy when they’re almost human.

It comes in seven three-to-four-minute sections, each packed with completely fascinating science and wonderful examples of androids in action and of how people react to them.

It’s a bit hard to navigate the YouTube links between sections, so I’ve collected the links to all the parts of the talk, entitled ‘Charting the Uncanny Valley’, below:

1. Introduction
2. Form Dynamics Contingency
3. Human Perception
4. Do Looks Matter?
5. Android Science
6. Explanations
7. What makes a robot uncanny?

While reviewing the whole area of android–human interaction, MacDorman seems to have done some fascinating research himself, often taking paradigms from existing psychology studies and seeing how androids alter the experience.

For example, in one study [pdf] he morphed android faces with human ones (using Philip K Dick as the human face!) and measured how the images trigger differing feelings of familiarity, eeriness and the like.
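The simplest possible version of such a morph is a pixel-wise cross-dissolve between two aligned images. Real face morphing also warps facial geometry between the two faces, so treat this as a sketch of the blending step only (the filenames are hypothetical):

```python
# Toy sketch: a pixel-wise cross-dissolve between two aligned face images,
# stepping along a robot-to-human continuum. Proper morphing would also
# warp facial geometry; this illustrates only the blending.

from PIL import Image

def cross_dissolve(path_a, path_b, alpha):
    """Blend image A towards image B; alpha=0 gives A, alpha=1 gives B."""
    a = Image.open(path_a).convert("RGB")
    b = Image.open(path_b).convert("RGB").resize(a.size)
    return Image.blend(a, b, alpha)

# Hypothetical filenames for the two endpoint faces.
for i, alpha in enumerate([0.0, 0.25, 0.5, 0.75, 1.0]):
    cross_dissolve("robot.png", "human.png", alpha).save(f"morph_{i}.png")
```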

A very well spent 20 minutes and a great introduction to a fascinating area.

pdf of MacDorman’s paper on the Uncanny Valley.
Link to MeFi post which alerted me to the lecture.

Does economics make you selfish?

Philosopher Eric Schwitzgebel has been investigating whether ethics professors are more moral than other people and it turns out they’re possibly less so. He’s now turned his attention to economics, wondering whether too much exposure to ‘rational choice theory’ – which says it’s always rational to maximise your own payoff – makes people more selfish.

Surprisingly, there have been several studies on exactly this topic, several of which seem to suggest that economics students are more selfish than other students, but these all seem to be flawed in quite important ways.

They either use exactly the same sorts of tasks that students study in class to demonstrate that ‘selfish’ actions are the most economically rational strategy, or they rely on self-report – something also potentially biased by the association between ‘selfishness’ and irrationality.

Apparently, only three studies have looked at the link between studying economics and real-world selfishness, and none provide good evidence for the link.

Schwitzgebel has a bigger issue in mind than simply investigating the personal habits of economists, however.

This is part of his project to question the utility of certain types of theory. For example, if studying ethics makes people no more ethical and studying economics makes people no more economically rational, how useful are they?

Link to post ‘Does Studying Economics Make You Selfish?’.

Language and schizophrenia make us uniquely human

ABC Radio National’s science programme Ockham’s Razor just had a fascinating edition on a maverick theory about schizophrenia and the evolution of language.

It purports to discuss the history of schizophrenia but is really a great summary of psychiatrist Tim Crow’s theory that schizophrenia is the consequence of the human evolution of language.

Crow is a professor of psychiatry at Oxford University who heads a large research group, so he is quite mainstream for a maverick, but his theory ruffles a lot of feathers.

He tries to address the puzzle of why schizophrenia has survived in the population if it is strongly influenced by genetics, particularly as it markedly reduces the chances of reproduction. Surely it would have been ‘bred out’ of the population?

His theory [pdf] suggests that schizophrenia is the breakdown of the normal left-sided brain specialisation for language, owing to the disruption of genes that are involved in making the left hemisphere dominant.

Like other theories that attempt to account for the puzzle, it suggests that the risk is increased by a pathological combination of usually important genes.

Crow has amassed a great deal of evidence that people with schizophrenia show less left-sided dominance for language and have altered patterns of brain asymmetry that can be seen in brain structure as well as in functional tasks.

He is also highly critical of much of the current molecular genetic work in schizophrenia, arguing that epigenetic variation is key and that it’s possible to see where the genes altered in human evolution to make us more likely to have language and, consequently, to develop schizophrenia.

If you want a brief guide to his theory, this edition of Ockham’s Razor is a great discussion of the main points.

Link to Ockham’s Razor on Crow’s evolutionary approach.
pdf of scientific paper by Crow outlining his theory.

Neuroweapons, war crimes and the preconscious brain

A new generation of military technology interfaces directly with the brain to target and trigger weapons before our conscious mind is fully engaged.

In a new article in the Cornell International Law Journal, lawyer Stephen White asks whether the concept of a ‘war crime’ becomes irrelevant if the unconscious mind is pulling the trigger.

In most jurisdictions, the legal system makes a crucial distinction between two elements of a crime: the intent (mens rea) and the action (actus reus).

Causing something dreadful to happen without any intent or knowledge is considered an accident and not a crime. Hence, a successful prosecution demands that the accused is shown to have intended to violate the law in some way.

This concept is based on the theory that the conscious mind forms an intention, and an action follows. Unfortunately, we now know that this idea is outdated.

In the 1980s, pioneering experiments by Benjamin Libet demonstrated that activity in the brain’s action areas can be reliably detected up to 200ms before we experience the conscious decision to act. In other words, consciousness seems to lag behind action.

Although with only limited reliability (just 60%), a recent fMRI study found that areas in the frontal lobes start to become more active up to seven seconds before the conscious intention to act.
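To get an intuition for what ‘60% reliability’ means when pure guessing at a two-choice decision yields 50%, here is a toy decoding sketch with synthetic data standing in for brain activity. Nothing here reflects the actual study’s methods; it only shows how an above-chance-but-weak decoder behaves:

```python
# Toy sketch of weak decoding: synthetic 'brain activity' carries only a
# faint signal about a binary choice, so a classifier lands only modestly
# above the 50% chance level, far from certainty.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 400, 20
choices = rng.integers(0, 2, n_trials)        # e.g. left vs right button press
signal = 0.07 * (choices[:, None] * 2 - 1)    # faint choice-related signal
activity = signal + rng.normal(size=(n_trials, n_features))

scores = cross_val_score(LogisticRegression(), activity, choices, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```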

While these sorts of study raise interesting questions about free will, their effect on the courts has been minimal, because it is assumed that, at least for healthy individuals, we have as much control over stopping our own actions as starting them.

The US government’s defence research agency, DARPA, is currently developing new military technologies, dubbed ‘neuroweapons’, that may throw these assumptions into disarray.

The webpage of DARPA’s Human Assisted Neural Devices Program only mentions the use of brain-machine interfaces in terms of helping injured veterans, but p11 of the US Dept of Defense budget justification [pdf] explicitly states that “This program will develop the scientific foundation for novel concepts that will improve warfighter performance on the battlefield as well as technologies for enhancing the quality of life of paralyzed veterans”.

In other words, the same technology that allows humans to control computer cursors, robot arms or wheelchairs by thought alone, could be used to target and trigger weapons.

Even if only part of the process, such as selecting possible targets, is delegated to technology that reads the unconscious orienting response from the brain, that still means that part of the thought process has automatically become part of the action.

Notably, international law outlaws indiscriminate weapons and aggression, but if the unconscious thought becomes the weapon, how can we possibly prosecute a war crime?

White reviews the current state of the technology from the unclassified evidence and carefully examines the ethical and legal issues, ultimately arguing that we need a new legal framework for 21st century ‘neurowarfare’.

The first preconscious war may soon be upon us.

pdf of ‘Brave New World: Neurowarfare and the Limits of International Humanitarian Law’.

The shifting sands of the ‘autism epidemic’

The Economist has a short but telling article on whether the so-called ‘autism epidemic’, occasionally touted in the media, may simply be a change in how developmental problems are diagnosed.

It covers a new study that did something really simple – it tracked down 38 people who, years ago, had been diagnosed with a delay in language and re-assessed them using the latest diagnostic interviews.

They used the ADOS (the Autism Diagnostic Observation Schedule) and the ADI (the Autism Diagnostic Interview, conducted with parents). This combination is often considered the ‘gold standard’ for a reliable and comprehensive diagnosis.

All the participants had originally been diagnosed with a problem in the development of language, so it was clear they weren’t without difficulties. Language delay is part of the autism diagnosis, so the researchers wondered whether we would simply classify them differently now.

Despite the fact that none of them were diagnosed with an autism spectrum disorder when first assessed, a third were classified as on the spectrum when re-assessed using modern methods.

It’s only a small study, but it matches the findings of previous research, which found that while the narrow diagnosis of autism has a prevalence of less than 0.4% in the UK, the newer, wider definition of the less severe ‘autism spectrum’ diagnoses is, unsurprisingly, much more prevalent (just over 1%).

In other words, the looser the diagnosis becomes, the more people receive it – more good evidence that the increase in cases of autism is due to wider classification rather than to new ‘narrow definition’ cases.

Link to Economist article ‘Not more, just different’.
Link to Ben Goldacre on last autism epidemic media scare.

Neuroaesthetics my arse

Physician and philosopher Raymond Tallis has written a scorching article in The Times berating art critics for using poorly understood ideas from neuroscience when reviewing or interpreting literature, art or film.

He particularly focuses on an article by famed novelist A.S. Byatt where she suggests that the reason John Donne’s poetry is so compelling is because it engages particular brain processes.

Byatt is an interesting focus for criticism because she is probably one of the modern writers most engaged with cognitive science and neuroscience.

She often gives talks alongside psychologists and neuroscientists, has contributed to a Cambridge University Press book with a number of distinguished memory researchers, and has just released a new jointly edited book charting similar territory.

However, Tallis takes Byatt to task for using neuroscience as little more than window dressing, and suggests the whole field of literary criticism is simply jumping on the brain science bandwagon to make up for the declining popularity of Freudian, Marxist, and postmodern theories that it used to be based on.

Implicitly, Tallis is suggesting that if Byatt can’t get it right, there is little hope for the rest of the critics:

A. S. Byatt’s neural approach to literary criticism is not only unhelpful but actually undermines the calling of a humanist intellectual, for whom literary art is an extreme expression of our distinctively human freedom, of our liberation from our organic, indeed material, state.

At any rate, attempting to find an explanation of a sophisticated twentieth-century reader’s response to a sophisticated seventeenth-century poet in brain activity that is shared between humans and animals, and has been around for many millions of years, rather than in communities of minds that are unique to humans, seems perverse. Neuroaesthetics is wrong about the present state of neuroscience: we are not yet able to explain human consciousness, even less articulate self-consciousness as expressed in the reading and writing of poetry. It is wrong about our experience of literature. And it is wrong about humanity.

Ouch!

It’s also notable that Tallis reserves some of his criticism for neuroscientists who oversell their work in the media, perhaps leading the public to justifiably think that they have explained some central human attribute when they’ve really done an interesting but limited lab experiment.

Link to Times article ‘The neuroscience delusion’ (via 3QD).