Gotham psychologist

Andrea Letamendi is a clinical psychologist who specialises in the treatment and research of traumatic stress disorders but also has a passionate interest in how psychological issues are depicted in comics.

She puts her thoughts online in her blog Under the Mask, which also discusses social issues in fandom and geek culture.

Recently, she was paid a wonderful compliment when she appeared in Batgirl #16 as Barbara Gordon’s psychologist.
 

I’ve always been of the opinion that comics are far more psychologically complex than they’re given credit for. In fact, one of my first non-academic articles was about the depiction of madness in Batman.

It’s also interesting that comics are now starting to explicitly address psychological issues. It’s not always done entirely successfully, it has to be said.

Darwyn Cooke’s Ego storyline looked at Batman’s motivations through his traumatic past but shifted between subtle brilliance and clichés about mental illness in a slightly unsettling way.

Andrea Letamendi has a distinctly more nuanced take, however, and if you would like to know more about her work with superheroes do check the interview on Nerd Span.
 

Link to Letamendi’s Under the Mask (on Twitter as @ArkhamAsylumDoc)
Link to Nerd Span interview.

Hallucinating sheet music

Oliver Sacks has just published an article on ‘Hallucinations of musical notation’ in the neurology journal Brain that recounts eight cases of illusory sheet music escaping into the world.

The article makes the interesting point that the hallucinated musical notation is almost always nonsensical – either unreadable or not describing any listenable music – as described in this case study.

Arthur S., a surgeon and amateur pianist, was losing vision from macular degeneration. In 2007, he started ‘seeing’ musical notation for the first time. Its appearance was extremely realistic, the staves and clefs boldly printed on a white background ‘just like a sheet of real music’, and Dr. S. wondered for a moment whether some part of his brain was now generating his own original music. But when he looked more closely, he realized that the score was unreadable and unplayable. It was inordinately complicated, with four or six staves, impossibly complex chords with six or more notes on a single stem, and horizontal rows of multiple flats and sharps. It was, he said, ‘a potpourri of musical notation without any meaning’. He would see a page of this pseudo-music for a few seconds, and then it would suddenly disappear, replaced by another, equally nonsensical page. These hallucinations were sometimes intrusive and might cover a page he was trying to read or a letter he was trying to write.

Though Dr. S. has been unable to read real musical scores for some years, he wonders, as did Mrs. J., whether his lifelong immersion in music and musical scores might have determined the form of his hallucinations.

Sadly, the article is locked behind a paywall. However, you can always request it via the #icanhazpdf hashtag on Twitter.
 

Link to locked article on ‘Hallucinations of musical notation’.

The postmortem portraits of Phineas Gage

A new art form has emerged – the post-mortem neuroportrait. Its finest subject: Phineas Gage.

Gage was a worker extending the tracks of the great railways until he suffered the most spectacular injury. As he was setting a gunpowder charge in a rock with a large tamping iron, the powder was lit by an accidental spark. The iron was launched through his skull.

He became famous in neuroscience because he lived – rare for the time – and had psychological changes as a result of his neurological damage.

His story has been better told elsewhere but the interest has not died – studies on Gage’s injury have continued to the present day.

There is a scientific veneer, of course, but it’s clear that the fascination with the freak Phineas has its own morbid undercurrents.

Image from Wikipedia.

The image is key.

The first such picture was constructed with nothing more than pen and ink. Gage’s doctor, John Harlow, sketched his skull, which he had acquired after the patient’s death.

This Gage is forever fleshless, the iron stuck mid-flight, the shattered skull frozen as it fragments.

Harlow’s sketch is the original and the originator. The first impression of Gage’s immortal soul.

Gage rested as this rough sketch for over 100 years, but he would rise again.

In 1994, a team led by neuroscientist Hannah Damasio used measurements of Gage’s skull to trace the path of the tamping iron and reconstruct its probable effect on the brain.

Gage’s disembodied skull appears as a strobe-lit danse macabre, the tamping iron turned into a bolt of pure digital red and Gage’s brain, a deep shadowy grey.

It made Gage a superstar but it sealed his fate.

Every outing needed a freakier Phineas. Like a low-rent celebrity, every new exposure demanded something more shocking.

A 2004 study by Peter Ratiu and Ion-Florin Talos depicted Gage alongside his actual cranium – his digital skull screaming as a perfect blue iron pushed through his brain and shattered his face – the disfigurement now a gory new twist to the portrait.

In contrast, his human remains are peaceful – unmoved by the horrors inflicted on their virtual twin.

But the most recent Gage is the most otherworldly. A study by John Darrell Van Horn and colleagues examined how the path of the tamping iron would have affected the strands of white matter – the “brain’s wiring” – that connect cortical areas.

Image from Van Horn et al. (2012), PLoS One 7(5): e37454.

A slack-jawed Gage is now pierced by a ghostly iron bar that passes almost silently through his skull.

Gage himself is equally supernatural.

Blank white eyes float lifelessly in his eye sockets – staring into the digital blackness.

His white matter tracts appear within his cranium but are digitally dyed and seem to resemble multi-coloured hair standing on end like the electrified mop of a fairground ghoul.

But as the immortal Gage has become more horrifying over time, living portraits of the railwayman have been discovered. They show an entirely different side to the shattered skull celebrity.

To date, two portraits have been identified. They both show a ruggedly handsome, well-dressed man.

He has gentle flesh. Rather than staring into blackness, he looks at us.

Like a 19th century auto-whaler holding his self-harpoon, he grips the tamping iron, proud and defiant.

I prefer this living Phineas.

He does not become more alien with every new image.

He is at peace with a brutal, chaotic world.

He knows what he has lived through.

Fuck the freak flag, he says.

I’m a survivor.

A new horizon of sex and gender

Image from Wikipedia.

If you only listen to one radio programme this week, make it the latest edition of BBC Radio 4’s Analysis on the under-explored science of gender.

The usual line goes that ‘sex is biological while gender is social’ – meaning that while genetics determines our sex, how masculine or feminine we are is determined by specific cultural practices.

It turns out to be a little more complicated than this. It has long been known (although frequently forgotten) that typical sex markers like body shape and genitalia are actually quite diverse, to the point of being ambiguous in some people.

Similarly, while genetics is considered the ultimate arbiter of sex – XX indicating female and XY indicating male – variations such as XYY, XXY and XXX are surprisingly common.

On the other hand, there is evidence that some gender-related behaviours may be related to the biology of development and not solely to cultural factors.

But even with these caveats considered, what gender we ‘feel’ also turns out to be subject to a great deal of variation, with some people saying they have the gender of another sex, or that their gender is fluid, or that they have no gender at all.

The latest edition of Analysis explores this in detail, looking at how we can understand ‘disorders’ of gender in this context, what it means to be transgender, or whether we should just dump the whole concept of one-or-the-other gender completely.

A genuinely challenging, horizon-pushing programme.
 

Link to programme page with streamed audio.
mp3 of programme.

When your actions contradict your beliefs

Last week’s BBC Future column. The original is here. Classic research, digested!

If at first you don’t succeed, lower your standards. And if you find yourself acting out of line with your beliefs, change them. This sounds like motivational advice from one of the more cynical self-help books, or perhaps a Groucho Marx line (“Those are my principles, and if you don’t like them… well, I have others…”), but in fact it is a caricature of one of the most famous theories in social psychology.

Leon Festinger’s Dissonance Theory is an account of how our beliefs rub up against each other, an attempt at a sort of ecology of mind. Dissonance Theory offers an explanation of topics as diverse as why oil company executives might not believe in climate change, why army units have brutal initiation ceremonies, and why famous books might actually be boring.

The classic study on dissonance theory was published by Festinger and James Carlsmith in 1959. You can find a copy thanks to the Classics in the History of Psychology archive. I really recommend reading the full thing. Not only is it short, but it is full of enjoyable asides. Back in the day psychology research was a lot more fun to write up.

Festinger and Carlsmith were interested in testing what happened when people acted out of line with their beliefs. To do this, they made their participants spend an hour doing two excruciatingly boring tasks. The first task was filling a tray with spools, emptying it, then filling it again (and so on). The second was turning 48 small pegs a quarter-turn clockwise; and then once that was finished, going back to the beginning and doing another quarter-turn for each peg (and so on). Only after this tedium, and at the point at which the participants believed the experiment was over, did the real study get going. The experimenter said that they needed someone to fill in at the last minute and explain the tasks to the next subject. Would they mind? And also, could they make the points that “It was very enjoyable”, “I had a lot of fun”, “I enjoyed myself”, “It was very interesting”, “It was intriguing”, and “It was exciting”?

Of course the “experiment” was none of these things. But, being good people, with some pleading if necessary, they all agreed to explain the experiment to the next participant and make these points. The next participant was, of course, a confederate of the experimenter. We’re not told much about her, except that she was an undergraduate specifically hired for the role. The fact that all 71 participants in the experiment were male, and, that one of the 71 had to be excluded from the final analysis because he demanded her phone number so he could explain things further, suggests that Festinger and Carlsmith weren’t above ensuring that there were some extra motivational factors in the mix.

Money talks

For their trouble, the participants were paid $1, $20, or nothing. After explaining the task the original participants answered some questions about how they really felt about the experiment. At the time, many psychologists would have predicted that the group paid the most would be affected the most – if our feelings are shaped by rewards, the people paid $20 should be the ones who said they enjoyed it the most.

In fact, people paid $20 tended to feel the same about the experiment as the people paid nothing. But something strange happened with the people paid $1. These participants were more likely to say they really did find the experiment enjoyable. They judged the experiment as more important scientifically, and had the highest desire to participate in future similar experiments. Which is weird, since nobody should really want to spend another hour doing mundane, repetitive tasks.

Festinger’s Dissonance theory explains the result. The “Dissonance” is between the actions of the participants and their beliefs about themselves. Here they are, nice guys, lying to an innocent woman. Admittedly there are lots of other social forces at work – obligation, authority, even attraction. Festinger’s interpretation is that these things may play a role in how the participants act, but they can’t be explicitly relied upon as reasons for acting. So there is a tension between their belief that they are a nice person and the knowledge of how they acted. This is where the cash payment comes in. People paid $20 have an easy rationalisation to hand. “Sure, I lied”, they can say to themselves, “but I did it for $20”. The men who got paid the smaller amount, $1, can’t do this. Giving the money as a reason would make them look cheap, as well as mean. Instead, the story goes, they adjust their beliefs to be in line with how they acted. “Sure, the experiment was kind of interesting, just like I told that girl”, “It was fun, I wouldn’t mind being in her position” and so on.

So this is cognitive dissonance at work. Normally it should be a totally healthy process – after all, who could object to people being motivated to reduce contradictions in their beliefs? (Philosophers even make a profession out of this.) But in circumstances where some of our actions or our beliefs exist for reasons which are too complex, too shameful, or too nebulous to articulate, it can lead to us changing perfectly valid beliefs, such as how boring and pointless a task was.

Fans of cognitive dissonance will tell you that this is why people forced to defend a particular position – say because it is their job – are likely to end up believing it. It can also suggest a reason for why military services, high school sports teams and college societies have bizarre and punishing initiation rituals. If you’ve been through the ritual, dissonance theory predicts, you’re much more likely to believe the group is a valuable one to be a part of (the initiation hurt, and you’re not a fool, so it must have been worth it, right?).

For me, I think dissonance theory explains why some really long books have such good reputations, despite the fact that they may be as repetitive and pointless as Festinger’s peg task. Get to the end of a three-volume, several thousand page, conceptual novel and you’re faced with a choice: either you wasted your time and money, and you feel a bit of a fool; or the novel is brilliant and you are an insightful consumer of literature. Dissonance theory pushes you towards the latter interpretation, and so swells the crowd of people praising a novel that would be panned if it was 150 pages long.

Changing your beliefs to be in line with how you acted may not be the most principled approach. But it is certainly easier than changing how you acted.

A brief history of narcoanalysis

Photo by Flickr user Andres Rueda.

The judge in the case of ‘Colorado shooter’ James Holmes has made the baffling decision that a ‘narcoanalytic interview’ and ‘polygraph examination’ can be used in an attempt to support an insanity plea.

While polygraph ‘lie detectors’ are known to be seriously flawed, some US states still allow evidence from them to be admitted in court, although the fact they’re being considered in such a key case is frankly odd.

But the ‘narcoanalytic interview’ is so left-field as to leave some people scratching their heads as to whether the judge has been at the narcotics himself.

The ‘narcoanalytic interview’ is sometimes described as the application of a ‘truth drug’ but the actual practice is far more interesting.

It has been variously called ‘narcoanalysis’, ‘narcosynthesis’ and the ‘amytal interview’ and involves, as you might expect, interviewing the person under the influence of some sort of narcotic.

Its roots lie in the very early days of 1890s pre-psychoanalysis, when Freud used hypnosis to relax patients and help them discuss emotionally difficult matters.

The idea that being relaxed overcame the mind’s natural resistance to entertaining difficult thoughts and helped get access to the unconscious became the foundation of Freud’s work. Narcoanalysis is still essentially based on this idea.

But, of course, the concept had to wait until the discovery of the first suitable drugs – the barbiturates.

Psychiatrist William Bleckwenn found that giving barbital to patients with catatonic schizophrenia led to a “lucid interval” where they seemed to be able to discuss their own mental state in a way previously impossible.

You can see the parallels between the first ever use of ‘narcoanalysis’ and the current case, but through the rest of the century the concept merged with the idea of creating a “truth drug”.

This was born in the 1920s, when the gynaecologist Robert House noticed that women who were given scopolamine to ease the birth process seemed to go into a ‘twilight state’ and were more pliant and talkative.

House decided to test this on criminals and went about putting prisoners under the influence of the drug while interviewing them as a way of ‘determining innocence or guilt’. Encouraged by some initial, albeit later recanted, confessions, House began to claim that it should be used routinely in police investigations.

This probably would have died a death as a dubious medical curiosity had Time magazine not run a 1923 article entitled “The Truth-Compeller” about House’s theory – making him and the ‘truth drug’ idea national stars.

These approaches became militarised: firstly as ‘narcoanalysis’ was used to treat traumatised soldiers in World War Two, and secondly as it was taken up by the CIA in the Cold War as a method for interrogation, becoming a centrepiece of the secret Project MKUltra.

It has continued to be used in criminal investigations in the US, albeit infrequently, and it has popped up in legal rulings.

In 1985 the US Supreme Court rejected an appeal by two people convicted of murder that their ‘narcoanalysis police interview’ made their conviction unsafe.

However, the psychiatrist who conducted the interview didn’t convince any of the judges that ‘narcoanalysis’ was actually of benefit:

At one point he testified that it would elicit an accurate statement of subjective memory, but later said that the subject could fabricate memories. He refused to agree that the subject would be more likely to tell the truth under narcoanalysis than if not so treated.

The concept seemed to disappear after that but strong suspicions were raised that ‘narcoanalysis’ was still a CIA favourite when the Bush government’s infamous ‘torture memo’ justified the use of “mind-altering substances” as part of ‘enhanced interrogation techniques’.

There is no evidence that ‘narcoanalysis’ actually helps in any way, shape or form, and at moderate to high doses, some of the drugs may actually impede memory or make it more likely that the person misremembers.

I suspect that the actual result of the bizarre ruling in the ‘Colorado shooter’ case will just be that psychiatrists will be able to give a potentially psychotic suspect a simple anti-anxiety drug without the resulting evidence being challenged.

This would be no different than giving an anxious or agitated witness the same drug to help them recount what happened.

But the fact that the judge included ‘lie detectors’ and ‘narcoanalysis’ in his ruling as useful legal tools rather than recognising them as flawed investigative techniques is still very concerning and suggests legal thinking mired in the 1950s.
 

pdf of judge’s ruling.
Link to (ironically locked) article on the history of ‘narcoanalysis’.

Happiness rebuilt

I’ve written a piece for SpotOn NYC on the contrast between the effects of brain injury depicted in Oliver Sacks-type books and the typical effects in patients on neurology wards.

These books are not inaccurate but neither do they represent the common outcomes of brain injury.

Sometimes the reality is quite different from what people expect.

It is not that the patients described by Oliver Sacks, or any of the other chroniclers of fragile neurology, are in any way inaccurate. I have met patients who show us something about our brain function in equally stark clarity. But such cases are interesting, scientifically, precisely because they are atypical. In contrast, most brain injury is blurry and scientifically mundane. Some difficulties are concealed by other more pressing problems. It’s hard to mistake your wife for a hat when you’re paralysed. It’s hard to have an awakening when you’re not sure where you are. Their importance lies not in a contribution to an understanding of the brain but to the people concerned. An adjusted life. A refactored family. Tears amid the challenges. Happiness rebuilt.

The piece is part of a series of posts written by neuroscience bloggers looking at the difficulties of communicating the subtlety and complexity of brain disorders.

There are some excellent pieces there so do have a browse.
 

Link to ‘The Man Who Mistook His Wife For A Nurse’
Link to communicating brain disorders series.

The history of the birth of neuroculture

My recent Observer piece examined how neuroscience has saturated popular culture but the story of how we found ourselves living in a ‘neuroculture’ is itself quite fascinating.

Everyday brain concepts have bubbled up from their scientific roots and integrated themselves into popular consciousness over several decades. Neuroscience itself is actually quite new. Although the brain, behaviour and the nervous system have been studied for millennia, the concept of a dedicated ‘neuroscience’ that attempts to understand the link between the brain, mind and behaviour only emerged in the 1960s and the term itself was only coined in 1962. Since then, several powerful social currents have propelled this nascent science into the collective imagination.

The sixties were a crucial decade for the idea that the brain could be the gateway to the self. Counter-culture devotees, although enthusiastic users of mind-altering drugs, were more interested in explaining the effects in terms of social changes than neurological ones. In contrast, pharmaceutical companies had discovered the first useful psychiatric drugs only a few years before, and they began to plough millions both into divining the neurochemistry of experience and into massive marketing campaigns that linked brain functions to the psyche.

Drug marketing executives targeted two main audiences. Asylum psychiatrists dealt with institutionalised chronic patients and the adverts were largely pitched in terms of management and control, but for office-based psychiatrists, who mainly used psychotherapy to treat their patients, the spin was different. The new medications were sold as having specific psychological effects that could be integrated into a Freudian understanding of the self. According to the marketing, psychoactive chemicals could break down defences, reduce neurotic anxiety and resolve intra-psychic conflict.

In the following years, as neuroscience became prominent and psychoanalysis waned, pharmaceutical companies realised they had to sell theories to make their drugs marketable. The theories couldn’t be the messy ideas of actual science, however, they needed to be straightforward stories of how specific neurotransmitters were tied to simple psychological concepts, not least because psychiatric medication was now largely prescribed by family doctors. Low serotonin leads to depression, too much dopamine causes madness. The fact these theories were wrong was irrelevant, they just needed to be reason enough to prescribe the advertised pill. The Prozac generation was sold and the pharmacology of self became dinner table conversation.

Although not common knowledge at the time, the sixties also saw the rise of neuroscience as a military objective. Rattled by Korean War propaganda coups where American soldiers renounced capitalism and defected to North Korea, the US started the now notorious MKULTRA research programme. It aimed to understand communist ‘brain washing’ in the service of mastering behavioural control for the benefit of the United States.

Many of the leading psychologists and psychiatrists of the time were on the payroll and much of the military top brass was involved. As a result, the idea that specific aspects of the self could be selectively manipulated through the brain became common among the military elite. When the two-decade project was revealed amid the pages of The New York Times and later investigated by a 1975 Congressional committee, the research and the thinking behind it made headline news around the world.

Mainstream neuroscience also became a source of fascination due to discoveries that genuinely challenged our understanding of the self and the development of technologies to visualise the brain. As psychologists became interested in studying patients with brain injury, it became increasingly clear that the mind seemed to break down in specific patterns depending on how the brain was damaged, suggesting the intriguing possibility of an inherent structure to the mind. The fact that brain damage can cause someone to believe that a body part is not their own, a condition known as somatoparaphrenia, suggests body perception and body ownership are handled separately in the brain. The self was breaking down along fault lines we never knew existed and a new generation of scientist-writers like Oliver Sacks became our guides.

The rise of functional neuroimaging in the eighties and nineties allowed scientists to see a fuzzy outline of brain activity in healthy individuals as they undertook recognisable tasks. The fact that these brightly coloured brain scans were immensely media friendly and seemingly easy to understand (mostly, misleadingly so) made neuroscience appear accessible to anyone. But it wasn’t solely the curiosity of science journalists that propelled these discoveries into the public eye. In 1990 President George H.W. Bush launched the Decade of the Brain, a massive project “to enhance public awareness of the benefits to be derived from brain research”. A ten-year programme of events aimed at both the public and scientists followed that sealed the position of neuroscience in popular discourse.

These various cultural threads began weaving a common discourse through the medical, political and popular classes that closely identified the self with brain activity and which suggested that our core humanity could be understood and potentially altered at the neurobiological level.

These cultural forces that underlie our ‘neuroculture’ are being increasingly mapped out by sociologists and historians. One of the best sources is ‘The birth of the neuromolecular gaze’ by Joelle Abi-Rached and Nikolas Rose. Sadly, it’s a locked article, although a copy has mysteriously appeared online.

However, some excellent work is also being done by Fernando Vidal, who looks at how we understand ourselves through new scientific ‘self’ disciplines, and by Davi Johnson Thornton, who studies how neuroscience is being communicated through popular culture.
 

Link to ‘The birth of the neuromolecular gaze’.

2013-03-08 Spike activity

Quick links from the past week in mind and brain news:

Brain freeze from a Slurpee was blamed for a five-car pile-up in Texas, according to Jalopnik.

Salon takes a nuanced look at hook-up culture. It’s a culture? I thought it was a hobby.

Housewives, tranquilliser use and the nuclear family in Cold War America. Wellcome History have a fascinating piece on the first fashionable psychiatric drug.

Time reports that enhancing one type of maths ability with brain stimulation impairs another. My own experience is that it helps with spelling but not with grammatical.

What do museums of madness tell us about who we were and who we are? BBC Radio 4 programme Mad Houses is fascinating but no podcast because the BBC love the 20th century.

Futurity reports on a new study finding that the infant brain controls blood flow differently – which could have huge implications for brain scanning technologies like fMRI which rely on blood flow.

The oddly recursive Brain Awareness Day will happen on March 14th.

Retraction Watch covers a case of scientific fraud in studies on the response to reward.

New Neuropod. You know the drill.

Science News reports that heavy drinkers get extra brain fuel from alcohol. Like putting rocket boosters on a one legged donkey.

The uncertain dance of the spoken word

Stanford Magazine has a wonderful article by a writer who relies on lip-reading and experiences speech through this subtle movement-based language.

Rachel Kolb skilfully describes how this works, and more importantly, feels.

The part where she describes how she experiences accents is just amazing:

Accents are a visible tang on people’s lips. Witnessing someone with an accent is like taking a sip of clear water only to find it tainted with something else. I startle and leap to attention. As I explore the strange taste, my brain puzzles itself trying to pinpoint exactly what it is and how I should respond. I dive into the unfamiliar contortions of the lips, trying to push my way to some intelligible meaning. Accented words pull against the gravity of my experience; like slime-glossed fish, they wriggle and leap out of my hands. Staring down at my fingers’ muddy residue, my only choice is to shrug and cast out my line again.

The full article is highly recommended. Both fascinating and wonderfully written.
 

Link to ‘Seeing at the Speed of Sound’ (via and thanks to @stevesilberman)

The essence of intelligence is feedback

Here’s last week’s BBC Future column. The original is here, where it was called “Why our brains love feedback”. I was inspired to write it by a meeting with artist Tim Lewis, which happened as part of a project I’m involved with: Furnace Park, which is seeing a piece of reclaimed land in an old industrial area of Sheffield transformed into a public space by the University.

A meeting with an artist gets Tom Stafford thinking about the essence of intelligence. Our ability to grasp, process and respond to information about the world allows us to follow a purpose. In some ways, it’s what makes us, us.

In Tim Lewis’s world, bizarre kinetic sculptures move, flap wings, draw and even walk around. The British artist creates mechanical animals and animal machines – like Pony, a robotic ostrich with an arm for a neck and a poised hand for a head – that creak into life in a way that can seem unsettling, as if they have a strange, if awkward, life of their own. His latest creations are able to respond to the environment, and it makes me ponder the essence of intelligence – in some ways revealing what makes us, us.

I met Tim on a cold Friday afternoon to talk about his work, and while talking about the cogs and gears he uses to make his artwork move, he made a remark that made me stop in my tracks. The funny thing is, he said, all of the technology existed to make machines like this in the sixteenth century – the thing that stopped them wasn’t the technical know-how, it was that they lacked the right model of the mind.

Jetsam 2012, by Tim Lewis (Courtesy: Tim Lewis)

What model of the mind do you need to create a device like Tim’s Jetsam, a large wire-mesh kiwi-like creature that forages around its cage for pieces of a nest to build? The intelligence in this creation isn’t in the precision of the craftwork (although it is precise), or in the faithfulness to the kind of movements seen in nature (although it is faithful). The intelligence is in how it responds to the placing of the sticks. It isn’t programmed in advance; it identifies where each piece is and where it needs to go.

This gives Jetsam the hallmark of intelligence – flexibility. If the environment changes, say when the sticks are re-scattered at random, it can still adapt and find the materials to build its nest. Rather than a brain giving instructions such as “Do this”, feedback allows instructions such as “If this, do that; if that, do the other”. Crucially, feedback allows a machine to follow a purpose – if the goal changes, the machine can adapt.
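
To make the contrast concrete, here’s a minimal sketch in Python (my own illustration, not Tim Lewis’s actual mechanism): an open-loop machine replays fixed instructions, while a closed-loop machine senses the world at every step and acts on what it finds.

```python
# A hypothetical sketch of open-loop vs closed-loop (feedback) control.

def open_loop(actions):
    """Clockwork-style control: the same moves, whatever the world does."""
    for action in actions:
        action()

def closed_loop(sense, act, goal_reached):
    """Feedback control: 'if this, do that' until the goal is met."""
    while not goal_reached():
        observation = sense()   # e.g. where is the nearest stick?
        act(observation)        # e.g. carry it towards the nest
```

If the sticks are re-scattered, open_loop carries on blindly, while closed_loop simply senses the new arrangement and adapts.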

It’s this quality that the sixteenth century clockwork models lacked, and one that we as humans almost take for granted. We grasp and process information about the world in many forms, including sights, smells or sounds. We may give these information sources different names, but in some sense, these are essentially the same stuff.

Information control

Cybernetics is the name given to the study of feedback, and systems that use feedback, in all their forms. The term comes from the Greek word for “to steer”, and inspiration for some of the early work on cybernetics sprang from automatic guiding systems developed during World War II for guns or radar antennae. Around the middle of the twentieth century cybernetics became an intellectual movement across many different disciplines. It created a common language that allowed engineers to talk with psychologists, or ecologists to talk to mathematicians, about living organisms from the viewpoint of information control systems.

A key message of cybernetics is that you can’t control something unless you have feedback – and that means measurement of the outcomes. You can’t hit a moving target unless you get feedback on changes to its movement, just as you can’t tell if a drug is a cure unless you get feedback on how many more people recover when they are given it. The flip side of this dictum is the promise that with feedback, you can control anything. The human brain seems to be the arch embodiment of this cybernetic principle. With the right feedback, individuals have been known to control things as unlikely as their own heart rate, or learn to shrink and expand their pupils at will. It even seems possible to control the firing of individual brain cells.

But enhanced feedback methods can accelerate learning about more mundane behaviours. For example, if you are learning to take basketball shots, augmented feedback in the form of “You were 3 inches off to the left” can help you learn faster and reach a higher skill level more quickly. Perhaps the most powerful example of an augmented feedback loop is the development of writing, which allowed us to take language and experiences and make them permanent, solidifying them against the ravages of time, space and memory.
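
As a toy illustration (my own sketch, not from the column), that augmented feedback loop fits in a few lines of Python: each trial’s reported error nudges the next attempt, and the aim converges on the target.

```python
# A hypothetical error-correction loop: learning from augmented feedback.

def practise(target, aim=0.0, rate=0.3, trials=8):
    """Correct a fixed fraction of the reported error on every trial."""
    for trial in range(trials):
        error = target - aim   # feedback: "you were 3 inches off to the left"
        aim += rate * error    # adjust the next attempt by part of the error
        print(f"trial {trial}: aim {aim:.2f}, error was {error:+.2f}")
    return aim

practise(target=3.0)  # the error shrinks on every trial as feedback accumulates
```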

Thanks to feedback we can become more than simple programs with simple reflexes, and develop more complex responses to the environment. Feedback allows animals like us to follow a purpose. Tim Lewis’s mechanical bird might seem simple, but in terms of intelligence it has more in common with us than with nearly all other machines that humans have built. Engines or clocks might be incredibly sophisticated, but until they are able to gather their own data about the environment they remain trapped in fixed patterns.

Feedback loops, on the other hand, beginning with the senses but extending out across time and many individuals, allow us to self-construct, letting us travel to places we don’t have the instructions for beforehand, and letting us build on the history of our actions. In this way humanity pulls itself up by its own bootstraps.

The rise of everyday neuroscience

I’ve got a feature article in The Observer about how our culture has become saturated with ‘neuroscience talk’ and how this has led to unhelpful simplifications of the brain to make the same old arguments.

This is often framed as a problem with ‘the media’, but that is just the most obvious aspect of the movement. Actually, it is a cultural change in which a sort of everyday ‘folk neuroscience’ has become credible in popular debate – regardless of its relationship to actual science.

Folk neuroscience comes with the additional benefit that it relies on concepts that are not easily challenged with subjective experience. When someone says “James is depressed because he can’t find a job”, this may be dismissed by personal experience, perhaps by mentioning a friend who was unemployed but didn’t get depressed. When someone says that “James is depressed because of a chemical imbalance in his brain”, personal experience is no longer relevant and the claim feels as if it is backed up by the authority of science. Neither usefully accounts for the complex ways in which our social world and neurobiology affect our mood but in non-specialist debate that rarely matters. As politicians have discovered it’s the force of your argument that matters and in rhetorical terms, neuroscience is a force-multiplier, even when it’s misfiring.

The article discusses how this popular neuroscience talk is being used and why it remains popular.

The piece was influenced by the work of sociologist Nikolas Rose who has written a great deal about how neuroscience is used to understand and manage people.

If you want to go in further depth than The Observer article allows I’d recommend his paper ‘Neurochemical Selves’ which is available online as a pdf.

A new book of his came out last week entitled ‘Neuro: The New Brain Sciences and the Management of the Mind’ which looks fascinating.
 

Link to Observer article ‘Our brains, and how they’re not as simple as we think’.

2013-03-01 Spike activity

Quick links from the past week in mind and brain news:

Providentia covers the curious history of Japan’s suicide volcano.

Skepticism about ‘social priming’ is driven by a long history of doubt about subliminal priming of behaviour. Good piece on Daniel Simons’ blog.

The New York Times has an amazing video about technology to enhance the perception of motion.

The ‘Vaccine Resistance Movement’ has an anti-vaccination conference in Vancouver on March 12th. Bizarrely it is being hosted by Simon Fraser University. If you want to contact them and make your views known you can do so here.

Neurobonkers covers a genuine scientific study on what gains Twitter followers. Note to self: posting pictures of yourself in underwear only works if you’re a glamour model.

We’re all Jonah Lehrer except me. Neuroskeptic on narrative and neuroscience.

The Fix discusses the overuse of ‘addiction’ to describe bad choices.

UK public art and neuroscience events currently running: Affecting Perception, taking place in Oxford, and Wonder, happening in London.

Slate has a form from 1889 to leave your brain to science. Only brains of “educated and orderly persons rather than those of the ignorant, criminal or insane”!

London neuroscience centre to map ‘connectome’ of foetal brain, reports Wired UK.

A neurobiological graphic novel

The Guardian has a video about the collaboration between neuroscientist Hana Ros and artist Matteo Farinella as they’ve been working on the neurocomic project to create a brain science graphic novel.

The finished project isn’t quite out yet but the artwork is looking amazing.

The film about the collaboration covers how they worked together and how each approaches their work.

There’s a lovely bit where Hana Ros describes how she isolates neurons to work on and mentions she gives them all names.

Make sure you also check out the artwork on the project website.
 

Link to video on the collaboration.
Link to the neurocomic website.

A fine art

It’s not often you get to enrage both feminists and misogynists at the same time but a new study, just published in the Archives of Sexual Behavior, may have managed this impressive feat.

It found that men’s preference for larger breasts was associated with having a greater number of oppressive beliefs about women.

Feminists can be enraged about how a natural variation in body shape has become associated with sexist attitudes, while misogynists can be enraged that their breast size preference can be thought of as a problem.

Social scientists, however, may be left relatively unperturbed at the thought of this study. But please, allow me.

So, come on now. What does it really tell us?

You can thank me later.
 

Link to coverage on Feminist Philosophers blog (via @KateClancy)
Link to locked study.