Hofstadter’s digital thoughts

The Atlantic has an amazing in-depth article on how Douglas Hofstadter, the Pulitzer Prize-winning author of Gödel, Escher, Bach, has been quietly working in the background of artificial intelligence on the deep problems of the mind.

Hofstadter’s vision of AI – as something that could help us understand the mind rather than just a way of solving difficult problems – has gone through a long period of being deeply unfashionable.

Developments in technology and statistics have allowed surprising numbers of problems to be solved by sifting huge amounts of data through relatively simple algorithms – something called machine learning.

Translation software, for example, long ago stopped trying to model language and instead just generates output from statistical associations. As you probably know from Google Translate, it’s surprisingly effective.

The Atlantic article tackles Hofstadter’s belief that, contrary to the machine learning approach, developing AI programmes can be a way of testing out ideas about the components of thought itself. This idea may now be starting to re-emerge.

The piece also works as a sweeping look at the history of AI, and the only thing I was left wondering was what Hofstadter makes of the deep learning approach, which is a cross between machine learning stats and neurocognitively-inspired architecture.

It’s a satisfying, thought-provoking read that rewards time and attention.

If you want another excellent, in-depth read on AI, a great complement is another Atlantic article from last year where Noam Chomsky is interviewed on ‘where artificial intelligence went wrong’.

Both will tell you as much about the human mind as they do about AI.

Link to ‘The Man Who Would Teach Machines to Think’ on Hofstadter.
Link to ‘Noam Chomsky on Where Artificial Intelligence Went Wrong’.

The death of the chaotic positivity ratio

A new online publication called Narratively has an excellent story about how a part-time student blew apart a long-standing theory in positive psychology.

The article is the geeky yet compelling tale of how weekend student Nick Brown found something fishy about the ‘critical positivity ratio’ theory that says people flourish when they have between 2.9013 and 11.6346 positive emotions for every negative one.

It’s been a big theory in positive psychology but Brown noticed that it was based on the dodgy application of mathematician Edward Lorenz’s equations from fluid dynamics to human emotions.
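To see what was being borrowed, it helps to look at the equations themselves. Below is a minimal sketch of the Lorenz system using the standard textbook parameters (sigma, rho, beta) – these illustrative values are my assumption, not necessarily the ones used in the original positivity-ratio paper. The point is that this is a model of convection in a fluid; nothing in it refers to emotion.

```python
# A minimal Euler-integration sketch of the Lorenz system, the fluid
# dynamics model behind the "critical positivity ratio" claim.
# Parameter values (sigma, rho, beta) are the standard textbook ones,
# chosen for illustration only.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations by one Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

state = (1.0, 1.0, 1.0)
for _ in range(5000):
    state = lorenz_step(state)
print(state)  # position after 5000 steps; highly sensitive to the start point
```

The trajectory is chaotic: tiny changes in the starting point diverge wildly, which is exactly why borrowing its parameters as precise thresholds for human flourishing looked so fishy.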

He recruited psychology professor Harris Friedman and renowned bunk buster Alan Sokal into the analysis and their critique eventually got the paper partially retracted for being based on very shaky foundations.

It’s a great fun read and also serves as a good backgrounder to positive psychology.

I’ve also noticed that the latest edition of Narratively has loads of great articles on psychology.

Link to Narratively on Nick Brown and the death of the positivity ratio.
Link to latest edition of Narratively entitled ‘Pieces of Mind’.

Madness and hallucination in The Shining

Roger Ebert’s 2006 review of Stanley Kubrick’s The Shining turns out to be a brilliant exploration of hallucination, madness and unreliable witnessing in a film he describes as “not about ghosts but about madness and the energies it sets loose”.

Kubrick is telling a story with ghosts (the two girls, the former caretaker and a bartender), but it isn’t a “ghost story,” because the ghosts may not be present in any sense at all except as visions experienced by Jack or Danny.

The movie is not about ghosts but about madness and the energies it sets loose in an isolated situation primed to magnify them. Jack is an alcoholic and child abuser who has reportedly not had a drink for five months but is anything but a “recovering alcoholic.” When he imagines he drinks with the imaginary bartender, he is as drunk as if he were really drinking, and the imaginary booze triggers all his alcoholic demons, including an erotic vision that turns into a nightmare. We believe Hallorann when he senses Danny has psychic powers, but it’s clear Danny is not their master; as he picks up his father’s madness and the story of the murdered girls, he conflates it into his fears of another attack by Jack. Wendy, who is terrified by her enraged husband, perhaps also receives versions of this psychic output. They all lose reality together.

A psychologically insightful piece on one of the classics of psychological horror.

Link to Roger Ebert’s 2006 review of The Shining.

Don’t panic but psychology isn’t always a science

Every so often, the ‘is psychology a science?’ debate sparks up again, at which point, I start to weep. It’s one of the most misplaced, misfiring scientific discussions you can have and probably not for the reasons you think.

To understand why it keeps coming around you need to understand something about the politics of studying things.

Science has higher status in academia and industry than the humanities, so telling a practitioner “you’re not a scientist” can be the equivalent of telling them “you’re not as valuable as you make out”.

This plays out in two ways: less scientific disciplines get less funding, and people start being knobs at parties. The second is clearly more serious.

Probably every psychologist has had the experience of someone coming up to them and drunkenly suggesting that psychology is ‘all made up’. Psychiatrists get the same sort of crap but in the ‘you’re not a real doctor’ vein from other medics.

This makes people who work in psychological disciplines a bit insecure, so they’ll swear blind that ‘psychology is a science’.

Psychology, however, is not a science. It’s a subject area. And you can either study it scientifically or non-scientifically.

I’m going to leave aside the debate of what defines science, which has been better covered elsewhere. No, there isn’t a strict definition of science, but the “you know it when you see it” approach is sufficient if we want to see if something can be widely considered scientific.

I’m also going to leave aside the debate about whether you can study mind and behaviour scientifically. It’s clear that you can, even if some areas are harder to measure than others. This is what is usually meant by the “is psychology a science?” debate. I consider this to be a settled issue but it is also where the debate usually misfires.

In other words, psychology can be a science, but it isn’t only a science.

There are many folks who do legitimate psychology research who are not doing science. It’s not that they think they are but really aren’t (pseudoscience) or that they’re doing it so poorly it barely merits the name (bad science). It’s that they don’t want to do science in the first place.

Instead, they are doing qualitative research, where they intend to uncover patterns in people’s subjective impressions without imposing their own structure onto it.

Let me give you an example.

Perhaps I want to find out what leads victims of serious domestic violence to drop a prosecution despite the abuser already being safely in jail, pending trial.

I could come up with a list of motivations I think might be plausible and then find a way of testing whether they are present, but essentially, no matter how rigorous my methods, the study still depends on what I think is plausible to begin with.

This could be a problem because I may not know a whole lot about the area. Or worse, I may think I do, but might largely be basing my assumptions on prejudice and what passes for ‘common sense’.

Qualitative methods get at how people understand the situation from their own perspective and look at common themes across what they say.

In this case, the study by Amy Bonomi and colleagues applied a kind of qualitative analysis called grounded theory to transcripts of jailhouse phone calls between victims of domestic violence and their abusers.

Here’s what they found:

…a victim’s recantation intention was foremost influenced by the perpetrator’s appeals to the victim’s sympathy through descriptions of his suffering from mental and physical problems, intolerable jail conditions, and life without her. The intention was solidified by the perpetrator’s minimization of the abuse, and the couple invoking images of life without each other.

Once the victim arrived at her decision to recant, the couple constructed the recantation plan by redefining the abuse event to protect the perpetrator, blaming the State for the couple’s separation, and exchanging specific instructions on what should be said or done.

There is no pretence that this study has discovered what happens in all cases, or even that these are common factors, but what it has done is show how this works for the people being studied.

This is massively useful information. If you’re a scientist, suddenly you have a whole bunch of hypotheses to test that are drawn from real-life situations. If you’re not, you understand one instance of this situation in a lot more detail.

The reason that human psychology can be studied both scientifically and non-scientifically is that the object of study can be objectively observed and can describe their own subjective experience.

This doesn’t happen with electrical impulses, enzymes or subatomic particles.

I’m a neuropsychologist by trade, perhaps the most clearly scientific of the psychological disciplines, but I’m not going to pretend that qualitative research psychologists aren’t doing important work that makes psychology more valuable, not less.

So psychology is not just a science, and is better off for it.

Oh yeah, and the drunk guy at the party? He’s like someone who thinks a screaming orgasm is only a drink. I’m laughing at you chump, not with you.

A literary review of the DSM-5

Philosopher Ian Hacking, famous for analysing the effects of psychological and neuroscientific knowledge on how we understand ourselves, has reviewed the DSM-5 for the London Review of Books.

It’s both an excellent look at what the whole DSM project has been designed to do and a cutting take on the checklist approach to diagnosis.

It’s not often that a review gives you a feeling of both a wholesome read and a guilty pleasure, but Hacking does both with this piece.

The DSM is not a representation of the nature or reality of the varieties of mental illness, and this is a far more radical criticism of it than [NIMH Director Thomas] Insel’s claim that the book lacks ‘validity’.

I am saying it is founded on a wrong appreciation of the nature of things. It remains a very useful book for other purposes. It is essential to have something like this for the bureaucratic needs of paying for treatment and assessing prevalence.

But for those purposes the changes effected from DSM-IV to DSM-5 were not worth the prodigious labour, committee meetings, fierce and sometimes acrimonious debate involved. I have no idea how much the revision cost, but it is not that much help to clinicians, and the changes do not matter much to the bureaucracies.

And trying to get it right, in revision after revision, perpetuates the long-standing idea that, in our present state of knowledge, the recognised varieties of mental illness should neatly sort themselves into tidy blocks, in the way that plants and animals do.

The old joke about a dictionary review goes “the plot wasn’t up to much but at least it explained everything as it went along”.

For the DSM it might well be “the plot wasn’t up to much and neither did it explain everything as it went along”.

Link to ‘Lost in the Forest’ in The LRB (via @HuwTube)

Double matrix

This is quite possibly the least comprehensible abstract of a psychology article I have ever read. It starts off dense and wordy and ends up feeling like you’re huffing butane.

The psychologization of humanitarian aid: skimming the battlefield and the disaster zone

Hist Human Sci. 2011;24(3):103-22.

De Vos J.

Humanitarian aid’s psycho-therapeutic turn in the 1990s was mirrored by the increasing emotionalization and subjectivation of fund-raising campaigns. In order to grasp the depth of this interconnectedness, this article argues that in both cases what we see is the post-Fordist production paradigm at work; namely, as Hardt and Negri put it, the direct production of subjectivity and social relations. To explore this, the therapeutic and mental health approach in humanitarian aid is juxtaposed with the more general phenomenon of psychologization.

This allows us to see that the psychologized production of subjectivity has a problematic waste-product as it reduces the human to ‘Homo sacer’, to use Giorgio Agamben’s term. Drawing out a double matrix of a de-psychologizing psychologization connected to a politicizing de-politicization, it will further become possible to understand psycho-therapeutic humanitarianism as a case of how, in these times of globalization, psychology, subjectivity and money are all interrelated.

Hey. I think the walls are melting.

Link to PubMed abstract.

Deeper into genetic challenges to psychiatric diagnosis

For my recent Observer article I discussed how genetic findings are providing some of the best evidence that psychiatric diagnoses do not represent discrete disorders.

As part of that I spoke to Michael Owen, a psychiatrist and researcher based at Cardiff University, who has been leading lots of the rethink on the nature of psychiatric disorders.

As a young PhD student I sat in on lots of Prof Owen’s hospital ward rounds and learnt a great deal about how science bumps up against the real world of individuals’ lives.

One of the things that most interested me about Owen’s work is that, back in the day, he was working towards finding ‘the genetics of’ schizophrenia, bipolar and so on.

But since then he and his colleagues have gathered a great deal of evidence that certain genetic differences raise the chances of developing a whole range of difficulties – from epilepsy to schizophrenia to ADHD – rather than these differences being associated with any one disorder.

As many of these genetic changes can affect brain development in subtle ways, it is looking increasingly likely that genetics determines how sensitive we are to life events as the brain grows and develops – suggesting a neurodevelopmental theory of these disorders that considers both neurobiology and life experience as equally important.

I asked Owen several questions for the Observer article but I couldn’t include the answers in full, so I’ve reproduced them below as they’re a fascinating insight into how genetics is challenging psychiatry.

I remember you looking for the ‘genes for schizophrenia’ – what changed your mind?

For most of our genetic studies we used conventional diagnostic criteria such as schizophrenia, bipolar disorder and ADHD. However, what we then did was look for overlap between the genetic signals across diagnostic categories and found that these were striking. This occurred not just for schizophrenia and bipolar disorder, which to me as an adult psychiatrist who treats these conditions was not surprising, but also between adult disorders like schizophrenia and childhood disorders like autism and ADHD.

What do the current categories of psychiatric diagnosis represent?

The current categories were based on the categories in general use by psychiatrists. They were formalized to make them more reliable and have been developed over the years to take into account developments in thinking and practice. They are broad groupings of patients based upon the clinical presentation especially the most prominent symptoms and other factors such as age at onset, and course of illness. In other words they describe syndromes (clinically recognizable features that tend to occur together) rather than distinct diseases. They are clinically useful in so far as they group patients in regard to potential treatments and likely outcome. The problem is that many doctors and scientists have come to assume that they do in fact represent distinct diseases with separate causes and distinct mechanisms. In fact the evidence, not just from molecular genetics, suggests that there is no clear demarcation between diagnostic categories in symptoms or causes (genetic or environmental).

There is an emerging belief which has been stimulated by recent genetic findings that it is perhaps best to view psychiatric disorders more in terms of constellations of symptoms and syndromes, which cross current diagnostic categories and view these in dimensional terms. This is reflected by the inclusion of dimensional measures in DSM5, which, it is hoped, will allow these new views to stimulate research and to be developed based on evidence.

In the meantime the current categories, slightly modified, remain the focus of DSM-5. But I think that there is a much greater awareness now that these are provisional and will be replaced when the weight of scientific evidence is sufficiently strong.

The implications of recent findings are probably more pressing for research where there is a need to be less constrained by current diagnostic categories and to refocus onto the mechanisms underlying symptom domains rather than diagnostic categories. This in turn might lead to new diagnostic systems and markers. The discovery of specific risk genes that cut across diagnostic groupings offers one approach to investigating this that we will take forward in Cardiff.

There is a lot of talk of endophenotypes and intermediate phenotypes that attempt to break down symptoms into simpler forms of difference and dysfunction in the mind and brain. How will we know when we have found a valid one?

Research into potential endophenotypes has clear intuitive appeal but I think interpretation of the findings is hampered by a couple of important conceptual issues. First, as you would expect from what I have already said, I don’t think we can expect to find endophenotypes for a diagnostic group as such. Rather we might expect them to relate to specific subcomponents of the syndrome (symptoms, groups of symptoms etc).

Second, the assumption that a putative endophenotype lies on the disease pathway (ie is intermediate between say gene and clinical phenotype) has to be proved and cannot just be assumed. For example there has been a lot of work on cognitive dysfunction and brain imaging in psychiatry and widespread abnormalities have been reported. But it cannot be assumed that an individual cognitive or imaging phenotype lies on the pathway to a particular clinical disorder or component of the disorder. This has to be proven either through an intervention study in humans or model systems (both currently challenging), or statistically, which requires much larger studies than are usually undertaken. I think that many of the findings from imaging and cognition studies will turn out to be part of the broad phenotype resulting from whatever brain dysfunction is present and not on the causal pathway to psychiatric disorder.

Using the tools of biological psychiatry you have come to a conclusion often associated with psychiatry’s critics (that the diagnostic categories do not represent specific disorders). What reactions have you encountered from mainstream psychiatry?

I have found that most psychiatrists working at the front line are sympathetic. In fact psychiatrists already treat symptoms rather than diagnoses. For example they will consider prescribing an antipsychotic if someone is psychotic regardless of whether the diagnosis is schizophrenia or bipolar disorder. They also recognize that many patients don’t fall neatly into current categories. For example many patients have symptoms of both schizophrenia and bipolar disorder sometimes at the same time and sometimes at different time points. Also patients who fulfill diagnostic criteria for schizophrenia in adulthood often have histories of childhood diagnoses such as ADHD or autistic spectrum.

The inertia comes in part from the way in which services are structured. In particular the distinction between child and adult services has many justifications but it leads to patients with long-term problems being transferred to a new team at a vulnerable age, receiving different care and sometimes a change in diagnosis. Many of us now feel that we should develop services that span late childhood and early adulthood to ensure continuity over this important period. There are also international differences. So in the US mood disorders (including bipolar) are often treated by different doctors in different clinics to schizophrenia.

There is also a justifiable unwillingness to discard the current system until there is strong evidence for a better approach. The inclusion of dimensional measures in DSM5 reflects the acceptance of the psychiatric establishment that change is needed and acknowledges the likely direction of travel. I think that psychiatry’s acknowledgment of its diagnostic shortcomings is a sign of its maturity. Psychiatric disorders are the most complex in medicine and some of the most disabling. We have treatments that help some of the people some of the time and we need to target these to the right people at the right time. By acknowledging the shortcomings of our current diagnostic categories we are recognizing the need to treat patients as individuals and the fact that the outcome of psychiatric disorders is highly variable.

The history of the birth of neuroculture

My recent Observer piece examined how neuroscience has saturated popular culture but the story of how we found ourselves living in a ‘neuroculture’ is itself quite fascinating.

Everyday brain concepts have bubbled up from their scientific roots and integrated themselves into popular consciousness over several decades. Neuroscience itself is actually quite new. Although the brain, behaviour and the nervous system have been studied for millennia, the concept of a dedicated ‘neuroscience’ that attempts to understand the link between the brain, mind and behaviour only emerged in the 1960s and the term itself was only coined in 1962. Since then several powerful social currents have propelled this nascent science into the collective imagination.

The sixties were a crucial decade for the idea that the brain could be the gateway to the self. Counter-culture devotees, although enthusiastic users of mind-altering drugs, were more interested in explaining the effects in terms of social changes than neurological ones. In contrast, pharmaceutical companies had discovered the first useful psychiatric drugs only a few years before and they began to plough millions both into divining the neurochemistry of experience and into massive marketing campaigns that linked brain functions to the psyche.

Drug marketing executives targeted two main audiences. Asylum psychiatrists dealt with institutionalised chronic patients and the adverts were largely pitched in terms of management and control, but for office-based psychiatrists, who mainly used psychotherapy to treat their patients, the spin was different. The new medications were sold as having specific psychological effects that could be integrated into a Freudian understanding of the self. According to the marketing, psychoactive chemicals could break down defences, reduce neurotic anxiety and resolve intra-psychic conflict.

In the following years, as neuroscience became prominent and psychoanalysis waned, pharmaceutical companies realised they had to sell theories to make their drugs marketable. The theories couldn’t be the messy ideas of actual science, however; they needed to be straightforward stories of how specific neurotransmitters were tied to simple psychological concepts, not least because psychiatric medication was now largely prescribed by family doctors. Low serotonin leads to depression, too much dopamine causes madness. The fact these theories were wrong was irrelevant; they just needed to be reason enough to prescribe the advertised pill. The Prozac generation was sold and the pharmacology of self became dinner table conversation.

Although not common knowledge at the time, the sixties also saw the rise of neuroscience as a military objective. Rattled by Korean War propaganda coups where American soldiers renounced capitalism and defected to North Korea, the US started the now notorious MKULTRA research programme. It aimed to understand communist ‘brain washing’ in the service of mastering behavioural control for the benefit of the United States.

Many of the leading psychologists and psychiatrists of the time were on the payroll and much of the military top brass was involved. As a result, the idea that specific aspects of the self could be selectively manipulated through the brain became common among the military elite. When the two-decade project was revealed in the pages of The New York Times and later investigated by a 1975 Congressional committee, the research and the thinking behind it made headline news around the world.

Mainstream neuroscience also became a source of fascination due to discoveries that genuinely challenged our understanding of the self and the development of technologies to visualise the brain. As psychologists became interested in studying patients with brain injury it became increasingly clear that the mind seemed to break down in specific patterns depending on how the brain was damaged, suggesting the intriguing possibility of an inherent structure to the mind. The fact that brain damage can cause someone to believe that a body part is not their own, a condition known as somatoparaphrenia, suggests body perception and body ownership are handled separately in the brain. The self was breaking down along fault lines we never knew existed and a new generation of scientist-writers like Oliver Sacks became our guides.

The rise of functional neuroimaging in the eighties and nineties allowed scientists to see a fuzzy outline of brain activity in healthy individuals as they undertook recognisable tasks. The fact that these brightly coloured brain scans were immensely media friendly and seemingly easy to understand (mostly, misleadingly so) made neuroscience appear accessible to anyone. But it wasn’t solely the curiosity of science journalists that propelled these discoveries into the public eye. In 1990 President George H. W. Bush launched the Decade of the Brain, a massive project “to enhance public awareness of the benefits to be derived from brain research”. A ten-year programme of events aimed at both the public and scientists followed that sealed the position of neuroscience in popular discourse.

These various cultural threads began weaving a common discourse through the medical, political and popular classes that closely identified the self with brain activity and which suggested that our core humanity could be understood and potentially altered at the neurobiological level.

These cultural forces that underlie our ‘neuroculture’ are being increasingly mapped out by sociologists and historians. One of the best sources is ‘The birth of the neuromolecular gaze’ by Joelle Abi-Rached and Nikolas Rose. Sadly, it’s a locked article, although a copy has mysteriously appeared online.

However, some excellent work is also being done by Fernando Vidal, who looks at how we understand ourselves through new scientific ‘self’ disciplines, and by Davi Johnson Thornton, who studies how neuroscience is being communicated through popular culture.

Link to ‘The birth of the neuromolecular gaze’.

The essence of intelligence is feedback

Here’s last week’s BBC Future column. The original is here, where it was called “Why our brains love feedback”. I was inspired to write it by a meeting with artist Tim Lewis, which happened as part of a project I’m involved with: Furnace Park, which is seeing a piece of reclaimed land in an old industrial area of Sheffield transformed into a public space by the University.

A meeting with an artist gets Tom Stafford thinking about the essence of intelligence. Our ability to grasp, process and respond to information about the world allows us to follow a purpose. In some ways, it’s what makes us, us.

In Tim Lewis’s world, bizarre kinetic sculptures move, flap wings, draw and even walk around. The British artist creates mechanical animals and animal machines – like Pony, a robotic ostrich with an arm for a neck and a poised hand for a head – that creak into life in a way that can seem unsettling, as if they have a strange, if awkward, life of their own. His latest creations are able to respond to the environment, and it makes me ponder the essence of intelligence – in some ways revealing what makes us, us.

I met Tim on a cold Friday afternoon to talk about his work, and while talking about the cogs and gears he uses to make his artwork move, he made a remark that made me stop in my tracks. The funny thing is, he said, all of the technology existed to make machines like this in the sixteenth century – the thing that stopped them wasn’t the technical know-how, it was that they lacked the right model of the mind.

Jetsam 2012, by Tim Lewis (Courtesy: Tim Lewis)

What model of the mind do you need to create a device like Tim’s Jetsam, a large wire mesh Kiwi-like creature that forages around its cage for pieces of a nest to build? The intelligence in this creation isn’t in the precision of the craftwork (although it is precise), or in the faithfulness to the kind of movements seen in nature (although it is faithful). The intelligence is in how it responds to the placing of the sticks. It isn’t programmed in advance; it identifies where each piece is and where it needs to go.

This gives Jetsam the hallmark of intelligence – flexibility. If the environment changes, say when the sticks are re-scattered at random, it can still adapt and find the materials to build its nest. Rather than a brain giving instructions such as “Do this”, feedback allows instructions such as “If this, do that; if that, do the other”. Crucially, feedback allows a machine to follow a purpose – if the goal changes, the machine can adapt.

It’s this quality that the sixteenth century clockwork models lacked, and one that we as humans almost take for granted. We grasp and process information about the world in many forms, including sights, smells or sounds. We may give these information sources different names, but in some sense, these are essentially the same stuff.

Information control

Cybernetics is the name given to the study of feedback, and systems that use feedback, in all their forms. The term comes from the Greek word for “to steer”, and inspiration for some of the early work on cybernetics sprang from automatic guiding systems developed during World War II for guns or radar antennae. Around the middle of the twentieth century cybernetics became an intellectual movement across many different disciplines. It created a common language that allowed engineers to talk with psychologists, or ecologists to talk to mathematicians, about living organisms from the viewpoint of information control systems.

A key message of cybernetics is that you can’t control something unless you have feedback – and that means measurement of the outcomes. You can’t hit a moving target unless you get feedback on changes to its movement, just as you can’t tell if a drug is a cure unless you get feedback on how many more people recover when they are given it. The flip side of this dictum is the promise that with feedback, you can control anything. The human brain seems to be the arch embodiment of this cybernetic principle. With the right feedback, individuals have been known to control things as unlikely as their own heart rate, or learn to shrink and expand their pupils at will. It even seems possible to control the firing of individual brain cells.
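The principle can be sketched in a few lines of code. This is an illustrative toy of my own, not anything from the column: a proportional controller nudging a room temperature towards a target, where the “measurement of the outcomes” is simply the current error.

```python
# A toy closed-loop controller illustrating the cybernetic principle:
# measure the outcome, compare it with the goal, and correct.
# The thermostat scenario and the gain value are illustrative assumptions.
def feedback_step(current, target, gain=0.5):
    """Proportional control: adjust in proportion to the measured error."""
    error = target - current       # feedback: measurement of the outcome
    return current + gain * error  # correction towards the goal

temperature = 10.0
for _ in range(20):
    temperature = feedback_step(temperature, target=21.0)
print(round(temperature, 2))  # prints 21.0
```

Crucially, if the target changes mid-run the same loop adapts without being reprogrammed, which is exactly the flexibility an open-loop clockwork mechanism lacks.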

But enhanced feedback methods can accelerate learning about more mundane behaviours. For example, if you are learning to take basketball shots, augmented feedback in the form of “You were 3 inches off to the left” can help you learn faster and reach a higher skill level quicker. Perhaps the most powerful example of an augmented feedback loop is the development of writing, which allowed us to take language and experiences and make them permanent, solidifying them against the ravages of time, space and memory.

Thanks to feedback we can become more than simple programs with simple reflexes, and develop more complex responses to the environment. Feedback allows animals like us to follow a purpose. Tim Lewis’s mechanical bird might seem simple, but in terms of intelligence it has more in common with us than with nearly all other machines that humans have built. Engines or clocks might be incredibly sophisticated, but until they are able to gather their own data about the environment they remain trapped in fixed patterns.

Feedback loops, on the other hand, beginning with the senses but extending out across time and many individuals, allow us to self-construct, letting us travel to places we don’t have the instructions for beforehand, and letting us build on the history of our actions. In this way humanity pulls itself up by its own bootstraps.

The Master and His Emissary

I’ve been struggling to understand Iain McGilchrist’s argument about the two hemispheres of the brain, as presented in his book “The Master and His Emissary” [1]. It’s an argument that takes you from neuroanatomy, through behavioural science to cultural studies [2]. The book is crammed with fascinating evidential trees, but I left it without a clear understanding of the overall wood. Watching this RSA Animate helped.

Basically, I think McGilchrist is attempting a neuroscientific rehabilitation of an essentially mystical idea: that the map is not the territory, and that ends matter, not just means [3]. Here’s a tabulation of functions and areas of focus that McGilchrist claims for the two hemispheres:

Left | Right
Representation | Perception
The Abstract | The Concrete
Narrow focus | Broad focus
Language | Embodiment
Manipulation | Experience (?)
Parts | Wholes
Machines | Life
The Static | The Changing
Focus on the known | Alertness for the novel
Consistency, familiarity, prediction | Contradiction, novelty, surprise
A closed knowledge system | An open knowledge system
(Urge after) Consistency | (Urge after) Completeness
The Known | The Unknown, The ineffable
The explicit | The implicit
Generalisation | Individuality/uniqueness
Particulars | Context

A key idea – which features in the RSA Animate – is that of a ‘necessary distance’ from the world. By experiencing yourself as separate (but not totally detached) you are able to empathise with people, manipulate tools, reason with symbols and so on. But, of course, there’s always the risk that you end up valuing the tools for their own sake, or believing in the symbol system you have created to understand the world.

From a cognitive neuroscience point of view, this is fair enough, by which I mean that if you are going to look into the (vast) literature on hemispheric specialisation and make some summary claims, as McGilchrist does, then these sorts of claims are reasonable. You can enjoy Michael Gazzaniga, one of the grand-daddies of split-brain studies, summarising his own perspective – which isn’t that discordant – here [4].

From this foundation, McGilchrist goes on to diagnose a historical movement in our culture away from a balanced way of thinking and towards a ‘left brain’ dominated way of thinking. This, to me, also seems fair enough. Modernity does seem characterised by the ascendance of both instrumentalism and bureaucracy, both ‘leftish’ values in the McGilchristian framework.

It is worth noting that dual-systems theories, of which this is one, are perennially popular. McGilchrist is careful and explicit in rejecting the popular Reason vs Emotion distinction that has come to be associated with the two hemispheres. In the RSA report Divided Brain, Divided World, he briefly discusses how his theory relates to the automatic–deliberative distinction, as set out (for example) by Daniel Kahneman in his book Thinking, Fast and Slow. His answer, in short, is that that distinction is orthogonal to the one he’s making; i.e. both hemispheres do automatic and controlled processing.

I was turned on to the book by Helen Mort, who writes a great blog about neuroscience and poetry which you can check out here: poetryonthebrain.blogspot.ca/. If you’re interested in reading more about psychology, divided selves and cultural shifts I recommend Timothy Wilson’s “Strangers to Ourselves” and Walter Ong’s “Orality and Literacy”.


[1] If you buy the paperback they’ve slimmed it down, at least in some editions, by leaving out the reference list at the end. Very frustrating.

[2] Fans of grand theories of hemispheric functioning and their relation to cultural evolution should make sure to check out Julian Jaynes’ The Origin of Consciousness in the Breakdown of the Bicameral Mind. Weirdly, McGilchrist hardly references this book (noting merely that he is saying something completely different).

[3] And when I use the term ‘mystical’, that is a good thing, not a denigration.

[4] Gazzaniga, M. (2002). The split brain revisited. Scientific American, Special Editions: The Hidden Mind.

Emotions are included

New Republic has an interesting piece on how corporations enforce ‘emotional labour’ in their workforce – checking that workers are being sufficiently passionate about their work and caring to their customers.

It focuses on the UK sandwich chain Pret who send a mystery shopper to each outlet weekly and “If the employee who rings up the sale is appropriately ebullient, then everyone in the shop gets a bonus. If not, nobody does.”

The concept of ‘emotional labour’ was invented by sociologist Arlie Hochschild, who used it to describe how some professions require people to present themselves as expressing certain emotions regardless of how they feel.

The idea is that the waiter who smiles and tells you to ‘have a nice day’ doesn’t really feel happy to see you and doesn’t particularly care how your day will go, but he’s asked to present as if he does anyway.

The idea has now moved on and this particular example is considered ‘surface acting’ or ‘surface emotional labour’ while ‘deep acting’ or ‘deep emotional labour’ is where the person genuinely feels the emotions. A nurse, for example, is required to be genuinely caring during his or her job.

‘Surface emotional labour’ is known to be particularly difficult when it conflicts too much with what you really feel. This ’emotional dissonance’ leads to burnout, low mood and poor job satisfaction. In contrast, ‘deep emotional labour’ is linked to higher job satisfaction.

The New Republic article links to a deleted but still archived list of ‘Pret behaviours’ written by the company to state what is expected of the employees.

Apart from some classic corporate doublethink (‘Don’t want to see: Uses jargon inappropriately; Pret perfect: Communicates upwards honestly’) you can see how the company is trying to shift their employees from doing ‘surface emotional labour’ to ‘deep emotional labour’.

For example:

  • Don’t want to see: Does things only for show
  • Want to see: Is enthusiastic
  • Pret perfect! Loves food

Cynics would suggest this is a form of corporate indoctrination but you could also see it as part of a drive for employee well-being. You say tomato, I say “smell that Sir – wonderful isn’t it? Fresh tomatoes from the hills of Italy”.

    Those of a political bent might notice an echo of Marx’s theory of alienation which suggests that capitalism necessarily turns workers into mechanistic processes that alienate them from their own humanity.

    However, the concept of ‘deep emotional labour’ is really where the approach can start becoming unhelpful as it has the capacity to denigrate genuine compassion as ‘required labour’. I doubt many nurses go into their profession intending to ‘monetize their emotions’ or feel they have been ‘alienated’ from their compassion.

And as armies are loath to admit, soldiers serve for their country but fight for their platoon mates. Is this really a form of ‘deep emotional labour’ or is it just another job where emotions are central?

    Link to New Republic piece ‘Labor of Love’.

    A brain of warring neurons

A fascinating talk from philosopher of mind Daniel Dennett, in which he revises his earlier claim that neurons can be thought of as transistors in a computational machine that produces the mind.

    This section is particularly striking:

    The question is, what happens to your ideas about computational architecture when you think of individual neurons not as dutiful slaves or as simple machines but as agents that have to be kept in line and that have to be properly rewarded and that can form coalitions and cabals and organizations and alliances? This vision of the brain as a sort of social arena of politically warring forces seems like sort of an amusing fantasy at first, but is now becoming something that I take more and more seriously, and it’s fed by a lot of different currents.

    The complete talk is over at Edge.

    Link to Dennett talk at Edge.

    Darwin’s asylum

    Shrewsbury School is one of the oldest public schools in England and it makes much of being the institution that schooled Charles Darwin and introduced him to science.

    While the famous naturalist was certainly a pupil there he probably never set foot inside the building that the famous school now occupies because during Darwin’s time the building was Kingsland Lunatic Asylum.


As the historian L.D. Smith noted, the Kingsland Asylum was unusual in its day. Rather than create a separate institution for ‘pauper lunatics’ – as was common at the time – the authorities in the county of Shropshire decided to license the Shrewsbury ‘House of Industry’ to operate as a private asylum at the same time.

The combined workhouse and asylum opened in 1784 to accommodate paupers and cases of “lunacy”, “sickness” and “single women in a state of pregnancy”.

    By 1844 the Kingsland Asylum contained nearly 90 residents who lived under a tough regime:

    Payment of one-sixth part of their week’s work is made to all except in cases of misconduct, and punishments are given to all who profanely curse or swear, who appear to be in liquor, who are refractory or disobedient to the reasonable orders of the steward or matrons, who pretend sickness, make excuse to avoid working, destroy or spoil material or implements, or are guilty of lewd, immoral or disorderly behaviour.

But it’s not wholly inappropriate that Darwin has become posthumously linked to an asylum building, as he had a powerful, if fraught, relationship with psychiatry and mental illness.

Darwin reportedly showed ‘a personal interest in the plight of the mentally ill and an astute empathy for psychiatric patients’ but also espoused a view of madness as a form of degeneration that was enthusiastically adopted by eugenicists.

Thankfully, this strain of Darwinian influence has long since died out, but both evolution and genetics remain important foundations of modern cognitive science, although the role of evolutionary psychology in explaining mental illness remains controversial.

Curiously, Darwin himself suffered for most of his life from poor health that has never been fully explained, but which clearly had many aspects that would be diagnosed as psychiatric disorders today.

    So I quite like the fact that Darwin’s picture is proudly displayed inside an old asylum. It’s an ambiguous tribute and reminds us of his own ambivalent relationship with the unsettled mind.

    BBC Column: political genes

Here’s my BBC Future column from last week. The original is here. The story here isn’t just about politics, although that’s an important example of capture by genetic reductionists. The real moral is about how the things that we measure are built into our brains by evolution: usually they aren’t written in directly, but arise as emergent outcomes.

    There’s growing evidence to suggest that our political views can be inherited. But before we decide to ditch the ballot box for a DNA test, Tom Stafford explains why knowing our genes doesn’t automatically reveal how our minds work.

There are many factors that shape and influence our political views: our upbringing, career, perhaps our friends and partners. But for a few years now there’s been a growing body of evidence to suggest that there could be a more fundamental factor behind our choices: political views could be influenced by our genes.

The idea that political views have a genetic component is now widely accepted – or at least widely accepted enough to become a field of study with its own name: genopolitics. This began with a pivotal study, which showed that identical twins shared more similar political opinions than fraternal twins. It suggested that political opinion isn’t just influenced by dinner table conversation (which both kinds of twins share), but also by parents’ genes (of which identical twins share more than fraternal twins do). The strongest finding from this field is that the position people occupy on a scale from liberal to conservative is heritable. The finding is surprisingly strong, allowing us to use genetic information to predict variations in political opinion on this scale more reliably than we can use it to predict, say, longevity or alcoholism.

    Does this mean we can give up on elections soon, and just have people send in their saliva samples? Not quite, and this highlights a more general issue with regards to seeking genetic roots behind every aspect of our minds and bodies.

    Since we first saw the map of the human genome over ten years ago, it might have seemed like we were poised to decode everything about human life. And through military-grade statistics and massive studies of how traits are shared between relatives, biologists are finding more and more genetic markers for our appearance, health and our personalities.

But there’s a problem – there simply isn’t enough information in the human genome to tell us everything. An individual human has only around 20,000 genes, slightly fewer than wild rice has. This means there is about the same amount of information in your DNA as there is in eight tracks on your mp3 player. What forms the rest of your body and behaviour is the result of a complex unfolding of interactions among your genes, the proteins they create, and the environment.

    In other words, when we talk about genes predicting political opinion, it doesn’t mean we can find a gene for voting behaviour – nor one for something like dyslexia or any other behaviour, for that matter. Leaving aside the fact that the studies measured “political beliefs” using an extremely simple scale, one that will give people with very different beliefs the same score, let’s focus on what it really means to say that genes can predict scoring on this scale.

    Getting emotional

    Obviously there isn’t a gene controlling how people answer questions about their political belief. That would be ridiculous, and require us to assume that somewhere, lurking in the genome, was a gene that lay dormant for millions of years until political scientists invented questionnaire studies. Extremely unlikely.

    But let’s not stop there. It isn’t really any more plausible to imagine a gene for voting for liberal rather than conservative political candidates. How could such a gene evolve before the invention of democracy? What would it do before voting became a common behaviour?

    The limited amount of information in the genome means that it will be rare to talk of “genes for X”, where X is a specific, complex outcome. Yes, some simple traits – like eye colour – are directly controlled by a small number of genes. But most things we’re interested in measuring about everyday life – for instance, political opinions, other personality traits or common health conditions – have no sole genetic cause. The strength of the link between genetics and the liberal-conservative scale suggests that something more fundamental is being influenced by the genes, something that in turn influences political beliefs.

    One candidate could be brain systems controlling our emotional responses. For instance, a study showed that American volunteers who started to sweat most when they heard a sudden noise were also more likely to support capital punishment and the Iraq War. This implies that people whose basic emotional responses to threats are more pronounced end up developing a constellation of more right-wing political opinions. Another study, this time in Britain, showed differences in brain structure between liberals and conservatives – with the amygdala, a part of the brain that learns emotional responses, being larger in conservatives. Again, this suggests that differences in political beliefs might arise from differences in emotional processes.

But notice that there isn’t any suggestion that the political opinions are directly controlled by biology. Rather, the political opinions are believed to develop differently in people with different basic biology. Something like the size of a particular brain area is influenced by our genes, but the pathway from our DNA to an apparently simple variation in a brain region is one with many twists, turns and opportunities for other genes and accidents of history to intervene.

    So the idea that genes can have some influence on political views shouldn’t be shocking – it would be weird if there wasn’t some form of genetic influence. But rather than being the end of the story, it just deepens the mystery of how our biology and our ideas interact.

    The grief problem

I’ve got an article in The Observer about the sad history of how psychologists have misunderstood grief and why it turns out to be much more individual than traditional theories have suggested.

    As well as the individual variations, it also riffs on the massive diversity of cultural grief and mourning practices.

    At the beginning of Nicole Kidman’s 2008 film Australia, the audience is shown a warning. “Exercise caution when watching this film,” it says, “as it may contain images or voices of deceased persons.” The notice, perplexing for most viewers, was for the benefit of Aboriginal Australians, who may have a taboo against naming or encountering representations of the dead.

    The taboo has spiritual roots relating to not disturbing spirits of the departed but anthropologist Katie Glaskin describes how the naming taboo “serves to make people ‘acutely aware’ of the person whose name is being avoided”. As a form of remembering through non-remembrance, it is a psychological mirror image of more familiar traditions where creating and cherishing a representation of the deceased is considered necessary for healthy mourning. This underlines the fact that mourning can take place in a radically different way, based on a thoroughly different understanding of death, highlighting how any claims to a universal “psychology of grief” pale in the face of human diversity.

    The article has many more examples and we’re now at a stage where the idea that we go through specific ‘stages’ of grief is untenable scientifically – but lives on due to its powerful grip on society.

    This is most worrying because it has been used to pathologise people who don’t seem to be grieving ‘appropriately’, branding them as ‘in denial’ when really they’re just dealing with things in their own way.

    Link to article in The Observer.

    Advances in artificial intelligence: deep learning

    If you want to keep up with advances in artificial intelligence, the New York Times has an essential article on a recent step forward called deep learning.

    There is a rule of thumb for following how AI is progressing: keep track of what Geoffrey Hinton is doing.

    Much of the current science of artificial neural networks and machine learning stems from his work or work he has done with collaborators.

    The New York Times piece riffs on the fact that Hinton and his team just won a competition to design software to help find molecules that are most likely to be good candidates for new drugs.

    Hinton’s team entered late, their software didn’t include a big detailed database of prior knowledge, and they easily won by applying deep learning methods.

    To understand the advance you need to know a little about how modern AI works.

Most modern AI uses abstract statistical representations. For example, a face recognition system will not use human-familiar concepts like ‘mouth’, ‘nose’ and ‘eyes’ but statistical properties derived from the image that may bear no relation to how we talk about faces.

    The innovation of deep learning is that it not only arranges these properties into hierarchies – with properties and sub-properties – but it works out how many levels of hierarchy best fit the data.
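
The ‘properties of properties’ idea can be illustrated with a toy forward pass: each layer re-describes the previous layer’s output, so later layers respond to increasingly abstract statistical properties of the input. This is only an illustrative sketch in NumPy, with random weights and no training – nothing like Hinton’s actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    """One level of the hierarchy: a linear map plus a nonlinearity.

    The nonlinearity matters -- without it, stacked layers would
    collapse into a single linear transform, and no genuine
    hierarchy of properties could form.
    """
    return np.maximum(0, x @ w)  # ReLU activation

# A 64-pixel 'image' passed through three levels of description:
image = rng.random(64)
w1 = rng.standard_normal((64, 32))  # pixels -> low-level properties
w2 = rng.standard_normal((32, 16))  # properties -> sub-properties
w3 = rng.standard_normal((16, 8))   # sub-properties -> high-level code

features = layer(layer(layer(image, w1), w2), w3)
```

The part deep learning automates – choosing how many levels, and learning weights so that each level captures useful structure – is exactly what this sketch leaves out.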

    If you’re a machine learning aficionado Hinton described how they won the competition in a recent interview but he also puts all his scientific papers online if you want the bare metal of the science.

Either way, while the NYT piece doesn’t go into how the new approach works, it nicely captures its implications for how AI is being applied.

    And as many net applications now rely on communication with the cloud – think Siri or Google Maps – advances in artificial intelligence very quickly have an impact on our day-to-day tools.

    Link to NYT on deep learning AI (via @hpashler)