You are the last piece in the puzzle

The Economist has an excellent article that discusses the increasingly diverse ways in which information from your social network – drawn from services like Facebook, or from telephone calls or payment patterns – is being used to obtain personal information about you.

This is not information which you have explicitly stated or included, but which can be found out or ‘mined’ from your patterns of behaviour and your connections to other people.

The piece looks at ways in which software, specifically designed for the task, is being increasingly deployed by companies and security agencies to profile their targets.

Telecoms operators naturally prize mobile-phone subscribers who spend a lot, but some thriftier customers, it turns out, are actually more valuable. Known as “influencers”, these subscribers frequently persuade their friends, family and colleagues to follow them when they switch to a rival operator. The trick, then, is to identify such trendsetting subscribers and keep them on board with special discounts and promotions. People at the top of the office or social pecking order often receive quick callbacks, do not worry about calling other people late at night and tend to get more calls at times when social events are most often organised, such as Friday afternoons. Influential customers also reveal their clout by making long calls, while the calls they receive are generally short.
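The cues described above (quick callbacks, late-night calls, Friday-afternoon traffic, long outgoing calls versus short incoming ones) could, in principle, be combined into a crude influence score. Here is a toy sketch of that kind of scoring – the record fields, the weights and the `influence_score` function are all invented for illustration, not drawn from any real operator's system:

```python
from datetime import datetime

def influence_score(calls, subscriber):
    """Crude heuristic influence score for one subscriber.

    Each call record is a dict with 'caller', 'callee', 'start'
    (a datetime) and 'duration' (seconds). Weights are invented.
    """
    out = [c for c in calls if c["caller"] == subscriber]
    inc = [c for c in calls if c["callee"] == subscriber]
    if not out or not inc:
        return 0.0
    # Influencers make long calls but receive short ones
    avg_out = sum(c["duration"] for c in out) / len(out)
    avg_in = sum(c["duration"] for c in inc) / len(inc)
    # Calls received on Friday afternoons, when events are organised
    friday_pm = sum(
        1 for c in inc
        if c["start"].weekday() == 4 and 12 <= c["start"].hour < 18
    )
    # Late-night outgoing calls (no worries about calling late)
    late = sum(1 for c in out if c["start"].hour >= 22)
    return (avg_out - avg_in) / 60.0 + friday_pm + late
```

A real system would, of course, weight and validate such features against observed churn rather than adding them up naively, but the sketch shows how behavioural cues become a score.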

The piece goes on to explain how such analyses have been used in everything from targeting advertising to tracking down Saddam Hussein.
 

Link to ‘Untangling the social web’.

The labyrinth of Inception

When you have a hammer, everything can look like a nail and people have been banging the shit out of Inception. The sci-fi movie of the year has attracted numerous ‘neuroscience of Inception’ reviews despite the fact that the film has little to say about the brain and is clearly more inspired by the psychological theories of Carl Jung than by neurobiology.

It’s easy to see why the movie has attracted neuroscience fans, including a brain-based review in this week’s Nature. It’s a science fiction film, the dream entry device presumably alters the brain, and director Christopher Nolan’s previous film Memento was carefully drawn from a detailed reading of the science of brain injury and memory loss.

Inception itself, however, contains so little direct reference to the brain (I counted about three lines) that you have to do some pretty flexible interpretation to draw firm parallels with brain science. Perhaps most tellingly, for a film supposedly about neuroscience, the dream entry devices don’t even connect to the brain and nothing is made of how they achieve their interface.

But for those familiar with the theories of Carl Jung, the psychoanalyst and dissenter from Freud’s circle, the film is rich with both implicit and explicit references to his work.

As with all psychoanalysts, Jung was concerned with the subconscious mind and believed that it contains powerful emotional processes that, when malformed or disturbed, can break through and cause immense distress to our conscious lives. To protect us, the subconscious tries to hide these forces behind symbols, which appear, most vividly, in dreams.

This is why Freud called dreams “the royal road to the unconscious” and Jung’s work is also based on this core assumption.

Similarly, in Inception, dreams are a way of accessing the subconscious of the dreamer, to the point where they can be used to steal secrets. This dream invasion work is not easy, of course, primarily because the subconscious mind attempts to defend against invaders (a defense mechanism in psychoanalytic terms) and the dreamspace needs to be explored and interpreted by the invaders to get to the secret itself.

This is not the only challenge, as other people in the dream are projections of the dreamer’s subconscious where, in line with the definition from psychoanalysis, personal feelings are perceived as residing in other people.

In the film, the young architect, Ariadne, is hired to build dreams in the form of mazes, and the labyrinth forms one of the central symbols in the film (the name Ariadne, by the way, comes from the Greek legend where she leads Theseus out of the Minotaur’s labyrinth – Jung referred to being lost in life as ‘losing the Ariadne thread’).

In Jungian psychology the labyrinth is one of the most powerful symbols of the subconscious. In his book ‘Man and His Symbols’, he explains its meaning:

“The maze of strange passages, chambers, and unlocked exits in the cellar recalls the old Egyptian representation of the underworld, which is a well-known symbol of the unconscious with its abilities. It also shows how one is “open” to other influences in one’s unconscious shadow side and how uncanny and alien elements can break in.”

Ariadne is hired because Dom Cobb can no longer create dreams, owing to the fact that the subconscious representation of his ex-wife, who killed herself due to Cobb’s dream work, appears and attempts to violently stop him. Cobb names her his ‘shade’, directly referencing the Jungian concept of the shadow, where we are haunted by the parts of ourselves of which we are most ashamed and which we most try to repress.

While Cobb’s main objective is to get back to his children, his main challenge is to overcome his shadow that causes conflicts in his subconscious. Normally, if you wrote a sentence like that about a film you would be using a Jungian interpretation, but in the case of Inception this is also the literal state of affairs.

This is not the only psychological journey that happens in the film, as Cobb’s journey is paralleled by that of Robert Fischer, the target of the dream invaders. Fischer’s father is dying, leaving both the state of the family corporation and the father-son relationship unresolved.

The situation is a representation of the Arthurian grail legend, the Fisher King. In the tale, the king responsible for protecting the Holy Grail is wounded and his kingdom decays in parallel to his damaged body. The knight Perceval learns he could heal the king and his kingdom by asking the right questions.

Not coincidentally, Jung was intensely interested in the Grail legend throughout his life as he thought it was one of the best representations of the ‘collective unconscious’, where common psychological themes of humanity appear as what he called ‘archetypes’.

His wife, Emma Jung, a psychoanalyst in her own right, wrote a book on the psychological meaning of the legend drawn from Carl Jung’s theories and cited the key theme of the tale to be ‘individuation’ – that is, the healthy development of ourselves as distinct individuals by resolving our relationships with those around us and the conflicts within us.

In Inception, Robert Fischer’s journey ends with him resolving his relationship with his wounded father and saving his ‘kingdom’ by learning that his father had always wanted him to be his own man and not try to be his father – which, as we learn at the end, is at the core of his subconscious. Again, this is not an interpretation; it is the literal truth of the film.

There are lots of other subtle pointers in the film which may or may not be deliberate. Is it a coincidence that the lead character, Dom Cobb, shares a name with Stanley Cobb, the person most responsible for introducing Jungian analysis to the United States? Or that Ariadne gets the job by drawing a mandala-style maze, a symbol that Jung believed was a representation of the unconscious self? Or that Mal’s madness is portrayed as her subconscious breaking through into reality, in line with Jung’s definition?

Regardless of whether these are subtle hints or not, the film is Jungian at its core, and what is most interesting for me is that Nolan is deploying different theories of the mind as themes in his films. While Memento was obviously neuropsychological, Inception is clearly Jungian.
 

Link to Wikipedia Inception page.
Link to more on Jung and Inception.

The class of 77%

A study just published in the British Journal of Psychiatry has found that only 23% of the population are without symptoms of personality disorder.

If you’re not familiar with it, personality disorder is a somewhat controversial diagnosis which essentially classifies people who we might otherwise call ‘extremely difficult’, but to the point where they cause themselves significant life problems.

This new survey used the standard diagnostic criteria, but instead of giving people a “you’ve got it or you haven’t” all-or-nothing diagnosis (given when a certain threshold of symptoms is reached) the researchers totalled up the symptoms to make a sliding scale.
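The difference between the two approaches is easy to sketch. In this illustration the symptom lists and the diagnostic threshold are invented – real personality disorder criteria are far more involved:

```python
def categorical_diagnosis(symptoms, threshold=4):
    """All-or-nothing: diagnosed only if the symptom count reaches
    the threshold (the threshold value here is invented)."""
    return len(symptoms) >= threshold

def dimensional_score(symptoms):
    """Sliding scale: simply total up the symptoms, so sub-threshold
    cases still register as partial scores rather than 'healthy'."""
    return len(symptoms)

# Invented symptom lists for illustration
person_a = ["impulsivity", "unstable relationships"]          # sub-threshold
person_b = ["impulsivity", "unstable relationships",
            "affective instability", "identity disturbance"]  # at threshold
```

On the categorical view person_a is simply 'without disorder'; on the dimensional view they carry a score of 2, which is exactly the kind of sub-threshold group the study found still had worse life outcomes.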

The study found that even those who wouldn’t qualify for a diagnosis but still had some symptoms were more likely to have had a history of running away from home, police contacts, homelessness and sexual abuse and were less likely to be employed.

Of course, what the study could be describing is simply that people who have had a rough time come out the worse for wear.

The question is not so much whether this is a high or low figure, but at what point psychiatry and mental health services should offer assistance.

For many years psychiatry has been suffering from ‘mission creep’ where things previously thought to be unhelpful but normal (e.g. low mood after a divorce, shyness) have become classified and promoted as mental illnesses with the accompanying pharmacological treatment.

At what point we decide that something is a mental illness has become one of the central psychological and cultural questions of the 21st century.
 

Link to summary of study at the British Journal of Psychiatry.

I, Jacques Derrida, Used To Be A 97lb Weakling!

Anthropologist Pascal Boyer has written a wonderfully contrarian essay for Cognition and Culture criticising the “crashingly banal assumptions” behind supposedly radical theories of human nature.

While Boyer is clearly making mischief, his main criticism of the post-modernist idea that human nature is entirely socially constructed is spot on.

While there are clear social influences in how we understand ourselves, the extreme relativism of saying everything is ‘defined by culture’ tends to evaporate when examined too closely.

But on closer inspection, it generally turns out that the initial, amazing, challenging statements in fact disguised crashingly banal assumptions. Suppose you point out to your academic ideologue that, for instance, if maleness and manhood really are completely unrelated… then it is puzzling that an extraordinarily vast number of [socially constructed] “men” happen to be [chromosomal] “males”, and that such a coincidence is spooky. You will probably be told that you did not quite understand the original statement. What it meant was that the meaning of maleness could not be derived from possession of the Y chromosome…

Or if you point out that some forms of insanity occur in many cultures at the same rates, that they trigger highly similar behaviors, are associated with the same genetic predispositions and correlate with similar neuro-functional features, you will be told that you did not understand. What was meant was that the cultural construal of madness was not derived directly from brain dysfunction…

At which point, you might be forgiven to think something like “so that was what all the fuss was about?” and you would be right of course. When push comes to shove, the flamboyant, earth-shattering, romantic, swash-and-buckle assault on our entrenched certainties seems to be, well, a bit of a damp squib.

 

Link to Boyer’s essay ‘There is no such thing as sexual intercourse’.

The psychology of advertising in the Mad Men era

Film-maker Adam Curtis has just posted a fascinating look into how the Madison Avenue advertising agencies of the 1960s first understood and applied psychology to marketing.

As well as his account of these early forays into the consumerist mind he also posts some wonderful archive footage of the ad agencies’ training and discussions and some never before broadcast interview footage he recorded himself.

You may know Curtis from his numerous sociological documentaries, most notably The Century of the Self, which is a brilliantly made four-part series which puts forward a distinct and defendable argument about how our understanding of the mind changed through the 20th Century.

Part of this covered how advertisers began to take advantage of the increased focus on unconscious motivations and individuality to promote the idea of the ‘self’ as consumer, and he expands on that in his BBC article:

The story begins at the end of the 1950s. There were two distinct camps on Madison Avenue. And they loathed each other.

One group was led by Rosser Reeves who ran the Ted Bates agency. Reeves had invented the idea of the USP – the unique selling point. You found a phrase that summed up your product and you repeated it millions and millions of times on all media so it “penetrated” the minds of the consumers.

His favourite was Lucky Strike’s “It’s Toasted”

He laid this all out, with diagrams, in his “bible” – called Reality in Advertising.

The other camp were known as “the depth boys”. They believed the opposite. That you penetrated the consumer’s mind by using all sorts of subtle psychological techniques to find out what they really wanted. These were feelings the consumer often didn’t even consciously realise themselves.

Both the video and the writing are really worth checking out for a revealing insight into how different ideas about the mind played out in the post-war consumerist dream.
 

Link to ‘Experiments in the Laboratory of Consumerism’ (via MeFi).

An epidemic of ghosts

Mozambique is being ravaged by an epidemic of spirit possession. These ‘outbreaks’ have traditionally been dismissed as superstition by commentators from afar, but it is becoming increasingly recognised that different cultures have different ways of expressing mental distress and social anguish – to the point where a team of medical scientists have just published the first large scale epidemiological study on spirit possession and its link to mental and physical illness in post-civil war Mozambique.

In this form of possession, the person feels as if their normal identity has been ‘pushed aside’ by a ‘spirit’, who takes control of their body and typically communicates with other people. Afterwards, the person usually has no memory of the episode.

In Western medicine, this is usually understood through a psychological process called dissociation – where normally integrated mental processes become disconnected. It’s like the psychological equivalent of when two teams in a company can’t communicate very well, so they start operating independently rather than as an integrated organisation.

In many societies around the world the concept of spirit possession plays an important role in understanding and explaining both the forces of nature and the psychology of individuals, to the point where it has both positive and negative effects.

Ethnographic studies have found that, during possession, ‘spirits’ may offer opinions or solutions to moral crises and may protect the individual from trauma and despair during times of violence.

However, negative possession states can cause problems or illnesses that are thought to be triggered by the harmful spirits, which can include anything from fertility problems, to family break-up, to physical aches and pains.

As times change, new spirits appear and old ones fade, each having different effects, benefits or risks. One legacy of the Mozambique war was the emergence of a new type of spirit that had a particular interest in the personal and social legacy of the conflict.

In the late twentieth century, as a result of the Mozambican protracted civil war, gamba spirits emerged. They became the principal harmful spirits and source of diagnosis. Gamba refers to the spirit of male soldiers who died in the war. Possession by gamba is a trauma of a double derivation. First, the host and patrikin [family on the father’s side] were severely exposed to warfare that led to vulnerability; and, second, to address that war-related vulnerability, the host’s patrikin were alleged to have perpetrated serious wrongdoings.

The person possessed by a gamba spirit publicly re-enacts the events of war, sometimes violently, and through the possessed person, the spirit demands public acknowledgement of the injustices they suffered. Spirits who are not appeased continue to torment the possessed person to the point of serious illness.

The study, led by medical anthropologist Victor Igreja and published in Social Science and Medicine, surveyed the extent of possession in two districts in central Mozambique and examined how it was linked to trauma and physical health.

They used local criteria for the definition of spirit possession and validated interviews to assess trauma – such as the Harvard Trauma Questionnaire – developed to be used across cultures.

Households were selected at random, and out of 941 people evaluated, 175 (18.6%) reported some form of spirit possession while 5.6% had experienced multiple simultaneous spirit possession.

People who had been possessed were more likely to be women and to have symptoms of physical illness, but less likely to have had a baby. Those who went into trances as part of their possession were more likely to be experiencing psychological trauma, to have fertility problems, to have had a child die during their life and to suffer nightmares.

One particularly striking finding was that the severity of psychological and physical symptoms was directly related to the number of spirits that a person had been possessed by, with more serious problems being associated with greater numbers of intruding spirits.

While the effects of spirit possession can be seen to have some relation to the Western diagnoses of post-traumatic stress disorder (PTSD), depression or anxiety, there are also many distinct features that reflect a more local concept of how a distressed person can express their mental anguish.

For people familiar with diagnoses drawn from the DSM and World Health Organisation ICD system, it is tempting to think that established descriptions are the ‘real’ disorders while cases of spirit possession are simply a local interpretation of them.

What is becoming increasingly clear, however, is that both our personal experience of psychological distress and how we express that to others, is shaped by our culture. In other words, diagnoses such as PTSD may be as much wedded to a particular culture as spirit possession.

Sadly, this new study is locked behind a pay-wall, but if you have access to the full thing I recommend giving it a read through as it is a curious application of traditional statistical epidemiology to the ‘diagnosis’ of possession.

The paper demonstrates that spirit possession can be studied scientifically and makes as much sense as studying any other psychiatric problem that is defined by unusual or unhelpful behaviour, such as schizophrenia or panic disorder.
 

Link to PubMed entry for study.
Link to DOI entry for study.

Determined to fail: free will and work success

If you want to predict how well someone might perform in a new job, you might want to enquire about their views on whether we are free to choose our own actions.

A delightful study just published in the journal Social Psychological and Personality Science found that belief in free will predicted job performance better than conscientiousness, belief in influence over life events and a commitment to a ‘Protestant work ethic’ where diligent labour is seen as a benefit in itself.

Here’s the summary from the study’s abstract:

Do philosophic views affect job performance? The authors found that possessing a belief in free will predicted better career attitudes and actual job performance. The effect of free will beliefs on job performance indicators were over and above well-established predictors such as conscientiousness, locus of control, and Protestant work ethic. In Study 1, stronger belief in free will corresponded to more positive attitudes about expected career success. In Study 2, job performance was evaluated objectively and independently by a supervisor. Results indicated that employees who espoused free will beliefs were given better work performance evaluations than those who disbelieve in free will, presumably because belief in free will facilitates exerting control over one’s actions.

 

Link to summary and DOI entry for study (via Brain Hammer).

Falling out of love with e-dating

Marie Claire has a fascinating short interview with psychologist Mark Thompson who was apparently hired by a big name internet dating website to work on ‘scientific matchmaking’ – but recently jumped ship when he became disillusioned with the industry.

Buyer beware: the guy has just written his own book on sex and relationships, although his comments on dating sites don’t seem to directly bear on his book promotion efforts.

Regardless, it’s actually quite refreshing to hear someone give a sensible take on the limits of ‘scientific matchmaking’ as, since it has become popular, science news has regularly been bogged down by poorly disguised PR fluff based on exaggerated findings or dodgy unpublished ‘statistics’.

MC: What made you leave e-dating?

MT: I hated the way we overpromised and underdelivered. Our studies showed that the odds of meeting someone online and dating him more than a month are roughly one in 10. So it’s great that all those people on the TV commercials met their spouses, but they are the exceptions, not the rule. No computer can accurately predict whom you should be with. The function of the math will make vastly more false predictions than accurate ones.

MC: But isn’t blind dating always hit or miss?

MT: Yes, but you don’t have to pay $30 a month to be set up by your friend. And you don’t go in believing that science is behind the match. There’s a different set of expectations. When diet companies show someone who lost a bunch of weight in six weeks, they have to say, “Results not typical.” I think eHarmony and other sites should do the same.

MC: Do you think online dating can be fixed?

MT: It really depends on people’s willingness to come back and tell us why each date didn’t work out so the system could get smarter. It would be like Netflix, which learns from your preferences to make better predictions for you.

Netflix for dates. Actually, it’s not such a bad idea. “If you liked this date, you might also like…” could actually come in useful if you had the hots for the other person, but they weren’t so keen on you. Or even if you’re not in it for the long term.
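A ‘Netflix for dates’ would amount to ordinary collaborative filtering: predict how much you would enjoy a date from the ratings of people whose past ratings resemble yours. A toy sketch of that idea, with made-up users and ratings:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = sqrt(sum(u[i] ** 2 for i in shared))
    norm_v = sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def predict(ratings, user, item):
    """Predict user's rating of item as a similarity-weighted
    average over the other users who rated it."""
    num = den = 0.0
    for other, theirs in ratings.items():
        if other == user or item not in theirs:
            continue
        sim = cosine(ratings[user], theirs)
        num += sim * theirs[item]
        den += sim
    return num / den if den else None

# Invented ratings: how much each user enjoyed each date
ratings = {
    "ann": {"dave": 5, "erik": 1},
    "bea": {"dave": 5, "erik": 1, "finn": 4},
    "cat": {"dave": 1, "erik": 5, "finn": 2},
}
```

Since ‘ann’ rates like ‘bea’ and unlike ‘cat’, the prediction for her unseen date ‘finn’ lands close to bea’s 4 rather than cat’s 2 – which is all Thompson’s “tell us why each date didn’t work out” feedback loop would be feeding.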
 

Link to Marie Claire interview with Mark Thompson (via @DrPetra).

A gut reaction to moral transgressions

The Boston Globe has an excellent article on whether ‘gut feeling’ emotions, particularly disgust, are the unrecognised basis of moral judgements and social customs.

It’s an in-depth feature article that gives a great overview of the idea that social judgements may have an emotional basis, and, more controversially, that this tendency may have developed as part of an evolved aversion to things thought likely to cause infection or disease.

Research has shown that people who are more easily disgusted by bugs are more likely to see gay marriage and abortion as wrong. Putting people in a foul-smelling room makes them stricter judges of a controversial film or of a person who doesn’t return a lost wallet. Washing their hands makes people feel less guilty about their own moral transgressions, and hypnotically priming them to feel disgust reliably induces them to see wrongdoing in utterly innocuous stories.

Today, psychologists and philosophers are piecing these findings together into a theory of disgust’s moral role and the evolutionary forces that determined it: Just as our teeth and tongue first evolved to process food, then were enlisted for complex communication, disgust first arose as an emotional response to ensure that our ancestors steered clear of rancid meat and contagion.

But over time, that response was co-opted by the social brain to help police the boundaries of acceptable behaviour. Today, some psychologists argue, we recoil at the wrong just as we do at the rancid, and when someone says that a politician’s chronic dishonesty makes her sick, she is feeling the same revulsion she might get from a brimming plate of cockroaches.

In psychology, there is lots of interest in people who have a selective problem with certain emotional reactions. ‘Psychopaths’ are widely considered to have a selective lack of empathy, and I often wonder whether there are people who have a selective lack of disgust reactions.

There also seems to be little consideration of how disgust reactions are altered by context. For example, lots of common sexual acts seem quite unpalatable if done outside of a sexual context, despite the fact that this doesn’t change how hygienic they are.

The Boston Globe piece does a great job of covering the science in the area and it’s also worth mentioning that Edge recently posted videos and articles from a recent conference on ‘The New Science of Morality’ that has some great discussion from the leading researchers in the field.

 
Link to Boston Globe on ‘The surprising moral force of disgust’.
Link to Edge archives of the ‘The New Science of Morality’ conference.

Psychology narrowing its own mind

I’ve just discovered a stinging but insightful critique of modern scientific psychology from cognitive scientist Paul Rozin, who accuses the field of being blinded by fancy experimental methods while devaluing the importance of capturing new and interesting phenomena.

The article, available online as a pdf, was published in Perspectives on Psychological Science and takes a careful look at how psychology has become obsessed with its own methods.

Rozin makes a striking comparison to biology, where curiosity-driven studies that simply try and describe a new species, effect or behaviour are widely published, alongside experimental studies that aim to test a specific idea.

In contrast, psychology has become a largely lab-based science where your career is made by running countless variations on an established effect to the point where it is now virtually impossible to publish a descriptive account or case study in psychology or psychiatry.

Consequently, the most lauded studies are usually statistically robust, very well controlled, and largely removed from a context which makes them directly relevant to real world issues and problems.

Rozin outlines seven types of descriptive studies which he thinks are under-valued and notes their importance in the scientific understanding of human thought and behaviour:

‘‘Here’s what happens in the world.’’ This paper consists of raw description, carefully documented, and motivated by what I will call ‘‘informed curiosity’’ (Rozin, 2001). Ethologists do a lot of this, as did Erving Goffman and Darwin. Much of molecular biology takes this form.

‘‘Here is a functional relation between two variables.’’ [Rozin argues that lots of interesting effects occur due to unusual relationships between two things – such as the U-shaped relationship between arousal and performance – that we wouldn’t always see by splitting things up into distinct groups. He argues that reporting these interesting but unexplained relationships is important but neglected]

‘‘Here’s something interesting that no one has noticed, and it is not easily susceptible to explanation by the principles available to us.’’ [Exceptions to established rules, even if described just to be highlighted as interesting]

‘‘Here’s something we haven’t studied, but it looks like it can be subsumed under something we already know.’’ [New things that are probably explained by existing theories – even when the theories were not intended to apply to this new domain. In other words, establishing an analogy between one problem and another]

‘‘Hey, someone did this really interesting study decades ago, and no one seems to have noticed it.’’ This paper would call readers’ attention to something already in the literature that is important and unknown or ignored.

‘‘Everyone assumes Effect X, but is X robust and generalizable?’’ [Taking an established effect and seeing how well it applies to other situations. If it doesn’t replicate very well outside the lab, this tells us something interesting about how fragile and context-sensitive the effect is]

‘‘This is a messy, criticizable experiment reporting something new and interesting.’’ This type of study usually involves an interesting idea, with some admittedly far from conclusive evidence for it. The famous Schachter and Singer (1962) attribution study is an example.

It’s worth reading the piece in full as Rozin gives lots of great examples where new phenomena clearly contribute to our understanding of the mind but are very difficult to publish in the psychology literature, despite similar examples (tool-using crows, coconut-carrying octopuses, spear-throwing chimpanzees) making headlines in top biology journals.

I also recommend a commentary on the piece from Cognition and Culture who nail some of the take-home points.
 

pdf of Rozin’s article.
Link to DOI entry and summary.
Link to coverage from Cognition and Culture.

Down and dirty

Baba Brinkman is a beat dealer and science rhyming pioneer who has just recorded an awesome hip-hop album on evolutionary psychology.

Most importantly, it’s actually a great album. It’s not an attempt at parody or a tribute, it’s an inspired, groove heavy, high production values record with a wonderful lyrical touch.

It’s not for kids: you simply won’t be able to play half the tracks to your high school science class without risking your job because, in classic hip-hop tradition, it’s down and dirty from beginning to end.

But it’s also a brilliant guide to the theories and controversies of evolutionary psychology and covers everything from game theory to twin studies.

You can listen to it online and can download it to your computer and mp3 player, choosing whatever price you want to pay for it.

Link to Baba Brinkman’s The Rap Guide to Human Nature (thanks Mark!)

The scientific method – lego robots edition

At the University of Sheffield we’ve been teaching psychology using lego robots. This isn’t as peculiar as it might sound. You can learn a lot about your theories by trying to build them into a machine or computer programme. But while teaching the course, I discovered that you can also learn a lot about the methods used in experimental psychology by trying them out on robots.

legorobot.jpg

This is one of the lego robots we were using. They are built using a Lego Mindstorms set and inspired by a book by Valentino Braitenberg called ‘Vehicles: Experiments in Synthetic Psychology’.

The robot has a light sensor on each side and a nose-bumper which tells it when it has hit something. A simple brain connects these sensors with two independently powered wheels. Here’s the robot in action:

The surprising thing, and a crucial point of Braitenberg’s book, is that you can get what looks like complex behaviour (in this case line following) from simple rules. All that governs this robot’s behaviour is a positive connection between each light sensor and the wheel on the same side. This makes the robot turn away from lighter floor patches, so in this environment it traces the edge of the pattern. An additional ‘fixed action pattern’ makes it spin around and start in another direction if it bumps into something.
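That rule is small enough to simulate directly. The following is a sketch of a Braitenberg-style vehicle with the same positive same-side connections, not the actual Mindstorms program – the kinematics and parameter values are invented for illustration:

```python
import math

def step(x, y, heading, left_light, right_light,
         dt=0.1, gain=1.0, wheelbase=0.1):
    """One update of a two-wheeled vehicle.

    Each light sensor excites the wheel on the SAME side, so the
    brighter side drives its wheel faster and the vehicle turns
    away from the light (toward darker floor patches).
    """
    v_left = gain * left_light     # left sensor -> left wheel
    v_right = gain * right_light   # right sensor -> right wheel
    v = (v_left + v_right) / 2.0            # forward speed
    omega = (v_right - v_left) / wheelbase  # turning rate
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    heading += omega * dt
    return x, y, heading
```

Run this in a loop against a simulated floor pattern and the "complex" line-following behaviour falls straight out of those two wires – there is nothing else in the brain.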

Many people look at these robots and over-interpret the complexity of their behaviour. You need skepticism and controlled experiments to discover exactly how simple the rules controlling the robot are. However, while I was trying to use the robots to teach this to my class, the robots and the class conspired to teach me something.

In a more advanced class I put a simple learning rule in the robot’s brain so that it could learn to slow down before hitting walls (actually, it is only true that I put the rule in the robot’s brain in the sense that Hitler invaded Poland. In truth I made a grad student program the rule into the robot. Thanks Stuart!).

The task I set the class was simple, I thought: run an experiment to see the robot learn over successive trials. Because I’d programmed the rule into the robot I thought I’d be able to predict the robot’s behaviour. The predicted learning curve of the robot looked like this:

predicted.png

The results from the groups looked like this:

groups.png

Since each robot was identical – same body, same brain, in the way only robots can be – and all the groups were doing the same experiment, I expected to get the same results from each group. No luck there! Some groups got an increase, but for others the line stayed almost flat. For some it rose smoothly; others got wild swings in performance, up and down.

And this got me to thinking. If the results are this variable with experimental subjects which we understand completely – their simple bodies are made of lego for goodness sake! the brains are identical and programmed by us! – how unreliable will results be if you experiment on real people? Noisy humans have bodies and brains which are both vastly more complex than lego robots, and each body and brain is unique. With so many sources of variability between individuals it is amazing that experimental psychologists ever get any results at all.

The moral is that experimental work is hard, really hard. You’d better be sure your experiment reduces sources of variability as much as possible because there will be enough uncontrollable variability without you adding any more.

Fortunately there is a light at the end of the tunnel, in the form of statistics. If you average the different noisy group results you get something a bit more like the underlying pattern I knew to be there:

average.png
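You can see why averaging helps with a quick simulation. The sketch below does not use the class data; it generates twenty made-up noisy learning curves around the same underlying rising pattern and averages them.

```python
import random
import statistics

random.seed(1)  # make the made-up data reproducible

def noisy_learning_curve(n_trials=10, noise=0.3):
    """One 'group': the underlying rising curve plus independent
    trial-to-trial noise (all numbers are illustrative)."""
    return [min(1.0, trial / n_trials + random.gauss(0, noise))
            for trial in range(1, n_trials + 1)]

groups = [noisy_learning_curve() for _ in range(20)]

# Independent noise tends to cancel when you average across groups,
# leaving something much closer to the underlying curve.
averaged = [statistics.mean(g[trial] for g in groups)
            for trial in range(10)]
```

Plot any single group and you may see a flat line or wild swings; plot `averaged` and the rise is hard to miss.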

Trying a simple experiment with the lego robots gave me a new respect for the experimental method, and the difficulty psychologists face when trying to discover the rules underlying the wondrous variety in human behaviour.

Computationally, my dear Watson

The New York Times has an excellent article on IBM’s ‘Watson’ project, an artificial intelligence system designed to answer natural language queries to the point where it can beat humans at Jeopardy! quiz show questions – where contestants are given an answer and have to come up with the question.

Natural language questions are traditionally very difficult for computers because they involve a lot of assumptions. For example, take the question “How many people work in a bank?” To answer the question you need to understand that ‘bank’ refers to a financial institution and not a river bank.

Answering this question needs pre-existing knowledge and, computationally, two main approaches. One is constraint satisfaction, which finds the answer that is the ‘best fit’ to a problem which doesn’t have a mathematically exact solution; the other is a local search algorithm, which indicates when further searching is unlikely to yield a better result – in other words, when to quit computing and give an answer – because you can always crunch more data.
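As a toy illustration of the second idea – quitting when more searching is unlikely to help – here is a sketch of a local search that stops after a run of non-improving samples. This is my own minimal example, not IBM’s algorithm, and the ‘constraints’ here are just keywords a good answer should satisfy.

```python
import random

def local_search(candidates, score, patience=50, seed=0):
    """Toy local search: sample candidate answers, keep the best
    'fit' so far, and quit once `patience` samples in a row fail to
    improve on it -- i.e. once more computing looks unlikely to help.
    (Illustrative only; Watson's real machinery is far richer.)"""
    rng = random.Random(seed)
    best, best_score, stale = None, float("-inf"), 0
    while stale < patience:
        candidate = rng.choice(candidates)
        s = score(candidate)
        if s > best_score:
            best, best_score, stale = candidate, s, 0
        else:
            stale += 1
    return best

# Toy use: score an answer by how many constraint keywords it satisfies.
constraints = ["financial", "institution"]
best = local_search(
    ["river bank", "the financial institution kind of bank"],
    score=lambda c: sum(word in c for word in constraints),
)
```

The `patience` parameter is the crude ‘when to quit’ rule: it trades answer quality against time, which matters when the buzzer is ticking.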

If you’re not familiar with it, the quiz show Jeopardy! is a particularly difficult version of this because it gives contestants answers and they have to provide the correct question: such as “A singer who was touched for the very first time and became the material girl” – the winning contestant would be the first to respond with “Who is Madonna?”

In a major advance for artificial intelligence IBM have developed a system that can beat humans at the quiz. Although the ability to publicly trounce puny humans in quiz shows is not necessarily the greatest contribution to humanity, this is just a way of testing a system which could be deployed to answer unprepared questions based on large datasets.

Watson applies computational linguistics to extract knowledge from text – a technique sometimes known as text mining – and then applies constraint satisfaction and local search algorithms to produce reasonable answers quickly.

This could be very useful for asking questions of large datasets which someone may not have necessarily asked before – such as ‘which drug shows the best promise for treating tuberculosis?’

The article has lots of great insights into the difficulties of artificial intelligence. I particularly liked this section:

To avoid losing money — Watson doesn’t care about the money, obviously; winnings are simply a way for I.B.M. to see how fast and accurately its system is performing — Ferrucci’s team has programmed Watson generally not to buzz until it arrives at an answer with a high confidence level. In this regard, Watson is actually at a disadvantage, because the best “Jeopardy!” players regularly hit the buzzer as soon as it’s possible to do so, even if it’s before they’ve figured out the clue. “Jeopardy!” rules give them five seconds to answer after winning the buzz. So long as they have a good feeling in their gut, they’ll pounce on the buzzer, trusting that in those few extra seconds the answer will pop into their heads. Ferrucci told me that the best human contestants he had brought in to play against Watson were amazingly fast. “They can buzz in 10 milliseconds,” he said, sounding astonished. “Zero milliseconds!”

Buzzing just on a ‘gut feeling’ is an example of what psychologists call ‘metacognition’ or, a little more crudely, ‘thinking about thinking’. More specifically, in this case it’s an example of humans relying on their ‘feeling of knowing’.

‘Feeling of knowing’ is used a little differently in memory and decision making research, but it essentially boils down to the feeling that you know something, without necessarily having to bring the thing to mind. In some ways, it’s similar to when you look at something and decide whether you can lift it or not, without actually having to try and pick it up.

In other words, it’s being able to manage your mental resources based on estimations. This has become one of the core problems of artificial intelligence.
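In computational terms the buzz decision can be caricatured as a confidence threshold: commit to an answer only when your estimate of knowing it is high enough. The function and the 0.85 threshold below are illustrative stand-ins, not IBM’s actual values.

```python
def decide(candidates, threshold=0.85):
    """Pick the best-supported candidate answer, but only 'buzz'
    (commit to answering) when its estimated confidence clears the
    threshold; otherwise stay silent rather than risk a wrong answer.
    Threshold and scores are illustrative, not Watson's real numbers."""
    answer, confidence = max(candidates, key=lambda pair: pair[1])
    return answer if confidence >= threshold else None
```

The hard part, of course, is producing the confidence estimate in the first place – the machine equivalent of the feeling of knowing.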

Computation is easy. Meta-computation, it turns out, is a bitch.

Link to NYT piece ‘What Is I.B.M.’s Watson?’

Neuroplasticity is a dirty word

Photo by Flickr user jamelah.

The latest refrain in popular science is that ‘your brain is plastic’, that experience has the potential to ‘rewire’ your brain, and that many previous mysteries in cognitive science can be explained by ‘neuroplasticity’. What they don’t tell you is that these phrases are virtually meaningless.

Neuroplasticity sounds very technical, but there is no accepted scientific definition for the term and, in its broad sense, it means nothing more than ‘something in the brain has changed’. As your brain is always changing the term is empty on its own.

This is from the introduction to the influential scientific book Toward a Theory of Neuroplasticity:

Given the central importance of neuroplasticity, an outsider would be forgiven for assuming that it was well defined and that a basic and universal framework served to direct current and future hypotheses and experimentation. Sadly, however, this is not the case. While many neuroscientists use the word neuroplasticity as an umbrella term it means different things to different researchers in different subfields… In brief, a mutually agreed upon framework does not appear to exist.

It’s currently popular to solemnly declare that a particular experience must be taken seriously because it ‘rewires the brain’ despite the fact that everything we experience ‘rewires the brain’.

It’s like a reporter from a crime scene saying there was ‘movement’ during the incident. We have learnt nothing we didn’t already know.

Neuroplasticity is everywhere in popular culture right now because mentioning the brain makes a claim about human nature seem more scientific, even if it is irrelevant (a tendency called ‘neuroessentialism‘).

Clearly this is rubbish, so every time you hear anyone, scientist or journalist, refer to neuroplasticity, ask yourself what specifically they are talking about. If they don’t specify, or can’t tell you, they are blowing hot air. In fact, if we banned the word, we would be no worse off.

As every change in the brain can be referred to as ‘neuroplasticity’ you need to look out for what is actually meant. As we are constantly learning more about the brain, the possible list is endless, but here are some of the most common processes associated with the term:

Structural changes in the brain

Synaptic plasticity refers to changes in the strength of synapses, the chemical or electrical connection points between brain cells. Synaptic plasticity is an umbrella term in itself, and means nothing except that something has changed at the synapse, but it may include many specific processes such as long-term potentiation (LTP) or depression (LTD), changes in the number of receptors for specific neurotransmitters, and changes in which proteins are expressed inside the cell, among many others known and unknown. As a rule of thumb, nothing changes in the brain without changes in the synapses.

Synaptogenesis and synaptic pruning refer to the creation and removal of whole synapses, or groups of synapses, which build or destroy connections between neurons.

Neuronal migration is the process where neurons travel from their ‘place of birth’ to their final destinations across the brain.

Neurogenesis is the creation of new neurons. It largely occurs in the developing brain although over the last decade or so we’ve realised that limited neurogenesis occurs in the adult brain.

Neural cell death is literally where neurons die. This can happen through damage, over-excitation or disease, but also as a natural ‘programmed’ process including apoptosis. When this programmed cell death fails, it can sometimes lead to cancer.

Other forms of ‘neuroplasticity’ may be inferred from structural changes in the brain that do not involve direct measurement of individual neurons.

These usually come from brain scans and can involve changes in the density of white matter or grey matter on structural MRI scans, or to how densely radioactively labelled markers bind to specific receptors in parts of the brain.

Functional reorganisation – changes in how tasks are organised in the brain

As we develop, brain areas become specialised for specific tasks and ways of making sense of the world. For example, the very back of your brain is labelled the visual cortex, because it deals with sight.

If experience changes dramatically or parts of the brain are damaged, areas previously specialised for a certain function can ‘take on’ some of the work of other areas, without necessarily detectably changing in structure. For example, the ‘visual cortex’ in blind people can be used to perceive touch.

Functional reorganisation is often inferred without directly measuring the brain. For example, immediately after brain injury, someone might not be able to speak because the areas previously used for language are damaged. However, speech may be regained, or it might improve, depending on the extent of damage, as the brain has a limited ability to shift the share of work to undamaged areas.

Learning or habit

This is the loosest and most problematic use of ‘neuroplasticity’. By definition if we learn something, acquire a habit or tendency, good or bad, something has changed in the brain. Without specifying what the brain is doing, we know nothing more.

 

UPDATE: You might also be interested in a subsequent post that tackles the myths that neuroplasticity is a new idea and, until quite recently, we thought the brain was ‘fixed’.

Concerned from Tunbridge Wells

The Guardian has been running a fun evolutionary psychology agony aunt column that’s been tackling questions such as ‘why do I fancy blonde women?’, ‘why do nice girls fall for bad boys?’ and ‘what can I do to stop my best friend marrying this idiot?’.

Despite its potential, evolutionary psychology has a tendency to be a bit over-enthusiastic at times, but the column just discusses the published studies in relation to the readers’ questions and turns out to be a concise guide to some of the field’s thinking on the area.

Clearly it’s not meant to be taken too seriously as an advice column but any agony aunt that gives references for her evolutionary advice is alright by me.

Link to Guardian’s ‘Ask Carole’ column (via @researchdigest)