The Economist has a good write-up of a recent PLoS One study that found that the perceived ‘human-ness’ of another player in a game altered the extent of activation in brain areas associated with understanding others’ mental states.
The participants were asked to play the prisoner's dilemma game in a brain scanner and were introduced to four opponents – software on a laptop, a laptop controlled by robotic hands, a humanoid robot and a real human. In reality, though, the other players' moves were all randomly generated.
Dr Krach and Dr Kircher chose the "prisoner's dilemma" game because it involves a difficult choice: whether to co-operate with the other player or betray him. Mutual co-operation brings the best joint outcome, but trying to co-operate when the other player betrays you brings the worst. The tendency is for both sides to choose betrayal (thus obtaining an intermediate result) unless a high level of trust exists between them. The game thus requires each player to try to get into the mind of the other, in order to predict what he might do. This sort of thinking tends to increase activity in parts of the brain called the medial prefrontal cortex and the right temporo-parietal junction.
The scanner showed that the more human-like the supposed opponent, the more such neural activity increased. A questionnaire also revealed that the volunteers enjoyed the games most when they played human-like opponents, whom they perceived to be more intelligent. Dr Krach and Dr Kircher reckon this shows that the less human-like a robot is in its appearance, the less it will be treated as if it were human. That may mean it will be trusted less—and might therefore not sell as well as a humanoid design.
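To make the game's payoff structure concrete, here is a minimal sketch in Python using the conventional textbook payoff values (the specific numbers are illustrative, not taken from the study):

```python
# One-shot prisoner's dilemma with the conventional payoff ordering
# (temptation > reward > punishment > sucker). The numbers are
# illustrative, not from the study itself.
# Each entry maps (my_move, their_move) -> (my_payoff, their_payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual co-operation: best joint outcome
    ("cooperate", "defect"):    (0, 5),  # the 'sucker's payoff': worst outcome
    ("defect",    "cooperate"): (5, 0),  # temptation: best individual outcome
    ("defect",    "defect"):    (1, 1),  # mutual betrayal: intermediate result
}

def best_response(their_move):
    """Return the move that maximises my payoff against a known opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

for their_move in ("cooperate", "defect"):
    print(their_move, "->", best_response(their_move))  # prints 'defect' both times
```

Because betrayal pays more whichever move the opponent makes, co-operating is only rational if you can predict, or trust, that the other player will co-operate too – which is why the game puts such a premium on getting into the other player's head.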
It’s an interesting extension of a type of study pioneered by psychologist Helen Gallagher and colleagues, in which she asked people to play ‘paper, scissors, stone’ supposedly against human and computer opponents in a PET scanning study.
As in this recent study, all the ‘opponents’ were actually just a series of randomly generated moves, but the participants showed significantly greater brain activation in the frontal cortex when playing against the supposedly ‘human’ opponent than against the computer.
The philosopher Daniel Dennett suggests that attributing mental states is a particular way of thinking about a system, which he calls adopting the ‘intentional stance’.
For example, we might play a chess computer and treat it as if it were ‘intending’ to take our bishop, or as if it ‘believed’ that getting the Queen out would be an advantage, but this says nothing about whether the machine actually has intentions or beliefs.
Of course, we can apply the same logic to humans: just because we find it useful to talk about others’ beliefs doesn’t mean that belief is necessarily a scientifically sound concept.
Link to Economist article ‘I, human’.
Link to full-text article in PLoS One.
Full disclosure: I’m an unpaid member of the PLoS One editorial board.
This seems to go against Masahiro Mori’s idea of the uncanny valley, in which people grow more comfortable with robots as they become more human-like only up to a point: ones that are almost, but not quite, human are unsettling. I suppose the difference here is that the participants were not interacting directly with their opponents.