Intuitions about phenomenal consciousness

Illustrating how this ‘experimental philosophy’ idea has really struck a chord, Scientific American Mind has an article on our intuitions about whether things can have mental states, whether that be animals, humans, machines or corporations.

The piece is by philosopher Joshua Knobe and contains lots of fascinating examples of how we tend to be comfortable attributing mental states like ‘beliefs’ to corporations, but not emotions.

The same goes for robots, it turns out, but one key factor seems to be not what we think about a robot’s thinking ‘machinery’ but how human its body seems.

In one of Huebner’s studies [pdf], for example, subjects were told about a robot that acted exactly like a human being and were asked what mental states that robot might be capable of having. Strikingly, the study revealed exactly the same asymmetry we saw above in the case of corporations.

Subjects were willing to say:
• It believes that triangles have three sides.
But they were not willing to say:
• It feels happy when it gets what it wants.

Here again, we see a willingness to ascribe certain kinds of mental states, but not to ascribe states that require phenomenal consciousness. Interestingly enough, this tendency does not seem to be due entirely to the fact that a CPU, rather than an ordinary human brain, controls the robot. Even when the experiment controlled for whether the creature had a CPU or a brain, subjects were more likely to ascribe phenomenal consciousness when the creature had a body that made it look like a human being.

Link to ‘Can a Robot, an Insect or God Be Aware?’
pdf of draft Huebner paper.