Technology Review has published an article by philosopher Daniel Dennett looking at what the development of computer chess tells us about the quest for artificial intelligence.
AI and chess have an interesting and intertwined conceptual history.
It used to be said that if computers could play chess, it would be a genuine example of artificial intelligence, because chess seemed to be a uniquely human game of strategy and tactics.
As soon as computers became good at chess, it was dismissed as a valid example because, ironically, computers could do it. A classic example of moving the goalposts.
Similarly, I’ve recently heard a few people say “If computers could beat us at poker, that would be a genuine example of artificial intelligence”. Sure enough, a poker-playing computer recently lost only narrowly to two pros.
Presumably, ‘genuine intelligence’ is just whatever computers can’t do yet.
Dennett is a big proponent of the “if it looks like a duck and quacks like a duck, it’s a duck” school of thought about behaviour.
In other words, if something can perform a certain task (like playing chess), then objections that it doesn’t use the same mechanism as humans are irrelevant to whether it’s doing the task ‘genuinely’ or not.
One of his related ideas is the intentional stance. It says that things like belief, intention and intelligence are not intrinsic properties of a creature, whether computer or human; they’re just theories we use to understand and predict how it behaves.
So if it makes sense for us to interpret a chess computer as having the belief that “taking the queen will give an advantage”, then that’s a good theory for us to work on, but it doesn’t necessarily tell us anything about how that behaviour is implemented in the system.
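To make that point concrete, here is a minimal sketch (entirely hypothetical; no real chess engine is this simple) of how the “belief” that taking the queen gives an advantage can be implemented as nothing more than a material-counting evaluation function:

```python
# Hypothetical toy example: a chess program's "belief" that capturing the
# queen is advantageous, implemented as a simple material count.

# Conventional material values in pawn units (queen = 9, rook = 5, ...).
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_score(pieces):
    """Total material for one side, given its pieces as letters."""
    return sum(PIECE_VALUES[p] for p in pieces)

def evaluate(own_pieces, opponent_pieces):
    """Positive scores favour the side to move."""
    return material_score(own_pieces) - material_score(opponent_pieces)

# Before the capture: material is equal, evaluation is 0.
before = evaluate(["K", "Q", "R"], ["K", "Q", "R"])
# After capturing the opponent's queen: evaluation jumps to +9.
after = evaluate(["K", "Q", "R"], ["K", "R"])
assert after > before
```

From the intentional stance it’s perfectly useful to say this program “believes” queen captures are good; but internally there is no belief, only a bigger number coming back from `evaluate`.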
Link to TechReview article ‘Higher Games’ (via BoingBoing).
4 thoughts on “Dennett on chess and artificial intelligence”
Isn’t AI inherently separate from expert systems, in that AI learns? It seems to me that most game-playing programs use rules defined by programmers rather than self-devised logic, and hence would not be true artificial intelligence.
“Presumably, ‘genuine intelligence’ is just whatever computers can’t do yet.”
Pretty sure that’s Tesler’s Theorem: “AI is whatever hasn’t been done yet.”
A good response to this reasoning is the Chinese Room argument by John Searle.
One conundrum in Dennett’s sociocognitive tool for understanding intelligent behaviour in an animal, human or machine is that his “intentional stance” can be viewed as instrumentalist; that is to say, the agent making the intentional attributions may just be projecting its own intentions onto others, or onto what others seem to it to be.
Dennett answers this criticism by arguing that the behaviour of any entity shows distinct “patterns”, and the observer need only choose which pattern to emphasize in describing its behaviour.
Despite this, the tool has had tremendous influence in developmental psychology, sociable robotics, and “theory of mind” studies generally.