Technology Review has published an article by philosopher Daniel Dennett looking at what the development of computer chess tells us about the quest for artificial intelligence.
AI and chess have an interesting and intertwined conceptual history.
It used to be said that if computers could play chess, it would be a genuine example of artificial intelligence, because chess seemed to be a uniquely human game of strategy and tactics.
As soon as computers became good at chess, it was dismissed as a valid example because, ironically, computers could do it. A classic example of moving the goalposts.
Similarly, I’ve recently heard a few people say “If computers could beat us at poker, that would be a genuine example of artificial intelligence”. As it happens, a poker-playing computer recently lost only narrowly to two pros.
Presumably, ‘genuine intelligence’ is just whatever computers can’t do yet.
Dennett is a big proponent of the “if it looks like a duck and quacks like a duck, it’s a duck” school of thought about behaviour.
In other words, if something can perform a certain task (like playing chess), then objections that it isn’t using the same mechanism as humans are irrelevant to whether it’s doing the task ‘genuinely’ or not.
One of his related ideas is the intentional stance. This holds that things like belief, intention and intelligence are not properties of a creature, whether computer or human; they’re just theories we use to understand how it works.
So if it makes sense for us to interpret a chess computer as believing that “taking the queen will give an advantage”, then that’s a good theory for us to work with, but it doesn’t necessarily tell us anything about how that behaviour is implemented in the system.