The Economist covers an interesting twist on the Turing test for artificial intelligence. Instead of software attempting to fool human judges into thinking they're chatting to another person, it needs to fool gamers into thinking they're playing against a human opponent.
In Turing’s original proposal, human judges would have a text-based conversation with a human and a machine, and the machine would be judged to be artificially intelligent if the judges couldn’t reliably determine who was human.
This is the same principle applied to first-person shooter games like Doom, Quake and Call of Duty, where human players need to judge whether their opposite number is a fellow human or just a collection of cold hard data:
Computers can, of course, be programmed to shoot as quickly and accurately as you like. To err, however, is human, so too much accuracy does tend to give the game away. According to Chris Pelling, a student at the Australian National University in Canberra who was one of last year's finalists and will compete again this year, a successful bot must be smart enough to navigate the three-dimensional environment of the game, avoid obstacles, recognise the enemy, choose appropriate weapons and engage its quarry. But it must also have enough flaws to make it appear human. As Jeremy Cothran, a software developer from Columbia, South Carolina, who is another veteran of last year's competition, puts it, "it is kind of like artificial stupidity".
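The "artificial stupidity" idea can be sketched in a few lines of code. This is not how any BotPrize entry actually works; it's a minimal illustration of the principle above, with made-up function names and parameters: take a bot's perfect aim and instant reactions, then deliberately degrade both with random, human-scale error.

```python
import random

def humanize_aim(true_angle, skill=0.7, rng=None):
    """Perturb a perfect aim angle (in degrees) with human-like error.

    A perfect bot aims exactly at true_angle every time, which is a
    giveaway. Here we add Gaussian jitter whose spread shrinks as the
    hypothetical 'skill' parameter (0.0-1.0) rises.
    """
    rng = rng or random.Random()
    max_error_deg = 10.0 * (1.0 - skill)  # lower skill -> wider spread
    return true_angle + rng.gauss(0.0, max_error_deg)

def reaction_delay(skill=0.7, rng=None):
    """Sample a plausible human reaction time in seconds.

    A bot that fires the instant an enemy appears is obviously a
    machine; human reactions cluster around a quarter of a second.
    """
    rng = rng or random.Random()
    base = 0.25 - 0.1 * skill  # skilled players react a little faster
    return max(0.08, rng.gauss(base, 0.05))

if __name__ == "__main__":
    rng = random.Random(42)
    # Enemy spotted at bearing 30 degrees: wait, then aim imperfectly.
    delay = reaction_delay(skill=0.6, rng=rng)
    aim = humanize_aim(30.0, skill=0.6, rng=rng)
    print(f"fire after {delay:.3f}s at {aim:.1f} degrees")
```

The design choice mirrors the quote: intelligence (navigation, target recognition) is the hard part, but the finishing touch is calibrated imperfection, errors drawn from distributions that look like a human's rather than a machine's.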
The competition is called the 2K BotPrize and is currently being held in Milan, Italy.