Fragging rights

The Economist covers an interesting twist on the Turing test for artificial intelligence. Instead of software attempting to fool human judges into thinking they’re chatting to another person, it needs to fool gamers into thinking they’re playing against a human opponent.

In Turing’s original proposal, human judges would have a text-based conversation with a human and a machine, and the machine would be judged to be artificially intelligent if the judges couldn’t reliably determine who was human.

This is the same principle applied to first-person shooter games like Doom, Quake and Call of Duty, where human players need to judge whether their opposite number is a fellow human or just a collection of cold hard data:

Computers can, of course, be programmed to shoot as quickly and accurately as you like. To err, however, is human, so too much accuracy does tend to give the game away. According to Chris Pelling, a student at the Australian National University in Canberra who was one of last year’s finalists and will compete again this year, a successful bot must be smart enough to navigate the three-dimensional environment of the game, avoid obstacles, recognise the enemy, choose appropriate weapons and engage its quarry. But it must also have enough flaws to make it appear human. As Jeremy Cothran, a software developer from Columbia, South Carolina, who is another veteran of last year’s competition, puts it, “it is kind of like artificial stupidity”.
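The “artificial stupidity” idea can be sketched in code: rather than aiming perfectly and firing instantly, a bot deliberately injects error and delay. This is a minimal illustrative sketch, not anything from the actual competition entries — the function names, the noise model and all the numbers are my own assumptions:

```python
import random

def humanized_aim(target_angle, skill=0.7):
    """Aim angle with human-like error (illustrative sketch).

    A perfect bot would return target_angle exactly; adding
    Gaussian noise, scaled down as skill rises, makes its shots
    miss plausibly, the way a human player's would.
    """
    max_error_deg = 10.0  # assumed worst-case wobble for a novice
    error_sd = (1.0 - skill) * max_error_deg
    return target_angle + random.gauss(0.0, error_sd)

def reaction_delay_ms(skill=0.7):
    """Delay before the bot reacts to a newly spotted enemy.

    Humans take very roughly 200-300 ms to react; an instant
    response is one of the giveaways that you're fighting a machine.
    The constants here are assumptions for the sketch.
    """
    base_ms = 250.0
    jitter = random.gauss(0.0, 40.0)
    return max(100.0, base_ms * (1.5 - skill) + jitter)
```

A full entry would of course also need the navigation, obstacle avoidance and weapon selection the article describes; this only shows the deliberate-flaw part.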

The competition is called the 2K BotPrize and is currently being held in Milan, Italy.

Link to The Economist on ‘Fighting it out’.
Link to 2K BotPrize website.

One Comment

  1. Posted September 15, 2009 at 8:04 pm

    To be truly convincing it would have to be able to call you a “f4g” at the appropriate moment and then log off just before you beat it

