Fragging rights

The Economist covers an interesting twist on the Turing test for artificial intelligence. Instead of software attempting to fool human judges into thinking they’re chatting to another person, it needs to fool gamers into thinking they’re playing against a human opponent.

In Turing’s original proposal, human judges would have a text-based conversation with a human and a machine, and the machine would be judged to be artificially intelligent if the judges couldn’t reliably determine who was human.

This is the same principle applied to first-person shooter games like Doom, Quake and Call of Duty, where human players need to judge whether their opposite number is a fellow human or just a collection of cold hard data:

Computers can, of course, be programmed to shoot as quickly and accurately as you like. To err, however, is human, so too much accuracy does tend to give the game away. According to Chris Pelling, a student at the Australian National University in Canberra who was one of last year’s finalists and will compete again this year, a successful bot must be smart enough to navigate the three-dimensional environment of the game, avoid obstacles, recognise the enemy, choose appropriate weapons and engage its quarry. But it must also have enough flaws to make it appear human. As Jeremy Cothran, a software developer from Columbia, South Carolina, who is another veteran of last year’s competition, puts it, “it is kind of like artificial stupidity”.
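The “artificial stupidity” idea can be sketched in a few lines of code. This is a hypothetical illustration, not anything from the actual BotPrize entries: a bot with perfect aim is humanised by adding random error to its targeting, with the spread of the error and the `skill` parameter being assumptions chosen for the example.

```python
import random

def humanize_aim(target_angle, skill=0.7, overshoot_chance=0.1):
    """Add human-like error to a bot's otherwise perfect aim.

    A perfectly accurate bot gives itself away, so we inject noise:
    - skill in [0, 1]: higher skill means a tighter aim spread.
    - overshoot_chance: probability of an extra tracking error,
      as when a human overshoots a moving target.
    All scales here (degrees of spread, etc.) are illustrative assumptions.
    """
    # Aim error grows as skill drops; Gaussian noise stands in for hand tremor.
    error_std = (1.0 - skill) * 10.0  # spread in degrees (assumed scale)
    noisy_angle = target_angle + random.gauss(0.0, error_std)
    # Occasionally add a deliberate overshoot in a random direction.
    if random.random() < overshoot_chance:
        noisy_angle += random.choice([-1, 1]) * error_std
    return noisy_angle
```

A bot calling `humanize_aim` before each shot would miss in a roughly human-looking way, while a `skill` of 1.0 recovers the machine-perfect aim that, per the article, tends to give the game away.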

The competition is called the 2K BotPrize and is currently being held in Milan, Italy.

Link to The Economist on ‘Fighting it out’.
Link to 2K BotPrize website.
