A chat program named Jabberwacky, designed by British AI researcher Rollo Carpenter, has won the Loebner Prize – the annual contest to find the most human-like chat software.
The contest takes the form of the Turing Test where human judges have to work out whether they are chatting to humans or software by typing responses into a computer.
Computer scientist Alan Turing, who devised the test, argued that if the judges couldn’t distinguish between humans and software, the software could be thought of as simulating human intelligence. No software has yet passed the full Turing Test (although some has passed limited versions).
The Loebner Prize is awarded to the software that the judges think creates the best simulation, regardless of the fact that it may not pass for human.
Jabberwacky is different from previous winners in that it works out its conversational rules by interacting with humans.
It has a website where visitors can chat to the software, but crucially, they can correct the software when it gives odd or meaningless responses, so the software can adapt to the correct rules of conversation.
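Carpenter hasn’t published Jabberwacky’s internals, but the learn-and-correct loop described above can be sketched in miniature. This is purely a hypothetical illustration: assume the bot stores replies it has seen humans give to each prompt, and that a user correction overwrites what it learned for the last prompt.

```python
# A toy sketch of learning conversation by interaction (not Jabberwacky's
# actual algorithm): the bot remembers human replies keyed by prompt, and
# a correction replaces what it had learned for the most recent prompt.
from collections import defaultdict
import random

class LearningBot:
    def __init__(self):
        self.replies = defaultdict(list)  # normalized prompt -> replies seen
        self.last_prompt = None

    @staticmethod
    def _norm(text):
        return " ".join(text.lower().split())

    def respond(self, prompt):
        key = self._norm(prompt)
        self.last_prompt = key
        options = self.replies.get(key)
        if options:
            return random.choice(options)
        return "I don't know what to say yet."

    def learn(self, prompt, reply):
        # record what a human actually said in this context
        self.replies[self._norm(prompt)].append(reply)

    def correct(self, better_reply):
        # the user flags the last answer as odd and supplies a better one
        if self.last_prompt is not None:
            self.replies[self.last_prompt] = [better_reply]

bot = LearningBot()
bot.learn("hello", "Hi there!")
print(bot.respond("Hello"))
```

A real system would need far richer context than a single prompt, but even this toy shows why website visitors matter: every chat both supplies new replies and prunes the odd ones.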
Results of its ongoing learning process can be seen in the transcripts of the 2005 contest. Jabberwacky does surprisingly well in some instances but not so well in others.
Link to “Brit’s bot chats way to AI medal” from BBC News
Link to Jabberwacky website and chat.
Link to Loebner Prize website and 2005 transcripts.
2 thoughts on “Beware the Jabberwack, my son”
From reading those transcripts, it seems to me that neither the program nor the judges make a lot of sense!
Fun Guardian article about this:
I do recall one or two less satisfying conversations with human beings, but only one or two. Yet there were moments while speaking with George when I realised that I was, semi-consciously, assuming the presence of a human at the other end. Examining the logs from his site, Carpenter has found that people converse online with George for up to seven hours.
People act, then, as if George thinks. Does he? “We bring a lot of baggage to words like ‘thinking’,” Carpenter says. “Our understanding is very human-centric, and in any of the ways that we think about those words, it obviously doesn’t think. But if you put it another way, my program would know precisely nothing about language, had it not learned. So, to a reasonable degree, you could say that it’s building a non-human form of understanding.”