A study just published in PLoS Computational Biology has reported that an artificial intelligence system trained to make sense of a simulated natural environment is susceptible to some of the same visual illusions that humans fall for.
In one of these, the ‘Hermann grid’ illusion – illustrated on the right – you may be able to ‘see’ fuzzy patches of grey in the white stripes, despite the fact that there is no grey in the image (click for a bigger version if it’s not clear).
David Corney and Beau Lotto, researchers working in the Lotto Lab (which has a wonderful website by the way), have been training artificial intelligence systems to distinguish surfaces in a simulated natural environment with lots of ‘dead leaf’-like shapes.
When training these sorts of systems, the idea is not to program them with specific rules, but to present an image and let the neural network make a guess.
The researchers then ‘tell’ the AI system whether its guess was correct, and it adjusts itself to reduce the error on the next guess. After many learning trials, these sorts of ‘backpropagation’ neural networks can make distinctions between quite complex stimuli.
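The guess-then-adjust loop described above can be sketched in a few lines of Python. This is not the authors’ actual network – just a single sigmoid neuron learning a toy task (output ‘yes’ only when both inputs are on), with an assumed learning rate, to show the shape of error-driven training:

```python
# Minimal sketch of the guess -> error -> adjust training loop.
# A single sigmoid neuron (illustrative, not the study's network).
import math
import random

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0
rate = 0.5  # learning rate (assumed value)

def predict(x):
    """Make a 'guess' between 0 and 1 for input pattern x."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid squashes to (0, 1)

# Toy task: respond only when both inputs are present.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for _ in range(5000):              # many learning trials
    for x, target in data:
        guess = predict(x)         # the network makes a guess
        error = target - guess     # the 'teacher' says how wrong it was
        grad = error * guess * (1 - guess)  # sigmoid derivative term
        for i in range(2):         # nudge weights to shrink the error
            weights[i] += rate * grad * x[i]
        bias += rate * grad
```

A full backpropagation network chains this same error signal backwards through hidden layers, but the principle – adjust the weights a little after each guess so the next guess is less wrong – is the same.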
In this case, Corney and Lotto decided that once the system was fully trained to complete its task successfully, they would test it with some visual illusions experienced by humans.
Interestingly, the AI system was susceptible to the Hermann grid illusion, sensing ‘grey’ where there was none. Other illusions produced similar results.
The fact that both humans and the AI system ‘fall’ for the same illusions suggests that the illusions arise from visual abilities that have been shaped by experience of the visual world.