There is a rule of thumb for following how AI is progressing: keep track of what Geoffrey Hinton is doing.
Much of the current science of artificial neural networks and machine learning stems from his work or work he has done with collaborators.
The New York Times piece riffs on the fact that Hinton and his team just won a competition to design software to help find molecules that are most likely to be good candidates for new drugs.
Hinton’s team entered late, their software didn’t include a big detailed database of prior knowledge, and they easily won by applying deep learning methods.
To understand the advance you need to know a little about how modern AI works.
Most modern AI uses abstract statistical representations. For example, a face recognition system will not use human-familiar concepts like ‘mouth’, ‘nose’ and ‘eyes’ but statistical properties derived from the image that may bear no relation to how we talk about faces.
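To make that concrete, here is a minimal sketch of how such statistical properties can be derived. It uses an eigenface-style principal components approach on synthetic data (this is an illustration of the general idea, not the method Hinton's team used):

```python
import numpy as np

# Synthetic stand-ins for flattened 8x8 face images, one per row.
rng = np.random.default_rng(0)
images = rng.normal(size=(100, 64))

# Centre the data and extract the top principal components via SVD.
centred = images - images.mean(axis=0)
_, _, components = np.linalg.svd(centred, full_matrices=False)
top_features = components[:5]      # 5 statistical "properties"

# Each image is then described by its weights on those properties.
codes = centred @ top_features.T   # shape (100, 5)
print(codes.shape)
```

The five numbers describing each image are purely statistical weights; none of them corresponds to anything a person would call a ‘mouth’ or a ‘nose’.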
The innovation of deep learning is that it not only arranges these properties into hierarchies – with properties and sub-properties – but it works out how many levels of hierarchy best fit the data.
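“Working out how many levels best fit the data” is at heart a model-selection problem: try models of increasing complexity and keep the one that generalises best to data it never saw. Here is a minimal sketch of that idea with polynomial degree standing in for network depth (synthetic data; a hypothetical illustration, not Hinton's actual training procedure):

```python
import numpy as np

# Synthetic data: a noisy sine curve.
rng = np.random.default_rng(1)
x = rng.uniform(0, 3, size=200)
y = np.sin(2 * x) + rng.normal(scale=0.1, size=200)

# Hold out part of the data: candidate models are judged on
# points they never saw during fitting.
x_train, y_train = x[:150], y[:150]
x_val, y_val = x[150:], y[150:]

val_errors = []
for degree in range(1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    pred = np.polyval(coeffs, x_val)
    val_errors.append(np.mean((pred - y_val) ** 2))

# Keep the level of complexity that generalises best.
best_degree = int(np.argmin(val_errors)) + 1
print(best_degree)
```

The deep learning version is the same bet at a much larger scale: complexity that pays off on held-out data is kept, complexity that only memorises the training set is discarded.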
If you’re a machine learning aficionado, Hinton described how they won the competition in a recent interview, and he also puts all his scientific papers online if you want the bare metal of the science.
Either way, while the NYT piece doesn’t go into how the new approach works, it nicely captures its implications for how AI is being applied.
And as many net applications now rely on communication with the cloud – think Siri or Google Maps – advances in artificial intelligence very quickly have an impact on our day-to-day tools.