Advances in artificial intelligence: deep learning

If you want to keep up with advances in artificial intelligence, the New York Times has an essential article on a recent step forward called deep learning.

There is a rule of thumb for following how AI is progressing: keep track of what Geoffrey Hinton is doing.

Much of the current science of artificial neural networks and machine learning stems from his work or work he has done with collaborators.

The New York Times piece riffs on the fact that Hinton and his team just won a competition to design software to help find molecules that are most likely to be good candidates for new drugs.

Hinton’s team entered late, their software didn’t include a big detailed database of prior knowledge, and they easily won by applying deep learning methods.

To understand the advance you need to know a little about how modern AI works.

Most modern AI uses abstract statistical representations. For example, a face recognition system will not use human-familiar concepts like ‘mouth’, ‘nose’ and ‘eyes’, but statistical properties derived from the image that may bear no relation to how we talk about faces.

The innovation of deep learning is that it not only arranges these properties into hierarchies – with properties and sub-properties – but also works out how many levels of hierarchy best fit the data.
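To make the idea of stacked statistical properties concrete, here is a minimal sketch in Python of a hierarchy of feature levels. It is purely illustrative: the weights are random and untrained (a real deep network would learn them from data), and the layer sizes are arbitrary assumptions, not anything from Hinton’s actual systems.

```python
import numpy as np

rng = np.random.default_rng(0)

def level(x, w):
    """One level of the hierarchy: re-describe the input below it
    as a new set of statistical features (linear mix + nonlinearity)."""
    return np.tanh(x @ w)

# A toy 'image': 64 raw pixel values, with no human-level
# concepts like 'mouth' or 'eyes' attached.
pixels = rng.normal(size=64)

# Three stacked levels; each describes the level below in terms of
# fewer, more abstract properties. Sizes here are made up.
sizes = [64, 32, 16, 8]
weights = [rng.normal(size=(a, b)) for a, b in zip(sizes, sizes[1:])]

rep = pixels
for i, w in enumerate(weights, start=1):
    rep = level(rep, w)
    print(f"level {i}: {rep.shape[0]} features")
```

Each level’s features are statistical combinations of the level below, which is the sense in which the representation is “deep”; deciding how many such levels to use is the part deep learning automates.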

If you’re a machine learning aficionado, Hinton described how they won the competition in a recent interview, but he also puts all his scientific papers online if you want the bare metal of the science.

Either way, while the NYT piece doesn’t go into how the new approach works, it nicely captures its implications for how AI is being applied.

And as many net applications now rely on communication with the cloud – think Siri or Google Maps – advances in artificial intelligence very quickly have an impact on our day-to-day tools.
 

Link to NYT on deep learning AI (via @hpashler)

4 Comments

  1. higherthinkingprimate
    Posted November 25, 2012 at 12:38 am | Permalink

    Reblogged this on .

  2. Posted November 28, 2012 at 2:18 pm | Permalink

    On Geoff Hinton, here are more … ahem … facts: http://machinelearningjourney.blogspot.co.uk/2012/01/geoff-hinton-memes.html

    Or you can learn from him for free at https://class.coursera.org/neuralnets-2012-001/class/index

  3. Posted December 2, 2012 at 1:06 am | Permalink

    It would be really optimal if AI and genetic algorithms were taught in schools so we could tailor AI to our individual lives from the beginning of its “life”. This is what can be learned from Google Maps, for example. It knows the fastest way to get us somewhere, but not the sneakiest. That’s why I have to “manually” drag the route suggestion all over the back roads to find the way I want to take. Not to mention SIRI, who has annoyed more people than a distant spouse.

  4. Posted December 2, 2012 at 1:32 am | Permalink

    The phrase “really optimal” is the worst in American redundancy. Sorry.

