The Atlantic has an amazing in-depth article on how Douglas Hofstadter, the Pulitzer Prize-winning author of Gödel, Escher, Bach, has been quietly working in the background of artificial intelligence on the deep problems of the mind.
Hofstadter’s vision of AI – as something that could help us understand the mind rather than just a way of solving difficult problems – has gone through a long period of being deeply unfashionable.
Developments in technology and statistics have allowed a surprising number of problems to be solved by sifting huge amounts of data through relatively simple algorithms – an approach called machine learning.
Translation software, for example, long ago stopped trying to model language and instead just generates output from statistical associations. As you probably know from Google Translate, it’s surprisingly effective.
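To make the contrast concrete, here is a minimal sketch of that statistical idea – no grammar, no model of meaning, just picking whichever target phrase most often co-occurred with a source phrase. The phrase counts here are made up for illustration; a real system would harvest them from millions of aligned sentence pairs.

```python
from collections import Counter

# Hypothetical phrase-alignment counts, as if harvested from a parallel corpus.
# Real systems learn tables like this from huge bilingual datasets.
aligned_counts = {
    "bonjour": Counter({"hello": 9, "good day": 3}),
    "le monde": Counter({"the world": 11, "everybody": 2}),
}

def translate(phrases):
    """Return the most frequently co-occurring target phrase for each source phrase."""
    return " ".join(aligned_counts[p].most_common(1)[0][0] for p in phrases)

print(translate(["bonjour", "le monde"]))  # -> hello the world
```

Nothing here "understands" French; the output falls out of frequency counts alone, which is exactly the point Hofstadter's approach pushes back against.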
The Atlantic article tackles Hofstadter’s belief that, contrary to the machine learning approach, developing AI programmes can be a way of testing out ideas about the components of thought itself. This idea may now be starting to re-emerge.
The piece also works as a sweeping look at the history of AI, and the only thing I was left wondering was what Hofstadter makes of the deep learning approach, which is a cross between machine learning statistics and neurocognitively inspired architecture.
It’s a satisfying, thought-provoking read that rewards time and attention.
If you want another excellent, in-depth read on AI, a great complement is an Atlantic article from last year in which Noam Chomsky is interviewed on ‘where artificial intelligence went wrong’.
Both will tell you as much about the human mind as they do about AI.
Link to ‘The Man Who Would Teach Machines to Think’ on Hofstadter.
Link to ‘Noam Chomsky on Where Artificial Intelligence Went Wrong’.
2 thoughts on “Hofstadter’s digital thoughts”
Superb article. First two Atlantic links are broken, but last one works.
Hofstadter always sounds more characteristically thoughtful and intuitive than others in this field. Interesting article and point about Google. It raises the question, though: why is Google Search better at knowing what I want than Siri?
One of my supervisors (who spends ample time updating us on sightings of the Yeti) claims that technology is being developed which could “upload our consciousness” into a computer for future use – except this would be a cyborg. A “psychotic” version of ourselves, with no empathy. Creative fellow, also scary.
Is it because more people use Google over Siri? And always have and always will. It raises another question: if there’s one service that works and exists, why do we create another? Optimisation is surely not found in fragmentation.