Some dialogue from the novel A Madman Dreams of Turing Machines (ISBN 1400040302) by physicist Janna Levin.
In this passage, Kurt Gödel discusses with his friend Oskar Morgenstern his objections to Alan Turing’s work on whether the mind can be completely described as a series of computations.
“If I die, you must promise to publish my article refuting Alan Turing’s thesis on the limitations of the mind. A Turing machine is a concept, equivalent to a mechanical procedure or algorithm. Turing was able to completely replace reasoning by mechanical operations on formulas – by Turing machines. Good, agreed?
However, are we supposed to equate the human soul with a Turing machine? No. There is a philosophical error in Turing’s work. Turing, in his 1937 paper, page 250, gives an argument which is supposed to show that mental procedures cannot go beyond mechanical procedures. However, this argument is inconclusive. What Turing disregards completely is the fact that mind, in its use, is not static but constantly developing.
They murdered him, you realize?”
“I thought it was suicide,” Oskar replies absently.
Kurt continues, “The government poisoned his food. I have also been working on a formal proof of the existence of God. But this is unfinished. I don’t want our colleagues to think I am crazy. Maybe you should not publish that one if I die.”
Gödel eventually died from starvation, owing to paranoid beliefs about conspiracies and poisoning.
Gödel’s idea that consciousness is not understandable as a form of computation was further developed by mathematician Roger Penrose in the book Shadows of the Mind (ISBN 0198539789).
One test for consciousness would be the ability to have an unconscious, though this might exclude animals. Another test might be the ability to go insane, i.e., to create a formal structure of logic that is internally consistent yet incompatible with reality.
Could AI become paranoid and starve itself?
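To see how thin the line is, here is a toy sketch (my own, not from any of the sources above; the facts and rules are invented) of what “internally consistent yet incompatible with reality” might look like in code: a little inference engine that never derives a contradiction from its own premises, but whose premises are false.

    # Toy sketch: a paranoid but internally consistent belief system.
    # All facts, rules, and names here are made up for illustration.

    RULES = [
        ({"they_watch_me"}, "food_is_poisoned"),   # if watched, food is poisoned
        ({"food_is_poisoned"}, "must_not_eat"),    # if poisoned, refuse to eat
    ]
    AXIOMS = {"they_watch_me"}                     # the delusional premise
    REALITY = {"food_is_safe"}                     # what is actually true
    OPPOSITES = {("food_is_poisoned", "food_is_safe")}

    def forward_chain(facts, rules):
        """Apply rules until no new conclusions appear (naive fixed point)."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    beliefs = forward_chain(AXIOMS, RULES)

    # Internal consistency: no pair of opposites inside the belief set itself.
    consistent = not any(a in beliefs and b in beliefs for a, b in OPPOSITES)

    # Compatibility with reality: do the beliefs contradict external facts?
    clashes = {(a, b) for a, b in OPPOSITES
               if (a in beliefs and b in REALITY) or (b in beliefs and a in REALITY)}

    print(beliefs)     # {'they_watch_me', 'food_is_poisoned', 'must_not_eat'}
    print(consistent)  # True: the madness is perfectly coherent from inside
    print(clashes)     # {('food_is_poisoned', 'food_is_safe')}: and perfectly wrong

The point of the toy: consistency is something the system can check from the inside; truth is not.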
The problem of the unconscious is a serious one for AI. We may know the red octagon to mean stop, and AI can be taught this. But can AI be taught to have a momentary hesitation at something that looks like a stop sign but isn’t? And can AI then form unconscious associations to this sign, say, fear because it recalls a previous accident?
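To make the question concrete, here is a minimal sketch (again my own; the model, thresholds, and “memory” are all invented for illustration) of what engineered hesitation might look like: a band of classifier confidence in which the system neither commits nor ignores, and in which a remembered accident biases it toward fear-like caution.

    # Minimal sketch of "hesitation": a hypothetical detector whose ambiguous
    # confidence band triggers extra caution, amplified by remembered accidents.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str          # e.g. "stop_sign"
        confidence: float   # classifier score in [0, 1]

    # Unconscious-style association: contexts that previously co-occurred
    # with accidents, recalled without any explicit chain of reasoning.
    ACCIDENT_MEMORY = {"faded_octagon_at_dusk"}

    def react(det: Detection, context: str) -> str:
        if det.confidence > 0.95:
            return "stop"                    # clearly a stop sign
        if det.confidence > 0.60:
            # The ambiguous band: looks like a stop sign, but might not be.
            if context in ACCIDENT_MEMORY:
                return "slow_and_stop"       # "fear": bias toward caution
            return "slow_and_verify"         # momentary hesitation
        return "proceed"                     # probably not a sign at all

    print(react(Detection("stop_sign", 0.99), "clear_day"))             # stop
    print(react(Detection("stop_sign", 0.72), "faded_octagon_at_dusk")) # slow_and_stop
    print(react(Detection("stop_sign", 0.72), "clear_day"))             # slow_and_verify

Whether this counts as an unconscious or just an if-statement is, of course, exactly the question being asked.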
The main problem with the argument about consciousness is that it is entirely dependent on our definition, which is recursive: we define consciousness a certain way because it depends on what we say it is. You can’t tell whether a computer has consciousness or not because the question is undefined. Does the computer have consciousness? Which part of the computer? And what part of consciousness? You may as well ask if the computer uses oxygen (sort of). In other words, the question isn’t a question; it’s an analogy.
It is possible to understand everything about Microsoft Word. But nothing allows you to predict what I will type there. That’s the problem of consciousness.
http://thelastpsychiatrist.com
Different people have described Penrose’s “theory” of consciousness to me as bad physics, bad neuroscience, bad computer science, or a combination of all of the above. For an explication of the computer-science part, see this paper by Solomon Feferman (Stanford University):
http://psyche.cs.monash.edu.au/v2/psyche-2-07-feferman.html
And for the REAL reason Turing was killed, see Charlie Stross’s novel THE ATROCITY ARCHIVES. 😉