ABC Radio National’s All in the Mind has just had an excellent programme on ‘the singularity’, the idea that at some point in the future computer power will outstrip that of the human brain, after which humanity will be better off in some vague and unspecified way.
The idea is, of course, ludicrous, based as it is on the naive notion that intelligence can be measured as a kind of unitary ‘power’ that we can adequately compare between computers and humans. The discussion on All in the Mind is a solid critical exploration of this wildly left-field notion, as well as the community from whence it comes.
It’s a popular theme among transhumanists, whom I quite like despite their seeming mortal fear of human limitations.
Transhumanists are like the eccentric uncle of the cognitive science community. Not the sort of eccentric uncle who gets drunk at family parties and makes inappropriate comments about your kid sister (that would be drug reps), but the sort that your disapproving parents think is a bit peculiar but who is full of fascinating stories and interesting ideas.
They occasionally take themselves too seriously, and it’s the sort of sci-fi philosophy that has few practical implications, but it’s enormously good fun and great for making you re-evaluate your assumptions.
By the way, there’s loads of extras on the AITM blog, so do check it out.
Link to All in the Mind on ‘the singularity’.
Link to extras on AITM blog.
The ‘singularity’ has more to do with computers and artificial intelligence eventually becoming powerful enough to self-evolve. Is that really such a ludicrous idea? We already have software that intelligently generates other software, and software that heals itself (antivirus programs?).
The basic premise of the “singularity” is that we’ll be unable to understand the technological changes happening around us. I think that’s already more true than not.
Although we’re pretty loath to put any sort of implant into our brains (the sort of thing we’d all agree turns us into cyborgs!), there is little resistance to carrying a blackberry everywhere we go. I even sleep next to mine and wake up to its alarm every morning (and, according to my love sleeping next to me, I haven’t lost my humanity yet).
Was the quirky tendency of early man to carry shiny stones along with him a preparatory precursor to our future of carrying blackberries?
No, not by our standard definitions of cause and effect, but evolution is nevertheless tapping into such instincts; a plan is unnecessary.
We are essentially a cybernetic society already.
If you divorce yourself from the perspective of humanity as isolated individual bodies, and view us in the equally fair light of social cultures and shared infrastructure, it’s clear we’ve incorporated technology at every level.
Does that point even require support? I’m discussing this on the internet – a technology that might have passed for a god not long ago. The world’s population has exploded, and the wealth and physical health of that population has increased dramatically. Future technologies are nearly unpredictable, and it is difficult to set limits on what will be possible 50 or 100 years from now.
And then think of the markets, the technological fields, the cultures of religion and entertainment fetish. All of these things are diversifying and evolving with explosive speed, while simultaneously deepening their scope. No one individual can keep up with all these things, and the society itself is a frothy storm of them, and still we go on with (what seems to be) happy success.
They say that in the case of a true “singularity” (the one inside a black hole) you’d not notice yourself crossing the event horizon. It’s only the point past which outside observers would lose track of you.
I’d imagine it’s like the experience of a diver coming up slowly from a deep dive. You’re forced to stop at various depths so that the compressed gases can escape without giving you the bends. At those moments deep in the water it’s sometimes easy to forget which way is up, but the real danger is being caught unknowingly in a current. The whole mass of deep water around you could be sucking you far away from your dive boat, and you simply have no way of knowing. Relative to your dive partner you haven’t moved, but that’s only because you’re moving together.
If there’s any lesson of the singularity, it’s that considering the simpleminded scifi version of it is a good stepping stone to understanding society and our human experience more accurately in the present. Don’t let the refutations of the simplistic versions of this idea blind you to the more complex and human realities that lie behind it.
We are awash in a sea of evolving technology (both material and intellectual) that is thoroughly out of our individual control, and yet we are still a part of it and it is our moral prerogative to use it to the best of our abilities. Evolution has not stopped; we’re just coming to realize it’s never really been limited to DNA.
The incorporation of technology into society will continue. Its effect on morality and human nature will continue to accelerate. It’s wise to be mindful.
As for practical applications of this philosophical mindset, I can recommend a practical idea:
Social networking (in its myriad guises) is the “global brain” – act accordingly and use technology to amplify your ideas (and possibly reward your individual self).
i.e.: if you want to advance your career in psychology, start a blog, then get that blog on Digg.
Transhumanism is where technology transforms and enhances human beings. Posthumanism is where technology will reach the point where new robotic creatures will be superior and dominate human beings. Singularity has to do with posthumanism.
Thanks alleycat. That was a pretty interesting and impassioned defense of the larger implications of the singularity notion.
And Joe – posthumanity is definitely not limited to “robotic creatures” who will “dominate human beings.” It refers pretty vaguely to whatever comes next – be it robots, as you say, some designed biological evolution of our own, or cyborg advancement.
From Wiki: “Transhuman is a term that refers to an evolutionary transition from the human to the posthuman.” The terms are practically used interchangeably. All of this is intimately related.
There’s no reason we can’t compare machine and human intelligence – a simple definition like “the ability to make accurate predictions about the environment and use those predictions to maximize its values” encapsulates the most important functions of an intelligence.
There are variations of the idea of the singularity, but the most appealing (and least ridiculous) is a natural extension of machine intelligence. Once you’ve created an intelligence that’s able to improve itself (because it has access to its own source code), it can continue to do so until it reaches hard physical limits, which, given the speed advantage of silicon over cells, means it could easily be thousands (or millions) of times smarter than a human. Once you’ve reached that point, it’s impossible to predict what will happen – if you knew what a superintelligence would do, you would go ahead and do it yourself.
All technological progress has arisen, in part, from a “mortal fear of human limitations”. Medicine is, perhaps, the best example of this (particularly of the fear), though other technology arises equally from our desire to transcend our own human limitations (flight, for example).
To make a joke of that reality, and to dismiss an awareness of it as nuttery, is… an interesting insight into your own personality and its biases or blind spots.