Biologist Gerald Edelman is interviewed in Discover magazine about his views on the brain’s own internal ‘natural selection’ process and its possible role in the development of consciousness.
Edelman won the Nobel Prize in 1972 for his work on antibodies, but later turned to neuroscience and is keen to crack the problem of consciousness.
He argues that pathways in the brain are created by a process akin to ‘natural selection’, where the most useful connections survive.
In the first few months of life, neurons are, on average, more densely connected to each other than they are later in life.
If you click here you can see a graph of the number of synapses (inter-neuron connections) present in the human visual cortex by age.
According to the study that this graph is taken from, the number of synaptic connections peaks at around 6 months of age; after that, it rapidly decreases.
This happens because connections that aren’t used disappear on a ‘use it or lose it’ basis, and the ones that are left form the more permanent connections in the brain.
In other words, from all the random variation, the weak connections die out and the strongest survive.
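To make the ‘use it or lose it’ idea a bit more concrete, here is a small toy simulation (purely my own illustration of the selection logic, not anything from Edelman’s actual models): connections are massively over-produced, simulated ‘experience’ exercises some of them more than others, and the rarely used ones are pruned away.

```python
# A toy sketch (an illustration only, not Edelman's model): start with far more
# random connections than needed, let simulated "experience" exercise some of
# them, then prune the ones that were rarely used.
import random

random.seed(0)

N_NEURONS = 20
N_CONNECTIONS = 200      # deliberately over-connected, as in early infancy
N_TRIALS = 300           # rounds of simulated experience
PRUNE_THRESHOLD = 5      # connections used fewer times than this are lost

# Over-produce random connections between pairs of neurons.
connections = [(random.randrange(N_NEURONS), random.randrange(N_NEURONS))
               for _ in range(N_CONNECTIONS)]
use_count = [0] * N_CONNECTIONS

# Simulate experience: most of the time, activity is concentrated in a small
# "frequently exercised" group of neurons (0-7); occasionally it is diffuse.
for _ in range(N_TRIALS):
    if random.random() < 0.8:
        active = set(random.sample(range(8), 4))
    else:
        active = set(random.sample(range(N_NEURONS), 4))
    for i, (pre, post) in enumerate(connections):
        if pre in active and post in active:
            use_count[i] += 1

# "Use it or lose it": keep only the connections that saw enough activity.
surviving = [c for c, n in zip(connections, use_count) if n >= PRUNE_THRESHOLD]
print(f"{N_CONNECTIONS} initial connections, {len(surviving)} survive pruning")
```

The surviving set ends up dominated by connections among the neurons that were repeatedly co-active, which is the selectionist picture in miniature: variation first, then experience-dependent selection.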
Edelman also argues that this principle applies to larger patterns of activity in the brain, with past and ongoing experience determining what counts as useful.
Edelman talks about his theory and how he thinks it is crucial in understanding consciousness, and also how his research group is attempting to build robots based on the same principle.
Link to Edelman interview in Discover magazine.
That interview is a little confusing. First, Edelman tears down the dualist dichotomy between mind and brain; then, he builds up a new dichotomy between “brains” and “computers”; finally, he undermines **that** dichotomy by saying that “algorithms” can “give you identical behavior” to real neurons.
For some reason I don’t quite fathom, Edelman classifies his “brain based devices” outside of artificial intelligence, whereas (in my experience) AI is typically an umbrella term covering expert systems, neural networks and lots of other stuff. Speaking not entirely tongue-in-cheek, one could say that a technique becomes not-AI once it works reliably: Google would have been considered “artificial intelligence” in 1985.
Edelman states that a Turing machine “can’t tolerate error,” which just doesn’t make sense. For one thing, such a claim ignores everything Claude Shannon discovered about fault-tolerant communication.
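To see why, here’s a toy sketch (my own example, far simpler than anything in Shannon’s theory): a repetition code with majority-vote decoding lets a digital system recover most of a message even when the channel flips bits at random.

```python
# A toy sketch of error tolerance in a digital system (illustration only):
# each bit is sent three times, the channel flips bits at random, and a
# majority vote recovers most of the original message.
import random

random.seed(1)

def encode(bits):
    """Repeat each bit three times before transmission."""
    return [b for b in bits for _ in range(3)]

def noisy_channel(bits, flip_prob=0.1):
    """Flip each transmitted bit independently with probability flip_prob."""
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits):
    """Majority vote over each group of three received bits."""
    return [1 if sum(bits[i:i + 3]) >= 2 else 0
            for i in range(0, len(bits), 3)]

message = [random.randint(0, 1) for _ in range(1000)]

# Send the message straight through the noisy channel (no protection) ...
raw_errors = sum(m != r for m, r in zip(message, noisy_channel(message)))

# ... and again with the repetition code plus majority-vote decoding.
decoded = decode(noisy_channel(encode(message)))
decoded_errors = sum(m != r for m, r in zip(message, decoded))

print(f"errors without coding: {raw_errors} / {len(message)}")
print(f"errors after decoding: {decoded_errors} / {len(message)}")
```

With a 10% flip rate, the bare channel corrupts roughly a tenth of the bits, but after decoding only around a third as many errors remain; more elaborate codes can push the residual error rate as low as you like, which is the gist of Shannon’s result.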
“The brain isn’t a computer,” Edelman says, “but we’re simulating it in a computer.” Doesn’t the fact that one can build such a simulation really mean that the distinction between “brains” and “computers” is not valid?