Cell intelligence and surviving the dead of winter

New Scientist has an interesting article on whether single cells can be considered intelligent. The piece is by biologist Brian Ford, who implicitly raises the question of how we define intelligence and whether it is simply the ability to autonomously solve problems. If so, then individual cells such as neurons might be considered ‘intelligent’ even when viewed in isolation.

However, he finishes on a bit of an odd flourish:

For me, the brain is not a supercomputer in which the neurons are transistors; rather it is as if each individual neuron is itself a computer, and the brain a vast community of microscopic computers. But even this model is probably too simplistic since the neuron processes data flexibly and on disparate levels, and is therefore far superior to any digital system. If I am right, the human brain may be a trillion times more capable than we imagine, and “artificial intelligence” a grandiose misnomer.

It’s odd because it reads like blue-sky speculation when, in fact, the idea that neurons could work like “a vast community of microscopic computers” is an accepted and developed concept in the field supposedly doomed by this idea – namely, artificial intelligence.

Traditionally, AI had two main approaches, both of which emerged from the legendary 1956 Dartmouth Conference.

One was the symbol manipulation approach, championed by Marvin Minsky, and the other was the artificial neural network approach, championed by Frank Rosenblatt.

Symbol manipulation AI builds software around problems where data structures are used to explicitly represent aspects of the world. For example, a chess-playing computer would hold a representation of the board and each of the pieces in its memory, and it works by running simulations on that representation to test out moves and solve problems.
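To make that concrete, here is a minimal sketch of the symbolic style in Python (the toy rook-and-king position and the move rules are invented purely for illustration, not taken from any real chess program): the state of the world lives in an explicit data structure, and the program answers questions by simulating operations on it.

```python
# A toy symbolic representation: the 'world' (a tiny chess-like
# position) is an explicit data structure, and problems are solved
# by simulating moves on it. Position and rules are illustrative only.

BOARD = {"white_rook": (0, 0), "black_king": (0, 3)}

def rook_moves(pos):
    """Every square a rook at pos could slide to on an empty 4x4 board."""
    row, col = pos
    return [(row, c) for c in range(4) if c != col] + \
           [(r, col) for r in range(4) if r != row]

def rook_attacks_king(board):
    """Simulate the rook's moves and ask: can it reach the king's square?"""
    return board["black_king"] in rook_moves(board["white_rook"])

print(rook_attacks_king(BOARD))  # True: the rook can slide to (0, 3)
```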

In contrast, artificial neural networks are ideal for pattern recognition and typically need training. For example, to get one to recognise faces you put a picture into the network and it ‘guesses’ whether it is a face or not. You tell it whether it was right and, if it wasn’t, it adjusts its connections to try to be more accurate next time. After enough training, the network learns to make similar distinctions on pictures it has never seen before.
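As a rough illustration of that guess-and-adjust loop, here is a minimal perceptron sketch (the logical-AND task, the learning rate and the number of passes are all illustrative choices; recognising faces would of course need far more than two inputs):

```python
# Minimal perceptron: guess, get told whether the guess was right,
# and nudge the connection weights accordingly. The toy task is
# logical AND, which a single perceptron *can* learn.

def step(x):
    return 1 if x > 0 else 0

def train(samples, epochs=20, lr=0.1):
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            guess = step(w1 * x1 + w2 * x2 + b)
            error = target - guess       # zero when the guess was right
            w1 += lr * error * x1        # adjust the connections to be
            w2 += lr * error * x2        # more accurate next time
            b += lr * error
    return w1, w2, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train(AND)
print([step(w1 * x1 + w2 * x2 + b) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```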

As is common in science, these started out as tools but became ideologies and a fierce battle broke out over which could or couldn’t ever form the basis of an artificial mind.

At the time of the Dartmouth Conference, the neural network approach existed largely as a simple set-up called the perceptron, which was good at recognising patterns.

Perceptrons were hugely influential until Minsky and Seymour Papert published a book, Perceptrons (1969), showing that they couldn’t learn certain responses (most notably a logical operation called the XOR, or exclusive-or, function).
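The limitation is easy to see for yourself. A perceptron draws a single straight line through its input space, and no straight line puts XOR’s 1s on one side and its 0s on the other. The brute-force check below (the weight grid is an arbitrary illustration; the impossibility in fact holds for every choice of weights) makes the point concrete:

```python
# XOR: output 1 when exactly one input is 1. A perceptron computes
# step(w1*x1 + w2*x2 + b), i.e. a single straight line through the
# input space -- and no line separates (0,1),(1,0) from (0,0),(1,1).

def step(x):
    return 1 if x > 0 else 0

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

grid = [i / 4 for i in range(-8, 9)]   # candidate weights, -2.0 to 2.0
solvable = any(
    all(step(w1 * x1 + w2 * x2 + b) == t for (x1, x2), t in XOR)
    for w1 in grid for w2 in grid for b in grid
)
print(solvable)  # False: no single-layer setting on this grid works
```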

This killed the artificial neural network approach dead – for almost two decades – and contributed to what is ominously known as the AI winter.

It wasn’t until 1986 that two researchers, David Rumelhart and James McClelland, solved the XOR problem and revived neural networks. Their approach was called ‘parallel distributed processing’ and, essentially, it treats simulated neurons as if they are ‘a vast community of microscopic computers’, just as Brian Ford proposes in his New Scientist article.
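As a hedged sketch of how the fix works (not the historical code; the layer size, learning rate and epoch count here are arbitrary choices), a hidden layer of simulated neurons is added between input and output, and backpropagation passes each output error backwards to adjust every connection:

```python
import math, random

# Two-layer network trained with backpropagation on XOR: each
# simulated neuron computes its own weighted sum and sigmoid --
# a tiny 'community of microscopic computers'.

random.seed(0)
sigmoid = lambda x: 1 / (1 + math.exp(-x))

H, LR = 4, 0.5                          # hidden units, learning rate
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
w_o = [random.uniform(-1, 1) for _ in range(H + 1)]

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x1, x2):
    h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
    o = sigmoid(sum(w_o[i] * h[i] for i in range(H)) + w_o[H])
    return h, o

for _ in range(20000):
    for (x1, x2), target in XOR:
        h, o = forward(x1, x2)
        d_o = (target - o) * o * (1 - o)        # output error signal
        for i in range(H):                      # propagate it backwards
            d_h = d_o * w_o[i] * h[i] * (1 - h[i])
            w_h[i][0] += LR * d_h * x1
            w_h[i][1] += LR * d_h * x2
            w_h[i][2] += LR * d_h
            w_o[i] += LR * d_o * h[i]
        w_o[H] += LR * d_o

print([round(forward(x1, x2)[1]) for (x1, x2), _ in XOR])
# typically [0, 1, 1, 0]; an unlucky random start can stall in a local minimum
```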

Artificial neural networks have evolved a great deal since, and the symbol manipulation approach, although still useful, is now ironically called GOFAI or ‘Good Old-Fashioned Artificial Intelligence’ as it seems, well, a bit old fashioned.

How we define intelligence is another matter, but the claim that individual cells have it is actually quite hard to dismiss when they seem to solve a whole range of problems they may never have encountered before.

Artificial intelligence seems cursed, though, as true intelligence is usually defined as being just beyond whatever AI can currently do.

Link to NewSci on intelligence and the single cell (thanks Mauricio!)

5 thoughts on “Cell intelligence and surviving the dead of winter”

  1. Excellent appraisal! I’d had the same hesitation when I read that article’s conclusion, and you’ve done a service in explaining the facts. I wasn’t aware of the term “AI Winter” until now, either.

  2. Contrary to popular belief, Rumelhart and McClelland were not the first to implement backpropagation. As far as I know, that distinction rests with Paul Werbos, who not only defined it but implemented it in the early 1970s.
    Otherwise, a lovely article, and I’ll repost it on my campus freethinker group’s Facebook page.

  3. One fascinating application of pattern recognition is in MRI research. It is well accepted that in fMRI research group-level data does not allow one to make generalizations to the individual level, making our current statistical analyses of imaging data not useful for clinical diagnosis of psychiatric/neurological disorders.
    Indeed, pattern classification may be useful for making generalizations about one population vs. another (e.g. patients vs. controls). It may also be useful for training a machine to recognize patterns of thought (very well-characterized ones). One recent experiment did this by looking at memories of three video clips. There is better data out there with pattern classifiers, but it is an interesting piece. The article is “Decoding Individual Episodic Memory Traces in the Human Hippocampus” in the journal Current Biology.

  4. Thank you for this nice article.
    I was wondering whether you could recommend any blogs or the like that discuss AI in more depth? I’m a student fascinated by AI and cognitive psychology, but I don’t know of many web resources for the former and would appreciate any links you think are good.
    Cheers,
    Richard
