Computation is a lens

CC Licensed Photo from Flickr user Jared Tarbell.

“Face It,” says psychologist Gary Marcus in The New York Times, “Your Brain is a Computer”. The op-ed argues for understanding the brain in terms of computation, which opens up an interesting question – what does it mean for a brain to compute?

Marcus is careful to distinguish two claims: that the brain is built along the same lines as modern computer hardware, which is clearly false, and that its purpose is to calculate and compute, which he endorses. “The sooner we can figure out what kind of computer the brain is,” he says, “the better.”

In this line of thinking, the mind is the brain’s computations at work, and it should be describable in terms of formal mathematics.

The idea that the mind and brain can be described in terms of information processing is the central contention of cognitive science, but it raises a key and rarely asked question – is the brain a computer, or is computation just a convenient way of describing its function?

Here’s an example if the distinction isn’t clear. If you throw a stone you can describe its trajectory using calculus. Here we could ask a similar question: is the stone ‘computing’ the answer to a calculus equation that describes its flight, or is calculus just a convenient way of describing its trajectory?

In one sense the stone is ‘computing’. The physical properties of the stone and its interaction with gravity produce the same outcome as the equation. But in another sense, it isn’t, because we don’t really see the stone as inherently ‘computing’ anything.
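The two senses can be made concrete with a short sketch (entirely illustrative – the function names, numbers and step size are mine, not the article’s): one function plays the role of the calculus, solving the equation of motion in closed form; the other plays the role of the stone, just letting ‘gravity’ act step by tiny step. Both arrive at the same answer.

```python
# Two routes to the same answer: "physics" (stepwise integration)
# and "calculus" (the closed-form equation of motion).

G = 9.81    # gravitational acceleration, m/s^2
V0 = 20.0   # initial upward velocity, m/s

def height_calculus(t):
    """Closed-form solution: y(t) = v0*t - g*t^2/2."""
    return V0 * t - 0.5 * G * t * t

def height_physics(t, dt=1e-5):
    """Nudge the stone forward in tiny increments, as gravity does."""
    y, v = 0.0, V0
    for _ in range(int(t / dt)):
        v -= G * dt
        y += v * dt
    return y

# The two disagree only by numerical error.
print(height_calculus(1.5))   # ~18.96
print(height_physics(1.5))    # ~18.96
```

Whether the stepwise version is ‘really computing’ while the closed-form version merely describes is exactly the question the stone poses.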

This may seem like a trivial example but there are in fact a whole series of analog computers that use the physical properties of one system to give the answer to an entirely different problem. If analog computers are ‘really’ computing, why not our stone?

If this is the case, what makes brains any more or less of a computer than flying rocks, chemical reactions, or the path of radio waves? Here the question just dissolves into dust. Brains may be computers but then so is everything, so asking the question doesn’t tell us anything specific about the nature of brains.

One counterpoint is that brains must algorithmically adjust to a changing environment to aid survival. This is why neurons encode properties of the world (such as patterns of light stimulation) in another form (such as rates of neuronal firing), which perhaps makes them computers in a way that flying stones aren’t.
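A toy version of that encoding step might look like this (a hypothetical sketch – the saturating-exponential form and all the numbers are my illustration, not anything from the post): light intensity goes in, a firing rate comes out, and the stimulus is now carried ‘in another form’.

```python
import math

def firing_rate(light_intensity, max_rate=100.0, k=0.05):
    """Toy rate code: map stimulus intensity (arbitrary units)
    onto a saturating firing rate (spikes/s). Illustrative only."""
    return max_rate * (1.0 - math.exp(-k * light_intensity))

# Brighter light -> higher rate, but the code saturates:
for lux in (0, 10, 50, 200):
    print(round(firing_rate(lux), 1))
```

Note that nothing here settles the question: a sunflower tracking light through chemical signalling could be ‘sketched’ the same way, which is precisely the objection about plants below.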

But this definition would also include plants, which likewise encode physical properties through chemical signalling to adapt to their environment.

It is worth noting that there are other philosophical objections to the idea that brains are computers, largely based on the hard problem of consciousness (in brief – could maths ever feel?).

And then there are arguments based on the boundaries of computation. If the brain is a computer based on its physical properties and the blood is part of that system, does the blood also compute? Does the body compute? Does the ecosystem?

Psychologists drawing on the tradition of ecological psychology and JJ Gibson suggest that much of what is thought of as ‘information processing’ is actually done through the evolutionary adaptation of the body to the environment.

So are brains computers? They can be if you want them to be. The concept of computation is a tool – probably the most useful one we have – but if you say the brain is a computer and nothing else, you may be limiting the ways you can understand it.

Link to ‘Face It, Your Brain Is a Computer’ in The NYT.

12 thoughts on “Computation is a lens”

  1. These kinda “reductionist” models applied to the brain are no more useful than trying to understand “Love” as merely a cascade of chemical neurotransmitters.

  2. I’m a Gibsonian, but, even before I had discovered ecological psychology and similar non-computational views, I had deep suspicions about the computer metaphor (which, historically was only a metaphor, but over time brains began to be seen AS computers, not just SIMILAR TO computers).

    For one, computationalism relies on mind/body dualism. If that position is incorrect, then the hardware/software analogy is untenable.

    Overall, computationalism falls prey to the homunculus fallacy and what Michael Turvey has called a “loan of intelligence”, in which we imbue something with more intelligence than it reasonably has (i.e. there is a “little man” in our brain that “represents” stuff – but how did that little man get there, and how does it represent? The same can be asked of neurons or brain regions that “compute”).

    Loans of intelligence deter us from seeking explanations from first principles. Some have proposed dynamical systems theory and theories of self-organization as the groundwork for a new physics that establishes first principles by unifying physics and biology (rather than connecting physics and biology with engineering and computer science). IMO, the former (a theory of animal behavior from physical first principles) seems like a solution commensurate with the rest of physics and biology; the latter (introducing computationalism into biology to explain behavior) seems like a hack to me.

    For a scholarly critique of the computer metaphor, see Carello, Turvey, Kugler & Shaw’s Inadequacies of the Computer Metaphor: https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CCcQFjAA&url=http%3A%2F%2Fwww.haskins.yale.edu%2Fsr%2Fsr071%2FSR071_12.pdf&ei=TZ-ZVZ7zMdjtoATb9YzwCA&usg=AFQjCNHyaffvWBl7pkdjEaaIS60a2YdnFg&sig2=Oa1HdZc6Nqa-7xvwt11fVg&bvm=bv.96952980,d.cGU

  3. Also, Vaughan, I really appreciate that you mentioned J.J. Gibson and ecological psychologists. We often get misconstrued, or worse, simply ignored. I think Gibson and his followers have done a lot of important work that provided great insights into what a mature psychological science might look like (e.g. one grounded firmly in the kinds of physics that ground the so-called “hard sciences”).

    One thing that makes discovering an appropriate physics for explaining behavior so difficult is that the type of physics involved at the ecological scale differs markedly from classical Newtonian physics, as Schrödinger observed as early as 1944. For more on this, I recommend Turvey & Shaw’s “Toward an ecological physics and a physical psychology”: https://www.academia.edu/1031630/Toward_an_ecological_physics_and_a_physical_psychology

    Of course, ecological psychology has its own issues. Much of the really impressive work by ecological psychologists has focused on action and various aspects of visual perception. This means that Gibsonians rarely touch on wide swathes of the more “applied” and “cognitive” areas of psychology (memory, problem solving, mental illness, among others).

  4. It seems crucial to bear in mind that using the words “algorithm” and “encode” begs the question, since these terms are very specifically computer-oriented. Signals in our brains represent certain things, but it’s not clear that they encode them in any computer-like way. Human memories, for example, are now known to be regenerated every time they’re accessed, effectively rewriting and altering the information. This is highly uncomputer-like: humans don’t retrieve static data when they remember, they retrieve the last time they remembered the information and in the process alter it. If computers did this, they’d crash in a millisecond.

    http://www.sciencedaily.com/releases/2012/09/120919125736.htm

    Likewise, it’s now settled that specific neurons contain specific memories. This is hardware-dependent memory, which is quite unlike modern computers. Modern computers totally separate data from hardware by an abstraction layer. This abstraction layer typically takes the form of some kind of assembly language or processor code that can run independently of the hardware. But a human brain does not appear to contain any kind of “processor code” that runs independently of the organic wetware. In a human brain, the organic wetware is the memories.

    http://www.scientificamerican.com/article/memories-may-not-live-in-neurons-synapses/

  5. Saying the brain is a computer might be helpful. But it’s just a metaphor. Metaphors are terrific. They’re really useful. But it’s folly to take them literally.

  6. This is an old cogsci trope that I enjoyed jousting with when I was in school. We always apply modern paradigms as metaphors to everything we can find, especially ourselves. And then we immediately drop the metaphor and confuse it for ontology.

    “The brain is like a machine” becomes “the brain is a machine”; “the brain is like a computer” becomes “the brain is a computer”. The metaphorical phrasing can be useful; the ontic phrasing is the gateway to woo.

    The brain is like a brain, and the brain is the brain. It is its own thing, and operates as itself.

    Further, if other systems can compute, this means there is some form of end. Who or what are they ultimately computing for? With the brain we can say the organism, but what about that stone, or waves?

  7. Hi,

    Just a small correction: it is not an “analogue” computer as in analogy, but rather “analog” as in the opposite of digital.

  8. A problem is that computation is pretty vague and abstract. Do brains compute? Yes, obviously. Can networks of neurons be made to compute? Yes, with both physical neurons (see: ratbots) and with ANNs.

    But, when a normal person talks about ‘a computer’, they don’t mean ‘a thing that computes’ — they mean a solid-state electronic device with stored programs, a von Neumann architecture, a CPU made out of NAND gates etched in doped silicon via lithography… To the extent that they are unaware of these details, they cannot counteract assumptions made about them.

    Can we talk about the operation of the brain through the lens of information theory or through the lens of cybernetics? Yes, the same way that we can use that toolset to talk about swarming behavior in fish, or genetics, or household HVAC systems, or international politics. Lots of things can be said to compute, and meaningfully! But, the United Nations is clearly not a computer in the casual sense, and neither is a swarm of fish, or a brain.

    To the extent that we say that something is a computer, we mean it either in a casual (and probably wrong) sense – that it has a structure analogous to what we mean when we talk about computers to our next-door neighbours or parents – or in a very abstract technical sense (which is obviously correct, but also not particularly rare) – that we can use the mental toolkit of analysing computation to analyse it.
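The gap between those two senses can be seen in how short the road is from ‘a thing that computes’ to the gates mentioned in the comment above. As a sketch (standard textbook logic, toy code of my own, not the commenter’s): NAND alone is functionally complete, so every other gate – and eventually a CPU – can be stacked up from it.

```python
def NAND(a, b):
    """The one primitive: outputs 0 only when both inputs are 1."""
    return 1 - (a & b)

# Every other gate falls out of NAND alone:
def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):
    """First rung on the ladder to arithmetic: one-bit addition."""
    return XOR(a, b), AND(a, b)  # (sum, carry)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```

That a stone, a plant, or a swarm of fish does not decompose into anything like this layered gate structure is part of why the casual sense of ‘computer’ doesn’t fit them – even when the abstract sense does.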
