Seeking free will: a debate

The Dana magazine Cerebrum has just published a debate between a psychiatrist and a neurologist on how we can make sense of free will in the age of neuroscience.

The choice of professionals is an interesting one because each typically deals with what are assumed to be quite different disruptions in free will.

Neurologists often treat patients who have problems controlling their movements, cognition or consciousness – owing to clear, identifiable brain damage to the systems involved in these processes.

Someone with Parkinson’s disease, for example, seems to have little conscious control over their tremor or rigid movements.

Psychiatrists, on the other hand, typically deal with people who don’t have clear brain damage, but whose brains are nonetheless functioning in such a way that they experience unstable moods, odd perceptions, or come to hold seemingly impossible beliefs.

Here the idea of free will is a bit more conceptually tricky. We can clearly say that someone who has Parkinsonian tremor is not ‘willing’ their movements, but what about someone whose brain disturbance means they hear voices?

Some people who hear voices can have conversations with them. In this situation, the person would seem to be exercising some influence over their hallucinations, because the voices respond to what’s being said, but many people can’t ‘will’ the voices away.

One particularly interesting phenomenon in this regard is ‘command hallucinations’ – usually hallucinated voices that command the person to do something.

Often, the commands are pointless – touch the table, cross the street, take off your hat – but sometimes they can be terrifying instructions – for example, that the person must harm themselves.

In some cases, these commands seem irresistible: the person feels completely compelled to follow their hallucinated instructions.

We don’t really have a good understanding (or, to be fair, even a bad understanding) of why some command hallucinations are distressing but impotent, while others seem to compel the person to comply.

There are many more examples of how free will is affected in both psychiatry and neurology. In both specialities there are conditions where the boundaries of free will fall into a large grey area, and all of them raise quite profound questions about our freedom to act as we want.

The Cerebrum debate tackles exactly these sorts of issues, argued by two people who undoubtedly have to deal with them on a daily basis.

Link to Cerebrum article ‘Seeking Free Will in Our Brains: A Debate’.

Advancing the history of psychology

I’ve been enjoying the Advances in the History of Psychology blog lately, which is full of interesting snippets about the past and often digs into the historical background of contemporary hot topics.

For example, here’s an interesting bibliography about psychoactive drug use in psychology, and here’s another about Benjamin Franklin’s interest in ‘electrotherapy’.

It’s run by the same people who produce the completely invaluable Classics in the History of Psychology archive, a huge website with some of the most important texts from psychology’s colourful past.

Both are excellent, and I look forward to reading more.

Link to Advances in the History of Psychology blog.
Link to Classics in the History of Psychology archive.

Want fries with that?

Neurophilosophy discusses a recent study that suggests that the inclusion of large amounts of starchy foods in our diet helped fuel the evolution of the brain.

It’s interesting because it’s not the first study to suggest that specific changes in diet improved nutrition and brain development:

According to one theory, increased consumption of meat by our ancestors provided the additional energy needed for brain expansion. (Cooking would have further increased the amount of calories obtained from meat.) Another holds that a switch to a seafood-rich diet would have provided polyunsaturated fatty acids which, when incorporated into nerve cell membranes, would have made the brain function more efficiently.

And now, a study published in Nature Genetics adds starchy tubers to the smorgasbord of foodstuffs that may have contributed to the expansion of the human brain.

These theories tend to be controversial and cause numerous back-and-forth arguments in the literature, partly because they’re quite hard to test: the brain has the consistency of toothpaste and so doesn’t leave much of a fossil record.

The study picked up by Neurophilosophy is interesting because it tracks a gene that codes for an enzyme needed to break down starch into glucose.

It’s a relatively new approach to an old problem, although as the article mentions, the link to brain evolution is still circumstantial.

However, it’s an interesting area, and the Neurophilosophy article is a great brief guide to some of the thinking behind these theories.

Link to Neurophilosophy on ‘Diet and brain evolution’.

Moral psychology and religious mistakes

Psychologist Jonathan Haidt has written a thought-provoking essay for Edge which charts the recent revolution in the psychology and neuroscience of moral reasoning and suggests that, in light of these new findings, the current critiques of religion have mischaracterised its true nature.

Haidt summarises the tenets of the new science of morality as four main principles:

1) Intuitive primacy but not dictatorship. This is the idea, going back to Wilhelm Wundt and channeled through Robert Zajonc and John Bargh, that the mind is driven by constant flashes of affect in response to everything we see and hear.

2) Moral thinking is for social doing. This is a play on William James’ pragmatist dictum that thinking is for doing, updated by newer work on Machiavellian intelligence. The basic idea is that we did not evolve language and reasoning because they helped us to find truth; we evolved these skills because they were useful to their bearers, and among their greatest benefits were reputation management and manipulation.

3) Morality binds and builds. This is the idea stated most forcefully by Emile Durkheim that morality is a set of constraints that binds people together into an emergent collective entity.

4) Morality is about more than harm and fairness. In moral psychology and moral philosophy, morality is almost always about how people treat each other. Here’s an influential definition from the Berkeley psychologist Elliot Turiel: morality refers to “prescriptive judgments of justice, rights, and welfare pertaining to how people ought to relate to each other.”

The essay then goes on to discuss how the recent findings in this area apply to the ongoing debate between the ‘new atheists’ (Dawkins, Dennett, Harris and the like) and religion.

In particular, Haidt suggests that the recent criticisms of religion don’t always reflect the best psychological understanding of what are primarily social, rather than ideological, institutions, and notes research findings showing that religious people tend to be happier and more altruistic than others.

As a self-professed non-believer and high-profile social psychologist, Haidt makes some interesting points that are bound to cause controversy.

Link to essay ‘Moral Psychology and the Misunderstanding of Religion’.

Statistical self-defence over at idiolect.org.uk

Readers of mindhacks.com might be interested to read my review of the last chapter of Darrell Huff’s classic How To Lie With Statistics, over at my personal blog idiolect.org.uk. The last chapter gives Huff’s rules of thumb for interrogating statistics and I’ve provided some slim commentary on the workings of science, reason and whatnot. See you there!

Dennett on chess and artificial intelligence

Technology Review has published an article by philosopher Daniel Dennett looking at what the development of computer chess tells us about the quest for artificial intelligence.

AI and chess have an interesting and intertwined conceptual history.

It used to be said that if computers could play chess, it would be a genuine example of artificial intelligence, because chess seemed to be a uniquely human game of strategy and tactics.

As soon as computers became good at chess, it was dismissed as a valid example because, ironically, computers could do it. A classic example of moving the goalposts.

Similarly, I’ve heard a few people say “If computers could beat us at poker, that would be a genuine example of artificial intelligence”. Recently, a poker-playing computer narrowly lost to two pros.

Presumably, ‘genuine intelligence’ is just whatever computers can’t do yet.

Dennett is a big proponent of the “if it looks like a duck and quacks like a duck, it’s a duck” school of thought on behaviour.

In other words, if something can perform a certain task (like playing chess), then objections about it not using the same mechanism as humans are irrelevant to whether it’s doing the task ‘genuinely’ or not.

One of his related ideas is the intentional stance. It says that things like belief, intention and intelligence are not properties of a creature, computer or human; they’re just theories we use to understand how it works.

So if it makes sense for us to interpret a chess computer as having the belief that “taking the queen will give an advantage”, then that’s a good theory for us to work on, but it doesn’t necessarily tell us anything about how that behaviour is implemented in the system.
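
To make Dennett’s point concrete, here’s a minimal sketch (in Python, with made-up piece values) of how such a ‘belief’ might be implemented in a toy chess program. Viewed through the intentional stance, the program “believes taking the queen gives an advantage”; mechanically, it is only comparing integers.

    # A toy material-count evaluation: an illustrative sketch, not any real engine.
    PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

    def evaluate(ours, theirs):
        """Material balance: our piece values minus the opponent's."""
        return sum(PIECE_VALUES[p] for p in ours) - sum(PIECE_VALUES[p] for p in theirs)

    # Compare two candidate outcomes: capturing the opponent's queen, or a pawn.
    after_queen_capture = evaluate(["rook", "knight"], ["rook", "pawn"])
    after_pawn_capture = evaluate(["rook", "knight"], ["rook", "queen"])

    # The program reliably prefers positions where the queen has been taken.
    # The intentional stance describes this as a belief; the implementation
    # is nothing but arithmetic.
    assert after_queen_capture > after_pawn_capture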

Link to TechReview article ‘Higher Games’ (via BoingBoing).

Ancient Egyptian post-mortem neurosurgery

Retrospectacle has a great post that describes how the Ancient Egyptians removed the brains of the dead before mummification and notes some of their neurological knowledge.

The Ancient Egyptians described a range of neurological and psychiatric disorders in their writing that would be recognisable today.

One major source is the Edwin Smith papyrus; another is the Ebers papyrus, which has quite a significant section on psychiatric disorders, including what we would now class as depression, dementia and psychosis.

Needless to say, the remedies were often magical in nature, but the observation of the clinical features can be quite astute.

The article on Retrospectacle has some great brain scan images and a link to a video of how embalmers would remove the brain through the nose, using a metal tool to go up into the frontal lobes.

Link to Retrospectacle article.

Psychological continuity and the problem of identity

Philosophy Now magazine has an interesting article on the problem of identity – how we have the impression that we are the same person, despite the fact that our personality, preferences and even cognitive abilities may change from moment to moment.

It’s a problem that was most famously tackled by 17th century philosopher John Locke but is still relevant for understanding the issues of identity and the self in contemporary cognitive science, as well as for informing complex judgements on free will and responsibility.

Suppose a man has committed a crime whilst drunk or undergoing temporary amnesia. Suppose also, that because of his mental state at the time of the offence, he genuinely cannot remotely remember a thing about it. Clearly on the evidence of witnesses – and perhaps he was caught in the act – it was his own body, the same man who now stands in the dock, who did it. But was it the same person? Should the present person be found guilty of the crime if the drunkenness or amnesia had so changed his psyche that, at the time, he ‘wasn’t his true self’? Can he rightly claim that at the time of the incident the occupant of his body was a different person altogether; or perhaps some fractured component of his own psyche that couldn’t rightly be described as ‘himself’?

Psychological continuity was, Locke claimed, the answer to the question. The accused, considered as a man, the physical being, is certainly guilty. His own hand struck the blow, his own voice had risen in anger. But if the person, the psychological being, cannot remember one atom of it, then he is not guilty.

But though Locke’s theory answered the question, it’s not certain that it solved the problem; for it raises a paradox that will try the wits of the jurists: the man in the dock may be guilty, but not the person in the man! And if the man is punished, he will experience the pain, but the wrong person will suffer it.

Link to article ‘A Question of Identity’ (via Thinking Meat).

Terrorism fails because we don’t see its purpose

In an article for Wired, security guru Bruce Schneier suggests that the reason terrorism fails is because it falls foul of a cognitive bias in how we understand people’s intentions from their actions.

Schneier bases his conclusions on a recent paper [pdf] by Max Abrahms, who applies correspondent inference theory to terrorism and the political objectives of terrorist groups.

‘Correspondent inference theory’ suggests that we try and understand people’s intentions and character based on the most salient effect of their actions.

This can often lead us astray, as demonstrated by a regular plot line in soap operas where someone’s good intentions accidentally misfire and the person on the receiving end assumes they’re being deliberately malicious.

As noted by Schneier and Abrahms, this also leads us to misunderstand the goal that motivates terrorist acts:

The theory posited here is that terrorist groups that target civilians are unable to coerce policy change because terrorism has an extremely high correspondence. Countries believe that their civilian populations are attacked not because the terrorist group is protesting unfavorable external conditions such as territorial occupation or poverty. Rather, target countries infer the short-term consequences of terrorism — the deaths of innocent civilians, mass fear, loss of confidence in the government to offer protection, economic contraction, and the inevitable erosion of civil liberties — [are] the objects of the terrorist groups. In short, target countries view the negative consequences of terrorist attacks on their societies and political systems as evidence that the terrorists want them destroyed. Target countries are understandably skeptical that making concessions will placate terrorist groups believed to be motivated by these maximalist objectives.

In his paper, Abrahms examines the political objectives of terrorist groups and looks at how successful terrorism has been in obtaining them. He reckons, with a generous estimate, that only 7% of the stated goals have been achieved.

But he also notes that the stated goals rarely get through to the people being targeted, and that the political rhetoric of the terrorists’ target is littered with misunderstandings of their intentions.

I’m personally interested in how and why terrorists are labelled ‘mad’. It’s in the terrorists’ interest to be seen as sane, as part of the goal is to force concessions.

There’s no point conceding to someone who you think is unbalanced, because an irrational group might not stop the violence once they’ve achieved their aims.

The fact that violent protestors are so often labelled as ‘mad’ suggests, as per correspondent inference theory, that we assume there is no coherent intention behind their actions, contrary to what they are trying to achieve.

Anyway, an interesting look at the motivations and perception of political violence.

Link to ‘The Evolutionary Brain Glitch That Makes Terrorism Fail’.
pdf of Max Abrahms’ paper ‘Why Terrorism Does Not Work’.

Can’t compute the wood for the trees

Computer scientist David Gelernter has written an in-depth article for Technology Review where he criticises the possibility of creating artificial consciousness, but has high hopes for unconscious artificial intelligence.

My case for the near-impossibility of conscious software minds resembles what others have said. But these are minority views. Most AI researchers and philosophers believe that conscious software minds are just around the corner. To use the standard term, most are “cognitivists.” Only a few are “anticognitivists.” I am one. In fact, I believe that the cognitivists are even wronger than their opponents usually say.

But my goal is not to suggest that AI is a failure. It has merely developed a temporary blind spot. My fellow anticognitivists have knocked down cognitivism but have done little to replace it with new ideas. They’ve showed us what we can’t achieve (conscious software intelligence) but not how we can create something less dramatic but nonetheless highly valuable: unconscious software intelligence. Once AI has refocused its efforts on the mechanisms (or algorithms) of thought, it is bound to move forward again.

Gelernter is a great writer and an interesting guy, not least because of his brush with death, courtesy of disturbed anti-technologist Ted Kaczynski aka ‘The Unabomber’.

Link to TechReview article ‘Artificial Intelligence Is Lost in the Woods’.

Mind the gap: science and the insanity defence

Reason Magazine has an excellent article on why our knowledge about the psychology and neuroscience of mental illness doesn’t really help when trying to argue for or against the insanity defence in court.

The insanity defence concerns whether a person accused of a crime should be considered legally responsible.

Some of the first legal criteria for judging someone ‘not guilty by reason of insanity’ are the M’Naghten Rules, created after Daniel M’Naghten tried to assassinate the British Prime Minister Robert Peel in 1843.

He ended up killing Peel’s secretary, but when caught was found to be suffering from paranoid delusions. It was judged that his crime was motivated by his unsound mind and that he didn’t understand the ‘nature and quality’ of what he did.

Most Commonwealth law in this area is still based on these criteria, and most US law was too, until shortly after John Hinckley shot US President Ronald Reagan and was found not guilty by reason of insanity.

This caused a backlash against the insanity defence and many US states have variously abolished it or made it much more difficult to prove (near impossible in some cases).

The Reason Magazine article examines why, when it does arise, the evidence is largely based on descriptions of the person’s mental state and why recent advances in understanding mental illness don’t really help very much.

One of the main reasons is that studies that find differences between people with mental illness and those without do so at the group level. The same differences might not be present when comparing any two individuals.

In other words, on average, there are mind and brain differences between people affected by mental disorders and unaffected people, but the individual variation is so great that you couldn’t reliably say it would be present in one particular person.
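
This is easy to demonstrate with simulated data. The numbers in this sketch are invented purely for illustration: the two groups differ clearly on average, yet knowing an individual’s score tells you very little about which group they belong to.

    # Invented numbers, purely to illustrate group-level vs individual-level differences.
    import random
    random.seed(1)

    # Some hypothetical brain measure: the group means differ, but the spread
    # within each group dwarfs the gap between them.
    patients = [random.gauss(100, 15) for _ in range(10000)]
    controls = [random.gauss(105, 15) for _ in range(10000)]

    mean_p = sum(patients) / len(patients)
    mean_c = sum(controls) / len(controls)
    print(f"group means: patients {mean_p:.1f} vs controls {mean_c:.1f}")

    # Classify an individual by which group mean their score is closer to.
    cutoff = (mean_p + mean_c) / 2
    correct = sum(x < cutoff for x in patients) + sum(x >= cutoff for x in controls)
    print(f"individual classification accuracy: {correct / 20000:.0%}")  # roughly 57%, barely above chance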

As these criminal trials are focused on the actions of one individual, much of the objective science goes out the window because it can’t reliably indicate a diagnosis, state of mind or reasoning ability at the individual level.

This means that the most relevant evidence is usually the testimony of a psychiatrist or psychologist who is giving his or her clinical, descriptive judgement of the person’s state of mind.

The Reason Magazine article examines what sort of dilemmas this causes, and considers how developments in psychology and neuroscience are likely to impact on the legal judgement of insanity.

It’s an excellent guide to some of the key issues and the difficulties of making legal judgements on subjective states of mind.

Link to article ‘You Can’t See Why on an fMRI’.

Are we computers, or are computers us?

Philosopher Dr Pete Mandik has published an interesting thought on his blog that questions whether the common ‘computer metaphor’ used to describe the human mind is really a metaphor at all.

Cognitive psychology typically creates models of the mind based on information processing theories.

In other words, the mind and brain are considered to do their work by manipulating and transforming information, either from the senses, or from other parts in the system.

It is therefore common for scientists to talk about the mind and brain in computer metaphors, as if they are information processing machines.
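
To give a flavour of what ‘information processing’ means in practice, here is a deliberately simplified sketch of one classic style of cognitive model: a noisy evidence-accumulation account of a perceptual decision. The parameter values are arbitrary; the point is only that a single information-transforming mechanism yields both a response and a reaction time.

    # A toy evidence-accumulation model of a simple perceptual decision.
    # Parameter values are arbitrary and for illustration only.
    import random
    random.seed(42)

    def decide(signal_strength, threshold=10.0, noise=1.0):
        """Accumulate noisy sensory evidence until it crosses a decision threshold."""
        evidence, steps = 0.0, 0
        while abs(evidence) < threshold:
            evidence += signal_strength + random.gauss(0, noise)
            steps += 1
        return ("yes" if evidence > 0 else "no"), steps

    choice, reaction_time = decide(signal_strength=0.2)
    print(choice, reaction_time)  # stronger signals give faster, more reliable decisions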

Mandik questions whether this is really a metaphor at all:

There is a sense of the verb “compute” whereby many, if not all, people compute insofar as they calculate or figure stuff out. Insofar as they literally compute, they literally are computers. Further, the use of “compute”, “computing”, and “computer” as applied to non-human machines is derivative of the use as applied to humans.

It strikes me as a bit odd, then, to say that calling people or their minds “computational” is something metaphorical.

Indeed, the term ‘computer’ was originally a name for a person who did mathematical calculations for a company.

Calculating machines were then given the supposedly metaphorical name ‘computers’ as they did equivalent work to the human employees.

Mandik questions whether we should think of any of these examples as genuine metaphors, since they’re describing the same operations.

However, a key issue for cognitive science is whether there are reasonable limits in describing mind, brain and behaviour in mathematical terms.

The fact that we can adequately describe some things mathematically doesn’t solve this problem, because there may be things that are impossible to describe in this way which we simply don’t know about.

Often, though, we just assume that we haven’t found the right maths yet, when the reality may be far more complex.

Link to Pete Mandik post with great discussion.

Next step brains: Evolution or optimisation?

This week’s edition of ABC Radio National’s opinion programme Ockham’s Razor has Dr Peter Lavelle speculating about a future when computers will match or outstrip the human brain.

Taking a “if you can’t beat ’em, join ’em” approach, Lavelle looks to a time when we’ll extend our capabilities with electronics and cybernetic expansions.

But he doesn’t stop there. He continues way past where most futurists stop and thinks about the possible end points for the human race if our trend for technological integration continues.

A fun and wildly speculative way to spend 15 minutes if you like your neuroscience with a touch of wide-eyed wonder.

Link to programme details and audio.

Profiling serial killers and other violent criminals

I just noticed that the January edition of the Journal of Forensic Sciences, which contains psychological case reports on two serial killers and a football hooligan, is freely available online.

The journal is always a fascinating read, as it combines academic papers on everything from molecular analysis to psychological profiling.

The psychology case reports are often more influenced by a Freudian, interpretive style of explanation than in many other areas of psychology.

This is perhaps because the reports are largely from the USA, where Freudian ideas were historically most influential and still retain a stronger hold in clinical and forensic psychology.

It is possibly also because it’s quite hard to do controlled studies on violent criminals, and so single case studies are more likely to draw on interpretive ideas that were specifically developed to delve into the mind of individuals.

For example, the FBI’s Behavioural Science Unit will partly analyse a crime scene using interpretive methods to link the symbolism of certain actions (e.g. covering a victim’s face after the murder) with the emotional state of the killer (e.g. shame).

The APA Monitor has an intriguing article on FBI profiling if you want to know more, and if you want some examples of the sorts of thinking that goes into criminal profiling, the case reports in the January edition are a good place to start.

Link to ‘Paths to Destruction: The Lives and Crimes of Two Serial Killers’.
Link to ‘The Hooligan’s Mind’.
Link to ‘Criminal profiling: the reality behind the myth’.

Identity disorder and the future of technology

Polymath physician Dr Ray Tallis has written an optimistic article in the latest edition of Philosophy Now magazine arguing that human technological enhancement is over-hyped but no reason for fear.

Tallis is a professor of geriatric medicine, so it’s no surprise that he sees some of the most applicable benefits of technological advances for diseases like Alzheimer’s and Parkinson’s.

Critics have suggested that using technology to enhance human abilities, whether by drugs, implants or genetics, will lead to an erosion of our sense of identity.

Tallis looks back on past promises and argues that this is unlikely to be the case:

The most often repeated claim is that we are on the verge of technological breakthroughs – in genetic engineering, in pharmacotherapy and in the replacement of biological tissues (either by cultured tissues or by electronic prostheses) – which will dramatically transform our sense of what we are and will thereby threaten our humanity. A little bit of history may be all that is necessary to pour cooling water on fevered imaginations.

In 1960, leading computer scientists, headed by the mighty Marvin Minsky, predicted that by 1990 we would have developed computers so smart that they would not even treat us with the respect due to household pets. Our status would be consequently diminished. Anyone seen any of those? Smart drugs that would transform our consciousness have been expected for 50 years, but nothing yet has matched the impact of alcohol, peyote, cocaine, opiates, or amphetamines, which have been round a rather long time.

As well as making some telling philosophical points, the article is quite funny in places, as Tallis uses some of his literary skills to good effect.

Link to Philosophy Now article ‘Enhancing Humanity’.

Narrative self, split brain

If you liked our recent post on what the stories of our lives say about us, Philosophy Now has an article on how the self might be based on our ability to create narratives.

The article looks at how the self has been related to our ability to make narratives out of the disconnected events in our lives, and particularly focuses on the theories of philosophers Alasdair MacIntyre and Paul Ricoeur.

MacIntyre emphasises that the concept of personal identity is not only logically dependent upon the concept of a narrative, but it’s also the other way round. In other words it is meaningless to talk about a character biography unless one presupposes that its subject has a personal identity. The biography must be about a continually-existing thing. Conversely, it is pointless, meaningless, to state that some being has a personal identity through time, and at the same time deny that this being has a possible biography.

[In Ricoeur’s theory] narratives, or more precisely plots, synthesise reality. A plot fuses together intentions, causal relations, and chance occurrences in a unified sequence of actions and events. Ricoeur seems to think that the plot creates a unified pattern in a chaotic series of events, ties them together, making them meaningful wholes.

This idea has also been taken up by more cognitive science-oriented philosophers, most notably, Daniel Dennett.

In his paper ‘The Self as a Center of Narrative Gravity’, Dennett argues that the main function of consciousness is to generate a sense of narrative for our experiences.

He references experiments on ‘split-brain’ patients, whose cortical hemispheres cannot directly communicate because their main link, the corpus callosum, has been severed.

In some situations, these patients seem to show a self which isn’t a unified whole, where some knowledge and experience is accessible to some parts (like perception) but not others (like speech).

Despite these obvious divisions, the patients report that they still feel like an apparently unified “sole inhabitant” of the body, as if their narrative is maintained.

Link to Philosophy Now article ‘Don Quixote and The Narrative Self’.
Link to Dennett’s article ‘The Self as a Center of Narrative Gravity’.