The hardest cut: Penfield and the fight for his sister

In 1935, world renowned neurosurgeon Wilder Penfield published three remarkable case studies describing the psychological effects of frontal lobe surgery.

They remain a fascinating insight into the link between brain and behaviour, but one case was unlike anything Penfield had tackled before.

It described the fight to save the life of his only sister.

 

Continue reading “The hardest cut: Penfield and the fight for his sister”

Is bigotry a mental illness?

The Psychiatric Times has an interesting article discussing whether bigotry should be classified as a mental illness. The author concludes no, but the discussion gives an important insight into how we decide what is a mental illness and what is not.

Most people might think that an opinion, no matter how disagreeable, shouldn’t get someone diagnosed with a mental disorder.

The difficulty comes in choosing the criteria for deciding when someone’s mental state has gone beyond what is normal and should be considered an illness.

Generally, if a mental state is considered to cause distress or impairment, it’s considered to be a sign of mental illness.

This goes for physical illness as well. A physical difference is only considered an illness if it causes problems as a result.

However, someone who is extremely racist might genuinely suffer problems as a result of their opinions.

As we reported previously, a small group of psychiatrists are pushing for a diagnosis of ‘racist disorder’ to be included in the next revision of the diagnostic manual on this basis.

One argument to be wary of in the justification of this, or any other mental disorder, is that ‘it must exist because biological differences can be found between people thought to have the condition and those without’.

As the mind and behaviour are just a reflection of brain function, any difference, no matter how trivial (ice cream preference, for example), will have a related biological difference.

As with physical illness, biological differences in themselves can’t define an illness, because they have to be linked to what is considered serious distress or impairment in everyday life.

Biology might tell us why the difference occurs, but it can’t tell us whether the difference should be considered good or bad.

This decision is essentially a value judgement, because what counts as serious, distressing, impairing or relevant to everyday life isn’t cut-and-dried; these decisions are made on the basis of a consensus of opinions.

In some cases, such as cancer, it’s easy, because everyone agrees that an early painful death is bad.

In other cases, particularly for mental illnesses, the issues can be a lot less straightforward because there are few obvious and direct effects of mental states.

These issues ask us to question what we consider an illness and highlight that the decision is based as much on social considerations and context as on the science of biology.

The Psychiatric Times article tackles exactly these sorts of issues in its discussion of bigotry, and is a great guide to the philosophical issues involved in classifying mental disorder.

If you want to explore further, the Stanford Encyclopedia of Philosophy has a great entry on mental illness that tackles many of the conceptual difficulties.

Link to Psychiatric Times article ‘Is bigotry a mental illness?’
Link to Stanford Encyclopedia of Philosophy entry on mental illness.

Kidman new face of brain game, will it sharpen the mind?

As a sure sign that cognitive improvement games have gone mainstream, Nicole Kidman has been announced as the new face of Nintendo’s latest ‘brain training’ title.

The idea that mental training will actually help boost your mental skills is relatively new.

It was traditionally thought that the mind and brain just start losing their edge after young adulthood and your best hope was to learn to use your remaining resources more effectively as you age.

However, studies started to appear in the late 1990s suggesting that practicing certain tasks could act as a sort of ‘mental workout’, actually improving mental abilities directly in people with disorders like Alzheimer’s disease and schizophrenia.

Most people weren’t fully convinced of the benefits in healthy older people until a key study was published last year in the Journal of the American Medical Association that showed modest but reliable improvements, even after five years.

The effects were typically small (often too small to be picked up without standard tests), but interestingly, the training also had a knock-on effect on the participants’ ability to look after themselves effectively on a day-to-day basis.

It seems that cognitive training may have a stronger effect in people with mental impairments. A recent review of 17 studies found a positive effect on mental abilities, everyday activities and mood in people with Alzheimer’s.

However, as far as I know, no controlled trials have ever been published on any off-the-shelf ‘brain training’ game, including Nintendo’s. You’d guess from the medical literature that they might have a similar effect, but it’s yet to be shown for sure.

Link to BBC News article ‘Kidman to be new face of Nintendo’.
Link to JAMA article ‘Long-term Effects of Cognitive Training…’

Formula 1 and Iraqi psychiatry on AITM new series

A new series of BBC Radio 4’s All in the Mind has just kicked off with the first programme investigating the psychology of Formula 1 drivers and including an interview with an Iraqi psychiatrist involved in rebuilding the country’s mental health services.

The programme talks to Jenson Button, Honda’s top driver; Tony Lycholat, Head of Human Performance at Honda; and Dr Kerry Spackman, a neuroscientist who is a consultant to the McLaren team.

In relation to mental health in Iraq, Dr Sabah Sadik is interviewed about his role as National Advisor for Mental Health to the Iraqi Ministry of Health.

The Iraqi mental health system has virtually collapsed since the invasion in 2003, and as recently reported by the Washington Post, the conflict has left intense psychological scars on many of the country’s children.

Link to first in the new series of BBC All in the Mind.

Psychiatrists top list of drug maker gift recipients

The New York Times continues its theme of investigating psychiatry and mental health with an article noting that US psychiatrists receive drug company ‘gifts’ worth the largest amount among all the medical specialities.

The data is only from two states, because they are the only ones which have gone public with their records of payments to doctors.

The practice is widespread and usually doesn’t take the form of direct cash payments, but instead funds everything from trips to conferences (which are often little more than marketing presentations in luxurious holiday destinations), to expensive meals and outings, to footing the bill for medical school events and symposiums.

The extent of the funding is quite eye-opening: the article reports that the average payment to each psychiatrist in Vermont last year was over $45,000.

Vermont officials disclosed Tuesday that drug company payments to psychiatrists in the state more than doubled last year, to an average of $45,692 each from $20,835 in 2005. Antipsychotic medicines are among the largest expenses for the state’s Medicaid program.

Over all last year, drug makers spent $2.25 million on marketing payments, fees and travel expenses to Vermont doctors, hospitals and universities, a 2.3 percent increase over the prior year, the state said.

The number most likely represents a small fraction of drug makers’ total marketing expenditures to doctors since it does not include the costs of free drug samples or the salaries of sales representatives and their staff members. According to their income statements, drug makers generally spend twice as much to market drugs as they do to research them.

The state of psychiatric drug marketing is shocking. It’s gone beyond the point of promotion to what seems to be little more than outright bribery.

As you might expect, this practice has a strong and significant effect on the prescribing behaviour and attitudes of doctors, when medical decisions should be taken on the best empirical evidence rather than on marketing information provided by commercial vendors.

UPDATE: An important clarification from Doctor X, taken from the comments:

While I am concerned about the influence of big pharma on psychiatry, I was taken aback by the figures presented in the Times story. I did a little checking and found that the Times article grossly misrepresented the facts as presented in the original Vermont report. The $45,000 per year figure is for the top 11 psychiatrists who are recipients of pharma money. The report does not indicate the average or median for psychiatrists across the state, but extrapolating from the report figures it looks like $1000.00 per year is probably more typical and closer to the median figure for all psychiatrists. The mean is probably in the neighborhood of $4,000 per psychiatrist, a figure that is probably skewed upward by a heavily lopsided distribution of money and fees paid to top recipients.

Further explanation here.
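The statistical point in the correction, that a heavily lopsided distribution drags the mean well above the median, can be sketched in a few lines of Python. The figures below are entirely hypothetical, chosen only to mirror the shape of the distribution Doctor X describes, not the actual Vermont data:

```python
# Hypothetical payments: a handful of top recipients get large sums
# while most psychiatrists receive little, so the mean far exceeds
# the median. These numbers are illustrative, not the real figures.
payments = [50_000] * 11 + [1_000] * 189  # 11 top recipients, 189 typical

mean = sum(payments) / len(payments)
median = sorted(payments)[len(payments) // 2]

print(f"mean:   ${mean:,.0f}")    # pulled upward by the top recipients
print(f"median: ${median:,.0f}")  # what a 'typical' recipient gets
```

With a distribution like this, the mean lands in the low thousands while the median stays at the typical payment, which is why quoting the top recipients’ average as if it were the overall average is so misleading.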

Link to NYT article ‘Psychiatrists Top List in Drug Maker Gifts’.

Enough about you doctor, what about me?

The New York Times reports on a new study that examined how doctors disclose information about themselves during patient consultations. The study found that disclosures are usually for the benefit of the doctor and rarely help the patient.

The study recorded 113 doctor-patient interactions and analysed the conversation for themes, timing, effect and number of self-disclosures.

Self-disclosure is usually specifically covered in clinical training and, if done carefully, is thought to enhance the relationship with the patient and make them feel more at ease.

In this case, the research team found that none of the self-disclosures were primarily focused on patient concerns and only 4% were useful, providing education, support, explanation, or acknowledgment, or prompting some indication from the patient that it had been helpful.

The study also contains a few transcripts, including this gem:

Physician: No partners recently?

Patient: I was dating for a while and that one just didn’t work out. . . . about a year ago.

Physician: So you’re single now.

Patient: Yeah. It’s all right.

Physician: [laughing] It gets tough. I’m single as well. I don’t know. We’re not at the right age to be dating, I guess. So, let’s see. No trouble urinating or anything like that?

Echoing a previous study, the researchers also found that the longer the doctor talked about themselves, the less likely the disclosure was to be useful.

We tend to think of medical diagnosis as a scientific process, but much of it relies on conversation: with patients, to get their experience of symptoms, and with colleagues, to get their opinions and advice. In other words, it relies as much on negotiation as on diagnostic tests.

Another key element is how the doctor transforms the patient’s personal problem into a medical one, so he or she can apply medical knowledge and problem-solving techniques to it.

As found by a key study in medical sociology, doctors use various non-scientific strategies to interpret the objective medical symptoms while making a diagnosis.

When medicine is discussed as ‘part art, part science’, the art lies in how doctors interact with their patients and interpret their concerns, which seems to be just as important as medical tests.

Link to NYT article ‘Study Says Chatty Doctors Forget Patients’.
Link to abstract of study.

Harnessing humans for subconscious computing

Technology Review has an article on using humans as part of a digital face recognition system. Uniquely, you don’t have to take part in any deliberate recognition: the system uses electrical readings to automatically measure the response of the brain, even if you’re not aware of it.

The system, developed by Microsoft Research, takes advantage of the fact that when we see something we recognise as a face, a specific electrical signal is generated by face-perception brain activity that can be picked up by electrodes.

Crucially, this brain activity happens automatically; we don’t have to make a special effort.

Last year, I wrote an article entitled ‘Hijacking Intelligence’, noting that software is increasingly being designed to use humans as ‘biological subroutines’ for the things computers find most difficult.

Labelling pictures is one such task – it’s something humans find trivial, computers find difficult, and it’s needed in large numbers to create an index for image searches.

To get round this problem, Google designed an online game that involved labelling pictures. Humans play for fun, while Google get the benefit of your intelligence for their database.

This new system takes it a step further, as you don’t have to be doing anything related for it to take advantage of your ‘mental work’.

For example, a picture could flash up every time you hit save on a word processor, or every time you look at a certain website.

Each time your brain signals that you’ve seen a face, the system reads your recognition activity and sends it back to the main database to classify the image.
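As a rough sketch of how the aggregation step might work, each image could be shown during several viewings and labelled a face only if the averaged detection signal crosses a threshold. Everything here, the function names, scores and threshold alike, is a hypothetical illustration, not Microsoft’s actual system:

```python
from collections import defaultdict

def label_images(readings, threshold=0.5):
    """Aggregate per-viewing face-detection scores (0.0-1.0) into labels.

    `readings` is a list of (image_id, score) pairs, one per viewing.
    An image is labelled a face if its mean score exceeds `threshold`.
    This is a hypothetical sketch, not the real pipeline.
    """
    scores = defaultdict(list)
    for image_id, score in readings:
        scores[image_id].append(score)
    return {img: sum(s) / len(s) > threshold for img, s in scores.items()}

# Three noisy viewings per image: single brain readings are unreliable,
# but averaging across viewings recovers a usable label.
readings = [("img1", 0.9), ("img1", 0.7), ("img1", 0.6),
            ("img2", 0.2), ("img2", 0.4), ("img2", 0.1)]
print(label_images(readings))  # {'img1': True, 'img2': False}
```

Averaging across viewings is one simple way to compensate for the noisiness of individual EEG readings.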

This might be one way of sifting through security images to see which should be inspected in more detail.

As a substitute for advertising, maybe you’d be offered free internet access if you had the system installed. Your brain would pay the bills.

While the system has only been developed as a proof-of-concept, it’s interesting, if a little scary, to speculate how technology will harness our mental skills, even when we’re not aware of it.

Link to Technology Review article ‘Human-Aided Computing’.

Tooth marks reveal childhood trauma

Childhood stress can interfere with the development of the teeth to the extent that a traumatic experience leaves a recognisable line in the tooth enamel that remains as a record of past traumas.

I discovered this when reading about a study published in the Annals of the New York Academy of Sciences [pdf] that used these lines to compare the number of childhood traumatic experiences that occurred in people diagnosed with schizophrenia and healthy controls.

New approaches to the problem of estimating stress during early brain development are required. In this regard, human enamel has promise as accessible repositories of indelible information on stress between gestation and the age of 13. Stressful experiences induce long-term activation of the sympatho-adrenal system, slowing of tropic [growth-related] parasympathetic functions, and they then induce disrupted secretion of the enamel matrix.

During the brain development (in infancy, childhood and preadolescence), ameloblast activity in human enamel is slowed during 1 to 2 days of extreme stress, and the segment of enamel rods is smaller and often misshapen, making a particular dark line seen by the use of a microscope (we referred this line to Pathological Stress Line, PSL in short). Retzius reported that this line is incremental lines reflecting the layered apposition of enamel during amelogenesis (Retzius, 1937), and after that this line is termed the Retzius line. The line is conceptually akin to tree rings which are markers of environmental adversity in the tree’s life.

Schizophrenia was once thought to be largely caused by genetic factors, but in the last decade a number of studies have shown that childhood trauma contributes to the chance of developing the disorder.

One difficulty with this type of research is that it often relies on people remembering back to their childhood after the onset of psychosis, which could mean that the memories aren’t perfectly reliable in some cases.

Stress-induced lines in tooth enamel are one way of looking at the link between trauma and schizophrenia that doesn’t rely on potentially hazy memories of the past.

Link to abstract of study.
pdf of scientific paper.

Why don’t ethics professors behave better?

If you spent your whole life trying to work out how to be ethical, you would think you’d be more moral in everyday life. Philosopher Eric Schwitzgebel has found that this isn’t the case, and asks the question “Why don’t ethics professors behave better than they do?”.

Initially, this was based on a hunch, but Schwitzgebel, with colleague Joshua Rust, has begun to do research into the question. They’ve found some surprising results.

At a recent philosophy conference, he offered chocolate to anyone who filled in a questionnaire asking whether ethicists behaved better than other philosophers.

It wasn’t long before an ethics professor stole a chocolate without filling in a questionnaire. (This reminds me of a famous psychology study that found that trainee priests on their way to give a talk on ‘The Good Samaritan’ mostly ignored someone in need if they were in a hurry!).

When the results came in, ethicists rated other ethicists as behaving better, but other philosophers rated them as no more moral than everyone else.

In another study, Schwitzgebel investigated whether people interested in moral issues are more likely to steal books. By looking at library records, he’s found that books on ethics are more likely to be stolen than other philosophy books.

So why aren’t ethics professors more ethical than the rest of us? Schwitzgebel wonders whether it is because careful analysis requires a detached reasoning style that is quite different from emotional engagement with moral issues, and that detachment may not make someone feel compelled to act more ethically.

Ominously, he notes that “More and more, I’m finding myself inclined to think that philosophical reflection about ethical issues is, on average, morally useless”.

It is interesting that there are similar problems in other professions. For example, doctors don’t follow health advice adequately and are much more likely to suffer from mental illness.

As an aside, Schwitzgebel has made all his papers and publications available online and has a fantastic blog that is well worth keeping tabs on.

Link to Schwitzgebel’s articles on ‘The problem with ethics professors’.
Link to Schwitzgebel’s homepage with publications and blog links.

Law, ethics, brain scans and mind reading

ABC Radio National’s All in the Mind has just broadcast the first of a two-part series on using neuroscience to read the mind.

The first programme investigates whether neuroscience can tell us anything about criminality and violence, and what role brain-based evidence will play in the court room.

The programme talks to many of the delegates from last April’s The Law and Ethics of Brain Scanning conference which was one of the first to consider the legal issues of brain scans in detail.

All of the conference talks have been put online as mp3 files so you can listen to the talks yourself if you want to hear more.

In the meantime, this edition of All in the Mind covers the key issues, and next week’s will investigate some more (as yet undisclosed) aspects of so-called ‘mind-reading’ technology.

Link to AITM on ‘Mind Reading’.
Link to The Law and Ethics of Brain Scanning conference audio.

Encoding memory: from a free issue of SciAm

To celebrate the launch of a redesign, Scientific American have made the July edition freely available online as a pdf file. The cover story examines the search for how the brain encodes memories.

The issue is only available online until the end of June (one more week!) so you’ll need to be quick, but it’s a copy of the entire issue.

On a related note, the June 25th podcast is on the neurology of boxing-related brain damage.

pdf of July 2007 Scientific American (via Neurophilosopher).
Link to July edition table of contents.

Oldest children have highest IQ: a family effect?

Science has just published a study of almost a quarter of a million people providing strong evidence that oldest children have slightly higher IQs and, most interestingly, that this isn’t a biological effect: it’s likely down to family environment and upbringing.

In fact, first-born children are known to have a number of psychological differences. For example, they are less likely to be gay, show differences in autistic-like traits, and are typically less severely affected by schizophrenia if it occurs.

These differences have often been explained by a theory that the mother’s immune system adapts during the first pregnancy, is not fully attuned to later pregnancies, and so affects the brain development of subsequent children.

To test this idea, the Science study looked at the records of almost 250,000 Norwegian army recruits, all of whom had routine IQ tests and full medical and family histories.

It turned out, as has been found many times before, that first-born children had higher IQs by about 3 points on average.

Crucially, it also turned out that second-born children whose older sibling had died young had higher IQs too.

In other words, although they were second-born biologically, they were brought up as the oldest child after their sibling passed away.

Being brought up as the oldest child seems to be the crucial factor: family rank, not birth order, affects IQ. This suggests that the immune system theory is unlikely to explain this effect.

This has generated a great deal of discussion and many parents are interested in whether they can provide the ‘first child advantage’ to their younger children as well.

The New York Times featured the study and, after all the interest it generated, has just published a follow-up article discussing the role of family dynamics in the development of intelligence.

Some psychologists are suggesting that the effect might be because older children get the chance to coach the junior family members which may help them consolidate knowledge and provide practice in manipulating information.

It’s also interesting that a recent study on birth-order in Thai medical students found exactly the reverse pattern. Younger siblings were found to be more intelligent and have more positive personality factors.

All of these studies suggest that culture and environment are crucial factors during childhood, both for mental and emotional development.

Link to abstract of Science study (thanks Laurie!).
Link to NYT write-up.
Link to NYT on intelligence and family dynamics.

Mind the gap: science and the insanity defence

Reason Magazine has an excellent article on why our knowledge about the psychology and neuroscience of mental illness doesn’t really help when trying to argue for or against the insanity defence in court.

The insanity defence concerns whether a person accused of a crime should be considered legally responsible.

Some of the first legal criteria for judging someone ‘not guilty by reason of insanity’ are the M’Naghten Rules, created after Daniel M’Naghten tried to assassinate the British Prime Minister Robert Peel in 1843.

He ended up killing Peel’s secretary, but when caught was found to be suffering from paranoid delusions and it was judged that his crime was motivated by his unsound mind and he didn’t understand the ‘nature and quality’ of what he did.

Most Commonwealth law in this area is still based on these criteria, and most US law was too, until shortly after John Hinckley shot US President Ronald Reagan and was found not guilty by reason of insanity.

This caused a backlash against the insanity defence and many US states have variously abolished it or made it much more difficult to prove (near impossible in some cases).

The Reason Magazine article examines why, when it does arise, the evidence is largely based on descriptions of the person’s mental state and why recent advances in understanding mental illness don’t really help very much.

One of the main reasons is that studies that find differences between people with mental illness and those without do so at the group level. The same differences might not be present when comparing any two individuals.

In other words, on average, there are mind and brain differences between people affected by mental disorders and unaffected people, but the individual variation is so great that you couldn’t reliably say it would be present in one particular person.

As these criminal trials are focused on the actions of one individual, much of the objective science goes out the window, because it can’t reliably indicate a diagnosis, state of mind or reasoning ability at the individual level.

This means that the most relevant evidence is usually the testimony of a psychiatrist or psychologist who is giving his or her clinical, descriptive judgement of the person’s state of mind.

The Reason Magazine article examines what sort of dilemmas this causes, and considers how developments in psychology and neuroscience are likely to impact on the legal judgement of insanity.

It’s an excellent guide to some of the key issues and the difficulties of making legal judgements on subjective states of mind.

Link to article ‘You Can’t See Why on an fMRI’.

Personalised drugs

The New York Times has an interesting opinion piece on using genetic tests to determine which psychiatric drugs will be most effective and least problematic.

It is becoming increasingly clear that people with certain genes, or combinations of genes, react to drugs differently.

These could be genes related to aspects of brain function, or, just as importantly, liver function, because many psychiatric drugs are broken down by enzymes in the liver.

For example, the enzyme CYP2D6 metabolises a whole range of psychiatric drugs, including antidepressants and antipsychotics.

Some people have versions of the CYP2D6 gene that produce much less of the enzyme, so they break these drugs down at a much slower rate.

This means the same dose of the drug in these people will have a much stronger effect, which can lead to increased side-effects.
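The dosing logic amounts to a simple lookup. The metaboliser categories below are standard pharmacogenetic terms, but the dose factors are made-up placeholders for illustration, not clinical guidance:

```python
# Hypothetical dose adjustment by CYP2D6 metaboliser status.
# Category names are standard pharmacogenetics; the factors are
# illustrative placeholders, NOT clinical dosing guidance.
DOSE_FACTOR = {
    "poor": 0.5,          # little enzyme: drug clears slowly, lower dose
    "intermediate": 0.75,
    "extensive": 1.0,     # typical metaboliser: standard dose
    "ultrarapid": 1.5,    # extra enzyme: drug clears quickly, higher dose
}

def adjusted_dose(standard_dose_mg, metaboliser_status):
    """Scale a standard dose by a (hypothetical) metaboliser factor."""
    return standard_dose_mg * DOSE_FACTOR[metaboliser_status]

print(adjusted_dose(20, "poor"))       # 10.0
print(adjusted_dose(20, "extensive"))  # 20.0
```

In practice a genetic test would supply the metaboliser status, and the prescriber would pick both drug and dose with it in mind.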

There are many more examples of how genes influence the effects of drugs, and doctors would ideally like to be able to test people beforehand to see which drugs might be better.

Like most mass-market industries, the drug industry prefers a ‘one size fits all’ approach, advertising their pill as suitable for anyone with a particular condition.

The idea of genetically testing people for drug suitability is causing them a bit of a headache at the moment, as they’re desperately trying to think of ways to make money out of it.

The New York Times article is quite positive about the effect this will have on the relationship between medicine and industry:

Aside from the potential to transform clinical psychiatric practice, these new developments will surely change the relationship between doctors and the drug industry and between the industry and the public. Direct-to-consumer advertising will become nearly irrelevant because the drugs will no longer be interchangeable, but will be prescribed based on an individual’s biological profile. Likewise, doctors will have little reason to meet with drug company representatives because they won’t be able to give doctors the single most important piece of information: which drug for which patient. For that doctors will need a genetic test, not a salesman.

Of course, it could just lead to people with common genes being prescribed cheap, widely available treatments, while those with rarer genetic profiles pay more for expensive, niche medicines.

Almost certainly, it will lead to the drug industry getting into the genetic testing market, probably with as many advantages and drawbacks as their current marketing strategies.

Link to NYT on ‘On the Horizon, Personalized Depression Drugs’.

Are we computers, or are computers us?

Philosopher Dr Pete Mandik has published an interesting thought on his blog that questions whether the common ‘computer metaphor’ used to describe the human mind is really a metaphor at all.

Cognitive psychology typically creates models of the mind based on information processing theories.

In other words, the mind and brain are considered to do their work by manipulating and transforming information, either from the senses, or from other parts in the system.

It is therefore common for scientists to talk about the mind and brain in computer metaphors, as if they are information processing machines.

Mandik questions whether this is really a metaphor at all:

There is a sense of the verb “compute” whereby many, if not all, people compute insofar as they calculate or figure stuff out. Insofar as they literally compute, they literally are computers. Further, the use of “compute”, “computing”, and “computer” as applied to non-human machines is derivative of the use as applied to humans.

It strikes me as a bit odd, then, to say that calling people or their minds “computational” is something metaphorical.

Indeed, the term ‘computer’ was originally a name for a person who did mathematical calculations for a company.

Calculating machines were then given the supposedly metaphorical name ‘computers’ as they did equivalent work to the human employees.

Mandik questions whether we should think of any of these examples as genuine metaphors, since they’re describing the same operations.

However, a key issue for cognitive science is whether there are reasonable limits in describing mind, brain and behaviour in mathematical terms.

The fact that we can adequately describe some things mathematically doesn’t solve this problem, because there may be things that are impossible to describe in this way which we simply don’t know about.

Often though, we just assume that we haven’t found the right maths yet, when the reality may be far more complex.

Link to Pete Mandik post with great discussion.