CIA guide to optimised thinking

The CIA have released the full text of a book on the psychology of intelligence analysis. While aimed at the CIA’s analysts, it’s also a great general guide on how to understand complex situations and avoid our natural cognitive biases in reasoning.

I’ve not read it all, but it aims not only to give the reader an understanding of the limitations of our reasoning, but also to show how to overcome them when thinking about tricky problems.

A central focus of this book is to illuminate the role of the observer in determining what is observed and how it is interpreted. People construct their own version of “reality” on the basis of information provided by the senses, but this sensory input is mediated by complex mental processes that determine which information is attended to, how it is organized, and the meaning attributed to it. What people perceive, how readily they perceive it, and how they process this information after receiving it are all strongly influenced by past experience, education, cultural values, role requirements, and organizational norms, as well as by the specifics of the information received.

The chapters on cognitive biases seem particularly good, and the book consistently grounds the abstract concepts in accessible examples.

It’s interesting that patients who undertake cognitive behavioural therapy (CBT) to help with emotional or psychiatric difficulties will learn how to identify and avoid many of these exact same biases.

However, in the clinical situation the idea is that mood or emotion is in a pathological feedback loop which makes biases more likely (e.g. anxious people will tend to focus on threatening things), which in turn reinforces the emotional state.

The CIA book doesn’t seem to mention emotion or mood at all, despite the fact that the same effects are known to occur in all of us, even if they don’t get to the level of illness or impairment.

Secret service analysts must surely work in high-emotion environments (and the fact that the UK’s secret services regularly advertise for clinical psychologists seems to bear this out), so this would seem to be a crucial aspect not covered by this otherwise very comprehensive guide.

Link to full text of CIA book ‘Psychology of Intelligence Analysis’.

Cognitive biases as public policy

The LA Times has an interesting article on whether the sorts of decision-making biases identified by behavioural economists should be used to promote public policy objectives.

The idea is based on the fact that we are more likely to choose certain options depending on how they’re presented. In fact, supermarkets take advantage of this in how they lay out their products to maximise the chances of us buying the premium brands.

The LA Times piece argues that this could be used for government objectives, such as increasing the number of people who take out pensions, while still maintaining the freedom to choose and without using explicit incentives.

The libertarian aspect of the approach lies in the straightforward insistence that, in general, people should be free to do what they like. They should be permitted to opt out of arrangements they dislike, and even make a mess of their lives if they want to. The paternalistic aspect acknowledges that it is legitimate for choice architects to try to influence people’s behavior in order to make their lives longer, healthier and better.

Private and public institutions have many opportunities to provide free choice while also taking real steps to improve people’s lives.

* If we want to increase savings by workers, we could ask employers to adopt this simple strategy: Instead of asking workers to elect to participate in a 401(k) plan, assume they want to participate and enroll them automatically unless they specifically choose otherwise.

The article gives several more examples and defends its use of the term ‘libertarian paternalism’ for the idea.

I’m left wondering whether governments shouldn’t be adopting exactly what the commercial sector have been doing for years, or whether we’re naive to think political choice engineering isn’t being used already.
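
To get a feel for how much work a default can do, here’s a minimal simulation sketch in Python (the numbers are invented for illustration, not taken from the article): assume most workers stick with whatever the default is, and only a minority overcome inertia and act on their actual preference.

```python
import random

def enrollment_rate(default_enrolled, n=10_000, p_act=0.2, p_prefer=0.7, seed=0):
    """Toy model of status-quo bias in pension enrollment.

    default_enrolled: whether workers start enrolled (opt-out) or not (opt-in).
    p_act:    chance a worker overcomes inertia and acts on their preference.
    p_prefer: fraction of workers who, if they acted, would choose to enroll.
    All parameters are invented for illustration.
    """
    rng = random.Random(seed)
    enrolled = 0
    for _ in range(n):
        if rng.random() < p_act:          # acts on their true preference
            enrolled += rng.random() < p_prefer
        else:                             # sticks with the default
            enrolled += default_enrolled
    return enrolled / n

print(f"opt-in default:  {enrollment_rate(default_enrolled=False):.0%} enrolled")
print(f"opt-out default: {enrollment_rate(default_enrolled=True):.0%} enrolled")
```

With these made-up numbers, participation jumps from roughly 14% to roughly 94%, even though every worker’s preferences and freedom to switch are identical in both conditions.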

Link to LA Times article ‘Designing better choices’.

The psychology of magical thoughts

Psychology Today has a great article that covers the length and breadth of magical thinking – the tendency to see patterns and causality where none exists.

Magical thinking is described in a number of ways. Superstition is the most common, where we assume rituals will somehow affect the future despite having no causal connection to what we want to change.

Apophenia or pareidolia describe the effect where we see meaningful information where none was intended. The Fortean Times has a wonderful collection of photographs that depict ‘faces’ or other forms in clouds, trees, rock formations or even food.

Superstition and apophenia are an interesting contrast, because superstition can be more easily rejected than apophenia. Our perceptual systems are just set up to detect patterns, and so the perception of ‘faces’ is unavoidable.

Often we don’t even register our wacky beliefs. Seeing causality in coincidence can happen even before we have a chance to think about it; the misfiring is sometimes perceptual rather than rational. “Consider what happens when you honk your horn, and just at that moment a streetlight goes out,” observes Brian Scholl, director of Yale’s Perception and Cognition Laboratory. “You may never for a moment believe that your honk caused the light to go out, but you will irresistibly perceive that causal relation. The fact remains that our visual systems refuse to believe in coincidences.” Our overeager eyes, in effect, lay the groundwork for more detailed superstitious ideation. And it turns out that no matter how rational people consider themselves, if they place a high value on hunches they are hard-pressed to hit a baby’s photo on a dartboard. On some level they’re equating image with reality. Even our aim falls prey to intuition.

The article looks at seven types of magical thinking, and discusses some of the key psychology experiments that have shown us how magical thinking is influenced.

One of my favourites is an experiment by psychologist Emily Pronin, who found that people would readily attribute another person’s headache to the pins they themselves had stuck in a ‘voodoo doll’.

Interestingly, the effect was much stronger when the other person (actually a stooge) was deliberately annoying. The irritating actor increased the likelihood of participants’ wishing them harm, and so increased the perceived connection between their ‘voodoo doll’ pin-sticking and the actor’s feigned headache.

Link to Psychology Today article on magical thinking.

Predictably irrational, variably dishonest

Behavioural economist Dan Ariely was the guest on the latest edition of ABC Radio National’s All in the Mind where he discusses why we’re so bad at predicting what’s best for us, and why honesty is a shifty behaviour.

As well as being a researcher, Ariely is the author of a psychology book called Predictably Irrational, which is currently riding high in the book charts.

It’s worth catching the mp3 version of the programme, as it’s slightly extended, and I found the last part, where Ariely talks about honesty, the most interesting.

Using various experimental conditions that give participants varying degrees of room for dishonesty, Ariely notes that people tend to be dishonest enough to give themselves an advantage, but not so dishonest that they start to feel bad about themselves.

In other words, he’s suggesting that honesty is a cognitive dissonance style reasoning process, balancing our desire for personal gain against our willingness to believe in ourselves as a ‘good person’ – an idea explored further in a forthcoming paper [pdf] by Nina Mazar and Dan Ariely.
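
One crude way to picture that balancing act (my own toy formalisation, not Mazar and Ariely’s actual model) is as a utility function in which the gains from cheating are linear but the cost to our self-image grows much faster:

```python
# Toy model: choose the level of cheating c (0 = fully honest,
# 1 = maximally dishonest) that maximises
#     utility(c) = gain_per_unit * c - k * c**2
# where the quadratic term stands in for the escalating cost to our
# self-image as a 'good person'. Parameters are invented for illustration.

def utility(c, gain_per_unit=1.0, k=5.0):
    return gain_per_unit * c - k * c ** 2

levels = [i / 100 for i in range(101)]
best = max(levels, key=utility)
print(f"optimal cheating level: {best:.2f}")  # small but non-zero (0.10 here)
```

The only point of the sketch is the shape of the answer: both complete honesty and maximal cheating lose out to a little dishonesty, which is exactly the pattern Ariely describes.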

If you’re interested in a good overview of the psychology of honesty and deception, I’ve just read a fantastic paper [pdf] by the same pair, which is fascinating as much for its insights into what influences our level of honesty as for its recommendations about applying the research to encourage people to be more honest.

It notes that getting people to focus on themselves increases honesty, as does getting them to focus on moral ideas, such as the Ten Commandments.

In their experiment, participants were told to write down either as many of the Ten Commandments as they could remember (increased self-awareness of honesty) or the names of ten books that they read in high school (control). They had two minutes for this task before they moved on to an ostensibly separate task: the math test. The task in the math test was to search for number combinations that added up to exactly ten. There were 20 questions, and the duration of the experiment was restricted to five minutes. After the time was up, students were asked to recycle the test form they worked on and indicate on a separate collection slip how many questions they solved correctly. For each correctly solved question, they were paid $.50.

The results showed that students who were made to think about the Ten Commandments claimed to have solved fewer questions than those in the control. Moreover, the reduction of dishonesty in this condition was such that the declared performance was indistinguishable from another group whose responses were checked by an external examiner. This suggests that the higher self-awareness in this case was powerful enough to diminish dishonesty completely.
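
As an aside, the maths task itself is easy to mock up. Here’s a rough reconstruction in Python, based only on the description above (the published version, if I remember rightly, used grids of decimal numbers containing a pair summing to exactly ten, but treat the details here as my assumptions):

```python
import itertools
import random

def make_question(rng, size=12):
    """One 'adds up to ten' question: a set of decimals guaranteed to
    contain at least one pair summing to exactly 10. A rough
    reconstruction, not the published stimuli."""
    nums = [round(rng.uniform(0.01, 9.99), 2) for _ in range(size - 1)]
    nums.append(round(10 - rng.choice(nums), 2))  # plant a valid pair
    rng.shuffle(nums)
    return nums

def solve(nums):
    """Return a pair summing to 10, as a participant would search for it."""
    return next(((a, b) for a, b in itertools.combinations(nums, 2)
                 if round(a + b, 2) == 10), None)

rng = random.Random(7)
question = make_question(rng)
print(question)
print("answer:", solve(question), "-> $0.50 if reported as solved")
```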

However, I wonder whether the effect of focusing on the Ten Commandments was due to their moral or supernatural associations.

I am reminded of Eric Schwitzgebel’s ongoing project on why ethics professors, who think about moral issues a lot, are no more moral (and perhaps less!) than other people, and a study [pdf] by psychologist Jesse Bering that found that simply telling participants that the lab was haunted increased honesty in a computer task.

Link to Dan Ariely on All in the Mind.
pdf of Mazar and Ariely’s paper on the psychology of dishonesty.

Beyond belief

Salon has a provocative article by neurologist Robert Burton who discusses what the neuroscience of belief means for how we understand the world, drawn from his new book, On Being Certain.

We’re going to be posting an interview with Burton on Mind Hacks in the near future, but the Salon article should give you a flavour of some of his thoughts on the brain and belief.

What’s most curious about work on the neuropsychology of belief is that it barely touches upon memory research, where many of these questions have been under the microscope for years.

I’m a huge fan of the work of Israeli psychologist Asher Koriat who has done some absolutely stunning work on the control of memory.

This may seem a relatively dry topic, but think for a minute about how you use your memory.

For example, you’ve almost certainly had the experience where you know that you know something but can’t remember the details, or that you know you recognise something, but can’t remember the occasion when you encountered it before.

Also, we seem able to judge when we’ve remembered something to our satisfaction, but this is quite a remarkable feat in itself. Think about how we could possibly do this.

You could say we know because the memory matches other memories we have in mind, but then these are subject to the same problem – how do we know that we’ve remembered them correctly?

In other words, there must be another system at work, and one of the primary components of this is what psychologists call the ‘feeling of knowing’ that communicates between our unconscious pool of stored information and our conscious sense of how successfully our memory is operating.

Koriat discussed these processes in a 2000 paper [pdf] that was a revelation for me when I read it. It convinced me of the importance of these wormhole-like processes that connect the conscious and unconscious mind.

In his article, Burton draws out the social implications of the science of belief, suggesting we should be a little more humble when we state what we ‘know’.

Link to Salon article ‘The certainty epidemic’.
pdf of Koriat’s 2000 paper on the ‘feeling of knowing’.

Behavioural Obamanomics

Theories are made great by those whom they inspire. Perhaps then, it is not surprising that the fresh new face of the US presidential race has been inspired by behavioural economics, one of the fresh new faces of cognitive science.

The New Republic magazine has an article on how the Obama campaign have adopted behavioural economics – the science of how people actually reason about money, as opposed to how they should – as the mainstay of their economic policy.

Unsurprisingly, The New Republic, generally a centre-left publication, hold out great hope for the partnership of this new science and an Obama government.

You can find subtle evidence of this influence across numerous Obama proposals. For example, one key behavioral finding is that people often fail to set aside money for retirement even when their employers offer generous 401(k) plans. If, on the other hand, you automatically enroll workers in 401(k)s but allow them to opt out, most stick with it. Obama’s savings plan exploits this so-called “status quo” bias.

What is more interesting, though, is that cognitive science is starting to make inroads into policy development outside the traditional area of defence (where psychology, and more recently neuroscience, have long been key in driving spending).

Link to The New Republic article ‘The Audacity of Data’.
Link to intro to behavioural economics (both via MeFi).

Sampling risk and judging personal danger

We live in a dangerous world, and we’ve learnt to judge risk as a way of avoiding loss or injury. How we make this appraisal is crucial to our survival. An innovative study published in December’s Risk Analysis investigated what influences risk perception in everyday life, and showed that our retrospective estimations of risk are quite different from the judgements we make at the time.

Many studies on the psychology of risk ask people to look back on past situations or judge risk for hypothetical or lab-based situations.

The trouble is, imaginary or lab-based situations may not be a good match for real life (after all, what’s really the danger?) and our perceptions when looking back might be influenced by the outcome – perhaps we judge things as less risky if they turned out OK in the end.

One way of trying to get a handle on how people feel during the flow of everyday life is to use a method called ‘experience sampling’.

This usually involves giving participants a pager or an electronic diary, or just sending texts to their mobile phone.

Participants are alerted at random times during the day by whatever method is chosen and they’re asked to rate how they feel there and then, or as soon as safely possible (I discussed how this has been applied to psychotic experiences in a BPSRD article in 2006).

In this study, participants were asked to rate their mood, what activity they were doing, the worst consequence that could occur, how severe that consequence would be, how likely it was to happen, and what the risk to their well-being would be.
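
For a sense of how simple the machinery of experience sampling can be, here’s a minimal sketch in Python. The questions are paraphrased from the description above, but the timing scheme (eight random prompts during waking hours) is my assumption, since the paper’s exact schedule isn’t given here:

```python
import random

QUESTIONS = [
    "How is your mood right now?",
    "What activity are you doing?",
    "What is the worst consequence that could occur?",
    "How severe would that consequence be?",
    "How likely is it to happen?",
    "What is the risk to your well-being?",
]

def daily_schedule(n_prompts=8, start_hour=9, end_hour=21, seed=None):
    """Pick random minutes during waking hours to page the participant."""
    rng = random.Random(seed)
    minutes = rng.sample(range(start_hour * 60, end_hour * 60), n_prompts)
    return sorted(f"{m // 60:02d}:{m % 60:02d}" for m in minutes)

for t in daily_schedule(seed=42):
    print(f"{t}: page participant and ask all {len(QUESTIONS)} questions")
```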

Generally, risks were perceived to be short term in nature and involved “loss of time or materials” related to work and “physical damage”.

Interestingly, everyone rated the severity of risk as about the same, but women were more likely to think that the worst consequence was likely to occur.

Furthermore, the better the mood of the participants (both male and female), the less risky they thought their activity was.

As an additional part of the study, participants were asked to look back and re-assess some of the situations they rated on the spot. These ratings tended to be much lower, showing that people tend to judge things to be more risky ‘in the heat of the moment’.

Both of these findings demonstrate the importance of emotion in risk judgements, suggesting that it forms another source of information, along with more calculated rational estimates.

In fact, this is one of the key ideas behind understanding anxiety disorders.

Anxiety acts as an emotional risk warning, but it can get massively ‘out of synch’ with our rational judgements, so even when we ‘know’ that (for example) the risk of air travel is smaller than the risk of driving a car, ‘in the heat of the moment’, the information from our emotions overrides this in our judgement of risk in the form of anxiety.

Of course, risk perception in itself is an important topic to understand, particularly as risk judgements are the basis of safety decisions in many professions.

Link to PubMed abstract of paper.
pdf of full-text of paper.

Beliefs about intelligence affect mental performance

I’ve just found a fascinating five-minute NPR radio report on work by psychologist Carol Dweck, which has found that children who think of intelligence as something that can change throughout life do better in school.

Dweck has been doing some fascinating work on what affects children’s academic performance.

We’ve reported on some of her earlier work, including the fact that praising children for their intelligence actually makes them perform worse in certain situations, whereas praising them for their hard work encourages them to tackle adversity when it occurs.

This NPR radio slot covers some work she published with colleagues in a freely available paper looking at the fact that children who believe that intelligence is flexible seem to do better as they “tend to emphasize ‘learning goals’ and rebound better from occasional failures”.

Dweck and her colleagues then tested the idea that if they taught children that intelligence could grow, their performance would improve. As predicted, it did.

It’s a really great example of carefully targeted cognitive science research. It’s a counter-intuitive finding that has direct practical application to improving children’s academic performance in both the long- and short-term.

It’s also a lovely example of a self-confirming belief. Children who believe intelligence is fixed are more likely to have fixed performance, whereas children who believe intelligence can grow are more likely to show performance growth.

The implications for the psychology of teachers are also interesting, because it would seem to be self-confirming for them as well. Teachers who believe that poorly performing children may have hidden potential might see them improve when they pass this belief on to the child.

Teachers who believe that poorly performing children are unlikely to change may actually limit a child’s performance if the child picks up on this and begins to believe the same.

So it might be worth testing whether teachers’ beliefs about intelligence affect their students’ performance as well.
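
To make the ‘self-confirming’ part vivid, here’s a toy simulation of my own (the parameters are invented and this is emphatically not Dweck’s model): suppose a setback prompts extra effort in a child who believes ability can grow, and disengagement in a child who believes it’s fixed, and that skill accumulates with effort.

```python
import random

def simulate(growth_mindset, rounds=40, skill=0.5, seed=1):
    """Toy feedback loop: belief -> effort after setbacks -> skill growth.

    Invented parameters for illustration; not Dweck's model.
    """
    rng = random.Random(seed)
    for _ in range(rounds):
        setback = rng.random() > skill   # lower skill means more setbacks
        if setback:
            # A 'fixed' view reads failure as a verdict and disengages;
            # a 'growth' view reads it as a cue to try harder.
            effort = 1.2 if growth_mindset else 0.4
        else:
            effort = 1.0
        skill = min(1.0, skill + 0.01 * effort)
    return skill

print(f"believes intelligence can grow: final skill {simulate(True):.2f}")
print(f"believes intelligence is fixed: final skill {simulate(False):.2f}")
```

Identical starting points and identical tasks, but the two beliefs produce diverging trajectories – which is the feedback loop the teacher version of the question would also have to grapple with.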

Link to NPR on ‘Students’ View of Intelligence Can Help Grades’.
Link to paper ‘Why do beliefs about intelligence influence learning success?’.

Kids’ letters to Santa as advertising psychology study

A completely charming study looking at how television advertising influences children by examining the toys they request in their letters to Santa Claus.

The study was led by Prof Karen Pine and has just been published in the Journal of Developmental and Behavioral Pediatrics.

The Relationship Between Television Advertising, Children’s Viewing and Their Requests to Father Christmas.

J Dev Behav Pediatr. 2007 Dec;28(6):456-61.

OBJECTIVE: Children’s letters to Father Christmas provide an opportunity to use naturalistic methods to investigate the influence of television advertising.

METHODS: This study investigates the number of toy requests in the letters of children aged between 6 and 8 (n = 98) in relation to their television viewing and the frequency of product advertisements prior to Christmas. Seventy-six hours of children’s television were sampled, containing over 2,500 advertisements for toys.

RESULTS: Children’s viewing frequency, and a preference for viewing commercial channels, were both related to their requests for advertised goods. Gender effects were also found, with girls requesting more advertised products than boys.

CONCLUSION: Exploring the children’s explicit understanding of advertising showed that children in this age group are not wholly aware of the advertisers’ intent and that, together with their good recall of advertising, this may account for their vulnerability to its persuasive messages.

Link to abstract on PubMed.

Black humour perks up the inevitable

Time magazine has a short article on an interesting finding: after thinking about their own death, participants in a psychology study were more likely to respond unconsciously in ways that suggested a boost in mood.

The study was led by psychologist Nathan DeWall and asked one group of students to think about a painful dental procedure, and another about their own death.

The participants were then asked to complete questionnaires that rated their mood. In terms of their conscious reporting, there was no difference between the groups.

However, when asked to do some simple tasks that are known to be affected by unconscious emotional biases, the group who had thought about death showed a consistently positive effect:

Students in the death-and-dying group, it turns out, had all gone to their happy place – at least in their unconscious. There was no difference in scores between the groups on the explicit tests of emotion and affect. But in the implicit tests of nonconscious emotion – the wordplay – researchers found that the students who were preoccupied with death tended to generate significantly more positive-emotion words and word matches than the dental-pain group. DeWall thinks this mental coping response kicks in immediately when confronted with a serious psychological threat. In subsequent research, he has analyzed the content of the volunteers’ death essays and found that they’re sprinkled with positive words. “When you ask people, ‘Describe the emotions that the thought of your own death arouses in you,'” says DeWall, “people will report fear and contempt, but also happiness that ‘I’m going to see my grandmother’ and joy that ‘I’m going to be with God.'”

I would like to think that this will come as welcome news to the people who protested against a funeral parlour being built near their homes because of concerns about a ‘negative psychological impact’, although, I suspect it will be of little comfort.

Experimental evidence is remarkably unconvincing to some.

It reminds me of when Tom Gilovich did an analysis of the ‘hot hand’ in professional basketball (where players who have scored several points are supposedly ‘on a run’). His study [pdf], published in the journal Cognitive Psychology, found that the effect was just the misperception of random variation.
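
The statistical point is easy to demonstrate for yourself. A minimal sketch: simulate a shooter who hits with a fixed 50% chance on every attempt, completely independent of history, and look at the streaks that appear anyway.

```python
import random

def longest_streak(n_shots=200, p_hit=0.5, seed=None):
    """Longest run of consecutive hits for a memoryless shooter."""
    rng = random.Random(seed)
    best = run = 0
    for _ in range(n_shots):
        if rng.random() < p_hit:
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best

# Median longest streak across many simulated 200-shot seasons.
streaks = sorted(longest_streak(seed=s) for s in range(1000))
print(f"typical longest hit streak: {streaks[500]} in a row")
```

With these settings the typical longest streak comes out at around seven consecutive hits – runs that look irresistibly like a ‘hot hand’ despite every shot being a coin flip.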

When asked about the research, Red Auerbach, coach of the Boston Celtics, reportedly responded “Who is this guy? So he makes a study. I couldn’t care less”.

Another example of the fly of empirical evidence being crushed against the windscreen of self-confidence. Well, at least Stephen Colbert would be proud.

Link to Time article ‘Are We Happier Facing Death?’.

Decision-making special issue in Science

This week’s Science has a special selection of papers on the psychology and neuroscience of decision making. While most of the articles are closed-access, one on how game theory and neuroscience are helping us understand social decision-making is freely available.

It is a great introduction to ‘neuroeconomics’, a field that attempts to work out how the brain supports cost-benefit type decisions.

This can be directly applied to financial decision-making, but also to other types of situations where weighing possible gains and losses is important, whether the gains and losses are in the form of money, time, social advantage or status – to name just a few.

One of the crucial discoveries of recent years is that people do not act as rational maximisers – making individual decisions on how to get the most benefit out of each choice. In fact, social influences can be huge and often lead people to reject no-risk economic gains when they feel it is socially unjustified.

This has led the field into interesting territory, both informing models of the economy, and illuminating how we make social decisions.

As part of the neuroeconomic approach, researchers have begun to investigate the psychological and neural correlates of social decisions using tasks derived from a branch of experimental economics known as Game Theory. These tasks, though beguilingly simple, require sophisticated reasoning about the motivations of other players. Recent research has combined these paradigms with a variety of neuroscientific methods in an effort to gain a more detailed picture of social decision-making. The benefits of this approach are twofold. First, neuroscience can describe important biological constraints on the processes involved, and indeed, research is revealing that many of the processes underlying complex decision-making may overlap with more fundamental brain mechanisms. Second, actual decision behavior in these tasks often does not conform to the predictions of Game Theory, and therefore, more precise characterizations of behavior will be important in adapting these models to better fit how decisions are actually made.
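
The ultimatum game is the classic example of the mismatch described here. Below is a minimal sketch; the rejection threshold is an invented stand-in for the empirical finding that people routinely turn down offers they consider insultingly unfair, rather than anything from the Science papers:

```python
def responder_accepts(offer, pot, min_fair_share=0.25):
    """Empirically-motivated rule: reject offers below a fairness threshold.

    A strict income-maximiser would accept any offer > 0; the threshold
    is an illustrative stand-in for observed rejection behaviour.
    """
    return offer / pot >= min_fair_share

def payoffs(offer, pot=10.0):
    """Split the pot if the offer is accepted; both get nothing if not."""
    if responder_accepts(offer, pot):
        return pot - offer, offer   # (proposer, responder)
    return 0.0, 0.0                 # rejection destroys the whole pot

print(payoffs(1.0))  # 'rational' lowball offer -> (0.0, 0.0): rejected
print(payoffs(4.0))  # near-equal split -> (6.0, 4.0): accepted
```

Rejecting the lowball offer costs the responder a guaranteed gain – precisely the ‘no-risk economic gain’ people sacrifice when a split feels socially unjustified.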

Link to Science special issue on decision making.
Link to article ‘Social Decision-Making: Insights from Game Theory and Neuroscience’.
Link to previous Mind Hacks post on game theory and (ir)rationality.

Infowar: strike early, strike often

The Washington Post has a timely article about the psychology of believing news reports, even when they’ve been retracted – suggesting that if false information is presented early, it is more likely to be believed, while subsequent attempts to correct the information may, in fact, strengthen the false impression.

The article starts with results from a study [pdf] by psychologist Norbert Schwarz who looked at the effect of a government flier that attempted to correct myths about the flu vaccine by marking them ‘true’ or ‘false’.

Unfortunately, the flier actually boosted people’s belief in the false information, probably because we tend to think information is more likely to be true the more we hear it.

Negating a statement seems just to emphasise the initial point. The additional correction seems to get lost amid the noise.

One particularly pertinent study [pdf], not mentioned in the article, looked at the effect of retractions of false news reports made during the 2003 Iraq War on American, German and Australian participants.

For example, claims that Iraqi forces executed coalition prisoners of war after they surrendered were retracted the day after the claims were made.

The study found that the American participants’ belief in the truth of an initial news report was not affected by knowledge of its subsequent retraction.

In contrast, knowing about a retraction was likely to significantly reduce belief in the initial report for Germans and Australians.

The researchers note that people are more likely to discount information if they are suspicious of the motives behind its dissemination.

The Americans rated themselves as more likely to agree with the official line that the war was to ‘destroy weapons of mass destruction’, whereas the Australian and German participants rated this as far less convincing.

This suggests that there may have been an element of ‘motivated reasoning’ in evaluating news reports.

Research has shown that this only occurs when there’s sufficient information available to create a justification for the decision, even when the information is irrelevant to the main issue.

There’s a wonderful example of this explained here, in relation to men’s judgements about the safety of sex with HIV+ women of varying degrees of attractiveness.

So, if you want your propaganda to be effective, get it in early, repeat it, give people reasons to believe it (however irrelevant), and make yourself seem trustworthy.

As I’m sure these principles are already widely known among government and commercial PR departments, bear them in mind when evaluating public information.

Link to Washington Post article on the persistence of myths.
pdf of study ‘Memory for Fact, Fiction, and Misinformation’ in the Iraq war.
Link to info on motivated reasoning and example.

The obvious and not-so-obvious in psychology

Tom has written an excellent article for The Psychologist on the not-so-obvious findings in psychology, which has just been made freely available.

There are certain predictable responses you get if you introduce yourself as a psychologist.

The most common is “are you analyzing me?”, followed by “can you read my mind?”. The best answer to both, of course, is ‘sometimes’.

Occasionally, a bright spark will tell you “psychology, well, it’s just obvious isn’t it?”, which, to be frank, I wish it was. But sadly, it’s fiendishly complicated.

Tom’s article gathers a whole bunch of counter-intuitive research findings for exactly such situations:

I used to keep a stock of ‘unobvious’ findings ready to hand for occasions like this. Is it really obvious that people can be made to enjoy a task more by being more poorly paid to recruit for it (cognitive dissonance: Festinger & Carlsmith, 1959)? That a saline solution can be as effective as morphine in killing pain (the placebo effect: Hrobjartsson, 2001)? That students warned that excessive drinking is putting many of their peers at risk may actually drink more, whereas advertising the fact that most students don’t drink, or drink in moderation, is the thing that actually reduces binge drinking (Perkins et al., 2005)? That over a third of normal people report having had hallucinations, something we normally experience solely with mental illness or substance abuse (Ohayon, 2000)? Or that the majority of ordinary Americans could be persuaded to electrocute someone to death merely by being asked to by a scientist in a white coat (Milgram, 1974)?

There are many more great examples, including the cognitive bias that leads people with little knowledge to think they understand more than they do.

Priceless stuff.

Link to article in The Psychologist on the ‘obvious’.

The modern science of subliminal influence

The New York Times has a great article on how our actions and decisions can be subconsciously ‘primed’ by the world around us.

Priming is a well-established effect in psychology. It refers to the way that encountering one thing activates related mental concepts in the mind.

Because they’ve been activated, these concepts influence other mental processes that happen to be occurring at the same time, shaping decision making and desire, even if we’re not aware of it.

New studies have found that people tidy up more thoroughly when there’s a faint tang of cleaning liquid in the air; they become more competitive if there’s a briefcase in sight, or more cooperative if they glimpse words like “dependable” and “support” – all without being aware of the change, or what prompted it.

Psychologists say that “priming” people in this way is not some form of hypnotism, or even subliminal seduction; rather, it’s a demonstration of how everyday sights, smells and sounds can selectively activate goals or motives that people already have.

More fundamentally, the new studies reveal a subconscious brain that is far more active, purposeful and independent than previously known. Goals, whether to eat, mate or devour an iced latte, are like neural software programs that can only be run one at a time, and the unconscious is perfectly capable of running the program it chooses.
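
The textbook picture behind these effects is often described in terms of spreading activation: an encountered cue passes some of its activation along to associated concepts, leaving them easier to act on for a while. Here’s a toy sketch of my own (the network and numbers are invented, and this is not a model from the article):

```python
# Toy spreading-activation sketch of priming (illustrative only).
# Activating one concept passes a decaying share of its activation
# to its neighbours, making related concepts easier to retrieve.

LINKS = {
    "cleaning smell": ["tidiness"],
    "briefcase": ["business", "competition"],
    "dependable": ["cooperation"],
    "business": ["competition"],
}

def prime(cue, activation=1.0, decay=0.5, depth=2, state=None):
    state = {} if state is None else state
    state[cue] = state.get(cue, 0.0) + activation
    if depth > 0:
        for neighbour in LINKS.get(cue, []):
            prime(neighbour, activation * decay, decay, depth - 1, state)
    return state

print(prime("briefcase"))
# {'briefcase': 1.0, 'business': 0.5, 'competition': 0.75}
```

Glimpsing the briefcase leaves ‘competition’ partially active, so any decision made while that activation lingers gets nudged without the decider noticing.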

It’s great to see this article is largely based on published experiments.

Often, experiments tell their own stories and very little is needed to make them ‘accessible’ to the public. Just a bit of light and attention.

Link to NYT article ‘Who’s Minding the Mind?’.

Why don’t ethics professors behave better?

If you spent your whole life trying to work out how to be ethical, you would think you’d be more moral in everyday life. Philosopher Eric Schwitzgebel has found that this isn’t the case, and asks the question “Why don’t ethics professors behave better than they do?”.

Initially, this was based on a hunch, but Schwitzgebel, with colleague Joshua Rust, has begun to do research into the question. They’ve found some surprising results.

At a recent philosophy conference, he offered chocolate to anyone who filled in a questionnaire asking whether ethicists behaved better than other philosophers.

It wasn’t long before an ethics professor stole a chocolate without filling in a questionnaire. (This reminds me of a famous psychology study that found that trainee priests on their way to give a talk on ‘The Good Samaritan’ mostly ignored someone in need if they were in a hurry!).

When the results came in, ethicists rated other ethicists as behaving better, but other philosophers rated them as no more moral than everyone else.

In another study, Schwitzgebel investigated whether people interested in moral issues are more likely to steal books. By looking at library records, he’s found that books on ethics are more likely to be stolen than other philosophy books.

So why aren’t ethics professors more ethical than the rest of us? Schwitzgebel wonders whether it is because there is a difference between emotional engagement with moral issues and the more detached reasoning style necessary for careful analysis – a style which may not make someone feel compelled to act more ethically.

Ominously, he notes that “More and more, I’m finding myself inclined to think that philosophical reflection about ethical issues is, on average, morally useless”.

It is interesting that there are similar problems in other professions. For example, doctors don’t follow health advice adequately and are much more likely to suffer from mental illness.

As an aside, Schwitzgebel has made all his papers and publications available online and has a fantastic blog that is well worth keeping tabs on.

Link to Schwitzgebel’s articles on ‘The problem with ethics professors’.
Link to Schwitzgebel’s homepage with publications and blog links.