‘Stress’: from buildings to the battlefield

Sometimes we don’t realise how much the vocabulary of psychology has become part of everyday language.

I was surprised to learn that the use of the term ‘stress’ to mean psychological tension, rather than just physical pressure, has only been with us since the mid-1930s and was popularised by the major wars of the 20th century.

And it turns out, the person who coined the new usage did it by accident, owing to a mistaken translation.

Akin to ‘distress’, ‘stress’ meant ‘a strain upon endurance’, but it was also used in a more specialist way by engineers to denote the external pressures on a structure – the effects of ‘stress’ within the structure became known as ‘strain’.

Then in 1935 the Czech-Canadian physiologist Hans Selye began to promote ‘stress’ as a medical term, denoting the body’s response to external pressures (he later admitted that, being new to the English language, he had picked the wrong word; ‘strain’ was what he had meant).

Academic physiologists regarded the concept of stress as too vague to be scientifically useful, but Selye’s determined self-promotion, coupled with the upheaval and distress brought by the [Second World] war to many millions of ordinary people, popularised the term.

By the time of Vietnam, ‘stress’ had become a well-established part of military medicine, thought to be a valuable tool in reducing ‘wastage’. In the military context, it was an extension of the work done at the end of the First World War on the long-term effects of fear and other emotions on the human system…

‘Stress’, writes the historian Russell Viner, ‘was pictured as a weapon, to be used in the waging of psychological warfare against the enemy, and Stress research as a shield or vaccination against the contagious germ of fear.’

From p349 of A War of Nerves, a book on the history of military psychiatry, which we covered previously.

A War of Nerves

I’ve just started reading Ben Shephard’s stunning book A War of Nerves: Soldiers and Psychiatrists, which tracks the history of military psychiatry through the 20th century.

Even if you’re not interested in the military per se, the wars of the last 100 years have been incredibly important in shaping our whole understanding of mental breakdown, mind-body concepts and clinical treatment.

For example, the effects of trauma stemming from World War I were so shockingly obvious and happened in such large numbers that the medical establishment could no longer deny the role of the mind in both the theories and practice of treating ‘nervous disorders’.

In effect, it made psychology not only acceptable, but necessary, to a previously sceptical medical establishment that was largely focused on an ‘organs and nerves’ view of human life.

One of the big concerns during World War I was ‘shell shock’, a confusing and eventually abandoned label that was typically used to describe any number of physical problems (such as paralysis, blindness, uncontrollable shaking) that arose from combat stress.

The original name came from early theories that suggested these symptoms arose from the effect of ‘shock waves’ on the nervous system.

However, it became clear that only a small percentage of cases resulted from actual brain injury (interestingly, a recent article in the American Journal of Psychiatry notes parallels between ‘shell shock’ and concerns over the effects of Improvised Explosive Devices or IEDs in Iraq).

It turns out, many of the symptoms were triggered or exacerbated by unbearable stress and were shaped by beliefs and expectations.

This was clearly demonstrated during World War I, when a ‘gas shock’ syndrome emerged as gas attacks became more frequent.

Like ‘shell shock’, it arose from extreme stress and was shaped by expectation and fear (the descriptions of death by mustard gas are truly horrifying), even when no gas injury could be detected.

An eyewitness recalled: “When men trained to believe that a light sniff of gas meant death, and with nerves highly strung by being shelled for long periods and with the presence of not a few who really had been gassed, it is no wonder that a gas alarm went beyond all bounds. It was remarked as a joke that if someone yelled ‘gas’, everyone in France would put on a mask. Two or three alarms a night was common. Gas shock was as common as shell shock.”

The military managed (and still manage) these forms of combat stress reactions by rest (stress and fatigue play a great part) but also by managing expectations.

Soldiers are typically treated briefly and near the front line, with the expectation they’ll rejoin their unit. In effect, instilling the belief that the effects are unfortunate but transient. As a result, they usually are.

Shephard’s book is full of fascinating facts, quotes and insights on every page; he has drawn on some incredibly in-depth historical research to bring alive not only the scientific and medical issues, but also the culture and attitudes of the time.

He’s interwoven military records and scientific research with press commentary and personal letters to make the book really quite moving in places.

I’m sure I’ll be posting more gems as I read more.

Link to book details.
Link to abstract of ‘Shell shock and mild traumatic brain injury: a historical review.’

Cognitive dissonance reduction

Following on from my earlier post about the way psychologists look at the world, let me tell you a story which I think illustrates very well the tendency academic psychologists have for reductionism. It’s a story about a recent paper on the phenomenon of cognitive dissonance, and about a discussion of that paper by a group of psychologists that I was lucky enough to be part of.

Cognitive Dissonance is a term which describes an uncomfortable feeling we experience when our actions and beliefs are contradictory. For example, we might believe that we are environmentally conscious and responsible citizens, but might take the action of flying to Spain for the weekend. Our beliefs about ourselves seem to be in contradiction with our actions. Leon Festinger, who proposed dissonance theory, suggested that in situations like this we are motivated to reduce dissonance by adjusting our beliefs to be in line with our actions.

Continue reading “Cognitive dissonance reduction”

From the nose to the genitals and back again

Recently, the Journal of the Royal Society of Medicine has had some interesting letters on a theory from times past – the nasogenital reflex theory – which holds that the nervous system makes a direct link between the erectile tissue in the genitals and the nose.

The nose contains tissue that, like the genitals, can become engorged with blood, which is part of the reason we get a stuffy nose. To counter this, most nasal decongestants contain a drug which acts as a vasoconstrictor to reduce blood flow (sometimes a type of amphetamine).

The possible link between nose and genital tissue was first proposed by American surgeon John McKenzie in 1883:

Over one hundred years ago, neurological reflexes emanating from the nose — termed the nasal reflex neurosis — were considered to be the cause of many symptoms, including symptoms related to the genitalia. In 1883 McKenzie, an otolaryngologist from Johns Hopkins Hospital, proposed a nasogenital reflex responsible for symptoms such as dysmenorrhea, pelvic pain, etc. and described improvements following nasal treatments.

In other words, he argued that problems with the nose could also result in problems with the genitals and vice versa.

Later, Wilhelm Fliess, a German ear, nose and throat specialist and a close friend of Freud’s, elaborated the theory and suggested that nasal tissue could be the cause and cure of a number of illnesses in body and mind:

In 1893 Fliess published his monograph on ‘The Nasal Reflex Neurosis’, in which he claimed that back pain, chest tightness, digestive disturbances, insomnia and ‘anxious dreams’ could all be attributed to nasal pathology. He also claimed that temporary relief of these symptoms was possible with the topical application of cocaine, of which Freud had published the first account of local anaesthetic properties.

Gradually the list of conditions grew to include migraine, vertigo, asthma and then gynaecological conditions such as dysmenorrhoea and repeated miscarriages.

Freud was quite taken with this theory for a time, and referred a patient to Fliess for nasal surgery to cure her depression. Sadly, surgical complications nearly cost the patient her life and Freud became disenchanted with the theory.

While it is now clear that the nose isn’t a major cause of other disturbances in the body and mind, and the nervous system has no major pathway that connects the tissues of the nose and the genitals, there are some clues that they might both be affected by similar things.

Reports of ‘viagra nosebleeds’ and ‘honeymoon rhinitis’ (a stuffy nose and sneezing after sex) suggest that they may react similarly in some instances.

Link to JRSM letter (not open-access yet).
Link to second JRSM letter (not open-access yet).

How do psychologists think?

I believe that the important thing about psychology is the habits of thought it teaches you, not the collection of facts you might learn. I teach on the psychology degree at the University of Sheffield and, sure, facts are important here — facts about experiments, about the theories which prompted them and about the conclusions which people draw from them — but more important are the skills which you acquire during the process of learning the particular set of facts. Skills like finding information and articulating yourself clearly in writing. Those two things are common to all degrees. But lately I’ve been wondering which skills are most emphasised on a psychology degree. And I’ve been thinking that the answer to this is the same as to the question ‘how do psychologists think?’. How does the typical psychologist[*] approach a problem? I’ve been making a list and this is what I’ve got so far:

1. Critical — Psychologists are sceptical: they need to be convinced by evidence that something is true. Their default is disbelief. This relates to…

2. Scholarly — Psychologists want to see references. By including references in your work you do two very important things. Firstly you acknowledge your debt to the community of scholars who have thought about the same things you are writing about, and, secondly, you allow anyone reading your work to go and check the facts for themselves.

3. Reductionist — Psychologists prefer simple explanations to complex ones. Obviously what counts as simple isn’t always straightforward, and depends on what you already believe, but in general psychologists don’t like to believe in new mental processes or phenomena if they can produce explanations using existing processes or phenomena.

I am sure there are others. One of the problems with habits of thought is that you don’t necessarily notice when you have them. Can anyone offer any suggested additions to my inchoate list?

Continue reading “How do psychologists think?”

Gathering data for thought experiments

The Idea Lab section of The New York Times has an article on experimental philosophy – a new branch of philosophy where, for example, answers to philosophical thought experiments are tested on members of the public to find the most common answers and possible contradictions in everyday reasoning.

But now a restive contingent of our tribe is convinced that it can shed light on traditional philosophical problems by going out and gathering information about what people actually think and say about our thought experiments. The newborn movement (“x-phi” to its younger practitioners) has come trailing blogs of glory, not to mention Web sites, special journal issues and panels at the annual meeting of the American Philosophical Association. At the University of California at San Diego and the University of Arizona, students and faculty members have set up what they call Experimental Philosophy Laboratories, while Indiana University now specializes with its Experimental Epistemology Laboratory. Neurology has been enlisted, too.

More and more, you hear about philosophy grad students who are teaching themselves how to read f.M.R.I. brain scans in order to try to figure out what’s going on when people contemplate moral quandaries. (Which decisions seem to arise from cool calculation? Which decisions seem to involve amygdala-associated emotion?) The publisher Springer is starting a new journal called Neuroethics, which, pointedly, is about not just what ethics has to say about neurology but also what neurology has to say about ethics. (Have you noticed that neuro- has become the new nano-?) In online discussion groups, grad students confer about which philosophy programs are “experimentally friendly” the way, in the 1970s, they might have conferred about which programs were welcoming toward homosexuals, or Heideggerians. Oh, and earlier this fall, a music video of an “Experimental Philosophy Anthem” was posted on YouTube. It shows an armchair being torched.

Some of the highest-profile work uses neuroimaging to look at the brain areas involved in making moral and ethical decisions, but some of my favourites are the simplest.

As we’ve discussed previously, philosopher Eric Schwitzgebel’s work on whether being a professional ethicist makes you behave any more ethically is amusing, but it also raises questions about the point of moral philosophy if it doesn’t seem to have any personal impact.

He’s recently taken this a step further and has begun to investigate whether political scientists vote more often than other people.

In a way, everything has come full circle. Before the word was invented, ‘science’ was called ‘natural philosophy’, because it was the philosophy of how the natural world worked. It was distinguished from the rest of philosophy because it used experiments.

Link to NYT on ‘The New New Philosophy’.
Link to Schwitzgebel on whether political scientists vote more often.

In search of evidence-based bullshit

Monday morning is not the best time to be told to ‘bridge the quality chasm’ and ‘identify your value stream’. I had the misfortune of starting my week with a talk that introduced new health-service management techniques based on psychological-sounding ideas such as ‘lean thinking’ and ‘connected leadership’.

Now, I’ve got no problem with things sounding like bullshit, as long as they work. After all, medicine is one of the few places where you can get away with calling the practice of squirting cold water in the ear ‘vestibular caloric stimulation’.

No-one minds that much, because it’s been very well researched and is known to have a profound, albeit temporary, effect on a number of neurological conditions.

So if I wanted to find out whether any of these new management techniques made an organisation more efficient, the first thing I’d do is find out what the research says.

In health and medicine, the ‘gold standard’ for finding out whether an intervention has an effect is the randomised controlled trial or RCT.

It’s a simple but powerful idea. You get a group of people you want to study. You measure them at the beginning. You randomly assign them to two groups. One gets the intervention, the other doesn’t. You measure them again at the end. If your intervention has worked, one group should now differ from the other.

Of course, it gets a bit more complex in places. Making the comparison fair and deciding what should be measured can be tricky, but it’s still a useful tool.
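
For what it’s worth, here’s a minimal sketch in Python of the randomise-then-compare logic just described. The sample size, the 5-point ‘benefit’ and the simple t-test are all invented purely for illustration; a real trial would be designed and analysed far more carefully.

```python
# Toy RCT: simulate a group, randomise them into two arms, give one arm a
# hypothetical benefit, and compare the change scores at the end.
# All values here are made up purely to illustrate the logic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

n = 200                                  # people recruited into the study
baseline = rng.normal(50, 10, size=n)    # measure everyone at the start

# Random assignment: shuffle the indices, then split into two equal groups
order = rng.permutation(n)
intervention, control = order[:n // 2], order[n // 2:]

# Everyone drifts a little by follow-up; the intervention arm gets an
# extra (assumed) 5-point benefit on top of that noise.
follow_up = baseline + rng.normal(0, 5, size=n)
follow_up[intervention] += 5

# Compare the change scores between arms; a small p-value suggests the
# difference between groups is unlikely to be chance alone.
change = follow_up - baseline
t, p = stats.ttest_ind(change[intervention], change[control])
print(f"mean change (intervention): {change[intervention].mean():.1f}")
print(f"mean change (control):      {change[control].mean():.1f}")
print(f"t = {t:.2f}, p = {p:.4f}")
```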

After my traumatic Monday morning experience I went to see what randomized controlled trials had been done on management techniques.

To my surprise, I found none. Not a single RCT in any of the business psychology literature.

Now, this may be because I know little about organisational psychology, and literature searches are as much about knowing the key words as knowing what you want. So maybe RCTs are called something completely different, or I’m just looking in the wrong places.

So, if you know of any RCTs done on leadership and management techniques, please let me know, I’d be fascinated to find out.

I could be completely wrong, but if I’m not, I want to know: why are there no randomised controlled trials in organisational psychology?

And as a corollary, are we spending millions on organisational interventions, supposedly to help patients, that have been tested no more rigorously than the pseudoscience we reject in every other area of medicine?

UPDATE: Some interesting comments from organisational psychologist Stefan Shipman:

It may be that the complexity lies in that organizational research is always secondary to doing business. I can remember in some of my early research that I attempted to implement a new human resources program in one department. The program was successful in its early stages and was (despite my suggestions) implemented company wide.

I think your post absolutely speaks to the frustration of all organizational psychologists because the zeal of organizations to find “new” ways of doing business that are hopefully more effective. This zeal often reduces the “completeness” of research. As organizational psychologists we accept the conditions under which real world research can be done. We encourage the assignment of conditions but accept that some ideas or programs might “leak” into other parts of the organization.

Lies, lesions and medical mysteries

Hysteria, or conversion disorder as it is now known, is a condition in which neurological symptoms such as blindness or paralysis are present but no neurological problems or brain abnormalities can be found.

The issue of whether such patients are ‘faking’, whether the neurological abnormality just hasn’t been found yet, or whether the problem is best understood in psychological terms, has been vexing clinicians for the best part of 200 years.

This is a fascinating quote from the introduction to Contemporary Approaches to the Study of Hysteria (ISBN 019263254X) by Halligan, Bass and Marshall:

…how can we discover if someone is indeed faking it? (We use ordinary language here rather than the more obviously psychiatric terms such as factitious disorder and malingering: clarity and logic are best served by calling a spade a spade.) The simple but totally impractical solution would be 24-hour surveillance on audio- and video-tape unbeknownst to the patient. Anyone who behaved perfectly normally when alone but who invariably developed the ‘disability’ when in company might be plausibly thought to be feigning.

Short of this Big Brother solution, investigators have tried to devise catch-trials and catch-tests to detect the cheater. For example, it is sometimes assumed that a patient who ‘guesses’ a randomized stimulus sequence (touch, touch, no touch…) significantly below chance must be faking it.

But the existence of phenomena such as blindsight, unfeeling touch, unconscious perception in visual-spatial neglect and priming in amnesia show how misleading it can be to assume that odd relationships between behaviour and verbal report necessarily constitute evidence of cheating.

We do not impugn the honesty of patients who perform visual discriminations at above chance level while claiming to have seen nothing. Why should we perforce distrust those who score below chance? In short, the detection of lying in the neurology clinic is at least as difficult as it is in a court of law.
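
As an aside, the ‘significantly below chance’ idea in the quote is easy to make concrete. Here is a small, hedged example assuming a two-alternative touch/no-touch task where honest guessing would give roughly 50% correct; the trial counts are invented. The point is that scoring well below chance is statistically informative, even though, as the authors argue, it is not proof of deliberate faking.

```python
# How unlikely is a below-chance score on a 50/50 guessing task?
# Trial counts are invented for illustration (requires SciPy >= 1.7).
from scipy.stats import binomtest

n_trials = 60     # touch / no-touch trials presented
n_correct = 15    # the patient reports the correct alternative only 15 times

# One-sided test: probability of doing this badly (or worse) by honest guessing
result = binomtest(n_correct, n_trials, p=0.5, alternative="less")
print(f"{n_correct}/{n_trials} correct; p = {result.pvalue:.6f}")
# A tiny p-value shows the responses track the stimuli systematically,
# but, as the quote points out, that alone does not prove the patient is lying.
```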

Link to book details.
Link to previous Mind Hacks article on hysteria.
Link to great NYT article on hysteria.

Freud widely taught, except in psychology departments

The New York Times discusses an upcoming study that has found that Freud and psychoanalysis form a key part of the teaching in the humanities, while being virtually extinct in psychology departments in the same universities.

As some of the psychologists in the article suggest, many of the problems with psychoanalysis are because those who believe in the theories have been reluctant to submit the ideas to rigorous empirical testing.

Where this has been done, the results have been fascinating. As we reported in June, empirical work has supported some of Freud’s ideas on transference (how feelings from one relationship can affect another if the two people share similarities).

Moreover, an upcoming London conference aims to get the hard-nosed cognitive and neuroscientists talking to the psychoanalysts to thrash out ways of separating the wheat from the chaff and to inspire research with new ideas.

These are largely the exceptions, however, and more often than not, psychoanalysis has continued developing its ideas without much recourse to outside testing.

Psychology now runs on the mantra of ‘evidence-based practice’, which has meant that the scientifically flimsy Freudian ideas have been largely rejected.

However, subjects like film, literature and history have no such restrictions and have found psychoanalysis a useful discussion point.

Interestingly, there are some moves to introduce cultural analysis based on cognitive science into these subjects.

Buckland’s book The Film Spectator: From Sign to Mind (ISBN 9053561315) investigates whether it’s possible to understand how we interpret film using cognitive linguistics and the science of perception.

Link to NYT article ‘Freud Is Widely Taught at Universities, Except in the Psychology Department’.

Sad, mad or dangerous to diagnose?

The New York Review of Books has a wonderful article that ostensibly reviews three books about mental illness but is also a powerful summary of some of the most important criticisms of modern psychiatry.

One of the key points of debate is the extent to which distressing yet common mental states such as shyness or feeling low are being classified as mental illnesses such as social phobia or depression.

This is currently a hot topic. The British Medical Journal hosted a recent debate on whether depression is overdiagnosed with Ian Hickie arguing that it needs to be recognised more widely to stop people missing out on lifesaving treatment and Gordon Parker arguing that normal sadness is being excessively labelled as a medical disorder.

Drug companies have an obvious interest in getting more people diagnosed but, less obviously though equally pervasively, they also have an interest in pushing for new diagnoses.

On the level of the individual patient, medicalising a problem often shifts people’s thinking so they feel less empowered to make a difference to their lives – it becomes an illness to be dealt with by medical experts.

In the US, however, where insurance payments are often only guaranteed when a medical diagnosis is made, people might only be able to get relief from their mental distress if their problem is medicalised.

Unlike in socialised health systems, insurance-based healthcare can pressure professionals not to help people with non-specific or difficult to diagnose problems, meaning the existing categories are often stretched to allow such people to be treated.

Treatment has traditionally been medication, which means drug companies have a strong financial incentive to push for the changes to the classification of mental illness and promote theories which best support their treatments.

In contrast, cognitive behavioural therapy, a type of psychological therapy, is known to be as effective as drugs (the most effective treatment is both medication and therapy), and is better at preventing relapse.

However, because it isn’t a ‘product’, there is no corporate marketing machine behind it, meaning it is typically under-recognised and under-used.

The ‘promotion’ of psychological therapies is left to mental health charities (such as the recent We Need to Talk campaign), whose efforts pale in comparison to the billions spent by drug companies.

So, the extent to which mental and emotional distress should be treated as a medical disorder affects everything from the personal to the political.

The New York Review of Books article does a fantastic job of covering how these processes work, both at the medical and corporate level, and how they impact on our individual health care.

Link to article ‘Talking Back to Prozac’ (via MeFi).

The ethical psychiatrist

ABC Radio National’s The Philosopher’s Zone had a fascinating discussion recently on the ethics of psychiatry, tackling some of the challenges of this unique medical speciality.

Perhaps the most obvious aspect of psychiatry which distinguishes it from other medical specialities is that it more commonly involves treating people against their will.

The laws on involuntary treatment vary, but most include the principle that someone who is judged to have lost insight into their own condition because of mental illness, who poses a risk to themselves or others, and who refuses voluntary treatment can be treated against their will.

Of course, this relies on a huge number of other assumptions, such as the ability to distinguish between normal and abnormal mental states, and an idea of what constitutes insight.

It also relies on a presumption that psychiatrists can distinguish between potentially foolish but reasoned refusal of treatment, and a refusal driven by pathological thinking.

The programme tackles many of these issues and discusses how these decisions are affected by cultural norms and political influence, as well as how they fit in with the wider ethical approach of medicine.

Link to the Philosopher’s Zone on the ethics of psychiatry.

Reflections on the brain of an idiot

I’ve just discovered that the Journal of Anatomy and Physiology has all its past issues freely available online, all the way back to 1867. I came across a curious article entitled ‘Description of the Brain of an Idiot’ in the 1871 issue and it made me think about how names for brain disorders have been rejected and changed throughout history.

Back in 1871, the term ‘idiot’ was a proper medical term. It referred to someone we would now describe as having learning disabilities or intellectual impairment.

As the word became used as an everyday form of abuse, it left the realms of medicine because it was deemed inappropriate, and has been replaced by seemingly more appropriate terms. There is a long history of this process and it continues to this day.

For example, wildly abnormal or problematic sexual behaviours used to be called sexual deviancy. ‘Sexual deviancy’ described something beyond the presumed normal range, but it was thought to be inappropriate because it branded people as outsiders.

Now we use the term ‘paraphilia’ which means, well, exactly the same – someone who has desires outside the norm – but because it’s Greek, everyone is much happier.

It’s also interesting when the terminology differs between countries. In America, ‘mentally retarded’ is a common description in medicine, but in Europe it’s considered an outdated insult – similar to the previously official words imbecile and idiot.

However, it’s always struck me as a little curious why our words for intellectual disabilities have changed so much throughout history, but the word for epilepsy (despite there being many commonly used nicknames) has been maintained since the time of Ancient Greece.

Presumably, there’s something about the Greek language which just makes us feel better about our difficulties.

Link to 1871 article ‘Description of the Brain of an Idiot’.

Plain talking

An excerpt from Prof Nick Craddock’s no-nonsense review of the book ‘The Overlap of Affective and Schizophrenic Spectra’ in this month’s British Journal of Psychiatry:

If this book is not of interest, the reader has no business being a psychiatrist.

I think he likes it.

With Michael Owen and Michael O’Donovan, Craddock has been instrumental in completing genetic research into bipolar disorder and schizophrenia.

The research has shown that these disorders are unlikely to be distinct conditions, but just different points on a spectrum of problems with mood and thinking.

Link to BJP review of ‘The Overlap of Affective and Schizophrenic Spectra’.

Biting the mind

Ten-minute philosophy podcast Philosophy Bites has an interview with Prof Tim Crane where he gives an excellent summary of one of the most important topics in contemporary cognitive science – the mind-body problem.

The problem asks how we can reconcile the biological properties of the brain with the subjective mental properties of the mind, because intuitively they seem like quite different things.

One of the most important points often gets lost when people think about this: it is perfectly possible to believe that the mind cannot be fully reduced to the function of the brain while still being a materialist – i.e. while thinking that the brain is the only thing that supports the mind and without needing to believe in souls, ghostly spirits, or other non-material things.

How can this be? The key to understanding this is the word ‘reduced’ – i.e. reduction – where one phenomenon is equally well explained by its smaller components.

Importantly, this is a process of fitting theories together. Our idea of heat, for example, is equally well explained by our ideas about the motion of atoms.

For two things that are understood physically to begin with (e.g. heat and atoms) it works well, but for things that are described using quite different properties, such as thought and the brain, it doesn’t.

Here’s an analogy: when someone plays a recording of a song, everything you experience is carried in the sound waves.

However, you won’t understand why the singer is so in love by looking at the physics of sound, because what is meaningful about the song cannot be fully reduced to physics.

This isn’t a problem of missing detail in the sound. We can measure the sound waves in minute detail. But still, meaning is lost.

The same holds for the mind: even if we could track every single atom in the brain when we have a thought, we might lose meaning when we map the two together.

And if we lose meaning, it means we cannot fully reduce the mind to the brain. No ghosts, spirits or souls, just a problem of connecting different levels of explanation.

One school of thought, eliminative materialism, argues that this problem highlights the fact that mind-level explanations are inherently unscientific and that we should solve the issue by only talking about neuroscience – our subjective experience of the mind is simply wrong and misleading.

Probably the most popular approach at the moment is property dualism – which argues that both mind-level and brain-level explanations may explain how we think and behave but at different levels that may not always be reducible.

This is where there are two types of theories that both attempt to explain something, but in different ways. You can see where, in places, they connect, but they’re not always compatible.

This is different from ‘substance dualism’, famously invented by Descartes, which says there are two types of substance – the brain and the soul.

In the recent debates about religion, it’s interesting to see that some people argue that being unable to reduce the mind to the brain is evidence for God, spirit or soul; while others opposed to religion see any mention of the problem as an indication of support for a non-materialist view.

The key point is that this issue is about how we map theories, not about what sorts of things exist in the universe.

The Philosophy Bites podcast discusses exactly these sorts of issues, particularly with regards to consciousness, perhaps the most well-known example of a problem with mapping the mind to the brain.

Link to Philosophy Bites on the mind-body problem.

A rough guide to philosophy and neuroscience

Philosophy is now an essential part of cognitive science but this wasn’t always the case. A fantastic new article, available online as a pdf, describes how during the last 25 years philosophy has undergone a revolution in which it has contributed to, and been inspired by, neuroscience.

The article is by two philosophers, Profs Andrew Brook and Pete Mandik, and it’s a wonderful summary of how the revolution occurred and just how we’ve benefited from philosophers turning their attention to cognitive science.

But it also notes how evidence from psychology and neuroscience is being used by philosophers to better understand concepts – such as perception, belief and consciousness – that have been the concern of thinkers from as far back as the Ancient Greeks.

It’s an academic article, so it’s fairly in-depth in places, but if you want a concise introduction to some of the key issues philosophy of mind is dealing with, and how this directly applies to current problems in the cognitive sciences, look no further.

The scope is wonderfully broad and there’s a huge amount of world-shaking information packed into it.

It’s particularly good if you’re a psychologist or neuroscientist and want a guide to how philosophy is helping us make sense of the mind and brain.

The article will shortly appear in the philosophy journal Analyse &amp; Kritik but the proofs are available online right now.

pdf of article ‘The Philosophy and Neuroscience Movement’ (via BH).

Daniel Kahneman ‘masterclass’ online

Nobel prize-winning psychologist Daniel Kahneman recently gave a two-day masterclass on his work. It’s now been made available on Edge as transcripts and video clips.

Kahneman has done a huge amount of work on cognitive biases – the quirks of mind that make us deviate from rationality, sometimes in quite surprising and interesting ways.

For example, with his colleague Amos Tversky, he discovered the availability heuristic, which is the process by which we tend to judge an event as more likely to happen in the future the more easily it can be brought to mind.

This is why we vastly overestimate the chances of vividly spectacular but unlikely things like terrorism, but underestimate the mundane but consistently lethal things like driving.

Kahneman has been involved in identifying many of these sorts of biases and, cleverly, in applying them to economic decision making to inform models of financial behaviour.

As a result, experimental psychology is now a key part of economics, used to understand how people actually behave, as opposed to earlier models which assumed that people always act more or less rationally to maximise their profits.

The Edge ‘masterclass’ is quite a comprehensive guide to his work and covers work which has been influential in many areas of psychology.

Link to Edge Daniel Kahneman ‘masterclass’.