Disease rankings

There is a hierarchy of prestige in medicine. Numerous studies have found that surgery and internal medicine are the most highly regarded specialities among doctors, while psychiatry, geriatric medicine and child medicine come near the bottom. A study published in Social Science & Medicine took this idea one step further and looked at which diseases have the most prestige in the medical community.

Sociologist Erving Goffman wrote a highly influential book about the social dynamics of stigma, in which he suggested that stigma derives its social power from associating people with stereotypes.

It’s interesting that doctors who specialise in working with people who have the least status in society (children, the ‘mad’, the ‘old’) also have the least status in medicine.

The Norwegian researchers asked senior doctors, general practitioners and medical students to rate diseases and came up with the following list, which ranks diseases from the most prestigious at the top, to the least prestigious at the bottom.

Needless to say, mental illnesses fill most of the bottom slots.

Myocardial infarction [heart attack]
Leukaemia
Spleen rupture
Brain tumour
Testicle cancer
Pulmonary embolism [normally blood clot on the lung]
Angina pectoris
Extrauterine pregnancy
Thyroid cancer
Meniscus rupture [‘torn cartilage’]
Colon cancer
Ovarian cancer
Kidney stone
Appendicitis
Ulcerative colitis [inflammation of the bowel]
Kidney failure
Cataract
Duodenal ulcer [peptic ulcer]
Asthma
Pancreas cancer
Ankle fracture
Lung cancer
Sciatica [‘trapped nerve’]
Bechterew’s disease [arthritis of the spine]
Femoral neck fracture
Multiple sclerosis
Arthritis
Inguinal hernia [abdominal wall hernia]
Apoplexy [stroke]
Psoriasis
Cerebral palsy
AIDS
Anorexia
Schizophrenia
Depressive neurosis
Hepatocirrhosis [cirrhosis of the liver]
Anxiety neurosis
Fibromyalgia

Link to PubMed entry for study.

A brief and incomplete history of telepathy science

The Fortean Times has a wonderful article that discusses the long and winding quest to find scientific evidence for telepathy, extra-sensory perception and other mysterious psychic powers.

The opening paragraph both made me laugh out loud and set the scene for the rest of the article:

There are two truths universally acknowledged about extra-sensory perception (ESP). The first is that the anecdotal evidence is often fun and fascinating to read, whereas to peruse the experimental evidence is as boring as batshit, as our antipodean cousins say, and the investigative methods generally employed would for most of us banish insomnia for all time. We can’t avoid discussing these methods and their results in these entries, but we do promise to be brief and to strive personfully not to ruin your reading experience.

Link to Fortean Times on ‘Telepathy on Trial’.

The difficulty of profiling killers

The Guardian has a compelling yet disturbing article on criminal profilers and how the practice is attempting to recover from the early days of profiler ‘experts’ who based their predictions on little more than guesswork, sometimes with disastrous results.

It’s written by journalist Jon Ronson who takes an incisive look into the history of criminal profiling in the UK and the impact of the Rachel Nickell case where a profiler wrongly implicated a man who spent 14 months in custody while the actual murderer went on to kill a mother and her daughter.

The practice has become considerably more scientific and considerably less dramatic as a result. The piece is essential reading if you’re interested in the psychology of profiling and a revealing look into the mistakes of the Nickell case.

Link to Guardian article on criminal profiling (via @researchdigest)

Built for sin

There’s a fascinating short article in The New York Times about physical attributes and the chances of becoming a criminal or ending up in the clink.

Linking physical traits to criminality may sound like a throwback to the biological determinism advocated by 19th-century social Darwinists who believed that there was a genetic predisposition for wrongdoing. Practitioners are quick to distance themselves from such ideas.

Mr. Price, for example, argues that crime can be viewed, at least partly, as an “alternative labor market.” If individuals with certain physical attributes are disadvantaged in the labor force, they may find crime more attractive, he said…

A link between a physical attribute and salary, or crime, does not necessarily mean cause and effect. Mr. Mocan pointed out that we do not know why someone who is overweight, unattractive or short is at a disadvantage in the labor market or more likely to commit a crime. It could be employer discrimination, customer preference or that the physical attribute may make the worker less productive. If a job involves carrying heavy loads, for instance, brawn would be an advantage.

That is what both Howard Bodenhorn, an economist at Clemson University, and Mr. Price concluded from 19th-century prison records. In that era increased body weight was associated with a lower risk of crime. In the 21st century, though, in which service jobs are much more common, Mr. Price found that being overweight was linked to a higher risk of crime.

The whole article is worth reading in full as it has lots of great snippets about how attractiveness is related to criminal activity and why Americans are getting shorter.

Link to NYT on ‘For Crime, Is Anatomy Destiny?’ (via @crime_economist)

Ego tripping the Freud fantastic

I just got sent this fantastic article from The Guardian in 2006 where neuropsychologist Paul Broks discusses Freud’s legacy in light of the burgeoning brain sciences.

As always, Broks writes brilliantly, and the piece starts with a wryly observed domestic scene.

One Sunday morning, when he was four years old, my son climbed into bed with his mother. I was downstairs making coffee. “Mum,” I heard him saying as I returned, “I’d like to kill Daddy.” It was a dispassionate declaration, said serenely, not in the heat of a tantrum or the cool spite of a sulk. He was quite composed. Shouldn’t you be repressing this, I thought.

The article was written on what would have been Freud’s 150th birthday and the rest is equally engaging.

Link to ‘The Ego Trip’ in The Guardian (thanks Ceny!)

Cell intelligence and surviving the dead of winter

New Scientist has an interesting article on whether single cells can be considered intelligent. The piece is by biologist Brian Ford who implicitly raises the question of how we define intelligence and whether it is just the ability to autonomously solve problems. If so, then individual cells such as neurons might be considered ‘intelligent’ even when viewed in isolation.

However, he finishes on a bit of an odd flourish:

For me, the brain is not a supercomputer in which the neurons are transistors; rather it is as if each individual neuron is itself a computer, and the brain a vast community of microscopic computers. But even this model is probably too simplistic since the neuron processes data flexibly and on disparate levels, and is therefore far superior to any digital system. If I am right, the human brain may be a trillion times more capable than we imagine, and “artificial intelligence” a grandiose misnomer.

It’s odd because it reads like blue-sky speculation when, in fact, the idea that neurons could work like “a vast community of microscopic computers” is an accepted and developed concept in the field supposedly doomed by this idea – namely, artificial intelligence.

Traditionally, AI had two main approaches, both of which emerged from the legendary 1956 Dartmouth Conference.

One was the symbol manipulation approach, championed by Marvin Minsky, and the other was the artificial neural network approach, championed by Frank Rosenblatt.

Symbol manipulation AI builds software around problems where data structures are used to explicitly represent aspects of the world. For example, a chess-playing computer holds a representation of the board and each of the pieces in its memory, and it works by running simulations on that representation to test out moves and solve problems.

In contrast, artificial neural networks are ideal for pattern recognition and often need training. For example, to get one to recognise faces you put a picture into the network and it ‘guesses’ whether it is a face or not. You tell it whether it is right, and if it isn’t, it adjusts the connections to try and be more accurate next time. After being trained enough the network learns to make similar distinctions on pictures it has never seen before.
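As a very rough sketch of this guess-check-adjust training loop, here is a single artificial neuron (a perceptron) learning an invented toy problem (the logical AND function) rather than faces; the learning rate and number of passes are arbitrary choices for illustration.

```python
# A minimal perceptron: guess a label, check it against the answer, and
# nudge the connection weights only when the guess was wrong.
# Toy data and parameters are invented for illustration.

import random

def train_perceptron(examples, epochs=20, learning_rate=0.1):
    """examples: list of (features, label) pairs with label 0 or 1."""
    n_features = len(examples[0][0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            guess = 1 if activation > 0 else 0
            error = label - guess  # zero when the guess was right
            # Adjust the connections only when the guess was wrong
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, features)]
            bias += learning_rate * error
    return weights, bias

# Logical AND is linearly separable, so a single perceptron can learn it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(data)
for features, label in sorted(data):
    output = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
    print(features, "->", output, "(target:", label, ")")
```

After a handful of passes the weights settle on a dividing line that separates the two classes, which is exactly what the training loop described above is doing at a much larger scale.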

As is common in science, these started out as tools but became ideologies and a fierce battle broke out over which could or couldn’t ever form the basis of an artificial mind.

At the time of the Dartmouth Conference, the neural network approach existed largely as a simple set-up called the perceptron which was good at recognising patterns.

Perceptrons were hugely influential until Minsky and Seymour Papert published a book showing that they couldn’t learn certain responses (most notably a logical operation called the XOR function).

This killed the artificial neural network approach dead – for nearly two decades – and contributed to what is ominously known as the AI winter.

It wasn’t until 1986 that two young researchers, David Rumelhart and James McClelland, solved the XOR problem and revived neural networks. Their approach was called ‘parallel distributed processing’ and, essentially, it treats simulated neurons as if they are ‘a vast community of microscopic computers’, just as Brian Ford proposes in his New Scientist article.
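As a rough illustration of why this mattered, the sketch below trains a tiny network with one hidden layer, using backpropagation, on XOR – the function a single-layer perceptron cannot learn. The hidden-layer size, learning rate and epoch count are arbitrary choices, and convergence depends on the random initialisation.

```python
# A minimal multilayer network learning XOR via backpropagation.
# Illustrative only: layer sizes, learning rate and epochs are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units, one output unit
W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros(1)

lr = 0.5
for _ in range(20000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (gradient of squared error)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]]
```

The only structural difference from the perceptron is the hidden layer, but that is enough to escape the limitation Minsky and Papert identified.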

The artificial neural network approach has evolved a great deal, and the symbol manipulation approach, although still useful, is now ironically called GOFAI or ‘good old-fashioned artificial intelligence’ as it seems, well, a bit old fashioned.

How we define intelligence is another matter, and the claim that individual cells have it is actually quite hard to dismiss when they seem to solve a whole range of problems they may never have encountered before.

Artificial intelligence seems cursed though, as true intelligence is usually defined as being just beyond whatever AI can currently do.

Link to NewSci on intelligence and the single cell (thanks Mauricio!)

Are crime dramas warping the legal system?

The Economist has an interesting article on the ‘CSI effect’ which suggests that television crime dramas are altering jurors’ expectations of the relevance and power of scientific evidence and hence affecting how court judgements are made.

The article is largely based on a forthcoming paper to be published in Forensic Science International that argues the ‘CSI effect’ is influencing how forensic evidence is interpreted and understood by professionals and the public alike.

Nevertheless, both The Economist piece and the academic article in Forensic Science International are notable for the fact they are largely based on anecdotes.

Actually, empirical (shall we say, forensic?) evidence for the effect is harder to come by. One of the few people who have systematically investigated the effect is trial judge and law professor Donald Shelton who came to significantly less alarming conclusions.

In a study of the effect published in the National Institute of Justice Journal, Shelton reported that although the effect did appear in places, it mainly affected expectations, and its influence on actual decisions was inconsistent and largely insubstantial:

There was scant evidence in our survey results that CSI viewers were either more or less likely to acquit defendants without scientific evidence. Only 4 of 13 scenarios showed somewhat significant differences between viewers and non-viewers on this issue, and they were inconsistent. Here are some of our findings:

* In the “every crime” scenario, CSI viewers were more likely to convict without scientific evidence if eyewitness testimony was available.

* In rape cases, CSI viewers were less likely to convict if DNA evidence was not presented.

* In both the breaking-and-entering and theft scenarios, CSI viewers were more likely to convict if there was victim or other testimony, but no fingerprint evidence.

Law professor Kimberlianne Podlas was even more damning in a paper [pdf] published in the Loyola of Los Angeles Entertainment Law Review, writing:

Notwithstanding the popularity of such claims, they are not grounded in case-studies or statistical data of increases in acquittals. Rather, they are based on anecdotes about cases wherein law enforcement lost their case while believing it should have won. However, anecdotes are not an adequate substitute for empirical evidence or a logical theory of media influence.

The ‘CSI effect’, it seems, probably wouldn’t stand up in court.

UPDATE: Many thanks to Mind Hacks reader Brett for emailing to say that the Stanford Law Review published an article last April on the supposed ‘CSI effect’ and why it lacks evidence, which is also notable for tackling why the idea has gained a cultural foothold despite such flimsy support.

Link to The Economist on the ‘CSI effect’ (via @crime_economist)
Link to Forensic Science International paper.
Link to study on ‘CSI effect’
pdf of Podlas’ article on CSI effect ‘fiction’.

Questioning ‘one in four’

The Guardian has an excellent article questioning the widely cited statistic that ‘1 in 4’ people will have a mental illness at some point in their lives. The issue of how many people have or will have a mental illness raises two complex issues: how we define an illness and how we count them.

Defining an illness is a particularly tricky conceptual point. It is usually discussed as if it were an issue particular to psychiatry and psychology that doesn’t affect ‘physical medicine’, but it is actually a concern that is equally pressing across all types of ill health.

The most clear-cut example of an illness is usually given as an infectious disease that can be diagnosed with a laboratory test: you either have the bacteria or you don’t.

However, you will acquire lots of new bacteria that will continue to live in your body, some of which ’cause problems’ and others that don’t. So the decision rests not on the presence or absence of new bacteria, but on how we define what it means for one type to be ‘causing a problem’. This is the central point of all definitions of illness.

For example, when are changes in heart function enough for them to be considered ‘heart disease’? Perhaps we judge them on the basis of their knock-on effects, but this raises the issue of which consequences we think are serious, and when we should consider them serious enough to count. Death within weeks, clearly; death within two years, maybe; but is this still the case if it occurs in a 90-year-old?

The idea of a personal change ‘causing a problem’ is also influenced by culture as it relies on what we value as part of a fulfilling life.

In times gone past, physical differences that caused sexual problems might only have been considered an illness if they prevented someone from having children. A man who had children, wanted no more, but was unable to have recreational sex with his wife due to physical changes might be considered unlucky but not ill.

The idea of normal sexual function was different, and so the concepts of abnormality and illness were also different.

The same applies to mental illness. What we consider an illness depends on what we take for being normal and what someone has the ‘right’ to expect from life.

The fact that the concept of depression as an illness has changed from only something that caused extreme disability (‘melancholy madness’) to something that prevents you from being content is likely due to the fact that, as a society, we have agreed that we have a right to expect that we enjoy our lives. There was no such expectation in the past.

The problem of correctly diagnosing an illness is a related problem. After we have decided on the definition of an illness, there is the issue of how reliably we can detect it – how we fit observations of the patient to the definition.

This is a significant issue for psychiatry, which largely relies on changes in behaviour and subjective mental states, but it also affects other medical specialities.

Contrary to popular belief, most ‘physical’ illnesses are not diagnosed with lab tests. As in psychiatry, while lab tests can help the process (by excluding other causes or confirming particular findings), the majority of diagnoses of all types are made by what is known as a ‘clinical diagnosis’.

This is no more than a subjective judgement by a doctor that the signs and symptoms of a patient amount to a particular illness.

For example, the diagnosis of rheumatoid arthritis depends on the doctor making a judgement that the mixture of symptoms subjectively reported by the patient and objective observations of the body amounts to the condition.

The key test of whether an illness can be counted is how reliably this process can be completed – or, in other words, whether doctors consistently agree on whether patients have or don’t have the condition.

This is more of an issue for psychiatry because diagnosis relies more heavily on the patient’s subjective experience, but it is wrong to think that bodily observations are necessarily more reliable.

For example, the Babinski response is where the big toe extends upward (and the other toes fan out) when the plantar reflex is tested by stroking the sole of the foot. It is commonly used by neurologists to test for damage to the upper motor neurons but it is remarkably unreliable. In fact, neurologists agree on whether it is present at a far lower rate than would be acceptable for the diagnosis of a mental illness or psychiatric symptom.
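Agreement of this kind is usually quantified with a chance-corrected statistic such as Cohen’s kappa. The sketch below computes kappa for two hypothetical raters on invented ratings; it is not data from any actual study of the Babinski sign, just an illustration of how ‘how often do doctors agree’ gets turned into a number.

```python
# Cohen's kappa: raw percentage agreement corrected for the agreement
# expected by chance. The ratings below are invented for illustration.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal frequencies
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Two hypothetical neurologists rating the same 10 patients
# ('present' / 'absent' for an upgoing plantar response)
neurologist_1 = ["present", "absent", "present", "absent", "present",
                 "absent", "absent", "present", "absent", "absent"]
neurologist_2 = ["present", "absent", "absent", "absent", "present",
                 "present", "absent", "present", "absent", "absent"]

# Raw agreement here is 80%, but kappa comes out around 0.58, which
# would conventionally be read as only 'moderate' agreement.
print(round(cohens_kappa(neurologist_1, neurologist_2), 2))
```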

The problem of reliably diagnosing a condition is relatively easy to overcome, however, as agreement is easy to test and refine. The problem of what we consider an illness is a deeper conceptual issue and this is the essence of the debates over how many people have a mental illness.

The ‘1 in 4’ figure seems to have been mostly plucked out of the air. If this seems too high an estimate, you may be surprised to learn that studies on how many people qualify for a psychiatric diagnosis suggest it is too low.

There is actually no hard evidence for one in four – or any other number – because there’s never been any research looking at the overall lifetime rates of mental illness in Britain. The closest thing we’ve had is the Psychiatric Morbidity Survey, run by the Office of National Statistics. The latest survey, done in 2007, found a rate of about one in four, 23%, but this asked people whether they’d suffered symptoms in the past week (for most disorders).

We don’t know what the corresponding rate for lifetime illness is, although it must be higher. Several such studies have been done in other English speaking countries, however. The most recent major survey of the US population found an estimated lifetime rate of no less than 50.8%. Another study in Dunedin, New Zealand, found that more than 50% of the people there had suffered from mental illness at least once by the age of 32.

Psychiatry has a tendency towards ‘diagnosis creep’, where unpleasant life problems are increasingly defined as medical disorders, partly due to pressure from drug companies who develop compounds that could genuinely help non-medical problems. The biggest market is the USA, where most drugs are dispensed via insurance claims and insurance companies demand an official diagnosis to fund the drugs; hence there is pressure from both companies and distressed people to create new diagnoses.

Whenever someone criticises a diagnosis as being unhelpful, a common response is to suggest that the critic has no compassion for the people with the problem or that they want to deny them help.

The most important issue is not whether people are suffering or whether there is help available to them, but whether medicine is the best way of understanding and assisting people.

Medicine has the potential to do great harm as well as great good and it is not an approach which should be used without seriously considering the risks and benefits, both in terms of the individual and in terms of how it shifts our society’s view of ourselves and the share of responsibility for dealing with personal problems.

So when you hear figures that suggest that ‘1 in 4’ or ‘50%’ of people will have a mental illness in their lifetime, question what this means. The figure is often used to try and destigmatise mental illness but the most powerful bit of The Guardian article shows that this is not necessary:

People who experience mental illness often face stigma and discrimination, and it’s right to oppose this. But stigma is wrong whether the rate of mental illness is one in four, or one in 400. We shouldn’t need statistics to remind us that mental illness happens to real people. By saying that mental health problems are nothing to be ashamed of because they’re common, one in four only serves to reinforce the assumption that there’s something basically shameful about being “abnormal”.

If you want more background on the ‘1 in 4’ figure, or discussion of how we understand what mental illness is and who has it, an excellent three-part series on Neuroskeptic tackled exactly this point.

Link to ‘How true is the one-in-four mental health statistic?’
Parts one, two and three of the excellent Neuroskeptic series

The madwoman in the attic

BBC Radio 4 has an excellent programme on the depiction of the ‘madwoman in the attic’ in Victorian literature and how it reflects ideas about mental disturbance and femininity of the time.

The programme discusses Mrs Rochester from Jane Eyre, Anne Catherick from The Woman in White, and Madame Bovary from the book of the same name.

Unfortunately, the programme finishes on the rather clichéd interpretation that the novels demonstrate how women who didn’t conform ended up being branded mad and locked up – essentially, madness as a form of female repression.

This is the classic feminist criticism of historical ideas about madness and, despite there being some truth to it, it is only supportable by ignoring the other side of the coin – the traditional interplay between insanity and masculinity.

Feminist writer Elaine Showalter makes exactly this point with regard to ‘hysteria’ in her book Hystories, but you can read an excellent summary of her approach in a chapter for the book Hysteria Beyond Freud, where she tracks how the feminist critique originated and how it has been sustained by a limited focus on female issues.

Although male hysteria has been documented since the seventeenth century, feminist critics have ignored its clinical manifestations, writing as though “hysterical questions” about sexual identity are only women’s questions. In order to get a fuller perspective on the issues of sexual difference and identity in the history of hysteria, however, we need to add the category of gender to the feminist analytic repertoire. The term “gender” refers to the social relations between the sexes, and the social construction of sexual roles. It stresses the relational aspects of masculinity and femininity as concepts defined in terms of each other, and it engages with other analytical categories of difference and power, such as race and class. Rather than seeking to repair the historical record by adding women’s experiences and perceptions, gender theory challenges basic disciplinary paradigms and questions the fundamental assumptions of the field.

When we look at hysteria through the lens of gender, new feminist questions begin to emerge. Instead of tracing the history of hysteria as a female disorder, produced by misogyny and changing views of femininity, we can begin to see the linked attitudes toward masculinity that influenced both diagnosis and the behavior of male physicians. Conversely, by applying feminist methods and insights to the symptoms, therapies, and texts of male hysteria, we can begin to understand that issues of gender and sexuality are as crucial to the history of male experience as they have been in shaping the history of women.

The Radio 4 programme is otherwise excellent and talks to historians, literary critics, psychiatrists and the like about Victorian madness.

Thanks to the changes to the BBC website it is only available for another six days before disappearing into the void forever.

Link to ‘Madwomen in the Attic’.

Social warfare

A news story in today’s Nature notes that the US military are pumping more money into social science research which is considered to be an important ‘game changing’ component of 21st century warfare.

The unconventional wars now being fought by the US military have also bolstered interest in the social sciences. With the military trying to stave off a growing insurgency in Afghanistan, the Pentagon now believes that understanding cultural dynamics is at least as important as weapons. Consequently, Lemnios is ramping up funding in social-science projects, including a model developed by Los Alamos National Laboratory in New Mexico to simulate the opium trade in Afghanistan and analyse the effectiveness of efforts to combat it. The office is also supporting a project at the University of Chicago, Illinois, to model and predict potential conflicts.

Research to be used ‘on the ground’, like that described above, is likely to involve at least two important components. The first is the deployment of social scientists in conflict zones to use their skills to better solve problems that require the co-operation of local populations, along the lines of the Human Terrain System.

The other is the use of mathematical modelling to look at the structure of social networks, to decide who the key players are, or to infer (technically, to impute) the likely structure of parts of the network that are not directly observed.

For example, within a large town there may be a small network of insurgents / terrorists / freedom fighters (take your pick) who your army wants to destroy. The traditional approach involves getting informers to reveal this network while you have to make a subjective judgement about how true this information is and how important each individual is, often before blundering in with troops, much to everyone’s alarm.

With social network analysis, you can create maps of the known network, mathematically analyse how accurate they seem, and get an estimate of how important each person is to the overall structure. You can even add to the model with observational data (e.g. with traffic analysis – looking at communication patterns without knowing content) and make computational best guesses about relationships and people you know must exist but know nothing directly about.

The idea is that you can destroy the network by taking out the key people with the minimum of fuss – where ‘taking out’ could mean killing, arresting or bribing, and where ‘fuss’ could mean violence, risk or public knowledge.
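As a very rough sketch of the ‘how important is each person’ step described above, the snippet below uses the networkx library to compute betweenness centrality on an invented toy network; the names, edges and choice of metric are purely illustrative, not anything used in a real operation.

```python
# Toy example of estimating who matters most in a network using
# betweenness centrality (how often a node sits on shortest paths
# between other nodes - a rough proxy for being a broker).

import networkx as nx

# Hypothetical observed contacts between individuals A-G
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"),    # one tight cluster
    ("C", "D"),                            # D bridges the two clusters
    ("D", "E"), ("E", "F"), ("E", "G"), ("F", "G"),
])

scores = nx.betweenness_centrality(G)
for node, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(node, round(score, 2))
```

In this toy graph the bridging nodes come out on top, which is the kind of result that would single them out as the people whose removal most disrupts the structure.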

Essentially, you are analysing the behaviour of networks and much the same principles (and indeed, some of the same laws) apply to other sorts of networks like transport and communication.

While this sort of research would help forces on the ground, other research, such as that funded by the Pentagon’s Minerva programme, seems aimed more at foreign policy and PsyOps, where the attitudes and behaviour of very large populations need to be understood.

As has been the case with previous military enthusiasm for such ventures, this new level of funding is likely to cause additional concern that social science will become ‘weaponised’ and distort a field that has traditionally had a commitment to a ‘do no harm’ policy.

The Nature article also has a curious bit where it discusses the Pentagon’s interest in “how organisms sense and respond to stimuli – such as chemicals, ions and metals, or electrical, magnetic, optical and mechanical impulses” to be able to develop “living sentinels”.

UPDATE: Thanks to Ian for sending a link to this Slate article series on ‘how the U.S. military used social networking to capture the Iraqi dictator’.

Link to Nature on Pentagon / social science tryst.

The ‘pseudocommando’ mass murderer

Murder sprees by grudge-bearing, gun-toting killers have become a tragic feature of modern society, although, owing to the thankfully rare occurrence of such incidents, little is known about the sort of person who decides to embark upon this sort of deadly rampage. An article just published in the Journal of the American Academy of Psychiatry and the Law reviews what we know about such people.

It must be said that the article spends a lot of time on the rather interpretive ‘psychodynamics’ of mass shooter personality and less on more systematic evidence, but largely, it seems, because there is very little of the latter.

However, it’s also worth saying that forensic psychology and psychiatry in the US has traditionally been, and remains, much more heavily influenced by Freudian theories than in Europe and so these sorts of analyses are not quite so unusual as they might seem.

The two introductory paragraphs to the paper note the main points and dispel some myths with regard to mass shooters (please note, I’ve removed the numerical references for ease of reading).

The term pseudocommando was used by Dietz in 1986 to describe a type of mass murderer who plans his actions “after long deliberation”. The pseudocommando often kills indiscriminately in public during the daytime, but may also kill family members and a “pseudo-community” he believes has mistreated him. He comes prepared with a powerful arsenal of weapons and has no escape planned. He appears to be driven by strong feelings of anger and resentment, in addition to having a paranoid character. Such persons are “collectors of injustice” who nurture their wounded narcissism and retreat into a fantasy life of violence and revenge. Mullen described the results of his detailed personal evaluations of five pseudocommando mass murderers who were caught before they could kill themselves or be killed. He noted that the massacres were often well planned (i.e., the offender did not “snap”), with the offenders arriving at the crime scene heavily armed, often in camouflage or warrior gear, and that they appeared to be pursuing a highly personal agenda of payback to an uncaring, rejecting world. Both Mullen and Dietz have described this type of offender as a suspicious grudge holder who is preoccupied with firearms.

Mass killings by such individuals are not new, nor did they begin in the 1960s with Charles Whitman. The news media tend to suggest that the era of mass public killings was ushered in by Whitman atop the tower at the University of Texas at Austin and have become “a part of American life in recent decades.” Research indicates that the news media have heavily influenced the public perception of mass murder, particularly the erroneous assertion that its incidence is increasing. Furthermore, it is typically the high-profile cases that represent the most widely publicized, yet least representative mass killings. As an example that such mass murderers have existed long before Whitman, consider a notorious case, the Bath School disaster of 1927, now long forgotten by most. Andrew Kehoe lived in Michigan in the late 1920s. He struggled with serious financial problems, and his wife suffered from tuberculosis. He appeared to focus his unhappiness and resentment on a local town conflict having to do with a property tax being levied on a school building. After becoming utterly overwhelmed with resentment and hatred, Kehoe killed his wife, set his farm ablaze, and killed some 45 individuals by setting off a bomb in the school building. Kehoe himself was killed in the blast, but he left a final communication on a wooden sign outside his property that read: “Criminals are made, not born” – a statement suggestive of externalization of blame and long-held grievance.

Link to PubMed entry for ‘pseudocommando’ article.

Do animals commit suicide?

Time magazine has a short article on the history of ideas about whether animals can commit suicide. It starts somewhat awkwardly by discussing the recent Oscar winning documentary on dolphins but is in fact based on an academic paper on ‘animal suicide’.

Changes in how humans have interpreted animal suicide reflect shifting values about animals and our own self-destruction, the paper argues. The Romans saw animal suicide as both natural and noble; an animal they commonly reported as suicidal was one they respected, the horse. Then for centuries, discussion of animal suicide seems to have stopped. Christian thinkers like St. Thomas Aquinas deemed suicide sinful for humans and impossible for animals. “Everything naturally loves itself,” wrote Aquinas in the 13th century. “The result being that everything naturally keeps itself in being.”

In 19th century Britain, however, after Darwin demonstrated how humans evolved from animals, humane societies formed, vegetarianism and pets became popular, and reports of animal suicide resurfaced. The usual suspect this time was the dog. In 1845 the Illustrated London News reported on a Newfoundland who had repeatedly tried to drown himself: “The animal appeared to get exhausted, and by dint of keeping his head determinedly under water for a few minutes, succeeded at last in obtaining his object, for when taken out this time he was indeed dead.”

Of course, the article doesn’t answer the question of whether animals can end it all, but is a fascinating look at how the idea that they can has gone in and out of fashion.

UPDATE: Thanks to Mind Hacks reader Avicenna for pointing out that the full text of the academic article ‘The nature of suicide: science and the self-destructive animal’ is available online.

Link to ‘Do Animals Commit Suicide? A Scientific Debate’.

An introduction to cognition and culture

The Culture and Cognition blog covers the territory where culture and psychology meet, and they’ve just released their ‘reader’, which has a list of essential books and papers covering the interface between anthropology and the cognitive sciences.

Many of the articles are available in full online and the list is a fantastic guide to the area.

It includes both popular and academic texts, but the list works best as a reference, so bookmark it – I’m sure you’ll be returning to it time and again if you’re like me and interested in the crossover between culture and psychology.

Link to Cognition and Culture Reader.

Go Cognitive guide to the brain

Go Cognitive is an awesome free video archive of interviews and discussions that aims to explain some of the core topics in cognitive neuroscience.

It’s a project of the University of Idaho who’ve managed to convince some of the leaders in the science of the brain to talk about their work.

There are videos on fMRI, neuroplasticity, attention and neurological problems to name but a few, and there’s even a talk on psychology and stage magic.

The website also has a demo section that shows some of the principles in action.

My only complaint is that you can’t download the videos – they can only be streamed – but nevertheless they remain a fantastically produced, high-quality series. Bravo.

Link to Go Cognitive videos (thanks Peter!).

Future neuro-cognitive warfare

The US Army holds an annual conference called the “Mad Scientist Future Technology Seminar” that considers blue-sky ideas for the future of warfare. Wired’s Danger Room discusses the conference and links to an unclassified pdf summary of the meeting, which contains this interesting paragraph about ‘neuro-cognitive warfare’:

In the far term, beyond 2030, developments in neuro-cognitive warfare could have significant impacts. Neuro-cognitive warfare is the mashing of electromagnetic, infrasonic, and light technologies to target human neural and physiological systems. Weaponized capabilities at the tactical level will be focused on degrading the cognitive, physiological, and behavioral characteristics of Soldiers. Its small size and localized effects will make it ideal for employment in urban areas. Such technology could be employed through online immersive environments such as 2d Life or other electronic mediums to surreptitiously impact behavior without the knowledge of the target.

I presume ‘2d Life’ refers to Second Life, but I could be wrong.

The first part discusses the conventional development of warfare technology designed to target the nervous system – a long-established military tradition that has included weapons such as the rock, the poison-tipped arrow, the nerve gas shell and a new generation of hush-hush electromagnetic weapons.

The second part is a little more interesting, however: it implies that a certain form of stimulation embedded in a popular game or internet service (I think they’re too shy to say porn) might reduce cognitive performance by only a fraction but, when considered over a whole army, could make a difference to the overall fighting force.

The scenario is a little bit science fiction (Snow Crash, anyone?) but it is an intriguing possibility, given that only a slight change in each individual would be needed for the effect to matter if it could be distributed across a wide enough population.

For example, many priming studies have shown it is possible to influence behaviour just by exposing people to certain concepts.

In one of my favourite studies, exposing people to ideas about elderly people slowed their walking speed, while a more recent experiment found this effect could change action sequences as well.

Link to Danger Room coverage of ‘Mad Scientist Seminar’.
pdf of unclassified military summary.

Dark clouds and their silver linings

The New York Times has a thought-provoking article on the possible advantages of depression, suggesting that the negative form of thinking associated with depression may encourage people to focus on their problems to help them solve the life dilemmas that have contributed to their low mood.

The piece explores the idea that rumination, the constant mental re-running of worrying thoughts and concerns, might be a form of self-imposed problem solving that has been evolutionarily selected as an adaptive reaction to unfortunate situations.

This hypothesis was suggested by psychologist Paul Andrews and psychiatrist Andrew Thomson and The New York Times piece is largely based on their recently published paper which outlines how many studies have found depressed people are better at solving certain sorts of problems.

One difficulty with their proposal, however, is that while they admit that social problems are one of the most common triggers for depression, they miss out the many studies which have found that depressed people, and especially depressed people who ruminate, are reliably worse at social problem solving.

It’s probably also worth saying that depression is not a single entity. Despite there currently being a single diagnosis of ‘major depression’, in reality the problem can range from a few weeks of feeling out of sorts, to suicidal despair, to a seemingly complete shutdown of body and mind in a state of catatonia.

Evolutionary explanations of psychiatric disorders always sit slightly uncomfortably because it’s not clear exactly what is being selected for when it’s not clear exactly what we’re talking about.

However, the article has clearly stimulated a great deal of interest, and the author, science writer Jonah Lehrer, tackles some of the feedback on his blog.

Link to NYT piece ‘Depression’s Upside’.