How cannabis makes thoughts tumble

Cannabis smokers often report that when stoned, their thoughts have a free-wheeling quality and concepts seem connected in unusual and playful ways. A study just published online in Psychiatry Research suggests that this effect may be due to the drug causing ‘fast and loose’ patterns of spreading activity in memory, something known as ‘hyper-priming’.

Priming is a well studied effect in psychology where encountering one concept makes related concepts more easily accessible. For example, classic experiments show that if you see the word ‘bird’ you will react more quickly to words like ‘wing’ and ‘fly’ than words like ‘apple’ and ‘can’ because the former words are more closely related in meaning than the latter.

In fact, it has been shown that the more closely related the word, the quicker we react to it, demonstrating a kind of ‘mental distance’ between concepts. Think of it like dropping a stone into a pool of mental concepts. The ripples spread activity that weakens as it moves away from the central idea.

‘Hyper-priming’ is an effect where priming happens for concepts at a much greater distance than normal. For example, the word ‘bird’ might speed up reaction times to the word ‘aeroplane’. To return to our analogy, the ripples are much stronger and spread further than normal.
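The ripple analogy can be sketched as a toy spreading-activation model. Everything here is illustrative – the distances, decay rates and the linear decay function are invented for the sketch, not taken from the study – but it shows the key idea: hyper-priming corresponds to a shallower decay, so even semantically distant concepts receive a measurable boost.

```python
# Toy model of spreading activation (all numbers are illustrative).
# Activation from a primed concept decays with semantic distance;
# 'hyper-priming' corresponds to a shallower decay curve.

def activation(distance, decay):
    """Activation reaching a concept `distance` steps from the prime."""
    return max(0.0, 1.0 - decay * distance)

# Hypothetical semantic distances from the prime 'bird'.
concepts = {"wing": 1, "fly": 1, "aeroplane": 3, "apple": 6}

normal = {w: activation(d, decay=0.3) for w, d in concepts.items()}
hyper  = {w: activation(d, decay=0.1) for w, d in concepts.items()}

# Under normal priming, 'aeroplane' receives little activation (~0.1);
# under hyper-priming the same concept gets a substantial boost (~0.7).
```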

The effect has been reported, albeit inconsistently, in people with schizophrenia and some have suggested it might explain why affected people can sometimes make false or unlikely connections or have disjointed thoughts.

As cannabis has been linked to a slightly increased risk of psychosis, and certainly causes smokers to have freewheeling thoughts, the researchers decided to test whether stoned participants would show the ‘hyper-priming’ effect.

The experiment used a classic ‘lexical decision task’ where the volunteers are shown an initial word (‘time’) and then, after a short gap, are shown a nonsense word (‘yipt’) and a true word (‘date’) at the same time and have to indicate as quickly as possible which is the real word.

The experimenters altered how related the initial word and true word were to test for the semantic distance effect, and also varied the gap between the initial word and the test to see how long the priming effect might last.
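The way a priming effect is scored in this sort of task can be sketched as follows. The reaction times below are hypothetical round numbers, not data from the study; the point is only that priming is measured as a speed-up relative to an unrelated baseline, and hyper-priming would show up as an unusually large speed-up for distant primes.

```python
# Sketch of how a priming effect is scored in a lexical decision task
# (hypothetical mean reaction times in milliseconds, not study data).

# Mean reaction time to the real word, by prime-target relatedness.
rts = {
    "close":     520,  # e.g. 'bird' -> 'wing'
    "distant":   560,  # e.g. 'bird' -> 'aeroplane'
    "unrelated": 600,  # e.g. 'bird' -> 'apple' (the baseline)
}

# Priming effect = speed-up relative to the unrelated baseline.
priming = {cond: rts["unrelated"] - rt for cond, rt in rts.items()}

# Hyper-priming would appear as an unusually large 'distant' effect,
# i.e. priming["distant"] approaching priming["close"].
```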

Volunteers who were under the influence of cannabis showed a definite ‘hyper-priming’ tendency, reacting more quickly to distant concepts. Interestingly, they also showed some of this tendency when straight and sober.

Cannabis also had the effect of causing temporary psychosis-like distortions, as would be expected from a psychedelic drug, but the smokers did not make more errors and were not more likely to report psychosis-like symptoms when sober, suggesting the effect was not due to general mental impairment and couldn’t be explained by an underlying tendency to mental distortion.

Although the debate is not completely settled, there is now good evidence that cannabis causes a small increased risk for developing schizophrenia particularly when smokers start young. In fact, additional evidence on this front was published only this week.

The researchers discuss the possibility that long-term smokers who spend a lot of time in a chronic ‘hyper-primed’ state might make psychosis more likely by loosening the boundaries of well-grounded thought, although exactly how cannabis raises the risk of psychosis, and indeed how exactly it affects the brain, is still not understood well enough to make a firm judgement.

Link to PubMed entry for cannabis ‘hyper-priming’ study.

Tipsy thinking

Seed Magazine has a great short article on misperceptions and counter-intuitive findings concerning alcohol and drinking.

The piece covers whether the alcohol break-down product acetaldehyde plays as great a part in drunkenness as alcohol itself, misperceptions about the chances of women having their drink spiked to facilitate sexual assault, and mothers’ perceptions about their kids’ future drinking patterns.

Alcohol is so embedded in most cultures that perceptions and reality intermix in surprising ways. Last week psychologist Polly Palumbo discussed a 2008 study about mothers’ beliefs about their own kids’ drinking. You might think that if mothers were concerned about their young children becoming drinkers in high school, they might be more successful in preventing some of the kids from actually engaging in underage drinking. In fact, the study, led by Stephanie Madon and published in the Journal of Personality and Social Psychology, found the opposite. Mothers who worried their children might become drinkers had kids that were significantly more likely to drink.

The researchers are careful to point out that the study is just a correlation; we can’t say that the mothers’ belief about drinking is what caused their kids to drink. But because the study was administered over several years, it’s better than many correlational studies: We know the belief preceded the drinking, so it’s pretty much impossible that the kids’ drinking behavior itself led to the belief.

Link to Seed article ‘A Sober Assessment’.

Teenagers: hyper-mortals

A common belief about teenagers is that they implicitly assume that they are invincible or immortal and think little about their own deaths. A new study just published in the Journal of Adolescent Health shows this to be a myth, however: teenagers vastly over-estimate their chances of dying within the next year.

By the mid-teens, our ability to judge the likelihood of uncertain events is usually equal to that of adults, so we might expect that adolescents can judge the chance of death as accurately as grown-ups.

This study, led by psychologist Baruch Fischhoff, surveyed 3,436 adolescents aged 14 to 18, plus a local group of 124 seventh graders and 132 ninth graders, asking them to estimate their chance of dying in the next year and enquiring about what sort of neighbourhood they lived in, whether they’d experienced or witnessed any violent events and whether they’d had any serious health problems.

Although the statistical death rate is 0.08%, the most common estimates were that they had a 5% or 10% chance of dying within the next year. Interestingly, a larger than expected number of teens judged their chance of dying within the next year as 50%, although this likely suggests they were giving a sort of 50/50 answer as a way of expressing “I don’t know”.
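The scale of the over-estimate is worth making explicit. Using the figures above – a 0.08% actual annual death rate against typical estimates of 5% and 10% – the arithmetic works out like this:

```python
# How far off the typical teenage estimates are, using the figures
# quoted above from the study summary.
actual = 0.0008                   # statistical annual death rate: 0.08%
common_estimates = [0.05, 0.10]   # the most common answers: 5% and 10%

inflation = [estimate / actual for estimate in common_estimates]

# Even the lower common estimate inflates the real risk roughly 62-fold;
# the 10% estimate is about 125 times the actual rate.
```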

Adolescents’ assumptions about how likely they were to die were strongly related to their reports of how much crime they expected to experience, and only weakly related, if at all, to whether they’d experienced violent events or had health problems.

In other words, teenagers seem to be personally pessimistic and live in a world where they perceive themselves to have a high chance of dying despite the relatively small actual risk.

Link to PubMed entry for study.

Tell me lies, tell me sweet little lies

Flattery can work its magic, even when we know it’s insincere. The Boston Globe covers a new study that found that even when we realise the compliments we’re hearing are an attempt to butter us up, they can still have a persuasive effect.

Insincere flattery gets a bad rap. Sure, it sounds cheesy or even awkward. But new research suggests that one’s initial conscious reaction – discounting the flattery as a self-serving ploy – may mask a more durable implicit positive emotional association with the flatterer. People who were given a printed advertisement from a department store that paid compliments to their sense of fashion had higher opinions of the store, but only when they weren’t given much time to think about it, or when they were asked several days later. This effect was boosted after people engaged in self-criticism but was nullified after people engaged in self-affirmation, suggesting that flattery – even the patently insincere type – will be especially effective on folks who are down on their luck.

Sadly, the study itself is locked behind a paywall, but there’s a longer summary of the experiment at the journal website which has a few more details.

By the way, could I just say what a lovely gas mask you’re wearing? Mind Hacks, getting the readers we deserve since 2004.

Link to brief Boston Globe write-up.
Link to study abstract.
Link to longer summary of study (via Neuromarketing).

The burglar with the lemon juice disguise

I’ve just re-read the classic study “Unskilled and unaware of it” which established that when we’re incompetent at something we’re often so incompetent that we don’t realise that we’re incompetent. I had forgotten that it starts with a wonderful story about an inept bank robber.

In 1995, McArthur Wheeler walked into two Pittsburgh banks and robbed them in broad daylight, with no visible attempt at disguise. He was arrested later that night, less than an hour after videotapes of him taken from surveillance cameras were broadcast on the 11 o’clock news. When police later showed him the surveillance tapes, Mr. Wheeler stared in incredulity. “But I wore the juice” he mumbled. Apparently, Mr. Wheeler was under the impression that rubbing one’s face with lemon juice rendered it invisible to videotape cameras (Fuocco, 1996).

We bring up the unfortunate affairs of Mr. Wheeler to make three points. The first two are noncontroversial. First, in many domains in life, success and satisfaction depend on knowledge, wisdom, or savvy in knowing which rules to follow and which strategies to pursue. This is true not only for committing crimes, but also for many tasks in the social and intellectual domains, such as promoting effective leadership, raising children, constructing a solid logical argument, or designing a rigorous psychological study. Second, people differ widely in the knowledge and strategies they apply in these domains (Dunning, Meyerowitz, & Holzberg, 1989; Dunning, Perie, & Story, 1991; Story & Dunning, 1998), with varying levels of success. Some of the knowledge and theories that people apply to their actions are sound and meet with favorable results. Others, like the lemon juice hypothesis of McArthur Wheeler, are imperfect at best and wrong-headed, incompetent, or dysfunctional at worst.

Perhaps more controversial is the third point, the one that is the focus of this article. We argue that when people are incompetent in the strategies they adopt to achieve success and satisfaction, they suffer a dual burden: Not only do they reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the ability to realize it. Instead, like Mr. Wheeler, they are left with the mistaken impression that they are doing just fine. As Miller (1993) perceptively observed in the quote that opens this article, and as Charles Darwin (1871) sagely noted over a century ago, “ignorance more frequently begets confidence than does knowledge” (p. 3).

This effect has since been named the Dunning-Kruger effect after the authors of the study.

Link to PubMed entry for study.

Bonuses generate more heat than light

The engaging behavioural economist Dan Ariely has just become a columnist for Wired UK and in his first article he describes how the promise of performance-related pay often backfires, leading people to do more but perform worse.

To see the effect of bonuses on performance, Nina Mazar (assistant professor of marketing, Toronto University), Uri Gneezy (professor of economics and strategy, University of California, San Diego), George Loewenstein (professor of economics, Carnegie Mellon, Pennsylvania) and I conducted three experiments. In one we gave subjects tasks that demanded attention, memory, concentration and creativity. We asked them, for example, to assemble puzzles and to play memory games while throwing tennis balls at a target. We promised about a third of them one day’s pay if they performed well. Another third were promised two weeks’ pay. The last third could earn a full five months’ pay. (Before you ask where you can participate in our experiments, I should tell you that we ran this study in India, where the cost of living is relatively low.)

What happened? The low- and medium-bonus groups performed the same. The big-bonus group performed worst of all.

Link to ‘Bonuses boost activity, not quality’ in Wired UK.

Full disclosure: I’m a contributing editor to Wired UK. I have never received a bonus in my life, but if I do, I hope to spend it on beautiful women and fast cars, although, in reality, I will probably buy a laptop.

We go with the flow

The Psychologist has a completely fascinating article on how we perceive things to be more appealing, easier to handle and more efficient based on how simple they are to understand – even when this is based on irrelevant or superficial properties, like their name or the font they are described in.

The core idea is that we partly judge things on ‘processing fluency’, that is, how easy it is to immediately grasp something. This seems intuitive, as we tend to prefer things that make sense to us, but it turns out that this preference is also heavily influenced by surface features.

For example, the article discusses the surprising amount of work on how simply changing the font can change our opinion of what the text is describing.

When they were presented [with physical exercise instructions] in an easy-to-read print font (Arial), readers assumed that the exercise would take 8.2 minutes to complete; but when they were presented in a difficult-to-read print font, readers assumed it would take nearly twice as long, a full 15.1 minutes (Song & Schwarz, 2008b). They also thought that the exercise would flow quite naturally when the font was easy to read, but feared that it would drag on when it was difficult to read. Given these impressions, they were more willing to incorporate the exercise into their daily routine when it was presented in an easy-to-read font. Quite clearly, people misread the difficulty of reading the exercise instructions as indicative of the difficulty involved in doing the exercise…

Novemsky and colleagues (2007) presented the same information about two cordless phones in easy- or difficult-to-read fonts. They observed that 17 per cent of their participants postponed choice when the font was easy to read, whereas 41 per cent did so when the font was difficult to read. Apparently, participants misread the difficulty arising from the print font as reflecting the difficulty of making a choice.

The article contains numerous examples of how changing surface features, such as giving something an easy or difficult to pronounce name, alters what we think about it.

However, the piece also mentions that giving something difficult-to-process or unfamiliar features also means we scrutinise it more closely, which means we often pick up errors more easily.

This is a wonderfully elegant example:

As an example, consider the question ‘How many animals of each kind did Moses take on the Ark?’ Most people answer ‘two’ despite knowing that the biblical actor was Noah, not Moses. Even when warned that some of the statements may be distorted, most people fail to notice the error because both actors are similar in the context of biblical stories. However, a change in print fonts is sufficient to attenuate this Moses illusion. When the question was presented in an easy-to-read font, only 7 per cent of the readers noticed the error, whereas 40 per cent did so when it was presented in a difficult-to-read font…

Link to Psychologist article on processing fluency.

Full disclosure: I am an unpaid associate editor and columnist for The Psychologist and I have an unfamiliar first name – draw your own conclusions.

Information channelling

The Frontal Cortex has a fantastic piece discussing a new study finding that people choose TV news based on which channels are more likely to agree with their pre-existing opinions and how we have a tendency to filter for information that confirms, rather than challenges, what we believe.

Lehrer discusses various ways in which we selectively attend to information we agree with but the best bit is where he goes on to discuss a wonderful study from 1967 where people demonstrated in the starkest way that they’d rather block out information that doesn’t agree with their pre-existing beliefs.

Brock and Balloun played a group of people a tape-recorded message attacking Christianity. Half of the subjects were regular churchgoers while the other half were committed atheists. To make the experiment more interesting, Brock and Balloun added an annoying amount of static – a crackle of white noise – to the recording. However, they allowed listeners to reduce the static by pressing a button, so that the message suddenly became easier to understand. Their results were utterly predictable and rather depressing: the non-believers always tried to remove the static, while the religious subjects actually preferred the message that was harder to hear. Later experiments by Brock and Balloun demonstrated a similar effect with smokers listening to a speech on the link between smoking and cancer. We silence the cognitive dissonance through self-imposed ignorance.

Link to Frontal Cortex piece ‘Cable news’.
Link to summary of 1967 static study.
Link to PubMed entry for same.

The ominous power of confession

I’ve just read a remarkable article [pdf] on 125 proven cases of wrongful conviction in the US justice system where the accused made a false confession.

While we tend to think that no-one would confess to a crime they’ve never committed, the phenomenon is a lot more common than we assume. The article cites studies of convicted people subsequently proved innocent, largely through DNA evidence, which found that 14-25% had made a false confession.

Research has now established that certain police interrogation techniques can lead to false confessions, and it is not only through intimidated suspects confessing even though they know they’re innocent. In some cases, categorised as ‘coerced-internalized’ false confessions, the person starts to doubt their own memory and actually comes to believe that they did commit the crime.

Interestingly, there is evidence that this is most likely to occur in the most serious crimes, possibly because the police themselves are under pressure to solve the cases. In this study, 81% of false confessions were for murders, 9% for rapes and 3% for arsons.

The article also outlines the impact of a confession on the justice system. We discussed an experimental study on the persuasive effect of confessions previously, but below is a remarkable run down of evidence from the ‘real world’.

I’ve taken out the numerical references for ease of reading, but if you want to check out the sources for the following section, it’s taken from p920:

…a suspect’s confession sets in motion a virtually irrefutable presumption of guilt among criminal justice officials, the media, the public and lay jurors. A suspect who confesses—whether truthfully or falsely—will be treated more harshly at every stage of the criminal justice process. Once police obtain a confession, they typically close the investigation, clear the case as solved, and make no effort to pursue other possible leads—even if the confession is internally inconsistent, contradicted by external evidence or the result of coercive interrogation.

Like police, prosecutors rarely consider the possibility that an entirely innocent suspect has been made to confess falsely through the use of psychologically coercive and/or improper interrogation methods. When there is a confession, prosecutors tend to charge the defendant with the highest number and types of offenses and are far less likely to initiate or accept a plea bargain to a reduced charge. Suspects who confess will experience greater difficulty making bail (especially in serious cases), a disadvantage that significantly reduces a criminal defendant’s likelihood of acquittal.

Defense attorneys are more likely to pressure their clients who have confessed to waive their constitutional right to a trial and accept a guilty plea to a lesser charge. Judges are conditioned to disbelieve claims of innocence and almost never suppress confessions, even highly questionable ones. If the defendant’s case goes to trial, the jury will treat the confession as more probative of the defendant’s guilt than virtually any other type of evidence, especially if—as in virtually all high profile cases—the confession receives negative pre-trial publicity.

Confession evidence (regardless of how it was obtained) is so biasing that juries will convict on the basis of confession alone, even when no significant or credible evidence confirms the disputed confession and considerable significant and credible evidence disconfirms it. Sadly, if a false confessor is convicted, he will almost certainly be sentenced more harshly.

The article, ‘The Problem of False Confessions in the Post-DNA World’, originally published in the North Carolina Law Review is quite long but a gripping read.

pdf of article.
Link to citation and summary of article.

Motivated reality

Neurophilosophy has a great piece on a new study finding that the perception of distance to an object was altered by how much someone wanted it, with a greater desire leading the people in the study to perceive the object as closer. This is a summary of one of the several experiments that demonstrated the effect:

Participants were asked to throw a small rubber bean bag towards a gift voucher placed on the floor in front of them, and told that the person whose toss landed closest to the voucher would win it. One group was told that the voucher had a value of $25, thus making it desirable to them, while the other was led to believe that it was worthless. This experiment confirmed the earlier ones – those participants who believed the voucher was worth something perceived it to be nearer, and consequently underthrew the bean bag so that it fell short of the target.

As Mo notes, these experiments are related to what is known as the ‘New Look’ movement in psychology which arose in the 1940s as a direct challenge to the behaviourists who said that all mental states, such as beliefs and desire, were illusions and had no scientific basis.

The New Look theorists argued that our perception of reality could be directly influenced by our desires, and set about proving the behaviourists wrong with their own tools: physical measurements of perception.

The movement was sparked by a 1947 study by psychologists Jerome Bruner and Cecile Goodman that has become a classic in the field and is still fascinating today.

They asked children to estimate the size of coins using an adjustable ‘collar’ and found the kids consistently judged the coins to be bigger than identically sized cardboard circles, suggesting the monetary value of the coins was influencing how big they perceived the dimensions to be.

But the clincher for the idea that value and desire altered perception was that the children from poorer backgrounds perceived the coins to be bigger than children from richer backgrounds.

The study caused huge interest and many studies followed in the subsequent years, partly as the field allowed the combination of both experimental psychology and Freudian-inspired ideas about the power of unconscious motivations.

These latest studies, covered expertly by Neurophilosophy, follow in the same tradition.

Link to Neurophilosophy on how ‘Desire influences visual perception’.
Link to full text of Bruner and Goodman’s classic study.

Dealing with data of the damned

There’s an interesting article in Wired about how scientists deal with data that conflicts with their expectations and whether biases in how the brain deals with contradictory information might influence scientific reasoning.

The piece is based on the work of Kevin Dunbar who combines the sociology of science with the cognitive neuroscience of scientific reasoning.

In other words, he’s trying to understand what scientists actually do to make their discoveries (rather than what they say they do, or what they say they should do) and whether there are specific features of the way the brain handles reasoning that might encourage these practices.

One of his main findings is that when experimental results appear that can’t be explained, they’re often discounted as being useless. The researchers might say that the experiment was designed badly, the equipment faulty, and so on.

It may indeed be the case that such faults occurred, but the same faults could equally lie behind results that fit expectations; these possibilities are rarely investigated when the data agrees with pre-existing assumptions, leading to possible biases in how data is interpreted.

Dunbar is not the first to tackle this issue. In fact, the first to do so is probably one of the most important but unrecognised philosophers of science, Charles Fort, who is typically associated with ‘Fortean’ or anomalous phenomena – such as fish falling from the sky.

Fort did indeed collect reports of all types of anomalous phenomena (interestingly, almost all from scientific journals) and used them as a critique of the scientific method – noting that while scientists say they reason from the data to theories about the world, what they actually do is filter the data in light of their theories and frequently ignore information that contradicts existing assumptions – hence, ‘damning’ some data as unacceptable.

This was later echoed when philosophers and sociologists started studying the scientific community in the 20th century, noting that the scientific method was not a clear practice but more of a tool in a wider consensus-forming toolbox.

Probably the most important thinker in this regard, not mentioned in the Wired article, was the philosopher Paul Feyerabend who noted that researchers regularly violate the ‘rules’ of science and this actually promotes progress rather than impedes it.

The article goes on to discuss research suggesting that part of this bias for information consistent with our assumptions may be due to differences in the way the brain handles this information.

Curiously, the piece mentions a 2003 study, where students were apparently asked to select the more accurate representation of gravity in an fMRI scanner, but unfortunately, I can find no trace of it.

However, a 2005 study by the same team, where participants were asked to match theories supported to different degrees by the data they’d seen (to do with how drugs relieve depression), came to similar conclusions: namely, that brain activity is markedly different when we receive information that confirms our theories compared to when we receive information that challenges them.

In particular, contradictory information seems to activate an area deep in the frontal lobe (the ACC) often associated with ‘conflict monitoring’, along with an outer area of the frontal lobe (the DLPFC) associated with sorting out conflicting information, likely by filtering out some of the incompatible data so it is less likely to be registered or remembered.

There is clearly much more to scientific reasoning than this, as it is vast and complex both within individual researchers and between groups of people. I was particularly interested to read that breakthroughs were most likely to come from group discussions:

While the scientific process is typically seen as a lonely pursuit — researchers solve problems by themselves — Dunbar found that most new scientific ideas emerged from lab meetings, those weekly sessions in which people publicly present their data. Interestingly, the most important element of the lab meeting wasn’t the presentation — it was the debate that followed. Dunbar observed that the skeptical (and sometimes heated) questions asked during a group session frequently triggered breakthroughs, as the scientists were forced to reconsider data they’d previously ignored. The new theory was a product of spontaneous conversation, not solitude; a single bracing query was enough to turn scientists into temporary outsiders, able to look anew at their own work.

It turns out, though, that discussion among a diverse range of people is most important – a room full of people who share assumptions and expertise tends not to lead to creative scientific insights.

Link to Wired article on scientific reasoning.

The psychological effects of brain theories

The Frontal Cortex has an interesting piece on how giving people information suggesting that neuroscience undermines our everyday concept of free will can alter our ethical behaviour.

The post discusses two experiments where participants had been given information suggesting that free will was an illusion – one passage taken from Francis Crick’s book The Astonishing Hypothesis that argues against the everyday concept of free will on the basis of neurobiology.

It seems even these relatively brief encounters with information arguing against free-will had a noticeable effect on behaviour:

It turned out that students who had read the anti-free will quote were significantly more likely to cheat on the mental arithmetic test; their exposure to some basic scientific spin – your soul is a piece of meat – led to an increase in amorality. Of course, this is a relatively mild ethical lapse – as Schooler notes, “None of the participants exposed to the anti-free will message assaulted the experimenter or ran off with the payment kitty” – but it still demonstrates that even seemingly banal materialist concepts can alter our ethical behavior.

In another study, information on a “disbelief in free will” reduced people’s willingness to help others and increased the amount of unhelpful behaviour toward others.

The issue of free will in neuroscience is complex, but it is interesting that the information provided doesn’t bear directly on the issue of whether it is best to help other people or not.

Clearly though, biological explanations have an association with the idea that people are less in control of their actions, as we also know from other studies.

Science tends to assume that theories are not neutral, in that they affect how we look at the world as researchers, but it is interesting to find that this also happens on a personal psychological level.

Link to Frontal Cortex on ‘Free Will and Ethics’.

The persuasive power of false confessions

The APS Observer magazine has a fantastic article on the power of false confessions to warp our perception of other evidence in a criminal case to the point where expert witnesses will change their judgements of unrelated evidence to make it fit the false admission of guilt.

We tend to think that no-one would confess to a crime that they didn’t commit but there are numerous high profile cases where this has happened and the article notes that “because of advances in DNA evidence, the Innocence Project has been able to exonerate more than 200 people who had been wrongly convicted, 49 of whom had confessed to the crime we now know they didn’t commit.”

As a result of some of the early discoveries of false confessions, there is now a growing amount of research on what personal and situational factors trigger false confessions.

The classic book on the topic is forensic psychologist Gisli Gudjonsson’s The Psychology Of Interrogations And Confessions. It reviews the scientific evidence but also covers numerous legal cases where false confessions have played a part.

It turns out, people falsely confess to crimes for a wide array of reasons. Some are voluntary confessions where the person might want to gain notoriety, annoy the police or might genuinely believe they’ve committed the crime due to a delusion in the context of a psychotic mental illness like schizophrenia.

In other cases, a false confession can be triggered by pressure from the police or investigators. Sometimes this happens even when the person doesn’t genuinely believe their confession, because they just want to escape the high-pressure situation. In other cases, the psychological pressure leads the person to start doubting their own memories and they come to believe they have committed the crime.

There is now a great deal of research showing that highly suggestible people and people with learning disabilities or mental illness are much more likely to make a false confession under pressure, and police interview guidelines are being changed as a result.

However, the APS article takes a different tack. It looks at the psychology of how other people involved in deciding whether the person is guilty or not are influenced by confessions.

Imagine if an accused but innocent person falsely confesses and the other evidence doesn’t suggest that they have committed the crime. In this situation, it turns out that both lay people and experts tend to change their evaluation of the other evidence and perceive it as being stronger evidence against the accused.

Some of the studies cited in the article just blew me away:

In a 1997 study, Kassin and colleague Katherine Neumann gave subjects case files with weak circumstantial evidence plus either a confession, an eyewitness account, a character witness, or no other evidence. Across the board, prospective jurors were more likely to vote guilty if a confession was included in the trial, even when they were told that the defendant was incoherent at the time of the confession and immediately recanted what he said… Other studies have shown that conviction rates rise even when jurors see confessions as coerced and even when they say that the confession played no role in their judgment…

Kassin recently teamed up with psychologist Lisa Hasel to test the effect of confessions on eyewitnesses. They brought subjects in for what was supposed to be a study about persuasion techniques. The experimenter briefly left the room and, during that time, someone came in and stole a laptop off the desk. The subjects were then shown a lineup of six suspects, none of whom was the actual criminal, and they were asked to pick out which member of the lineup, if any, committed the crime. Two days later, the witnesses were brought back for more questioning… Of the people who had identified a subject from the original lineup, 60 percent changed their identification when told that someone else had confessed. Plus, 44 percent of the people who originally determined that none of the suspects in the lineup committed the crime changed their mind when told that someone had confessed (and 50 percent changed when told that a specific person had confessed). When asked about their decision, “about half of the people seemed to say, ‘Well, the investigator told me there was a confession, so that must be true’”…

In 2006, University College London psychologist Itiel Dror took a group of six fingerprint experts and showed them samples that they themselves had, years before, determined either to be matches or non-matches (though they weren’t told they had already seen these fingerprints). The experts were now given some context: either that the fingerprints came from a suspect who confessed or that they came from a suspect who was known to be in police custody at the time the crime was committed. In 17 percent of the non-control tests, the experts changed assessments that they had previously made correctly.

The APS Observer has plenty more examples and demonstrates that false confessions are psychological sinkholes that pull in both the accused and the legal process.

Link to ‘The Psychology and Power of False Confessions’.

Optimal starting prices for negotiations and auctions

An article in the latest edition of Current Directions in Psychological Science reviews studies on the best starting points to increase the final price in either negotiations or auctions. In general, start high in negotiations, start low in auctions.

It turns out that negotiations, where several parties are invited to discuss a price, and auctions, where people can join the bidding whenever they want, are quite different psychologically.

The article, by business psychologist Adam Galinsky and colleagues, notes that starting prices are a form of ‘anchor’ – a piece of information which is known to affect subsequent decisions. As the authors note, anchoring has a powerful influence on our reasoning:

An anchor is a numeric value that influences subsequent numeric estimates and outcomes. When people make judgments, their final estimates are often assimilated to—that is, become more similar to—the initial anchor value (Tversky & Kahneman, 1974).

For example, in one of the best-known anchoring studies (Tversky & Kahneman, 1974), participants were exposed to an arbitrary number between 0 and 100 from the spin of a roulette wheel and then asked to estimate the percentage of African nations in the United Nations: Participants whose roulette wheel landed on a relatively high number gave higher absolute estimates than did participants whose wheel landed on a lower number.

Even outside of trivia questions, few psychological phenomena are as robust as the anchoring effect; it influences public policy assessments, judicial verdicts, economic transactions, and a variety of psychological phenomena.

The evidence suggests that in negotiations, a high starting price most often leads to a high final price, as the anchoring effect seems to work in a relatively undiluted way (with the caveat that completely ridiculous starting prices could prevent any deal being reached).
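As an illustrative sketch (not something from the article itself), the ‘anchoring with insufficient adjustment’ account can be modelled as a final judgment that moves only part of the way from the opening offer toward a negotiator’s private valuation. The adjustment fraction below is an assumed parameter for illustration, not a figure estimated in the studies:

```python
# Toy model of anchoring with insufficient adjustment (illustrative only).
# The final judgment starts at the anchor and moves toward the person's
# private estimate of the true value, but stops short of it.

def anchored_estimate(anchor, private_estimate, adjustment=0.6):
    """Move only a fraction of the way from the anchor to the private estimate."""
    return anchor + adjustment * (private_estimate - anchor)

# Two negotiators who both privately value the item at 100,
# facing a high versus a low opening offer:
high_start = anchored_estimate(anchor=150, private_estimate=100)  # 120.0
low_start = anchored_estimate(anchor=50, private_estimate=100)    # 80.0
```

With the same private valuation, the higher opening offer drags the final figure upward, which is the pattern the negotiation studies report.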

There’s an interesting aside in the article, mentioning that you can protect yourself from other people’s high anchor points by focusing on your own ideal price or your opponent’s weaknesses, as found by a 2001 study, or by considering why the suggested price might be inaccurate, as found by another study published in the same year.

It also turns out that, contrary to conventional wisdom, making the first offer is also a good strategy:

Many negotiation books recommend waiting for the other side to offer first. However, existing empirical research contradicts this conventional wisdom: The final outcome in single and multi-issue negotiations, both in the United States and Thailand, often depends on whether the buyer or the seller makes the first offer. Indeed, the final price tends to be higher when a seller (who wants a higher price and thus sets a high first offer) makes the first offer than when the buyer (who offers a low first offer to achieve a low final price) goes first.

In contrast, for auctions, starting with a low price is generally more likely to lead to a higher final price. The researchers note this is likely due to three factors: price rises in auctions seem to be driven by social competition, so starting with a low entry point encourages more people to join in; once someone has bid, they have made a commitment which is likely to encourage them to continue; and finally, more bids lead us to infer that the item has a higher value.

It’s not a long article, so it’s worth reading in full if you’re interested in economic reasoning. Luckily, the full text is available as a pdf preprint if you don’t have access to the journal.

Link to DOI entry for study.
pdf of full text.

The consequences of faking it

I’ve just caught a short video by the brilliant behavioural economist Dan Ariely, who explains the surprising effect of wearing fake goods on how likely we are to cheat and on how much we suspect that others are being dishonest.

Ariely is riffing on one of his recent studies, which was led by psychologist Francesca Gino. It will shortly appear in Psychological Science but you can read the full text online as a pdf.

The study involved asking people to wear what they were told were real or fake designer sunglasses, when in reality they were all the genuine article. Interestingly, those wearing the supposedly fake shades behaved less honestly in subsequent tests and were more likely to suspect others of behaving unethically.

Ariely gives a brilliant account of the study but there’s an interesting aspect in the full paper which he doesn’t touch on so much. In the final experiment of the study, the researchers found that it was a change in attitude that seemed to drive the change in honesty.

Wearing the ‘fake’ sunglasses seemed to increase personal feelings of being inauthentic, and these feelings of a ‘counterfeit self’ were most strongly associated with the changes in behaviour.

Participants who believed they were wearing imitation goods were more likely to agree with the sentiments “Right now, I don’t know how I really feel inside” and “Right now, I feel alienated from myself” and were more likely to say that they felt “out of touch with the ‘real me’” and felt as if “I don’t know myself very well”.

The study suggests that fake goods change how we perceive ourselves and this relaxes our boundaries of acceptable behaviour.

The video is short and brilliantly explained and the study is fascinating.

Link to Dan Ariely video on the effect of faking it.
pdf of full text of scientific paper.

You are kind, strong willed, but can be self-critical

I’ve just found a classic study online in which psychologist Bertram Forer gave a personality test to his students and then asked each person to rate the accuracy of their ‘individual personality profile’. In reality, all the ‘individual profiles’ were identical, but students tended to rate the descriptions as highly accurate.

In fact, on a scale of 1-5, students rated the accuracy of their profile, on average, as 4.2. This is the profile Forer used:

You have a great need for other people to like and admire you. You have a tendency to be critical of yourself. You have a great deal of unused capacity which you have not turned to your advantage. While you have some personality weaknesses, you are generally able to compensate for them. Your sexual adjustment has presented problems for you. Disciplined and self-controlled outside, you tend to be worrisome and insecure inside. At times you have serious doubts as to whether you have made the right decision or done the right thing. You prefer a certain amount of change and variety and become dissatisfied when hemmed in by restrictions and limitations. You pride yourself as an independent thinker and do not accept others’ statements without satisfactory proof. You have found it unwise to be too frank in revealing yourself to others. At times you are extroverted, affable, sociable, while at other times you are introverted, wary, reserved. Some of your aspirations tend to be pretty unrealistic. Security is one of your major goals in life.

The tendency to see ourselves in vague or general statements has since been called the Forer effect or, alternatively, the Barnum effect, after the famous catchphrase attributed to the travelling circus impresario P.T. Barnum: “There’s a sucker born every minute!”

It has been cited as the basis for palm reading, fortune telling and the like, and in the original article, Forer notes that he was inspired to conduct the study because he was “accosted by a night-club graphologist who wished to ‘read’ his handwriting”.

Forer asked the graphologist what evidence he had for the accuracy of his readings, and he replied that his clients usually confirmed that he was correct.

Forer felt this was rather poor evidence but decided on an interesting tack: rather than attempt to validate the test, he decided to study the psychology of agreeing with vague personality profiles.

Link to full text of Forer study (via Las penas del Agente Smith)