A victim of metaphor

A gripping piece from Not Exactly Rocket Science describes how simply changing the metaphors used to describe crime can alter what we think is the best way of tackling it.

The article covers a new study on the power of metaphors and how they can influence our beliefs and understanding of what’s being discussed.

In a series of five experiments, Paul Thibodeau and Lera Boroditsky from Stanford University have shown how influential metaphors can be. They can change the way we try to solve big problems like crime. They can shift the sources that we turn to for information. They can polarise our opinions to a far greater extent than, say, our political leanings. And most of all, they do it under our noses. Writers know how powerful metaphors can be, but it seems that most of us fail to realise their influence in our everyday lives.

First, Thibodeau and Boroditsky asked 1,482 students to read one of two reports about crime in the City of Addison, and then to suggest solutions for the problem. In the first report, crime was described as a “wild beast preying on the city” and “lurking in neighbourhoods”. After reading these words, 75% of the students put forward solutions that involved enforcement or punishment, such as calling in the National Guard or building more jails. Only 25% suggested social reforms such as fixing the economy, improving education or providing better health care.

The second report was exactly the same, except it described crime as a “virus infecting the city” and “plaguing” neighbourhoods. After reading this version, only 56% opted for more enforcement, while 44% suggested social reforms. The metaphors affected how the students saw the problem, and how they proposed to fix it.

The study is interesting because it touches on a central claim of the linguist George Lakoff, who has argued that metaphors are central to how we reason and make sense of the world.

Lakoff’s arguments have had a massive influence in linguistics, where they have started more than one scientific skirmish, and were adopted by the US Democratic party in an attempt to reframe the debates over key issues.

Although Lakoff was one of the pioneers of the idea that metaphor is central to reasoning, his political associations have made him somewhat unfashionable, and it’s interesting that this new study makes only passing reference to his work.
 

Link to Not Exactly Rocket Science on new metaphor study.
Link to full text of scientific study.

25 thoughts on “A victim of metaphor”

  1. I’d be interested in seeing whether just priming people with the words in the different metaphors also produces this effect.

  2. Meh. Orwell, newspeak, totalitarian regime idiosyncratic propaganda vocabulary.

    What can we do? A journalist will always, in one form or another, display some kind of subjectivity.

    Maybe only having robots allowed to report news in a cold and factual manner as they do in sports news…

    It all comes down to politics, people will always use the metaphors that serve their cause.

    It’s nothing new, unavoidable, and so what?

  3. Thank you for a thought-provoking post.
    I have long been fascinated by our cognitive biases, and especially by how we seem to use metaphors to represent abstract concepts. There are strong indications that we use existing basic representations to reason about complex issues. We may use spatial metaphors and reasoning to work with time (duration is a distance; the past is at our left/back, the future at our right/in front).
    Metaphors are incredibly powerful communication and reasoning tools, but because their power lies in reducing complex issues to ones we are more familiar with, they “by design” introduce strong biases.
    A “war on drugs” brings with it, via the metaphor, many tools that might not apply. Equally, a “war on terror” implies that terror, like drugs, is an entity that can be subdued by superior (physical) force. Choosing a different metaphor may, as you wrote, lead to radically different mindsets and approaches.
    Thanks for your inspiring post,
    Jonas

  4. I like this study and the take on it you bring to the table. The one point on which I would very much disagree with Lakoff, though, is that he confused the epiphenomenon of the metaphor with the underlying principles of how the brain makes sense of the world by connecting information (across domains, no less).

    Hence I second Daniel Lewis: Could priming effects account for this?

    All in all I am looking forward to seeing the domain specific approach of language augmented with studies like this. Perhaps we can one day account for systematicities in the way meaning is construed by the brain that incorporate feedback effects from nonlinguistic “metaphors.”

  5. “his political associations have made him somewhat fashionable”

    Should that be UNfashionable?

    @Jakob, I suspect Lakoff wouldn’t agree that linguistic metaphors are epiphenomenal, but rather that they, and language use generally, are intrinsic to how the brain makes sense of the world. For Lakoff, as for social constructionists, language isn’t a translated representation of underlying thought processes, but part of the processes themselves.

  6. This (lack of appropriate references and attribution) isn’t particularly surprising coming from Boroditsky. She’s generally content to let people presume all these ideas are her own, particularly in the mainstream press.
    This may well be a case of turnabout as fair play, though, since Lakoff was never particularly good about citing the embodiment tradition in philosophy in his own work.

    I think it should also be noted that her studies have been extremely difficult to replicate, which has started raising eyebrows. For example, an older study of hers on time metaphors and movement used participants at an airport, and several people have since tried to replicate it to no avail. The response is usually “ur doin’ it wrong,” of course.
    This study, with *1,482* participants, is unlikely to ever be replicated. If this metaphor effect is really that persistent, I wonder why she didn’t just use, say, 82…

    1. If you actually read the paper, you’ll note that this study was replicated multiple times in slightly different ways in a number of participant pools, including students at different universities, and on Amazon Mechanical Turk. Here’s the relevant participant information from the paper:

      “In Experiment 1, 485 students – 126 from Stanford University and 359 from the University of California, Merced – participated in the study as part of a course requirement. Experiments 2–5 were conducted online with participants recruited from Amazon’s mechanical Turk (347, 312, 185, and 190, respectively). In exchange for participation in the study, people were paid $1.60 – consistent with a $10/hour pay rate since the study took 5 to 6 minutes to complete.”

      Also: “In Experiment 1 the survey was included in a larger packet of questionnaires that were unrelated to this study.”
      This indicates that the study was run as part of a set of surveys presented to students in intro psych classes at these universities, which accounts for the large number of participants.

      In other words, the idea that they just ran a huge number of participants until they got an effect is inaccurate, and is based on a misleading summary given in the blog posts and news articles reporting these findings.

  7. George Lakoff didn’t invent frames or framing. There is a good article by Tom Crompton written for World Wildlife Federation – UK titled Common Cause – The case for working with our cultural values (read it at http://assets.wwf.org.uk/downloads/common_cause_report.pdf) that describes the history of frames and framing (and uses thereof). Lakoff is mentioned as the ‘inventor’ of deep frames but is a relatively small part of the frames discussion.

  8. All of these numbers (485, 190, 312) are still huge by behavioral experimental standards, so I’m not sure what your point is.
    Participant numbers are not a big deal in and of themselves, of course, except for the darkish cloud overhanging some of these findings.

    1. The only reason big Ns should be a concern is if they are due to continuing to run more and more subjects until an effect “becomes significant.” If there happen to have been 300 students filling out the surveys in intro psych at Stanford that quarter, then so be it. It’s only if you look at the data and say, “oh, it’s not significant, so let me run some more subjects” that it becomes an issue. The more useful questions are: what is the observed effect size, and what is the confidence interval around it? These will tell you whether you should care about the finding – i.e., whether it’s of practical significance. The idea that a significant result with a smaller N is in and of itself better than the same result with a larger N is a fallacy.

      Another thing I often find strange is how many people seem to believe that if someone’s experiment in which they obtained a small but statistically significant effect doesn’t replicate, then there is a “darkish cloud hanging over” the findings. The probability of replicating a real but small significant effect with a good amount of variance in the measure is nowhere near 1. A failure to replicate in no way implies any kind of scientific misconduct, or that future studies from the same PI’s lab should not be trusted – particularly when the lead author is someone other than said PI.
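The effect-size point in the comment above can be made concrete with the numbers from the original post. A minimal sketch in Python: the 75% vs 56% split comes from the post, but the per-group Ns of 250 are an illustrative assumption, not the paper’s actual figures.

```python
import math

def prop_diff_ci(p1, n1, p2, n2, z=1.96):
    """Difference in two proportions with a Wald 95% confidence interval."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# 75% chose enforcement after the "beast" metaphor, 56% after "virus".
# Group sizes of 250 each are an illustrative assumption only.
diff, (lo, hi) = prop_diff_ci(0.75, 250, 0.56, 250)
print(f"difference = {diff:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

A confidence interval that excludes zero by a wide margin, rather than the bare p-value, is what tells you whether the 19-point gap is of practical significance.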

      1. The p value tells you the likelihood that you should be able to replicate a study. If p<.05, then you should have a 95% chance of replicating the result.
        If it's real, then it should be able to be replicated most of the time. Not 100%, but most.

        Yes, there are many reasons for getting different Ns, and no one is ever going to say precisely why N=whatever, but do you not find it curious that N=190 for one iteration of the mechanical turk run and 390 in another?

        Given how much popular press she's been getting, you'd think there'd be more replication. Indeed, I know of several graduate students enamored of this work that have tried to and failed. The fact that she has co-authors doesn't mean much of anything, unless you personally know this person and can vouch for them.

        This is all anecdotal and proof of nothing, of course, especially something as serious as what you've suggested. I certainly wouldn't put my neck out given that it's not even my area of specialization. I just think people should be aware.

  9. About 1 in 5 people is not “we”.

    Altering what 19% of this population thinks is the way to tackle a problem is not the same as altering “what *we* think is the best way of tackling it.”

    Extrapolating from majorities in a limited study is bad enough; extrapolating from a minority in a study is even worse.

  10. @Dai Hence my bold claim that Lakoff was confusing matters. This “the language of thought must be like natural language” argument does not convince me. It becomes outright folly when chauvinist notions of English as the model for all natural languages (as Fodor proposed) come to bear on hypothesising about the workings of the brain.

    If I may provide a metaphorical example:

    To think that the language of thought must work much in the way the English language works is like saying that assembler code must look very much like Windows XP. The basis for this hypothesis is merely that one is so familiar with the OS that one mistakenly assumes it represents the way computers work on a general level.

    1. You’re confusing the two arguments, somewhat.
      Pinker, Chomsky et al. argue that there is a universal language of thought and Chomsky (and perhaps Fodor – I’m not aware he’s argued this point, vs. cited Chomsky) suggest the structure of this language has certain properties that are English-like.
      The other camp (Lakoff, Tomasello, Slobin and many others) suggest there is no universal language of thought and that the language you speak is intimately related in a feedback relationship with the language of your thought.
      The former generally advocate against conceptual metaphor theory.
      The latter generally argue for it. Lakoff would never say that the language of thought is anything like English – except for English speakers.

      1. Chomsky argues (or used to) that there is a Universal Grammar. The language-of-thought argument differs from that: its scope lies elsewhere. That’s why I pointed to Fodor instead.

        For this argument alone I would not put Lakoff in a camp with Tomasello (all the more so because Tomasello is not a linguist) but in a camp with his peers. While he repeatedly has argued against both Chomsky and Pinker, the underlying premise of Lakoff’s conceptual metaphor theory is a generalizing assumption about the functioning of the brain that revolves around language. He does not (or at least I have not picked up on that) explicitly state functional heuristics of the brain architecture that can account for how meaning is created outside of the language domain.

        From the point of view of semantics I am very fond of Lakoff’s contribution, by the way. I find it strange to find myself in a position of arguing against him. It is merely my original claim – that his semantic analysis, while convincing, is concerned with an epiphenomenon – that might be a bit crass. The Fodor detour was not helping, I admit.

      2. I believe you’re getting Lakoff backwards here. I don’t think he’s saying anything “about the functioning of the brain that revolves around language,” rather conceptual metaphor theory argues that linguistic metaphors reflect conceptual structures and mappings of the mind. The experiential conflation of up and more, for example, can exist and can be instantiated in the conceptual system long before a child can speak.
        I think this qualifies as a “functional heuristics of the brain architecture that can account for how meaning is created outside of the language domain,” but that phrase is a little opaque to me. The basic heuristic is Hebbian learning and conflation tying two sensory-motor experiences together, or an abstract thing with a sensory-motor experience.

        And how is UG not also a (universal) grammar of thought? If you’re not attempting to establish a (universal) deep structure across languages, then that’s not a useful desideratum. They may be separable at some technical level, but only in the same way a donut-hole is delineable from a donut.

      3. I can only refer you back to my metaphor of how assembly code differs from an OS. Even if all OSes were written in Objective C, there would still be an underlying code you cannot deduce by looking at the surface.
        The point I am trying to make is that there is yet another “deep structure” beneath both the deep structure and the metaphorical construction of thought. This is where I am straying into questions of qualia, and where I might be wrong and those who claim there is no such underlying architectural code might be right.

  11. As for George Lakoff’s political activity, remember what happened to Jane Fonda’s career after she was christened “Hanoi Jane”! 😦

    Jane was no less an actress, and George’s work is still valuable. Anyone reading this who hasn’t read “Metaphors we live by” – check it out!

  12. Just thought I’d check in and mention that anon’s p-value comment reflects a common fallacy in interpreting statistical results.

    The significance level (e.g. .05) is the probability of making a type-I error; the p-value itself is the probability of obtaining a result as extreme as, or more extreme than, the one you obtained, given that the null hypothesis is true (i.e., that there is no difference). It does not tell you the probability of replicating a result. There have been attempts to come up with measures that do give information about the probability of replicating a result (see, e.g., http://en.wikipedia.org/wiki/P-rep), but these are highly contentious and disputed.
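A quick simulation makes the point vivid: when a study of a real but small effect has roughly 50% power, an “original” significant result is replicated by an independent run only about half the time, nowhere near 95%. A hedged sketch — the effect size, group size, and trial count below are illustrative choices, not figures from the study:

```python
import math
import random

def significant(delta, n, rng, z_crit=1.96):
    """One simulated two-group study (unit-variance normals);
    two-sided z test on the difference in means."""
    a = [rng.gauss(0.0, 1.0) for _ in range(n)]
    b = [rng.gauss(delta, 1.0) for _ in range(n)]
    z = (sum(b) / n - sum(a) / n) / math.sqrt(2.0 / n)
    return abs(z) > z_crit

rng = random.Random(0)
delta, n, trials = 0.5, 31, 4000   # true effect, group size (~50% power), simulated labs
attempted = replicated = 0
for _ in range(trials):
    if significant(delta, n, rng):                # "original" study came out p < .05
        attempted += 1
        replicated += significant(delta, n, rng)  # independent replication attempt
rate = replicated / attempted
print(f"replication rate among significant originals: {rate:.2f}")
```

The replication rate tracks the study’s power, not 1 minus the significance level — which is why a failure to replicate a small significant effect is unremarkable on its own.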

