Choice Irrationalities

There was a great Analysis programme on Radio 4 last night: The Economy on the Couch, which was about behavioural economics, neuroeconomics (whatever that is) and the ways in which we fail to act like the rational agents that standard economic theory supposes us to be.

One irrationality, a human frailty for fairness, is revealed by a thing called the Ultimatum Game, which works like this. I am offered some money, say £100, on the condition that I share it with you. I get to decide the split, and you get to say whether you accept it or not. If you accept, we get the money in the proportions I determined; if you reject my split, neither of us gets anything. So what would you do if I offered £1 to you, leaving me with the other ninety-nine?

One view of economic ‘rationality’ is that you have a choice between nothing (if you reject) and £1 (if you accept), so the rational choice is to accept. Of course, hardly anyone does this. Most people won’t accept offers lower than a £30-£40 limit. Our sense of fair play gets in the way of rational choice.
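The payoff structure of the game is simple enough to sketch in a few lines of code (the function name and numbers here are just illustrative, not from the original study):

```python
def ultimatum_payoffs(total, offer, accept):
    """Payoffs (proposer, responder) in the Ultimatum Game.

    The proposer keeps total - offer and the responder gets the
    offer, but only if the responder accepts; a rejection leaves
    both players with nothing.
    """
    if accept:
        return total - offer, offer
    return 0, 0

# The narrowly 'rational' responder accepts any positive offer:
print(ultimatum_payoffs(100, 1, accept=True))   # (99, 1)

# But most real responders reject lowball offers, at a cost to both:
print(ultimatum_payoffs(100, 1, accept=False))  # (0, 0)
```

The point the code makes concrete: rejecting is strictly worse for the responder in this one-shot game, which is exactly why the near-universal rejection of unfair splits looks ‘irrational’ on the standard account.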

Or rather, that is one kind of rational choice. Like a lot of things in the human judgement literature, one person’s irrationality can look like a rational choice from another point of view. Here, if I accept a measly £1 it seems like I’m setting myself up for a run of bum deals. If I reject the offer, losing out on a pound myself but also punishing the guy who cut the cake so unfairly, I’m laying the ground for him or her to make me a better offer next time. Not so irrational, eh?

Here’s another choice irrationality which isn’t so amenable to the ‘different kind of rationality’ analysis, but where it is at least clear why it happens at all. (This wasn’t in the R4 programme, but it’s my favourite example at the moment):

You are offered a choice between $2 for certain, and a gamble where you get a 7 out of 36 chance of winning $9; the other 29 times out of 36 you get nothing. Would you choose the gamble? If you do the maths, the expected pay-off of the gamble is $1.75 (7/36 × 9), so you probably shouldn’t.

When Paul Slovic and colleagues [1] gave this choice to a sample of people, just 33% went for the gamble.

Now consider this: as before, you have a choice between $2 for certain and a gamble. The gamble still has a 7/36 chance of winning you $9, but now there is a 29/36 chance you will have to pay out $0.05. The expected pay-off of the gamble is slightly worse ($1.71) but, strangely, around 60% of people offered this choice took the gamble.
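The arithmetic behind both figures is easy to check. Here is a minimal sketch (using exact fractions to avoid rounding surprises; the helper function is my own, not from the paper):

```python
from fractions import Fraction

def expected_value(outcomes):
    """Expected value of a gamble: sum of probability * payoff."""
    return sum(p * x for p, x in outcomes)

# Gamble A: 7/36 chance of $9, otherwise nothing
ev_a = expected_value([(Fraction(7, 36), 9),
                       (Fraction(29, 36), 0)])

# Gamble B: 7/36 chance of $9, 29/36 chance of losing 5 cents
ev_b = expected_value([(Fraction(7, 36), 9),
                       (Fraction(29, 36), Fraction(-5, 100))])

print(float(ev_a))           # 1.75
print(round(float(ev_b), 2)) # 1.71
```

So gamble B really is worth slightly less than gamble A, and both are worth less than the certain $2, which is what makes the jump from 33% to 60% takers so striking.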

How come? Slovic argues that this is an example of ‘evaluability’ making the second gamble feel more attractive. Offered a 7/36 chance of winning $9, we don’t compute the exact expected value, but rather do some rough and ready reckoning. Does 7/36 feel like good odds? Is $9 a lot of money? It feels like the gamble probably isn’t worth it.

What the 5 cents does is make the $9 easy to emotionally evaluate. Is $9 a lot of money? Hell, yes, compared to 5 cents! So you probably take the gamble, even though it has a lower expected value than $2 for certain, and a lower expected value than the mostly-rejected $9-only gamble.

Moral from this? Well, for me, it says that we can’t rely on any information presented without context to be persuasive. Would you pay $10 for a scientific dictionary with 10,000 entries? Maybe. Who knows? What if you knew that all the other scientific dictionaries cost $10 but only have 5,000 entries? Suddenly it becomes obvious. More generally, this relates to the importance of correctly framing arguments (about which more later, and there’s some stuff in the book too).

Human reasoning is chock-a-block with ‘irrationalities’: domains in which our limited cognitive resources and our animal ancestry compel us into making irrational choices (even bearing in mind my earlier caveat about defining irrationality). Classical economic theory ignores these foibles entirely and assumes that each economic actor makes rational choices, maximising their expected value in every situation.

Behavioural economics gives the lie to this model, but doesn’t give us any good replacement: a collection of qualifications and observations which can be applied case by case, but no systematic replacement for the grand theory of the rational actor. Proponents of the classical model always knew it was psychologically unrealistic, but its simplicity bought a lot of progress despite that. All models are false, but some are useful, as they say.

Ref

1. Slovic, P., Finucane, M., Peters, E., & MacGregor, D. G. (2002). The Affect Heuristic. In T. Gilovich, D. Griffin & D. Kahneman (Eds.), Heuristics and biases: The Psychology of Intuitive Judgement (pp. 397-420). New York: Cambridge University Press.

7 Comments

  1. Posted December 6, 2004 at 12:29 am | Permalink

    The Mind Hacks blog: I love this post on reasons why we might choose to be irrational

    http://www.mindhacks.com/blog/2004/12/choice_irrationaliti.html

  2. Posted December 7, 2004 at 4:16 am | Permalink

    I wouldn’t say the Ultimatum Game is revealing a human frailty for fairness per se. Rather, results from cross-cultural economic experiments indicate that this particular “irrationality” might actually be a product of cultural conditioning in market-oriented societies.

  3. Posted December 7, 2004 at 8:02 am | Permalink

A good and true point, Yami. I guess I was slotting the Ultimatum Game in with a bunch of other stuff that shows that fairness of some kind *is* cross-cultural and found in all societies, market-orientated or not (although obviously what people think of as fair depends on culture). But for me to suggest that the Ultimatum Game reveals anything about what is ‘human’ on its own is just sloppy thinking, so cheers for pointing that out.
    Regards
    Tom

  4. JussiR
    Posted December 8, 2004 at 5:07 pm | Permalink

I actually do think there is a somewhat cross-cultural concept of fairness; it’s just that it’s not based on an equal society but a hierarchic one. What varies between different cultures is the hierarchic status of different people in different situations.
For example, I would assume that when you are playing the Ultimatum Game with someone who is above you in the hierarchy, you are more likely not to punish him or her, and to accept a worse deal than you normally would (because for a person who is higher in the hierarchy it’s easy to punish you later on; assuming that this game has social consequences after the game too).

  5. abrown28
    Posted December 8, 2004 at 6:58 pm | Permalink

Most people are not rational from a mathematical or scientific point of view, but they will always act rationally based on their own set of assumptions and experiences.

  6. Posted December 10, 2004 at 9:13 am | Permalink

Absolutely, so the interesting question becomes _how_ people’s individual rationalities tend to differ from the economic rationality (which we might as well use as a benchmark to measure individual divergences from…)

  7. JussiR
    Posted December 10, 2004 at 3:15 pm | Permalink

This Wired article: http://www.wired.com/wired/archive/11.12/love.html, though it strays a bit, might give us some direction (namely the difference between logic and emotions).
And the work done by Antonio Damasio is definitely something to look into if one wants to understand how essential our emotions are in making decisions.
