There was a great Analysis programme on radio 4 last night: The Economy on the Couch which was about behavioural economics, neuroeconomics (whatever that is) and ways in which we fail to act like the rational agents that standard economic theory supposes us to be
One irrationality, a human frailty for fairness, is revealed by a thing called the Ultimatum Game. It works like this. I am offered some money, say £100, on the condition that I share it with you. I get to decide the split, and you get to say whether you accept it or not. If you accept, we get the money in the proportions I determined; if you reject my split, neither of us gets anything. So what would you do if I offered £1 to you, leaving me with the other ninety-nine?
One view of economic ‘rationality’ is that you have a choice between nothing (if you reject) and £1 (if you accept), so the rational choice is to accept. Of course, hardly anyone does this. Most people won’t accept offers below a £30-£40 threshold. Our sense of fair play gets in the way of rational choice.
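The game's pay-off structure can be sketched as a toy responder rule. This is a minimal illustration, not a model from the literature: the 30% cutoff is an assumption loosely based on the £30-£40 figure above.

```python
# Toy ultimatum game: the responder rejects any offer below a fairness
# threshold, even though rejecting leaves both players with nothing.
# The 30% cutoff is an illustrative assumption, not data.
def ultimatum(pot, offer, threshold=0.30):
    """Return (proposer_payoff, responder_payoff)."""
    if offer >= threshold * pot:
        return pot - offer, offer   # accepted: split the pot as proposed
    return 0, 0                     # rejected: nobody gets anything

print(ultimatum(100, 1))    # (0, 0) -- a measly £1 offer gets rejected
print(ultimatum(100, 40))   # (60, 40) -- a fair-ish offer is accepted
```

The point the sketch makes plain: rejecting is never pay-off-maximising within a single round, which is exactly why the behaviour looks ‘irrational’ on the standard view.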
Or rather, that is one kind of rational choice. Like a lot of things in the human judgement literature, one person’s irrationality can look like a rational choice from another point of view. Here, if I accept a measly £1 it seems like I’m setting myself up for a run of bum deals. If I reject the offer, losing out on a pound myself but also punishing the guy who cut the cake so unfairly, I’m laying the ground for him or her to make me a better offer next time. Not so irrational, eh?
Here’s another choice irrationality which isn’t so amenable to the ‘different kind of rationality’ analysis, but for which it is less obvious why it happens at all. (This wasn’t in the R4 programme, but it’s my favourite example at the moment.)
You are offered a choice between $2 for certain, and a gamble with a 7 out of 36 chance of winning $9; 29 chances out of 36, you get nothing. Would you choose the gamble? If you do the maths, the expected pay-off of the gamble is $1.75 (7/36 × 9), so you probably shouldn’t.
When Paul Slovic and colleagues gave this choice to a sample of people, just 33% went for the gamble.
Now consider this: as before, you have a choice between $2 for certain and a gamble. The gamble still has a 7/36 chance of winning you $9, but now there is a 29/36 chance you will have to pay out $0.05. The expected pay-off of the gamble is slightly worse ($1.71) but, strangely, around 60% of people offered this choice took the gamble.
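The arithmetic behind both gambles is worth checking. A quick expected-value calculation, using the probabilities and pay-offs from the text:

```python
# Expected value of Slovic's two gambles. Each gamble is a list of
# (probability, payoff) pairs whose probabilities sum to 1.
def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

gamble_1 = [(7/36, 9.00), (29/36, 0.00)]    # win $9, or get nothing
gamble_2 = [(7/36, 9.00), (29/36, -0.05)]   # win $9, or lose 5 cents

print(round(expected_value(gamble_1), 2))   # 1.75
print(round(expected_value(gamble_2), 2))   # 1.71
```

So adding the small possible loss strictly lowers the expected value, yet roughly doubles the gamble's popularity.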
How come? Slovic argues that this is an example of ‘evaluability’ making the second gamble feel more attractive. Offered a 7/36 chance of winning $9 we don’t compute the exact expected value, but rather do rough and ready reckoning. Does 7/36 feel like good odds? Is $9 a lot of money? It feels like the gamble probably isn’t worth it.
What the 5 cents does is make the $9 easy to evaluate emotionally. Is $9 a lot of money? Hell, yes, compared to 5 cents! So you probably take the gamble, even though it has a lower expected value than $2 for certain, and a lower expected value than the mostly rejected $9-only gamble.
Moral from this? Well, for me, it says that we can’t rely on any information presented without context to be persuasive. Would you pay $10 for a scientific dictionary with 10,000 entries? Maybe. Who knows? What if you knew that all the other scientific dictionaries cost $10 but have only 5,000 entries? Suddenly it becomes obvious. More generally, this relates to the importance of correctly framing arguments (about which more later, and there’s some stuff in the book too).
Human reasoning is chock-a-block with ‘irrationalities’: domains in which our limited cognitive resources and our animal ancestry compel us into making irrational choices (even bearing in mind my earlier caveat about defining irrationality). Classical economic theory ignores these foibles entirely and assumes that each economic actor makes rational choices, maximising their expected value in every situation.
Behavioural economics gives the lie to this model, but doesn’t offer any good replacement: a collection of qualifications and observations that can be applied case by case, but no systematic replacement for the grand theory of the rational actor. Proponents of the classical model always knew it was psychologically unrealistic, but its simplicity bought a lot of progress despite that. All models are false, but some are useful, as they say.
1. Slovic, P., Finucane, M., Peters, E., & MacGregor, D. G. (2002). The affect heuristic. In T. Gilovich, D. Griffin & D. Kahneman (Eds.), Heuristics and Biases: The Psychology of Intuitive Judgment (pp. 397-420). New York: Cambridge University Press.