Nassim Nicholas Taleb, author of Fooled by Randomness:
Finally put my finger on what is wrong with the common belief in psychological findings that people “irrationally” overestimate tail probabilities, calling it a “bias”. Simply, these experimenters assume that people make a single decision in their lifetime! The entire field of psychology of decisions missed the point.
His argument seems to be that risks look different when viewed from a lifetime perspective, where you might make choices about the same risk again and again, rather than when considered as one-offs. What would be a mistake for a one-off risk can be a sensible strategy for the same risk repeated across a larger set of decisions.
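To make the repeated-exposure point concrete, here is a minimal sketch (my illustration, not Taleb's): a tail risk that looks negligible on any single decision compounds alarmingly when you face it many times over a lifetime. The 1% figure is an assumption chosen purely for illustration.

```python
# Sketch: how a small per-decision tail risk compounds over repeated exposures.
p_ruin = 0.01  # assumed probability of catastrophic loss on a single decision

for n in (1, 10, 100, 1000):
    # probability of at least one catastrophe across n independent exposures
    p_at_least_once = 1 - (1 - p_ruin) ** n
    print(f"{n:>4} exposures: P(ruin at least once) = {p_at_least_once:.1%}")
```

Over 100 exposures the chance of at least one catastrophe is roughly 63%, so treating a 1% tail risk as if it were much larger can be a perfectly sensible lifetime policy.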
He goes on to take a swipe at ‘Nudges’, the idea that you can base policies around various phenomena from the psychology of decision making. “Clearly”, he adds, “psychologists do not know how to use ‘probability’”.
This is maddeningly ignorant, but does have a grain of truth to it. A major part of the psychology of decision making is understanding why things that look like bias or error exist. If a phenomenon, such as overestimating low-probability events, is pervasive, it must be for a reason. A choice that looks irrational when considered on its own might be the result of a sensible strategy when considered over a lifetime, or even over evolutionary time.
Some great research in decision making tries to go beyond simple bias phenomena and ask what underlying choice is being optimised by our cognitive architecture. This approach gives us the Simple Heuristics Which Make Us Smart of Gerd Gigerenzer (which Taleb definitely knows about, since he was a visiting fellow in Gigerenzer’s lab); work which shows that people estimate risks differently if they experience the outcomes rather than being told about them; work which shows that our perceptual-motor system (which is often characterised as an optimal decision maker) has the same amount of bias as our more cognitive decisions; and work which shows that other animals, with less cognitive/representational capacity, make analogues of many classic decision making errors. This is where the interesting work in decision making is happening, and it all very much takes account of the wider context of individual decisions. So saying that the entire field missed the point seems … odd.
But the grain of truth in the accusation is that the psychology of decision making has been popularised in a way that focusses on one-off decisions. The nudges of behavioural economics tend to be dramatic examples of small interventions with large effects on one-off measures, such as giving people smaller plates making them eat less. The problem with these interventions is that even if they work in the lab, they tend not to work long-term outside it. People are often doing what they do for a reason, and if you don’t affect the reasons, the old behaviour reasserts itself as people simply adapt to any nudge you’ve introduced. Although the British government is noted for introducing a ‘Nudge Unit’ to apply behavioural science in government policy, less well known is a House of Lords Science and Technology Committee report, ‘Behavioural Change’, which highlights the limitations of this approach (and is well worth reading to get an idea of the importance of ideas beyond ‘nudging’ in behavioural change).
Taleb is right that we need to drop the idea that biases in decision making automatically attest to our irrationality. As often as not they reflect a deeper rationality in how our minds deal with risk, choice and reward. What’s sad is that he doesn’t recognise how much work on how to better understand bias already exists.
When someone says “the entire field of psychology of decisions missed the point”, or something similar, you might want to ask yourself: what is the probability that it is true? What is the evidence? Is it likely that hundreds of scholars are independently dumb about some point and just need Taleb to come along and point it out? I suspect there is a simpler explanation, and it’s to do with one person’s egotism and sense of self-importance.
There’s another aspect that is too rarely, if ever, considered carefully enough. Much of human thought and decision-making is inherently IRRATIONAL, arbitrary, and illogical, and that illogical irrationality can only really be understood on its own equally irrational terms. But psychology is the drive to understand LOGICALLY and RATIONALLY. Remember, we’re talking about the *illogical* and *irrational*, which can’t be understood rationally and logically. No, this really isn’t the ‘circular logic’ it looks like; it only *seems* circular because it leads back to the same conclusion. When we insist on “understanding” the inherently illogical and irrational in rational and logical terms, we can only make non-sense, not *sense*. Non-sense is human illogic and irrationality expressed in words. You can begin to appreciate this apparent contradiction when you read something like “Jabberwocky” by Lewis Carroll: “’Twas brillig, and the slithy toves / Did gyre and gimble in the wabe: / All mimsy were the borogoves, / And the mome raths outgrabe.” You can ALMOST make sense of that, but it only really makes sense irrationally and illogically. Much of human thought *IS* irrational and illogical, and if you insist on “understanding” it logically and rationally, you will be WRONG, or at least frustrated in your efforts to “make sense” of things. Go ahead: either prove me wrong, or prove me right.
It seems to me that a lot of the argument is around the connotations of the term irrational. Because the field historically grew out of an opposition to economists’ definition of rationality, there is a gap between what most people would call irrational and how the term is sometimes used in the field. I don’t think many behavioral scientists (psychologists, behavioral economists) would actually argue that it’s rational to always be rational (by the economists’ definition).
I think they’d also happily grant that the “irrational” behavior their research exposes is not always irrational, and is born of adaptation to contexts where that sort of behavior is often advantageous. Still, that does not mean there are no contexts where those adaptations lead people to behave in systematically problematic ways. That’s where behavioral insights can accomplish the most: not in making stupid people smarter, but in helping smart people (all people) overcome some of the obstacles that arise in executing their goals.
Yes, thanks Dave, that’s a really useful contribution.
I wrote a bit about our (over)enthusiasm for evidence that we’re irrational here: http://www.amazon.com/dp/B010O1Z018
I think the term “rational” and its counterpart “irrational” have done a great disservice to the social sciences. They ignore the original intent of Bernoulli and utility theory, namely the subjective interpretation of value. The best treatment of the subject of rationality I have come across is Daniel Ellsberg’s [1961] “Risk, Ambiguity, and the Savage Axioms.”
From his conclusion:
“It would seem incautious to rule peremptorily that the people in question should not allow their perception of ambiguity, their unease with their best estimates of probability, to influence their decision: or to assert that the manner in which they respond to it is against their long-run interest and that they would be in some sense better off if they should go against their deep-felt preferences. If their rationale for their decision behavior is not uniquely compelling (and recent discussions with T. Schelling have raised questions in my mind about it), neither, it seems to me, are the counterarguments. Indeed, it seems out of the question summarily to judge their behavior as irrational: I am included among them.”
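The “perception of ambiguity” Ellsberg defends in that passage can be made concrete with his well-known three-colour urn. The sketch below is my illustration, not anything from the comment or the paper’s text: an urn holds 30 red balls and 60 balls that are black or yellow in unknown proportion. Most people prefer betting on red over black, yet also prefer betting on black-or-yellow over red-or-yellow.

```python
# Minimal sketch (my illustration) of Ellsberg's three-colour urn:
# 30 red balls, plus 60 balls that are black or yellow in unknown proportion.
# Bet A pays on red, bet B on black; bet C pays on red-or-yellow,
# bet D on black-or-yellow. The common pattern is A over B and D over C.
for n_black in range(61):  # every possible number of black balls
    p_red = 30 / 90
    p_black = n_black / 90
    p_yellow = (60 - n_black) / 90
    prefers_A = p_red > p_black                        # A over B
    prefers_D = p_black + p_yellow > p_red + p_yellow  # D over C
    if prefers_A and prefers_D:
        print(f"Both preferences consistent with {n_black} black balls")
        break
else:
    print("No probability assignment rationalises both modal choices.")
```

Preferring A over B requires P(red) > P(black), while preferring D over C requires P(black) > P(red), so the loop finds no composition satisfying both. Ellsberg’s point, echoed in the quote above, is that this pattern reflects a genuine sensitivity to ambiguity rather than simple irrationality.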
And what if what is called rationality were NOT as important in decision making as many continue to believe? What if a more important determinant of utility were SUBJECTIVE WELL-BEING, i.e. feeling at ease with ourselves and with others, irrespective of how much abstract rationality is incorporated in our choices? If that were the case, using rationality as a reference point would be entirely irrelevant.