BBC Column: auction psychology

My BBC Future column from last week. The original is here

The reason we end up overspending comes down to one unavoidably irrational part of the bidding process – and that’s ourselves.

The allure and tension of an auction are familiar to most of us – let’s face it, we all like the idea of picking up a bargain. And on-line auction sites like eBay cater for this, allowing us to share in the over-excitement of auction bidding in the privacy of our homes. Yet somehow, despite our better judgement, we end up paying more than we know we should have done on that piece of furniture, equipment or clothing. What’s going on?

One estimate states that about half of eBay auctions result in higher sale prices than the “buy it now” price. This is a paradox. If the people going into the auction really wanted the item so badly, why didn’t they get it for less by paying the “buy it now” price?

This has nothing to do with the way the eBay bidding system works. In fact, unlike most auctions, the eBay auction process is actually perfectly designed to allow rational outcomes. By allowing you to set a private “maximum bid” in advance, eBay auctions are better for individual buyers than traditional auctions where everyone has to shout out their bids in public. No, the reason auctions – both on and offline – produce higher sale prices than any bidder originally imagined they would pay is because of one irreducibly irrational part of the bidding process: the bidders themselves.
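
To make the mechanism concrete, here is a minimal sketch of proxy bidding of the kind described above – each bidder privately submits the most they are willing to pay, and the winner pays just enough to beat the runner-up. The bidder names, amounts and the fixed increment are invented for illustration, and this is a generic model rather than eBay’s actual increment rules.

```python
# A simplified, generic model of proxy bidding (illustrative names and
# amounts; not eBay's actual increment rules): each bidder privately submits
# the most they are willing to pay, and the highest bidder wins at one
# increment above the next-best maximum, never paying more than their limit.

def proxy_auction(max_bids, start_price=1.00, increment=0.50):
    """max_bids maps each bidder to their private maximum bid."""
    ranked = sorted(max_bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else start_price
    # Pay just enough to beat the second-highest maximum, capped at your own limit
    price = min(top, max(start_price, runner_up + increment))
    return winner, price

winner, price = proxy_auction({"alice": 25.00, "bob": 18.50, "carol": 12.00})
print(winner, price)  # alice wins at 19.0 – well below her 25.00 limit
# A rational bidder sets their true limit once and walks away; the
# overspending described above comes from the bidders, not the mechanism.
```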

Auctions push a number of our psychological buttons, and in fact the phenomenon of “auction fever” is well documented. They are social occasions, with lots of other people around, and this tends to increase your physiological arousal, an effect called social facilitation. As your adrenaline pumps, your heart beats faster, and your reactions quicken. This is ideal for something like sports, but makes cool rational decision making harder. The very rich often send delegates to auctions; as well as keeping them away from the paparazzi, I suspect this is also a strategy to combat the over-excitement induced by being physically present in the situation.

On top of this, auctions are time pressured, and – by definition – you’re bidding on something you value highly. These factors create excitement whether you are in the room or not.

Persuasive powers

Another psychological bias that operates in auctions is the endowment effect, where we tend to over-value things we already possess. By encouraging us to connect the bid (our money) with the sale item, bidding on items lets us fantasise about owning them – stimulating a kind of endowment effect. This is why the auction catalogue (or the item picture and description on a website) is so important. It forms part of the psychological journey the seller wants you to take: imagining owning the item in advance, so that you place a higher value on it, and so pay more to make imagination reality.

Persuasion plays a huge part here, and the best book you can read on the psychology of the subject is Robert Cialdini’s Influence. Cialdini is a Professor of Social Psychology at Arizona State University, and he lists six major ways you can make yourself persuasive. Auctions hit at least two of these six principles square on the nose.

First, auctions use the principle of scarcity, whereby we overvalue things that we think might run out. Auction items are scarce in that they are unique (only one person can have it), and scarce in time (after the bids are finished, you’ve lost your chance). Think how many shop sales successfully rely on scarcity heuristics such as “Last day of sale!”, or “Only 2 left in stock!”, and you’ll get a feel for how powerful this persuasion principle can be.

The other principle used by auctions is that of “social proof”. We all tend to take the lead from other people; if everybody does something, or says something, most of us join in before we think about what we really should do. Auctions put you in intimate contact with other people who are all providing social proof that the sale item is important and valuable.

A final ingredient in the magic spell cast by auctions was uncovered by researchers from Princeton. Their experiments asked volunteers to take part in on-line auctions with different rules. Some of these auctions had rules that encouraged over-bidding (like typical open auctions, which most of us are familiar with from movies), and some had rules that encouraged rational behaviour (like the eBay structure). With enough guidance from the auction rules, the bidders didn’t end up paying much more than they originally thought was reasonable – but only if they thought they were bidding against a computer program. As soon as the volunteers thought they were bidding against other live humans they found it impossible to bid rationally, whatever the auction rules.

This implies that the competitive element of auctions is crucial to provoking our irrational buying behaviour. Once we’re involved in an auction we’re not just paying to own the sale item, we’re paying to beat other people who are bidding and prevent them from having it.

So it seems Gore Vidal had human nature, and the psychology of auctions, about right when he said: “It is not enough to succeed. Others must fail.”

BBC Column: stopped clocks and dead phones

My column for BBC Future from last week. It’s another example of how consciousness isn’t just constructed, but is a construction for which the signs of artifice are hidden. The original is here

 

Ever stared at a second hand and thought that time stood still for a moment? It’s not just you.

Sometimes, when I look at a clock time seems to stand still. Maybe you’ve noticed this to your bemusement or horror as well. You’ll be in the middle of something, and flick your eyes up to an analogue clock on the wall to see what the time is. The second hand of the clock seems to hang in space, as if you’ve just caught the clock in a moment of laziness. After this pause, time seems to restart and the clock ticks on as normal.

It gives us the disconcerting idea that even something as undeniable as time can be a bit less reliable than we think.

This happened to me for years, but I never spoke about it. Secretly I thought it was either evidence of my special insight into reality, or final proof that I was a little unhinged (or both). But then I found out that it’s a normal experience. Psychologists even have a name for it – they call it the “stopped clock illusion”. Thanks psychologists, you really nailed that one.

An ingenious experiment from a team at University College London recreated the experience in the lab and managed to connect the stopped clock to the actions of the person experiencing it. They asked volunteers to look away and then suddenly shift their gaze to a digital counter. When the subjects tried to judge how long they had been looking at the digit that first appeared, they systematically assumed it had been on for longer than it actually had.

 

Filling gaps

Moving our eyes from one point to another is so quick and automatic that most of us probably don’t even think about what we are doing. But when you move your eyes rapidly there is a momentary break in visual experience. You can get a feel for this now by stretching your arms out and moving your eyes between your two index fingers. (If you are reading this in a public place, feel free to pretend you are having a good stretch.) As you flick your eyes from left to right you should be able to detect an almost imperceptibly brief “flash” of darkness as input from your eyes is cut off.

It is this interruption in consciousness that leads to the illusion of the stopped clock. The theory is that our brains attempt to build a seamless story about the world from the ongoing input of our senses. Rapid eye movements create a break in information, which needs to be covered up. Always keen to hide its tracks, the brain fills in this gap with whatever comes after the break.

Normally this subterfuge is undetectable, but if you happen to move your eyes to something that is moving with precise regularity – like a clock – you will spot this pause in the form of an extra long “second”. Fitting with this theory, the UCL team also showed that longer eye-movements lead to longer pauses in the stopped clock.

It doesn’t have to be an eye movement that generates the stopped clock – all that appears to be important is that you shift your attention. (Although moving our eyes is the most obvious way we shift our attention, I’m guessing that the “inner eye” has gaps in processing in the same way our outer eyes do, and these are what cause the stopped clock illusion.) This accounts for a sister illusion we experience with our hearing – the so-called “dead phone illusion”, which is when you pick up an old-fashioned phone and catch an initial pause between the beeps of the dial tone that seems to last longer than the others.

These, and other, illusions show that something as basic as the experience of time passing is constructed by our brains – and that this is based on what we experience and what seems the most likely explanation for those experiences, rather than some reliable internal signal. Like with everything else, what we experience is our brain’s best guess about the world. We don’t ever get to know time directly. In this sense we are all time travellers.

BBC Future column: What a silver medal teaches us about regret

Here’s my column from last week for BBC Future. The original is here

The London 2012 Olympic Games are almost over now, and those Olympians with medals are able to relax and rest on the laurels of victory. Or so you might think. Spare a thought for the likes of Yohan Blake, McKayla Maroney, or Emily Seebohm – those people who are taking home silver.

Yes, that’s right, I’m asking you to feel sorry for silver medallists, not for the bronze medallists or for those who didn’t get the chance to stand on the podium at all.

Research has shown that silver medallists feel worse, on average, than bronze medallists. (Gold medallists, obviously, feel best of all.) The effect is written all over their faces, as psychologists led by Thomas Gilovich of Cornell University found out when they collected footage of the medallists at the 1992 Olympic Games in Barcelona. Gilovich’s team looked at images of medal winners either at the end of events – that is, when they had just discovered their medal position – or as they collected their medals on the podium. They then asked volunteers who were ignorant of the athletes’ medal positions to rate their facial expressions. Sure enough, the volunteers rated bronze medallists as consistently and significantly happier than silver medallists, both immediately after competing, and on the podium.

The reason is all to do with how bronze and silver medallists differ in the way they think events could have turned out – what psychologists call “counterfactual thinking”. In a follow-up study, the team went to the 1994 Empire State Games and interviewed athletes immediately after they had competed. Silver medallists were more likely to use phrases like “I almost…”, concentrating their responses on what they missed out on. Bronze medallists, on the other hand, tended to contemplate the idea of missing out on a medal altogether. These differences in counterfactual thinking make silver medallists feel unlucky, in comparison to a possible world where they could have won gold, and make bronze medallists feel lucky, in comparison to a possible world where they could have returned home with nothing.

So the research seems to add a bit of scientific meat to Hamlet’s famous line “there is nothing either good or bad, but thinking makes it so”, as well as revealing something about the psychology of regret. Even though we must deal with the world as it is, a vital part of life is imagining the world as it could be – thinking about a job you should have applied for (or said “no” to), or someone you should (or shouldn’t) have asked out on a date, for instance.

Haunted by the past

Different possible worlds crowd in and compete, some seeming closer than others, and this is what drives regret. This is illustrated by a study that asked volunteers to read a story about a plane crash survivor who walked through the wilderness for days, collapsing and dying before reaching civilisation. They were then asked how much compensation the victim’s family should receive. People who read a version where the survivor collapsed 75 miles (120 kilometres) from safety awarded less compensation than those who read that the survivor collapsed just a quarter of a mile from safety.

Both scenarios ended the same, but the second version seems more tragic to us because the person seemed so much closer to safety. Remember that the next time you see a Hollywood film that plays with your emotions in this manner.

Understanding the psychology of regret also helps to put our own thoughts and emotions into context. We’re all haunted by things we could have done, or shouldn’t have done. What’s the point in dwelling on such matters, we may ask, when we can’t change the past? But the study of the Olympic medallists gives us two thoughts that might help us deal with regret.

The first is that regret, like imagination generally, exists for a reason – this amazing cognitive ability is what allows us to plan for the future and, with luck, change things based on how we imagine they might turn out. Medallists who feel more regret may well go on to train harder, and smarter, and so be better able to win gold at the next Olympics. Regret, like so many of the territories of the mind, can hurt. It hurts whether we can change how things have worked out, or not, but the feeling is built into our brains for a good reason (however little comfort that provides).

The second thought that might help us deal with regret is to realise that there are many possible worlds we could compare events to. It’s natural for many silver medallists to feel that they’ve missed out on gold, but to the extent that we can choose what we compare ourselves to, we can choose how we feel about our regrets. We can use them to drive us to future success, but also to appreciate what we do have.

So maybe it isn’t all bad for Blake, Maroney or Seebohm after all?

BBC Future column: Wear red, win gold?

My latest column for BBC Future, a cautionary tale of scientific research, with an Olympic theme. Original here.

Studies show that wearing a particular colour increases the chances of winning a gold medal. Why this is the case serves as a timely reminder that we should always be wary of neat explanations for complex phenomena.

What does it take to be an Olympic winner? Skill? Yes. Dedication to training? Definitely. Luck? Perhaps. What colour kit you wear? Possibly.

Research conducted during the 2004 Olympic Games in Athens showed that competitors in taekwondo, boxing and wrestling who wore red clothing or body protection had a higher chance of winning. The effect wasn’t large, but when the statistics were combined across all these sports it was undeniable – wearing red seemed to give a slightly better chance of winning gold. The effect has since been shown for other sports, such as football.

The researchers had a straightforward explanation for why wearing red makes a difference. Across the animal kingdom, red colouration is associated with male dominance, signalling aggression and danger to others. The vividness of the red displayed by individuals of various species has been shown to relate to the amount of the hormone testosterone they have in their bodies, which also correlates with their physical health and eventual breeding success. The researchers claimed that humans too are subject to this “red = dominance” effect, and so, for combat sports, the athlete wearing red had a psychological advantage.

In competitive sport, small advantages like this matter. The difference between winning and losing can be milliseconds, or millimetres. So, should every country be fighting for the right for their sportspeople to wear red?

Maybe they should, but not for the reasons the study authors claimed. What happened next is a textbook case of the way in which research happens, showing us why we should always be wary of neat explanations for complex phenomena.

Close calls

Like all good science, once someone has proposed a theory, others can hold it up to scrutiny. And so it was with the red = dominance explanation. Another research group analysed data from a different sport at the Athens Olympics, judo, but they found that contestants who wore either white or blue had an advantage. Instead of being an effect of evolutionary colour signals, the new claim was that the difference in performance was due entirely to the visibility of the different colours. In a combat sport the person wearing the brightest clothing will be at a disadvantage – their opponent will find it slightly easier to see where they are and anticipate their next move.

Convinced? Don’t make up your mind yet, because there’s a further twist to the tale.

This debate was resolved in the most interesting way at around the time of the Beijing Olympics in 2008. A new study suggested that the previous theories based on dominance or visibility of the competitor were wrong. The effect wasn’t anything to do with the influence of colour on the athletes, but instead to do with its influence on the referees.

I’ve written before about how we all have a tendency to look for causes that are somehow part of the essence of a person, and this seems to be another example. The statistics were correct: contestants wearing red really do win more, but we had been looking in the wrong place for an explanation. This study used digital manipulation to show experienced taekwondo referees fights that were identical, except for the colours worn by the contestants. Judging the same fights, referees awarded more points to contestants who had been photoshopped red than to contestants who had been photoshopped blue.

In any competitive sport there will be close calls, situations where the margin of victory is small, and a referee has to make a judgment to the best of their abilities in the blink of an eye. It seems that because red does have an association with victory and dominance, the judgement of these marginal situations can, occasionally, be influenced by the contestant’s clothes colour. Colour does produce a psychological effect, but it is a bias in the refs, not in the contestants.

Horse play

This story provides a classic warning for anyone trying to find psychological causes for things: the effect can just as easily be in the observers as in the thing we observe. Psychology students around the world are taught the story of Clever Hans, a horse that many believed could do arithmetic. Huge crowds would pay to see Hans, held by his trainer, be asked questions such as “what is five plus two” and answer by stamping his foreleg seven times.

This seemed like a wondrous example of animal intelligence, until a psychologist showed that Hans was performing his trick by reading the body language of his trainer. Hans would start tapping his foot. When he got to the correct number his trainer would relax, and Hans would read this signal and stop. What looked like a miraculous ability to do maths was really a clever – but not miraculous – ability to act according to what his trainer did.

So there the matter rests – for the moment at least. Wearing red could give you an advantage in competitive sports, but it’s because of the effect it has on the observers, not the observed. And, just maybe, we’ll try to be a bit more careful about calling victories as we watch the contests at the London Olympics.

BBC Future column: Why we love to hoard

Here’s last week’s column from BBC Future. The original is here. It’s not really about hoarding, it’s about the endowment effect and a really lovely piece of work that helped found the field of behavioural economics (and win Daniel Kahneman a Nobel prize). Oh, and I give some advice on how to de-clutter, lifehacker-style.

Question: How do you make something instantly twice as expensive?

Answer: By giving it away.

This might sound like a nonsensical riddle, but if you’ve ever felt overly possessive about your regular parking space, your pen, or your Star Wars box sets, then you’ve experienced some of the psychology of ownership. Our brains tell us that we value something merely because it is a thing we have.

This riddle actually describes a phenomenon called the Endowment Effect. The parking space, the pen and the DVDs are probably the same as many others, but they’re special to you. Special because in some way they are yours.

You can see how the endowment effect escalates – how else can you explain the boxes of cassette tapes, shoes or mobile phones that fill several shelves of your room… or even several rooms?

No trade

To put a scientific lens on what’s going on here, a team led by psychologist Daniel Kahneman carried out a simple experiment. They took a class of ordinary university students and gave half of them a university-crested mug; the other half received $6 – the nominal cost of the mug.

Classic economics states that the students should begin to trade with each other. The people who were given cash but liked mugs should swop some of their cash for a mug, and some of the people who were given mugs should swop their mugs for some cash. This, economic theory says, is how prices emerge – the interactions of all buyers and sellers find the ideal price of goods. The price – in this case, of mugs – will be a perfect balance between the desires of people who want a mug and have cash, and the people who want cash and have a mug.

But economic theory lost out to psychology. Hardly any students traded. Those with mugs tended to keep them, asking on average for more than $5 to give up their mug. Those without mugs didn’t want to trade at this price, being only willing to spend an average of around $2.50 to purchase a mug.

Remember that the mugs were distributed at random. It would be weird if, by chance, all the “mug-lovers” ended up with mugs, and the “mug-haters” ended up without. Something else must be going on to explain the lack of trading. It seems the only way to understand the high value placed on the mugs by people who were given one at random is that the simple act of being given a mug makes you value it twice as highly as before.
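
To see why the lack of trading is so surprising, here is a toy sketch of what standard economic theory would predict – my own illustration with invented valuations, not Kahneman’s analysis: if tastes vary at random and mugs are handed out at random, roughly half the mugs should change hands.

```python
# A toy sketch of what classic economic theory predicts for the mug
# experiment (my own illustration, not Kahneman's analysis; all valuations
# are invented). Give everyone a random private value for a mug, hand mugs
# to a random half, and count the mutually beneficial trades.
import random

N = 100                                        # 50 mug owners, 50 potential buyers
valuations = [random.uniform(0, 6) for _ in range(N)]
sellers = sorted(valuations[:N // 2])          # owners, cheapest reservation first
buyers = sorted(valuations[N // 2:], reverse=True)  # non-owners, keenest first

# Pair the cheapest sellers with the keenest buyers; a trade happens
# whenever the buyer values the mug more than the seller does.
trades = sum(1 for sell, buy in zip(sellers, buyers) if buy > sell)
print(trades)  # typically around 25 – i.e. roughly half the mugs should move
# In the real experiment hardly any mugs changed hands; that gap is the
# endowment effect at work.
```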

This is the endowment effect, and it is the reason why things reach a higher price at auctions – because people become attached to the thing they’re bidding for, experiencing a premature sense of ownership that pushes them to bid more than they would otherwise. It is also why car dealers want you to test drive the car, encouraging you in every way to think about what it would be like to possess it. The endowment effect is so strong that even imagined ownership can increase the value of something.

Breaking habits

The endowment effect is a reflection of a general bias in human psychology to favour the way things are, rather than the way they could be. I call this status quo bias, and we can see reflections of it in the strength of the habits that guide our behaviour, in the preference we have for the familiar over the strange, or in the advantage the incumbent politician has over a challenger.

Knowing the powerful influence that possession has on our psychology, I take a simple step to counteract it. I try to use my knowledge of the endowment effect to help me de-clutter my life. Perhaps this can be useful to you too.

Say I am cleaning out my stuff. Before I learnt about the endowment effect I would go through my things one by one and try to make a decision about what to do with each of them. Quite reasonably, I would ask myself whether I should throw this away. At this point, although I didn’t have a name for it, the endowment effect would begin to work its magic, leading me to generate all sorts of reasons why I should keep an item based on a mistaken estimate of how valuable I found it. After hours of tidying I would have kept everything, including the 300 rubber bands (they might be useful one day), the birthday card from two years ago (given to me by my mother) and the obscure computer cable (it was expensive).

Now, knowing the power of the bias, for each item I ask myself a simple question: If I didn’t have this, how much effort would I put in to obtain it? And then, more often than not, I throw it away, concluding that if I didn’t have it, I wouldn’t want it.

Let this anti-endowment effect technique perform its magic for you, and you too will soon be joyously throwing away things that you only think you want, but actually wouldn’t trouble yourself to acquire if you didn’t have them.

And here’s the thing… it works for emails too. If someone sends me a link to an article or funny picture, I don’t think “I must look at that”, I ask “If I hadn’t just been sent this link, how hard would I endeavour to find out this information for myself?”. And then I delete the email, thinking that however fascinating that article on the London sewerage system sounds or that funny picture of a cat promises to be, I didn’t want them before the email was in my possession, so I probably don’t really want them now.

That’s my tip for managing my clutter. If you have any others, let me know.

Berlin cognitive science safari: report

So I’m back from my time in Berlin at the BMW Guggenheim Lab. As announced previously, I was there to give a talk about how perception works, and how cities control our perception. If you’re a regular mindhacks.com reader nothing I said would have been earth-shattering – it was a tour through some basics of perception and attention. I’ll just highlight two points:

Perception is about meaning. We so effortlessly transform visual input into percepts that we can forget what a difficult task it is. Fortunately we have a heap of dedicated brain machinery to do this for us. A common mistake is to think of perception as mere projection on an inner screen. Part of this fallacy is to think that perception is trivial, but another important part is to think that perception is about the production of images of some sort. Perception is the production of meaning, not the production of images. Our associations and experience are incorporated in the act of perception, so that they are intrinsic to the perceptual act (not somehow added “on top”, or as an afterthought). This goes some way to explaining why foreigners appear so stupid in cities. I know that personally I feel my IQ drop at least 15 points as soon as the plane touches down in a foreign country. Native city dwellers have learnt to read the city, through experience forming webs of association that build up into symbols. This allows them to instantly perceive what different scenes in the city mean for how they should act. Here’s an example I used in my talk.

Outside Berlin Zoo, looking for the underground: which way should I go? The visual sign for the U-bahn actually forms a tiny fraction of the visual field, so small that I’d bet it is invisible to the majority of my peripheral vision. To a resident of Berlin the way to the tube is obvious, perceptual learning ensures that they don’t even have to think about which symbol to look for, or what it means. The accumulation of thousands of pieces of perceptual expertise is what makes us natives in a city, and what renders us flailing when abroad.

Attention is co-constituted with history and the environment. What we notice depends on what we are seeking, what we have previously experienced and the world around us. We can choose to look for something, or concentrate on something, but our attention can also be driven by factors outside of our direct control. Advertisers know this, and hence we get bright adverts, moving adverts, and the plethora of adverts which use faces and particularly eyes. Light contrasts, movement and human eyes are all elements which are fundamentally wired into the operation of our visual system. Advertisers are using them to perform a subcortical hijack of what we look at as we navigate the city. The psychology of advertising is a different talk, but in Berlin it occurred to me that attention could be a useful, concrete model for thinking more generally about how our agency is spread between self and world.

After the talk was the real highlight – a cognitive science safari where we went out into the city and tried out some interventions based on classic experiments from psychology. The demonstration of the strange allure of a crowd all looking the same way worked reasonably well (looking up is definitely more attention-capturing than horizontal gaze). So did ‘reading’ someone’s country of origin from their appearance alone, but the real treat of the tour was the change blindness ‘door’ experiment.

This video shows one run of the experiment (thanks to tour participant Hans Huett for taking it. Jump to about 0:50 for the action). We can see Matt Craddock and another volunteer (sorry, I didn’t catch your name) waiting for an unsuspecting member of the public. After engaging him by asking for directions, Yunus (my Berlin fixer) and Jakub Limanowski (mindhacks.com reader and volunteer) arrive from around the corner, carrying the door. After swopping Matt for Jakub we can see the member of the public continuing to give directions as if nothing has happened – he was blind to the change! Later we tried a more extreme change, swopping an older, shorter, beardless gentleman into Matt’s place – again it worked, raising the question of just how extreme a change you could make and still have the phenomenon work.

The moral of this story is not that many people are stupid, just that attention is a double-edged sword. The good citizens of Berlin focus hard on giving directions, not on monitoring the identity of their interlocutor for signs of an improbable change. Yes, the phenomenon shows how much of the environment we are not aware of, but it is also a back-handed tribute to our ability to focus our attention where we want.

BBC Future column: Why I am always unlucky but you are always careless

From lost keys to failed interviews, we blame other people for mishaps but never ourselves, because assuming causes helps us to make sense of the world.

When my wife can’t find her keys, I assume it is because she is careless. When I can’t find my keys I naturally put it down to bad luck. The curious thing is that she always assumes the opposite – that she’s the one with the bad luck, and I’m the careless one.

When we observe other people we attribute their behaviour to their character rather than to their situation – my wife’s carelessness means she loses her keys, your clumsiness means you trip over, his political opinions mean that he got into an argument. When we think about things that happen to us the opposite holds. We downplay our own dispositions and emphasise the role of the situation. Bad luck leads to lost keys, a hidden bump causes trips, or a late train results in an unsuccessful job interview – it’s never anything to do with us!

This pattern is so common that psychologists have called it the fundamental attribution error. And there’s a whole branch of psychology, called attribution theory, that investigates how we reason about the causes of things. The fundamental attribution error is a good example of a quirk in the way we reason about causes, but it isn’t the only one. Despite the name, it may not even be the most fundamental.

Seeking causes

Psychologists are interested in attribution of causation because it tells us important things about how the mind works. To illustrate this, imagine you see a man asleep under a tree, and a leaf fluttering down to land on his head. As the leaf touches his head he wakes up and shouts “Yikes”. Anyone watching this scene would assume the man woke up because of the falling leaf.

 But this simple statement is remarkably difficult to prove – you have no direct access to the cause, just the before (a leaf) and after (“Yikes”). We automatically assume the cause. We talk about it like it is a thing – somehow in the middle between the leaf and the man, but really it is just an assumption, not a thing. And indeed, some new information could come along and force us to reconsider our assumptions. We might find out later that a philosophically-minded ant had come along and, just at that minute, decided to bite the sleeping man’s hand.

 So our causes are assumptions, based on what we perceive but with an extra bit of imagination. They are necessary assumptions. Without looking for causes we would be stuck with a confusing picture of the world. Rather than say “the falling leaf caused the man to wake up”, we have to take everything into account and say the following. “The leaf fell. The grass did the same as before. A bird flew between two trees one hundred and thirty yards away. I lost my keys. My Romanian aunt’s clock in my Romanian aunt’s house continued ticking (on and on and on). The man woke up.”

 Assuming causes in this way lets us make sense of the world. Not only is it easier to describe, the descriptions tell you how to make things happen (or avoid them – for instance, if you want the man to stay asleep next time, catch the leaf). In this way, attributions are psychological magic that help us control the future. No wonder psychologists find them interesting.

Built on sand

 The fundamental attribution error is just a continuation of a wider pattern: we blame individuals for what happens to them because of the general psychological drive to find causes for things. We have an inherent tendency to pick out each other as causes; even from infancy, we pay more attention to things that move under their own steam, that act as if they have a purpose. The mystery is not that people become the focus of our reasoning about causes, but how we manage to identify any single cause in a world of infinite possible causes.

 Even the way I described cause-seeking as an “inherent tendency” is part of this pattern. I have no direct access to what causes the results of experiments that have made me think this, just as I would have no direct access to what caused the man to wake as the leaf fell. I assume a thing, hidden, somehow, underneath the experiments – an inherent tendency for humans to identify each other as causes – which I then rely on to tell you what I’m thinking.

 That thing might not exist, or might have a reality very different from how I describe it, but we are forced to rely on assumptions to make sense of the world, and these assumptions create a reality of causes and essences that seems solid, despite its uncertain foundation.

This all might sound overly philosophical, but once you are switched on to this tendency to invent essences you’ll hear them everywhere. Generalisations or stereotypes such as “women can’t do maths” or “Americans don’t have a sense of humour” also rely on an invented essence of a sex, or of a nationality – a pattern that some psychologists have called the ultimate attribution error. These views don’t have a concrete existence. They are based in imagination, and are subject to all the psychological forces that are at play there.

In more prosaic domestic moments, when it feels like such bad luck that I can’t find my keys, yet my wife seems so careless when she can’t find hers, I know I’m performing psychological magic. I’m observing the myriad events in the world and imagining things – my bad luck, her carelessness – which I then use to explain the world.

 With the knowledge that these explanations can only ever be built on sand, I know to be a bit more careful about how I use them.

My most recent column for the BBC Future website, the original is here

BBC Future column: why are we so curious?

My column for BBC Future from last week. The original is here.

 

Evolution made us the ultimate learning machines, and the ultimate learning machines need to be oiled by curiosity.

I hate to disappoint you, but whatever your ambitions, whatever your long-term goals, I’m pretty sure that reading this column isn’t going to further them. It won’t stop you feeling hungry. It won’t provide any information that might save your life. It’s unlikely to make you attractive to the opposite sex.

And yet if I were to say that I will teach you a valuable lesson about your inner child, I hope you will want to carry on reading, driven by nothing more than your curiosity to find out a little more. What could be going on in your brain to make you so inquisitive?

We humans have a deeply curious nature, and more often than not it is about the minor tittle-tattle in our lives. Our curiosity has us doing utterly unproductive things like reading news about people we will never meet, learning topics we will never have use for, or exploring places we will never come back to. We just love to know the answers to things, even if there’s no obvious benefit.

From the perspective of evolution this appears to be something of a mystery. We associate evolution with ‘survival-of-the-fittest’ traits that support the essentials of day-to-day survival and reproduction. So why did we evolve to waste so much time? Shouldn’t evolution have selected for a species which was – you know – a bit more focussed?

 

Child’s play

The roots of our peculiar curiosity can be linked to a trait of the human species called neoteny. This is a term from evolutionary theory that means the “retention of juvenile characteristics”. It means that as a species we are more child-like than other mammals. Being relatively hairless is one physical example. A large brain relative to body size is another. Our lifelong curiosity and playfulness is a behavioural characteristic of neoteny.

Neoteny is a short-cut taken by evolution – a route that brings about a whole bundle of changes in one go, rather than selecting for them one by one. Evolution, by making us a more juvenile species, has made us weaker than our primate cousins, but it has also given us our child’s curiosity, our capacity to learn and our deep sense of attachment to each other.

And of course the lifelong capacity to learn is the reason why neoteny has worked so well for our species. Our extended childhood means we can absorb so much more from our environment, including our shared culture. Even in adulthood we can pick up new ways of doing things and new ways of thinking, allowing us to adapt to new circumstances.

 

Exploration bonus
In the world of artificial intelligence, computer scientists have explored how behaviour evolves when guided by different learning algorithms. An important result is that even the best learning algorithms fall down if they are not encouraged to explore a little. Without a little something to distract them from what they should be doing, these algorithms get stuck in a rut, relying on the same responses time and time again.

Computer scientists have learnt to adjust how these algorithms rate different possible actions with an ‘exploration bonus’ – that is, a reward just for trying something new. Weighted like this, the algorithms then occasionally leave the beaten track to explore. These exploratory actions cost them some opportunities, but leave them better off in the long run because they’ve gained knowledge about what they might do, even if it didn’t benefit them immediately.
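
To make the idea concrete, here is a minimal sketch of an exploration bonus in action – my own toy example rather than any specific study, with made-up payoffs and bonus weight: a simple two-action chooser that adds a bonus for actions it has rarely tried.

```python
# A minimal sketch of an exploration bonus (my own toy example; the payoffs
# and bonus weight are invented). A two-action chooser scores each action by
# its estimated value plus a bonus that shrinks as the action becomes familiar.
import math
import random

true_payoffs = [0.4, 0.6]      # hidden chance each action pays off
counts = [0, 0]                # how often each action has been tried
estimates = [0.0, 0.0]         # running estimate of each action's value
BONUS_WEIGHT = 1.0             # how strongly novelty is rewarded

def score(action, step):
    # Familiar actions get a smaller bonus, so novelty gradually loses its pull
    bonus = BONUS_WEIGHT * math.sqrt(math.log(step + 1) / (counts[action] + 1))
    return estimates[action] + bonus

for step in range(1000):
    action = max(range(2), key=lambda a: score(a, step))
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(counts, [round(e, 2) for e in estimates])
# Without the bonus the chooser can lock onto whichever action happens to pay
# off first; with it, both get sampled and the better one wins out.
```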

The implication for the evolution of our own brain is clear. Curiosity is nature’s built-in exploration bonus. We evolved to leave the beaten track, to try things out, to get distracted and generally look like we’re wasting time. Maybe we are wasting time today, but the learning algorithms in our brain know that something we learnt by chance today will come in useful tomorrow.

Obviously it would be best if we knew what we needed to know, and just concentrated on that. Fortunately, in a complex world it is impossible to know what might be useful in the future. And thank goodness – otherwise we would have evolved to be a deadly-boring species which never wanted to get lost, never tried things to just see what happened or did things for the hell of it.

Evolution made us the ultimate learning machines, and the ultimate learning machines need a healthy dash of curiosity to help us take full advantage of this learning capacity.

Or, as Kurt Vonnegut said, “We are here on Earth to fart around. Don’t let anybody tell you any different.”

BBC Column: What makes us laugh?

This is my BBC Future column from a couple of weeks ago. You can find the original here

 

A simple question with a surprisingly complex answer – understanding laughter means understanding fundamental issues of human nature.

Why do we laugh? Well it’s funny you should ask, but this question was suggested by reader Andrew Martin, and it is a very interesting one to investigate. For what at first seems like a simple question turns out to require a surprisingly complex answer – one that takes us on a journey into the very heart of trying to understand human nature.

Most people would guess that we laugh because something is funny. But if you watch when people actually laugh, you’ll find this isn’t the case. Laughter expert Robert Provine spent hours recording real conversations at shopping malls, classrooms, offices and cocktail parties, and he found that most laughter did not follow what looked like jokes. People laughed at the end of normal sentences, in response to unfunny comments or questions such as “Look, it’s Andre,” or “Are you sure?”. Even attempts at humour that provoked laughter didn’t sound that funny. Provine reports that the lines that got the biggest laughs were ones such as “You don’t have to drink, just buy us drinks,” and “Do you date within your species?”. I guess you had to be there.

Brain triggers
So if we want to understand laughter, perhaps we need to go deeper, and look at what is going on in the brain. The areas that control laughing lie deep in the subcortex, and in terms of evolutionary development these parts of the brain are ancient, responsible for primal behaviours such as breathing and controlling basic reflexes. This means laughter control mechanisms are located a long way away from brain regions that developed later and control higher functions such as language or even memory.

Perhaps this explains why it is so hard to suppress a laugh, even if we know it is inappropriate. Once a laugh is kindled deep within our brains these ‘higher function’ brain regions have trouble intervening. And the reverse is true, of course, it is difficult to laugh on demand. If you consciously make yourself laugh it will not sound like the real thing – at least initially.

 

There is another fundamental aspect to laughing. All humans laugh, and laughter always involves a similar pattern of whooping noises. Deaf people who have never heard a sound still make laughing noises. The laughing noises produced by humans share many of the acoustic properties of speech, further evidence laughter is hijacking the brain and body apparatus that we use for breathing and talking.

But this does not fully answer the original question. Even if we identified the precise brain areas associated with laughing, even if we were able to make someone laugh by stimulating part of their brain (which can be done), we still don’t know what makes people laugh. Yes, we know about the effect, but what about the cause, that is, the reason why we laugh in the first place?

Shared joke
To answer this, perhaps we need to look outwards, to look at the social factors at play when people laugh. I’ve already mentioned Provine’s study of laughter in its natural context. Provine showed that laughter is used to punctuate speech; it doesn’t just interrupt at random. This suggests that it plays a communicative role – it isn’t just some independent process that happens to us while we are talking to someone. He also found that the speaker typically laughs more than the audience, and that laughter was most common in situations of emotional warmth and so-called ‘in-groupness’. Again, all strongly suggesting that laughter has an important social role. And it is not always used for positive reasons. For all the good feeling that goes with laughing with someone, there is also a dark side, when someone is laughed at to belittle or show disdain.

Perhaps the most important social feature of laughter is how contagious it is. Just listening to someone laugh is funny. To test this, try keeping a straight face while watching this video of a man tickling a gorilla. You can even catch laughter from yourself. Start with a forced laugh and if you keep it up you will soon find yourself laughing for real.

What these observations show is that laughter is both fundamentally social, and rooted deep within our brains, part and parcel of ancient brain structures. We laugh because we feel like it, because our brains make us, and because we want to fit in socially. All these things are true. But biologists distinguish at least four fundamental types of answer you can give to explain behaviour: “why did it evolve?”; “how did it evolve?”; “how does it develop across the lifespan?”; and “how does it work?”.

This column has given some answers to the first question (laughter evolved for social interaction) and the last question (laughter is controlled by evolutionarily ancient brain centres that control breathing and speech), but even with the beginnings of answers to these two questions, the other two are far from being answered. Each time we get closer to an answer for a fundamental question, it deepens our appreciation of the challenge remaining to answer the others.

Thank you to Andrew Martin for suggesting the topic. If you have your own suggestions please send them to tom@mindhacks.com

Berlin plan #3: Instant social knowledge through unconscious perception

So I think I’ve figured out the third and final intervention I want to run for the cognitive science safari I’ll be leading in Berlin on the 11th of July. Regular readers will recall that I first wanted to try a field test of the change blindness phenomenon, and to follow that up with an exercise in contagious attention. For my final trick, I’m going to try something which demonstrates how rapidly, and successfully, we can make unconscious judgements about people.

There’s a powerful demonstration of this that I experienced thanks to Professor Jon May during my undergraduate degree. Jon showed the class black and white photos of middle aged men and women and asked us to judge if they were American or British. There were no obvious clues, no cowboy hats, no uniforms or flags. Just boring pictures. If you had asked any of us in the class we would all have said that we had no idea who was American and who was British. It just wasn’t possible to be sure, but we all guessed and – of course – at the end of the demonstration we found out that we’d mostly been right. It’s an important demonstration that we often have access to information that we aren’t fully aware of or certain about. We couldn’t make judgements on explicit criteria, but instead relied on a perceptual intuition. Without realising it, we’d been trained by experience to associate certain things – styles of haircut? certain facial features? clothing? who knows – with the different nationalities.

So it seems that throughout our lives we’re building up tacit knowledge of how we expect different kinds of people to look. This effect isn’t just for nationalities. Famously, it also seems to work for things like sexual orientation. This is a remarkable paper: Brief exposures: Male sexual orientation is accurately perceived at 50 ms. As the title suggests, it shows that people were able to judge at above chance rates if someone was straight or gay merely from a photo of their face shown for a twentieth of a second. It’s not quite instant, but it shows that even the briefest of flashes can contain a surprising amount of information. You can try a version of this experiment yourself, thanks to the wonders of the internet, with the “Gay? or Eurotrash?” game (via this neurocritic post).

What I’d like to try in Berlin is a demonstration of this phenomenon, but for geography. Using the group of people on the tour, I will find willing volunteers from around Berlin and ask them where they come from. Then we’ll ask the tour to try and guess, through a series of yes/no questions like “Is this person a European?”, “From Germany?”, “From Berlin?” and so on. Through what has been called the wisdom of crowds we should be able to take the average guess of those on the tour to come up with a more accurate judgement than any one of us will individually produce (see the sketch below). The fun will be in seeing how often we are able to judge someone’s hometown from no more than how they look.
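
As a rough illustration of why averaging should help – a toy simulation of my own, with an invented individual accuracy figure rather than any data from the tour – here is how a majority vote of many unreliable yes/no guessers can beat any single guesser:

```python
# A toy simulation of the "wisdom of crowds" for yes/no guesses (my own
# illustration, not data from the tour; the accuracy figure is invented).
import random

INDIVIDUAL_ACCURACY = 0.6   # each person guesses correctly 60% of the time
CROWD_SIZE = 25
TRIALS = 10_000

def crowd_is_right():
    # Each guess is an independent coin-flip weighted by individual accuracy
    correct_votes = sum(random.random() < INDIVIDUAL_ACCURACY for _ in range(CROWD_SIZE))
    return correct_votes > CROWD_SIZE / 2        # majority vote

crowd_accuracy = sum(crowd_is_right() for _ in range(TRIALS)) / TRIALS
print(f"individual: {INDIVIDUAL_ACCURACY:.0%}, crowd of {CROWD_SIZE}: {crowd_accuracy:.0%}")
# With these numbers the majority is right roughly 85% of the time – far
# better than any single guesser, as long as the guesses are independent.
```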

Berlin plan #2: Contagious attention

As I’ve mentioned, I’ll be leading a ‘cognitive science safari’ in Berlin on 11th of July. We’ll be generating some experiences based on classic psychology experiments, experiments which tell us important things about how cities organise our perceptions.

Previously I described how I’ll be trying to revive a classic change blindness experiment. For my next trick, I plan to re-mix another classic experiment. This is one by famed social psychologist Stanley Milgram on the drawing power of crowds (Milgram et al, 1969).

We’ve all heard that nothing attracts a crowd like a crowd, but Milgram set out to systematically test this idea. Filming from a sixth floor window, Milgram arranged for collaborators to stop on a busy street and stare up at him. With the video evidence he could then record data on what proportion of passers-by would stop and join the crowd. In agreement with his classic work on obedience to authority, he found that the drawing power of crowds increased rapidly as the first few members joined.

Recently, research led by Princeton’s Iain Couzin has provided an improved analysis of how this kind of shared attention spreads through a crowd (Gallup et al, 2012). Using automated tracking tools, the new research showed that people only follow the gaze of people near them, and – like traffic jams – attention tends to spread backwards in the crowd, rather than between people next to each other, or facing each other. There’s a great write-up of this research over at Ed Yong’s Discover blog: What are you looking at? People follow each other’s gazes, but without a tipping point.

One of the conclusions of Couzin’s recent study was that there wasn’t a tipping point for crowd gathering – no magic threshold where a crowd would just get bigger and bigger under its own ‘attentional gravity’.

Well, this sounds like a challenge to me, and I think I’ve thought of a way we can try and hack these experiments for added interest. Milgram’s and Couzin’s experiments both had a single crowd looking at a relatively uninteresting phenomenon (Milgram filming from his window, a pair of experimenters filming surreptitiously). In Berlin, I’d like to try to plug two crowds into each other, so to speak. We’ll start off as in Milgram’s experiment, with one person looking up at the experimenter (perhaps on the bridge overlooking Alexanderplatz – although suggestions welcome). The rest of us can watch the behaviour of passers-by: will they join the person staring up at the bridge? What kind of person will stop to have a look? How long will they stay? We’ll add more people to this crowd and should be able to see the patterns Milgram and Couzin observed: what is the effect of a bigger crowd? How far does the influence of the crowd extend?

Next, we’ll see if we can generate a self-sustained crowd by having more and more people join the experimenter on the bridge – creating two crowds watching each other, both attracting the attention of their nearby passers-by. If my reading of Iain Couzin’s research is right then there should be a stable equilibrium where the crowds reach a certain size and stop growing. If his theory is wrong, we could generate an endlessly growing crowd, driven by the power of positive feedback until it encompasses the whole population of the world – a psychological equivalent to grey goo or one of those particle physics experiments which risks creating a black hole in the centre of Planet Earth. A toy model of the two possibilities is sketched below.
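
Here is that toy model – my own back-of-the-envelope sketch, not taken from Milgram’s or Couzin’s papers, with all the rates invented: passers-by join at a rate that rises with crowd size but saturates, while onlookers drift away at a constant per-person rate, so the crowd settles at an equilibrium instead of growing forever.

```python
# A back-of-the-envelope model of crowd growth (my own sketch, not from the
# Milgram or Couzin papers; all rates are invented). Passers-by join at a
# rate that rises with crowd size but saturates; onlookers drift away at a
# constant per-person rate, so joining and leaving eventually balance.

def simulate(minutes=60, passers_per_min=20, leave_rate=0.1):
    crowd = 1.0                           # start with one person staring upwards
    for _ in range(minutes):
        p_join = crowd / (crowd + 10.0)   # bigger crowds are more attractive, up to a point
        joins = passers_per_min * p_join
        leaves = leave_rate * crowd
        crowd += joins - leaves
    return crowd

print(round(simulate()))
# The crowd settles near the size where joining balances leaving (about 190
# here) rather than growing without limit. Only if attraction kept rising
# without saturating could the positive feedback run away.
```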

Okay, so that second possibility is unlikely, but we are sure to generate a rich field in which to observe the interplay of shared attention among the city-crowd. So please join me in Berlin as we travel the spectrum from science to speculation to experience in an attempt to unravel the mysteries of psychology in the city. As ever, I’m eager to meet any mindhacks.com readers who live in Berlin and would like to come along (or even help out). Get in touch!

Original announcement: Meet me in Berlin
Plan #1: The Change Blindness Experiment
Make sure you check out the video of the analysis technique on Iain Couzin’s page here (it’s the one where everyone in the crowd looks like they’ve got a yellow arrow protruding from their foreheads).
HT to Vaughan Bell for the phrase ‘cognitive science safari’

References:

Gallup, A.C., Hale, J.J., Garnier, S., Sumpter, D.J.T., Kacelnik, A., Krebs, J. & Couzin, I.D. (2012) Visual attention and the acquisition of information in human crowds. PNAS, published online April 23rd, open access.

Milgram, S., Bickman, L. & Berkowitz, L. (1969) Note on the drawing power of crowds of different size. Journal of Personality and Social Psychology 13, 79–82.

Berlin Plan #1: The Change Blindness Experiment

I’m giving a talk and leading an ‘experience treasure hunt’ in Berlin on July 11th (see here). The aim will be to show how our perception works, using examples from city life. Cities, like all environments, channel our attention. One of the things I’m planning on doing is to recreate a classic experiment which shows how much we don’t notice about the city around us.

The experiment is a demonstration of change blindness – a phenomenon where we don’t notice changes in something we’re supposed to be watching. Here’s Richard Wiseman with a short video showing off the effect: The Colour-changing Card Trick. In 1998 Simons & Levin took this research out of the lab, using the general public as their experiment participants. They got a confederate, who we’ll call Person A, to approach people with the pretence of asking directions. When the unwitting participant had got underway giving directions, Simons & Levin had another pair of confederates walk rudely between them and Person A carrying a door. Person A used the shield provided by the door to sneak off, and another experimental confederate, person B, took their place. The research measure is whether the person giving directions noticed that they were now giving them to a totally different person. Amazingly, slightly less than half of the people approached noticed the switch. Here’s a fun recreation of the experiment I found on YouTube.

As well as being a great example of the fun of taking Psychology experiments out of the lab, this research confirms how narrow our perception of the world around us is. As with our visual blindspots, we think we capture a full-spectrum high-resolution image of the world, but actually we only sample a very limited slice, and our perceptual machinery infers across the gaps.

With luck we’ll be recreating this experiment in Berlin on July 11th (if you’d like to help out, get in touch!). Like all good experiments, this one opens up as many questions as it answers. Will different kinds of people be more or less sensitive to the switch? Will people be more likely to notice switches that cross social categories (men switched with women, old with young, etc etc)? Join me in Berlin and we’ll make a start on finding out.

Reference: Simons, Daniel J.; Levin, Daniel T. (1998), “Failure to detect changes to people during a real-world interaction”, Psychonomic Bulletin and Review 5 (4): 644–649, DOI:10.3758/BF03208840

Meet me in Berlin

On July 11th I’ll be running a workshop as part of the BMW Guggenheim Lab in Berlin. The lab is a temporary public space, in the neighbourhood of Prenzlauer Berg, dedicated to encouraging ‘open dialogue about issues related to urban living’. I’ve been invited by Corinne Rose, a psychologist and artist who has an interest in microanalyzing urban environments.

I’ll be giving a talk about the psychology of attention and perception in the city, and then leading a tour out into Berlin on an ‘experience treasure hunt’ where we’ll be trying to collect some interesting experiences of attention in the city. Here’s the blurb for my talk.

Lens and filters: the mind and the city

The city trains you to both see and unsee. There is a riot of experience available in cities, which stimulates our hearts and heads, but for everything you see there are things you have to unsee. We can use illusions, tricks and curiosities to focus back on the psychological processes which generate our experience of the city. In this session I will give an introduction to the study of perception and to some of the fascinating psychology research about living in cities. For the second part we will venture into the city on a “treasure hunt” for experiences which illustrate something important about how our minds respond to the city or how the city affects our minds. The hope is that by putting our own lenses and filters up for inspection we can gain a deeper understanding of how both cities and our own minds work.

I’ve got a few ideas for the treasure hunt aspect, which I’ll blog about shortly. The session is due to start at 3pm, I think, and it would be great to see any mindhacks.com readers there. In particular, if you are a Berlin-based psychologist (studying or practising) and fancy helping out with some of the demonstrations, I’d love to hear from you before the 11th.

Update: The plan is taking shape! See parts #1, #2 and #3

BBC Future column: Hypnic Jerks

Here’s my column at BBC Future from last week. You can see the original here. The full list of my columns is here, and there is now an RSS feed, should you need it.

As we give up our bodies to sleep, sudden twitches escape our brains, causing our arms and legs to jerk. Some people are startled by them, others are embarrassed. Me, I am fascinated by these twitches, known as hypnic jerks. Nobody knows for sure what causes them, but to me they represent the side effects of a hidden battle for control in the brain that happens each night on the cusp between wakefulness and dreams.

Normally we are paralysed while we sleep. Even during the most vivid dreams our muscles stay relaxed and still, showing little sign of our internal excitement. Events in the outside world usually get ignored: not that I’d recommend trying this, but experiments have shown that even if you sleep with your eyes taped open and someone flashes a light at you, it is unlikely to affect your dreams.

But the door between the dreamer and the outside world is not completely closed. Two kinds of movements escape the dreaming brain, and they each have a different story to tell.

Brain battle

The most common movements we make while asleep are rapid eye movements. When we dream, our eyes move according to what we are dreaming about. If, for example, we dream we are watching a game of tennis our eyes will move from left to right with each volley. These movements, generated in the dream world, escape from normal sleep paralysis and leak into the real world. Seeing a sleeping person’s eyes move is the strongest sign that they are dreaming.

Hypnic jerks aren’t like this. They are most common in children, when our dreams are most simple, and they do not reflect what is happening in the dream world – if you dream of riding a bike, you do not move your legs in circles. Instead, hypnic jerks seem to be a sign that the motor system can still exert some control over the body as sleep paralysis begins to take over. Rather than having a single “sleep-wake” switch in the brain for controlling our sleep (i.e. ON at night, OFF during the day), we have two opposing systems balanced against each other that go through a daily dance, where each has to wrest control from the other.

Deep in the brain, below the cortex (the most evolved part of the human brain), lies one of them: a network of nerve cells called the reticular activating system. This is nestled among the parts of the brain that govern basic physiological processes, such as breathing. When the reticular activating system is in full force we feel alert and restless – that is, we are awake.

Opposing this system is the ventrolateral preoptic nucleus: ‘ventrolateral’ means it is on the underside and towards the edge of the brain, ‘preoptic’ means it is just before the point where the nerves from the eyes cross. We call it the VLPO. The VLPO drives sleepiness, and its location near the optic nerve is presumably so that it can collect information about the beginning and end of daylight hours, and so influence our sleep cycles. As the mind gives up its normal task of interpreting the external world and starts to generate its own entertainment, the struggle between the reticular activating system and the VLPO tilts in favour of the latter. Sleep paralysis sets in.

What happens next is not fully clear, but it seems that part of the story is that the struggle for control of the motor system is not quite over yet. Few battles are won completely in a single moment. As sleep paralysis sets in, remaining daytime energy kindles and bursts out in seemingly random movements. In other words, hypnic jerks are the last gasps of normal daytime motor control.
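
(An aside for the programmers among you: here is a minimal sketch of the kind of mutually inhibiting ‘flip-flop’ I have in mind – two units suppressing each other until one wins outright. Everything in it – the function name simulate_switch, the sleep_drive parameter, the numbers – is an illustrative assumption of mine, not a model taken from the sleep literature or from this column.)

```python
# Toy "flip-flop": two mutually inhibiting units, loosely standing in for the
# wake-promoting reticular activating system (RAS) and the sleep-promoting
# VLPO described above. All names and numbers are made up for illustration.

def simulate_switch(steps=2000, dt=0.01, sleep_drive=lambda t: 0.0):
    """Each unit relaxes towards its own excitatory drive minus strong
    inhibition from the other; whichever is stronger ends up winning."""
    ras, vlpo = 1.0, 0.0                      # start the "day" fully awake
    history = []
    for step in range(steps):
        drive = sleep_drive(step * dt)        # accumulated pressure to sleep
        d_ras = (-ras + max(0.0, 1.0 - 2.0 * vlpo)) * dt
        d_vlpo = (-vlpo + max(0.0, drive - 2.0 * ras)) * dt
        ras, vlpo = ras + d_ras, vlpo + d_vlpo
        history.append((ras, vlpo))
    return history

# Sleep pressure ramps up over the evening; eventually the VLPO flips the switch.
trace = simulate_switch(sleep_drive=lambda t: min(3.0, 0.3 * t))
print("final RAS %.2f, final VLPO %.2f" % trace[-1])   # RAS near 0, VLPO high: asleep
```

Run like this, the ‘wake’ unit holds out while sleep pressure builds and then gives way quite abruptly – a toy version of the kind of winner-takes-all hand-over described above, during which stray motor commands might escape.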

Dream triggers

Some people report that hypnic jerks happen as they dream they are falling or tripping up. This is an example of the rare phenomenon known as dream incorporation, where something external, such as an alarm clock, is built into your dreams. When this does happen, it illustrates our mind’s amazing capacity to generate plausible stories. In dreams, the planning and foresight areas of the brain are suppressed, allowing the mind to react creatively to wherever it wanders – much like a jazz improviser responds to fellow musicians to inspire what they play.

As hypnic jerks escape during the struggle between wake and sleep, the mind is undergoing its own transition. In the waking world we must make sense of external events. During sleep the mind tries to make sense of its own activity, resulting in dreams. Whilst a veil is drawn over most of the external world as we fall asleep, hypnic jerks are obviously close enough to home – being movements of our own bodies – to attract the attention of sleeping consciousness. Along with the hallucinated night-time world they get incorporated into our dreams.

So there is a pleasing symmetry between the two kinds of movements we make when asleep. Rapid eye movements are the traces of dreams that can be seen in the waking world. Hypnic jerks seem to be the traces of waking life that intrude on the dream world.

BBC Future column: why your brain loves to tune out

My column for BBC Future from last week. The original is here. Thanks to Martin Thirkettle for telling me about the demo that leads the column.

Our brains are programmed to cancel out all manner of constants in our everyday lives. If you don’t believe it, try a simple but startling experiment.

The constant whir of a fan. The sensation of the clothes against your skin. The chair pressing against your legs. Chances are that you were not acutely aware of these until I pointed them out. The reason you had somehow forgotten about their existence? A fundamental brain process that we call adaptation.

Our brains are remarkably good at cancelling out all sorts of constants in our everyday lives. The brain is interested in changes that it needs to react or respond to, and so brain cells are charged with looking for any of these differences, no matter how minute. This makes it a waste of time registering things that are not changing, like the sensation of clothes or a chair against your body, so the brain uses adaptation to tune this background out, allowing you to focus on what is new.

If you don’t believe me, try this simple, but startling demonstration. First, hold your eyeball perfectly still. You could use calipers to do this, or a drug that paralyses the eye muscles, but my favourite method is to use my thumb and index finger. Using the sides of your thumb and finger, press on the bone of the eye socket, through your upper and lower eyelids. Do this gently. Try it with one eye first, closing the other eye or covering it with your hand.

With your eye fixed in position, keep your head still and soon you will experience the strangest thing. (You will have to stop reading at this point. I don’t mind. We will pick up when you have finished). After a few seconds the world in front of you will fade away. As long as you are holding your eyeball perfectly still, you will very quickly discover that you can see nothing at all. Blink, or move your head, let go of your eye and the world will come back. What’s going on?!

Now you see it…

For all of our senses, when a certain input is constant we gradually get used to it. As you are holding your eye still, exactly the same pattern of light is falling on the receptor cells at the back of your eye. Adaptation cancels out this constant stimulation, fading out the visual world. The receptors in your eye are still processing information. They have not gone to sleep. They simply stop firing as much, reducing the messages they pass on about incoming sensations – in effect the message passed on to the rest of the brain is “nothing new… nothing new… nothing new…”. You can make your brain cells spring into action by moving your eye, or by waving your hand in front of your face. Your hand, or anything moving in the visual world, is enough of a change to counteract the adaptation.

This sounds like it could go badly wrong. What if I am watching something, or someone, thinking hard about it, and I forget to move my eyes for a few seconds? Will adaptation mean that thing disappears? Well, yes, it could in principle. But the reason it does not happen in practice is due to an ingenious work-around that evolution has built into the design of the eyes – they constantly jiggle in their sockets. As well as the large rapid eye movements we make several times a second, there is a constant, almost unnoticeable twitching of the eye muscles which means that your eyes are never absolutely still, even when you are fixing your gaze on one point. This prevents any fading out due to adaptation.
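
(And one more sketch for the tinkerers: a toy model of adaptation in which a ‘receptor’ reports only the difference between its current input and a running estimate of what that input usually is. The function receptor_response, the adaptation_rate parameter and the two input patterns are made-up illustrations, not real retinal physiology.)

```python
# Toy adaptation: the output is the input minus a slowly-updated "usual input".
# Everything here is illustrative; the numbers have no physiological meaning.

def receptor_response(inputs, adaptation_rate=0.2):
    baseline = 0.0
    outputs = []
    for x in inputs:
        outputs.append(x - baseline)                   # signal only what is new
        baseline += adaptation_rate * (x - baseline)   # drift towards the input
    return outputs

constant_light = [1.0] * 30                            # eye held perfectly still
jittered_light = [1.0 if i % 4 < 2 else 0.2 for i in range(30)]  # eye jiggling about

print("held still:", [round(r, 2) for r in receptor_response(constant_light)[-5:]])
print("jiggling  :", [round(r, 2) for r in receptor_response(jittered_light)[-5:]])
# With a constant input the response decays to zero ("nothing new…");
# with a changing input it keeps firing, so nothing fades from view.
```

The constant input corresponds to holding the eye still – the response fades to nothing – while the jittered input stands in for the eye’s tiny involuntary movements, which keep the response alive.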

You can see this twitching when you look at a single point of light against a dark background (such as a single star in the sky, or a glowing cigarette end in a totally dark room). Without a frame of reference your brain will be unable to infer a stable position of the point of light. Every twitch of your eye muscles will seem like a movement of the point of light (a phenomenon called the autokinetic effect).

Adaptation is so useful for the brain’s processing of information that it has been kept by evolution, even in basic visual processing, and this extra muscle twitching has been added in to prevent too much adaptation causing problems for us. But the basic mechanism is still there, as my eye experiment revealed.

Once you understand adaptation, you discover that it is all around us. It is the reason people shout when they come out of nightclubs (they have got used to the constant high volume, so it does not seem as loud to them as it does to the people they wake up on the way home). It is why a smell that might have struck you as overpowering when you first entered a room can be ignored once you have got used to it. And it is related to the phenomenon of word alienation, whereby you repeat a word so often it loses its meaning. But most of the time adaptation operates quietly, in the background, helping to filter out the things that do not change so that we can concentrate on the more important task of dealing with those that do.