My BBC Future column from Tuesday. The original is here. It’s a Christmas theme folks, but hopefully I cover an interesting research area too: Berridge, Robinson and colleagues’ work on the wanting/liking distinction.
As the holiday season approaches, Tom Stafford looks at festive overindulgence, and explains how our minds tell us we want something even if we may not like it.
Ah, Christmas, the season of peace, goodwill and overindulgence. If this year is like others, I’ll probably be taking up residence on the couch after a big lunch, continuing to munch my way through packets of unhealthy snacks, and promising myself that I’ll live a more virtuous life once the New Year begins.
It was on one such occasion that I had an epiphany in the psychology of everyday life. I’d just finished the last crisp of a large packet, and the thought occurred to me that I don’t actually like crisps that much. But there I was, covered in crumbs and post-binge guilt, saturated fats coursing through my body looking for nice arteries to settle down on. In an effort to distract myself from the urge to reach for another packet, I started to think about the peculiar psychology of the situation.
Every bite seemed essential, but in a way that seemed to suggest I was craving them rather than liking them. Fortunately for my confusion (and my arteries), there’s some solid neuroscience to explain how we can want something we don’t like.
Normally wanting and liking are tightly bound together. We want things we like and we like the things we want. But experiments by the University of Michigan’s Kent Berridge and colleagues show that this isn’t always the case. Wanting and liking are based on separate brain circuits and can be controlled independently.
To demonstrate this, Berridge used a method called “taste reactivity” – in effect, recording the faces animals pull when they are given different kinds of food. Give an adult human something sweet and they’ll lick their lips. This might sound obvious, but when you take it to the next level of detail and rigour you start to get a powerful system for telling how much an animal likes a particular type of food. Taste reactivity involves defining the reactions precisely – for example, lip-licking would be defined as “a mild rhythmic smacking, slight protrusions of the tongue, a relaxed expression accompanied sometimes by a slight upturn of the corners of the mouth” – and then looking for this same expression in other species. A baby human can’t tell you they like a taste the way an adult can, but you can see the same expression. A chimpanzee will do the same with a sweet taste. A rat won’t do exactly the same thing, but it does something similar. By carefully observing and coding the facial expressions that accompany nice and nasty tastes, you can tell what an animal is enjoying and what it isn’t.
This method is a breakthrough because it gives us another way of looking at how non-human species feel about things. Most animal psychology uses overt actions – things like pressing levers – as measures. So, for example, if you want to see how a reward affects a rat, you put it in a box with a lever and give it food each time it presses the lever. Sure enough, the rat will learn to press the lever once it learns that this produces food. Taste reactivity creates an additional measure, allowing us insight into how much the animal enjoys the food, as well as what the food makes it want to do.
From this, the neuroscientists have been able to show that wanting and liking are governed by separate circuits in the brain. The liking system is based in the subcortex, the part of our brain that is most similar to that of other species. Electrical stimulation here, in an area called the nucleus accumbens, is enough to cause pleasure. Sadly, you need brain surgery and implanted electrodes to experience this. But another way you can stimulate this bit of the brain is via the opioid chemical system, which is the brain messenger system directly affected by drugs like heroin. Like brain surgery, this is also NOT recommended.
Wanting happens in nearby, but distinct, circuits. These are more widely spread around the subcortex than the liking circuits, and use a different chemical messenger system, one based around a neurotransmitter called dopamine. Surprisingly, it is this circuit rather than the one for liking which seems to play a primary role in addiction. For addicts a key aspect of their condition is the way in which people, situations and things associated with drug taking become reminders of the drug that are impossible to ignore. Berridge has hypothesised that this is due to a drug’s direct effects on the wanting system. For addicts any reminder of drug taking triggers a neural cascade, which culminates in feelings of desire, but crucially, without needing any actual enjoyment of the drug to occur.
The reason wanting and liking circuits are so near each other is that they normally work closely together, ensuring you want what you like. But in addiction, the theory goes, the circuits can become uncoupled, so that you get extreme wanting without a corresponding increase in pleasure. Matching this, addicts are notable for enjoying the thing they are addicted to less than non-addicts. This is the opposite of most activities, where people who do the most are also the ones who enjoy it the most. (Most activities except another Christmas tradition, watching television, where you see the same pattern as with drug addictions – people who watch the most enjoy it the least).
So now you know what to do when you find yourself chomping your way through yet another packet of crisps over the holiday period. Watch your face and see if you are licking your lips. If you are, perhaps your liking circuits are fully engaged and you’ll be happy with what you’ve eaten when you’re finished. If there’s no lip-licking then perhaps your wanting circuits are in control and you need to exercise some self-restraint. Perhaps after the next mouthful, though.
My BBC Future column from a few days ago. The original is here. I’m donating the fee from this article to Wikipedia. Read the column and it should be obvious why. Perhaps you should too: donate.wikimedia.org.
We like to think our intelligence is self-made; it happens inside our heads, the product of our inner thoughts alone. But the rise of Google, Wikipedia and other online tools has made many people question the impact of these technologies on our brains. Is typing in the search term, “Who has played James Bond in the movies?” the same as knowing that the answer is Sean Connery, George Lazenby, Roger Moore, Timothy Dalton, Pierce Brosnan and Daniel Craig (… plus David Niven in Casino Royale)? Can we say we know the answer to this question when what we actually know is how to rapidly access the information?
I’ve written before about whether or not the internet is rewiring our brains, but here the question is about how we seek to define intelligence itself. And the answer appears to be an intriguing one. Because when you look at the evidence from psychological studies, it suggests that much of our intelligence comes from how we coordinate ourselves with other people and our environment.
An influential theory among psychologists is that we’re cognitive misers. This is the idea that we are reluctant to do mental work unless we have to – we try to avoid thinking things through fully when a short cut is available. If you’ve ever voted for the political candidate with the most honest smile, or chosen a restaurant based on how many people are already sitting in there, then you’ve been a cognitive miser. The theory explains why we’d much rather type a zipcode into a sat-nav device or Google Maps than memorise and recall the location of a venue – it’s so much easier to do so.
Research shows that people don’t tend to rely on their memories for things they can easily access. The world in front of our eyes, for example, can be changed quite radically without people noticing. Experiments have shown that buildings can somehow disappear from pictures we’re looking at, or the people we’re talking to can be switched with someone else, and often we won’t notice – a phenomenon called “change blindness”. This isn’t an example of human stupidity – far from it, in fact – it’s an example of mental efficiency. The mind relies on the world as a better record than memory, and usually that’s a good assumption.
As a result, philosophers have suggested that the mind is designed to spread itself out over the environment. So much so that, they suggest, thinking is really happening in the environment as much as it is happening in our brains. The philosopher Andy Clark called humans “natural born cyborgs”, beings with minds that naturally incorporate new tools, ideas and abilities. From Clark’s perspective, the route to a solution is not the issue – having the right tools to find the answer really does mean you know it, just as much as holding it in memory does.
A memory study by Daniel Wegner of Harvard University provides a neat example of this effect. Couples were asked to come into the lab to take a memorisation test. Half the couples were kept together, and half were reassigned to pair up with someone they didn’t know. Both groups then studied a list of words in silence, and were then tested individually. The pairs that were made up of a couple in a relationship could remember more items, both overall and as individuals.
What happened, according to Wegner, was that the couples in a relationship had a good understanding of their partners. Because of this they would tacitly divide up the work between them, so that, say, one partner would remember words to do with technology, assuming the other would remember the words to do with sports. In this way, each partner could concentrate on their strengths, and so individually they outperformed people in couples where no mental division of labour was possible. Just as you rely on a search engine for answers, so you can rely on people you deal with regularly to think about certain things, developing a shared system for committing items to memory and bringing them out again, what Wegner called “transactive memory”.
Having minds that work this way is one of the great strengths of the human species. Rather than being forced to rely on our own resources for everything, we can share our knowledge and so pool our understanding. Technology keeps track of things for individuals so we don’t have to, while large systems of knowledge serve the needs of society as a whole. I don’t know how a computer works, or how to grow broccoli, but that knowledge is out there and I get to benefit. And the internet provides even more potential to share this knowledge. Wikipedia is one of the best examples – an evolving store of the world’s knowledge from which everyone can benefit. I use Wikipedia every day, aware of all the caveats of doing so, because it supports me in all the thinking I do for things like this column.
So as well as having a physical environment – like the rooms or buildings we live or work in – we also have a mental environment. Which means that when I ask you where your mind is, you shouldn’t point toward the centre of your forehead. As research on areas like transactive memory shows, our minds are made up just as much by the people and tools around us as they are by the brain cells inside our skull.
ENDNOTE: Wikipedia is an unparalleled democratisation of knowledge, a wonderful sharing of human intelligence that’s free for anyone to view. I’m donating the fee for this article to help support Wikipedia’s work. If you feel you can help out please follow this link: https://donate.wikimedia.org.