better to light a candle?

She says: It’s better to light a candle than to curse the darkness

He says: I wouldn’t be so sure, maybe a candle would destroy your night-vision – without the candle your eyes could adjust to the lowered light levels (a process called adaptation, [Hack #26])

She says: But if you’re in total darkness, there’s no light at all to adjust to seeing

He says: Good point, so maybe it should be “It’s better to wait for a bit, then, if your eyes don’t adjust, you should light a candle rather than curse the darkness”

She says: How long do you have to wait until you know?

He says: Ah well, the cone receptors in the eye – which let you see colour – adapt fully after about 5 minutes. But it takes about 30 minutes for the rod receptors to fully adapt. These are the important ones for night vision, since they are specialised in detecting light or dark – which is presumably the fundamental information you are interested in.

She says: Okay. So it should be “It’s better to sit in the dark for up to 30 minutes doing nothing, then light a candle rather than curse the darkness”?!

He says: Oh, you don’t have to do nothing. Adaptation happens at the retina. You can prove this to yourself by adapting to the dark and then looking at a light with only one eye. One eye will adjust to the light, and the other (which you kept closed) will keep its dark adaptation. Now if you go back to darkness you can switch between being blind (in your light-adapted eye) and being able to see (in your other eye), just by opening and closing your eyes alternately. So, you can do anything you want with the rest of your brain; it shouldn’t matter.

She says: So talking would be okay?

He says: Talking would be fine. Or whistling.

She says: So “It’s better to wait in the dark to see if your eyes dark adapt (you can do anything you want while you’re waiting) and only then, if they don’t, light a candle rather than curse the darkness”

He says: You could even curse the darkness while you’re waiting and get it out of the way. And really a red light would be better than a candle, because red spectrum light doesn’t affect your dark adaptation (which is why cabin lights in aeroplanes and ships are red).

She says: “It’s better to wait in the dark to see if your eyes dark adapt (you can do anything you want while you’re waiting) and only then, if they don’t, light a candle rather than curse the darkness. But for preference use a red light rather than a candle”

He says: That’s it

She says: Snappy. I like it

He says: Someone should tell Amnesty

The one hundred most influential works in cognitive science

The Cognitive Science Society has voted on the one hundred most influential works in cognitive science from the 20th century. Although we have tended to refer to the contents of Mind Hacks as ‘cognitive neuroscience’, much of what we’ve written about is classic cognitive science material. It was this discipline that first aimed to use an information-processing view of mind to synthesise work in linguistics, artificial intelligence, ethology, biology and experimental psychology (and I’m sure a few others). The relevance to the more recent ‘cognitive neuroscience’, and to the spirit of Mind Hacks, should be obvious. So have a browse of the top 100. There are quite a few that get cited here and there in the book, and lots of other gems that might catch your interest.

‘A Genius Explains’

There was an interesting piece in last weekend’s Guardian (A Genius Explains) about a high-functioning autistic man who is also a savant (i.e. he has amazing intellectual abilities – he can recall pi to 22,514 decimal places, for example). Autistic savants are more common than non-autistic savants, but usually they aren’t able to explain quite so lucidly how they manage to do the things they do.

The article left me curious, and a little jealous (“It’s mental imagery”, he said. “It’s like maths without having to think.”), and makes me feel like we’re in for some interesting times ahead as research into savantism, synaesthesia, developmental cognitive neuroscience and mental imagery converges.

What you lookin’ at?

The eyes are the primary social signal. It’s the eyes we spend most of the time looking at (“To See, Act” [Hack #15]). Even when the other person is talking, we look most at the eyes, not the mouth. We use them to signal turn-taking in conversation, to read emotions from, like fear… and we use them to work out what another person is looking at.

It’s this – gaze perception – that I’ve been getting interested in. How accurately can we tell where someone is looking? How accurately can we tell if someone is looking at us, or not? I’ve been looking out for some actual figures here, basic parameters on how small a difference we can detect in where someone is looking, either when they are looking at us, or at someone else.

Obviously, being able to answer this question with actual parameters would have all sorts of implications. For, say, the design & manipulation of pictures showing people looking at things, for VR interfaces and, also, I guess it might give a better idea of when someone can tell I’m looking at them, and when they just can’t know for sure that I am. You know, just as a sort of side benefit…

Continue reading “What you lookin’ at?”

New Scientist review

New Scientist has reviewed Mind Hacks. Which is nice. I’m pleased they picked up on all the links and references we give if you want to explore the phenomena further. Like another (very favourable) review said:

“Mind Hacks” is helpfully structured to take you just as deep as you want to go.

The same review also contains this interesting suggestion:

[Mind Hacks] is totally overflowing with examples and simple exercises — the “hacks” — that you can do by yourself or with friends. Better yet, buy the book and give a “Mind Hacks” party! Ask your guests to open the book randomly, exclaim on the particular mental characteristic explained on that page, and then put everyone through the exercise or group discussion implied.

If you do have a Mind Hacks party and manage to get a group of people all doing one of the demos (I think some of the mood induction ones like “Make Yourself Happy” [Hack #95] would serve well for this) then make sure you take pictures and let us know how it goes!

Size and selection times: Fitts’s Law

Oo Oo – just when I thought I was settling down to do some of the work I’m actually paid to do, I discovered a bit of psychology that is relevant to interaction design: did you know that the time it takes you to point your mouse, or your finger, at something is predictable from the size and distance of the target, using an equation known as Fitts’s Law?

Nope, neither did I until today. But if you apply it right, it shows how you can get a big gain in how quick and easy it is to select something with just a small change in the selection interface.
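To make the relationship concrete, here’s a minimal sketch of the commonly used Shannon formulation of Fitts’s Law, MT = a + b·log2(D/W + 1), where D is distance to the target and W is its width. The constants a and b here are purely illustrative assumptions; in real interface work they are fitted empirically for a given device and user.

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) via the Shannon formulation
    of Fitts's Law: MT = a + b * log2(D/W + 1).
    a and b are illustrative placeholders, not measured constants."""
    return a + b * math.log2(distance / width + 1)

# A small, distant target takes longer to hit than a wide one
# at the same distance, because widening the target lowers its
# "index of difficulty" (the log2 term):
t_small = fitts_time(distance=800, width=16)   # 16px icon, far away
t_wide = fitts_time(distance=800, width=64)    # same distance, 4x wider
print(t_small > t_wide)
```

This is why tricks like putting targets at screen edges (effectively infinite width, since the pointer stops there) make things so much faster to select.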

Continue reading “Size and selection times: Fitts’s Law”

Oxford Companion To The Mind, 2nd Edition

The second edition of The Oxford Companion to the Mind has been published and I didn’t even notice. It’s been ten years since the first edition, and I’m sure that for the second edition editor Richard Gregory has preserved and nurtured all the breadth and good humour of the first. The book has its own site here, along with some sample PDFs of entries on everything from tickling to memes to attachment theory. This book will keep you company with wit and information as you explore all the myriad shores that make up psychological science. At £40 it’s not cheap, but if you’ve got the money spare it is truly worth it.

Ones to watch

Two blogs I’ve just discovered and will be keeping an eye on are Mixing Memory (who has recently done an excellent post on time perception, in two parts!) and Circadiana, which has just started and promises:

‘This blog will be dedicated to tracking and commenting on the advances in the study of biological time, mainly circadian rhythms, but also other aspects of temporal biology, e.g., developmental timing.’

And to whet your appetite there is this post: Everything You Always Wanted To Know About Sleep (But Were Too Afraid To Ask)

Hacking Consciousness

Susan Greenfield was on BBC Radio 4’s Today programme this morning, talking about a new ‘centre for the mind’ at Oxford (apologies if I’ve got the exact name wrong, but I can’t find a web reference) which she will be directing. The centre will carry out cross-disciplinary research into topics like consciousness, and Prof. Greenfield has some well-put things to say about the whole topic – you can hear her again here.

Cross-disciplinary studies of consciousness must be a good thing – in the book see “Talk To Yourself” [Hack #61] for an example of some good work done by a philosopher (Peter Carruthers at The University of Maryland), based on the work of psychologists (most notably Elizabeth Spelke at Harvard).

One phrase Susan Greenfield used a couple of times jumped out at me: ‘hacking’! “You can’t just hack into someone’s consciousness”, she said. Well, maybe not in the sense she meant it….

No uniqueness in the speed of the brain’s evolution?

Reports (e.g.) of genetic evidence that the human brain evolved unusually fast may be exaggerated – see this very thorough post at Language Log (thanks to Cosma for the heads up).

This quote seems pretty typical of the media reports:

Humans went into evolutionary overdrive as their brains developed, sending them on a path that set them apart from other animals, scientists believe.

And you can understand the general yearning for signs of human uniqueness. Despite this, there is no structure, or chemical, in the human brain that isn’t found in other species – and, it seems, even the pace of genetic change associated with human brain evolution isn’t unprecedentedly fast (Language Log cites a cell adhesion protein in the zebrafish and the SARS virus as just a couple of examples of higher rates of change).

Continue reading “No uniqueness in the speed of the brain’s evolution?”

Eyes wide with fear


Here’s another story related to Vaughan’s post of a couple of days ago about the amygdala and fear perception.

A brain imaging study reported in the journal Science [1] found that showing the silhouettes of fearful eyes for just 17 milliseconds was enough to increase activity in the amygdalae of human subjects – the effect is something like just seeing the whites of someone’s eyes in the dark (as shown in the picture, along with the comparison condition – the silhouette of the eyes of someone showing a happy expression).

Two things struck me about this. The first, obviously, is how brief the exposure is. If you are shown something for 17ms you will probably be unable to tell that you’ve been shown anything at all (you might see a flash); you certainly won’t be able to tell what it is. In this study the 17ms picture of eyes was immediately followed by a picture of a normal, expressionless face – which makes perceiving the eye silhouettes even harder (and, indeed, none of the participants in the experiment reported noticing anything unusual).

But their brains did. The amygdala was already ramping up, ready to signal ‘be afraid’ to the rest of the brain. And this to something that isn’t actually scary in itself – but a social signal that there is something to be afraid of nearby. Social and emotional information is being priority-routed through the brain’s processing streams.

Continue reading “Eyes wide with fear”


A reader writes:

I’ve recently discovered that I can play a video game while listening to spoken word audio (podcasts).

The game, AntiGrav, uses the body (via a cam whose input is interpreted as movements). It’s physically demanding and requires quick visual recognition and response – i.e. flailing arms about and generally looking like an idiot. Terrific game.

The podcasts on the other hand are fairly intellectually engaging. However, I find that I cannot just sit and listen to them… I need to be doing something else. I can’t do programming work or read blogs/web pages, because I get overwhelmed by the two language-based inputs.

So I’m able to turn off the game music / effects and listen while playing, and do as well as I would listening to the game soundtrack.

This seems a surprising result, and I gather that they use different parts of the brain. Care to comment?

Good question – it is a little surprising that you can do both at once. I think the answer is not so much that they involve different input modalities (one visual, one auditory), but that the two tasks involve different types of processing which do not require a change of the ‘representational code’ between input and output.

Continue reading “Multi-tasking”

Cultivated Perception

Lots of psychology isn’t rocket science – it’s not exactly stuff you couldn’t have figured out yourself if you’d thought about it for long enough. Often the conclusions from some area of investigation are explained to you and you think ‘Well, hey, that’s obvious’. And of course there’s an argument that true answers often should be obvious, once you’ve been told them.

One of the things I hoped we could do with Mind Hacks was give people frameworks for looking at how our minds work, and how we interact with the environment, so that it becomes easier to spot the obvious in advance. After all, we all have minds, so we all have access to the raw data to draw the conclusions – it’s just that there are many things you don’t notice until you’ve learnt to see them. (Until someone stops me I’m going to call this ‘cultivated perception’.)

So, I should be working on designing a questionnaire (a sign that I committed grievous sins in a past life?) and I noticed how I could improve it with a little lesson from Chapter 8 of the book.

Continue reading “Cultivated Perception”

The Social Yawn


All animals yawn, and in humans yawning seems to be contagious. Seeing another person yawn, or even just reading about yawning, can make you yawn. (We talk about unconscious imitation in chapter 10 of the book.) James Anderson from the University of Stirling gave a lecture in Sheffield last week about yawning – in the introduction he told us that when he lectures on yawning lots of people in the audience, well, yawn. But his talk was only yawn-inducing in the social-contagion sense.

Yawning, it seems to me, may provide us with a paradigm case of an automatic behaviour that, moving along the phylogenetic scale, has become co-opted into a quasi-voluntary social signal.

Continue reading “The Social Yawn”

Hack #102 : Alter Input With Expectations

This is a hack which never made it into the book, but we thought it worth sharing. At this point, to get the most out of this hack, look at this figure (in a pop-up window) quickly before reading on. It’s not important to try and work out what it is, but have a good look. Seen it? Now, without further ado…

Hack #102: Alter Input With Expectations


The balance between feed-forward and feed-back connections in the brain gives a clue to the balance between raw sensation and expectations in constructing experience.

Feedback is ubiquitous in the brain. The brain is not just massively parallel [Hack #52], it is also massively interconnected – an awesomely complex cybernetic system.

Continue reading “Hack #102 : Alter Input With Expectations”