Faces, faces everywhere

The New York Times has a brief article on why we have a tendency to see faces in chaotic or almost random visual scenes.

The tendency to see meaning in essentially random data is variously known as apophenia or pareidolia, and statistically would be known as a Type I error – a false positive.

Although it is controversial whether it is specifically dedicated to recognising faces, an area of the brain known as the fusiform gyrus is certainly heavily involved in perceiving faces.

The fact that this area is so specialised for faces might lead us to detect faces even when they are only suggested by a few dots, the position of clouds or the markings on just about anything.

“The information faces convey is so rich – not just regarding another person’s identity, but also their mental state, health and other factors,” he said. “It’s extremely beneficial for the brain to become good at the task of face recognition and not to be very strict in its inclusion criteria. The cost of missing a face is higher than the cost of declaring a nonface to be a face.”

There’s a great web page with pictures of ‘cloud faces’ if you want to see how spectacular some of these effects can be.

Link to NYT article ‘Faces, faces everywhere’.

Beauty and the average girl

Flickr user Pierre Tourigny has created a series of composite images from popular portrait rating website Hot or Not? that nicely demonstrates our bias for perceiving average faces as beautiful.

He’s made average images from a series of female faces but divided them up into the scoring categories, so there’s an average of faces rated 5 to 5.4, 5.5 to 5.9 and so on.

The average image of the highest rated faces, and an average of faces from all rating categories are shown on the left, although the whole range is on Tourigny’s Flickr page.

If you do have a look at the full series, you’ll notice that the overall average face seems more attractive than the composite face created from images rated in the average range (5-5.4).

Previously on Mind Hacks, we reported on research that suggested that faces created from the average of many others possibly seem more beautiful because they’re easier for the brain to process.

This may be because our brain does a similar averaging process to create a ‘face template’ which we use during face recognition.

Faces that deviate least from this template are easier to match and, therefore, tend to be seen as more attractive.

This, of course, is not the complete story as cultural ideas of what is considered beautiful and perhaps even specific ways in which a face could differ from the ‘template’ might also contribute to our subjective perception of beauty.

Link to Pierre Tourigny’s ‘Average Face Scale’.
Link to previous post on facial attractiveness perception on Mind Hacks.

The colourful world of naming and knowing

The Economist has a short article on two recent studies which have examined the theory that our ability to perceive colours is influenced by the way a language labels different hues.

The general idea that language shapes our thoughts and experience is known as the Sapir-Whorf hypothesis.

For example, some languages don’t have separate names for green and blue, and so this theory might predict that speakers of these languages would be less able to distinguish between the colours.

A weaker prediction might be that speakers of these languages can distinguish between what we label as green and blue, but wouldn’t necessarily make the division at the same point in the colour spectrum as English speakers typically do.

The Economist article discusses two recent experiments which have tested this idea, both in quite ingenious ways – suggesting that colour perception may indeed be influenced by colour naming.

Link to Economist article ‘How grue is your valley?’.

Fading faces

Wired Magazine has an article on a curious condition known as prosopagnosia where affected individuals cannot recognise people by their faces, despite being able to recognise and distinguish everyday objects with little trouble.

Until recently, it was thought that the condition only arose after brain injury – usually because of damage to an area of the brain known as the fusiform gyrus. This area is known to be heavily involved in face recognition.

More recently, an inherited form has been reported, suggesting that some people are simply born with particularly bad face recognition skills.

The article looks at the work of neuropsychologist Dr Bradley Duchaine who is investigating the psychology and neuroscience of face recognition impairment, and discusses the experience of several people who have the condition.

One of the people is Bill Choisser, who created ‘Face Blind!’, one of the first and longest-running prosopagnosia websites on the net.

A particularly striking feature of his site is a self-published book which is an in-depth discussion of the condition and its effects.

Link to Wired article ‘Face Blind’.
Link to Bradley Duchaine’s page with copies of his scientific papers.
Link to Bill Choisser’s website on prosopagnosia.

Mechanical brain sculptures

Neurofuture is back with a bang after a late-summer sabbatical and has alerted me to some wonderful mechanical brain sculptures by artist Lewis Tardy.

Tardy has created a range of mechanical people and beasts all rendered as if they were powered by complex clockwork and hydraulics.

Some of these include cut-away heads, such as the one featured, with the thinking mechanisms exposed for the world to inspect.

Link to ‘Mechanical brains’ on Neurofuture.
Link to Lewis Tardy’s website.

Average girls are hot

Seed Magazine has an article on recent research published in Psychological Science that suggests that average faces are more attractive because they are easier for the brain to process.

The image on the right (go to the article for a bigger version) is a composite of a number of different female faces rated as attractive.

However, an average of all sorts of faces also tends to be attractive, as demonstrated by a page at the University of Regensburg (which also has an image of a hot average man).

In the Psychological Science article (pdf) the research team, led by Prof Piotr Winkielman, asked people to judge the attractiveness of shapes and dot patterns. Participants were more likely to judge the most average patterns as attractive.

In a further experiment, they used the same technique for faces and found the same result.

The researchers argue that the reason we prefer average faces is because the brain creates an idea of a ‘prototype’ face, based on the average of all the faces we have seen. Attractive faces are the ones that best match this prototype because they require less processing to match and recognise.
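The averaging step itself is simple to sketch. Below is a minimal, hypothetical illustration in NumPy (not the researchers’ actual method, which would align facial landmarks before blending): a composite is just the per-pixel mean of a stack of equally sized images.

```python
import numpy as np

def average_faces(faces):
    """Per-pixel mean of a stack of equally sized greyscale images."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in faces])
    return stack.mean(axis=0)

# Toy example: three tiny 2x2 'faces' (pixel values 0-255)
faces = [np.array([[0, 255], [255, 0]]),
         np.array([[255, 0], [0, 255]]),
         np.array([[128, 128], [128, 128]])]
composite = average_faces(faces)
# every pixel of the composite is (0 + 255 + 128) / 3
```

The more faces that go into the stack, the more the idiosyncrasies of any one face wash out – which is exactly why composites drift towards a smooth ‘prototype’.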

Link to Seed Magazine article.
Link to facial beauty research lab of Uni Regensburg (great examples).
pdf of research paper.

From sci-fi footnote to cutting-edge vision science

There’s a fascinating letter in today’s Nature about how a footnote in one of Fred Hoyle’s science fiction novels inspired a branch of research in vision science on how the brain estimates when moving objects will arrive at a certain point.

The characters in the book discover an ominous black cloud that appears to be heading towards Earth. Will the cloud hit Earth and, if so, when? The first question is solved when the characters examine the relative speed at which the cloud is translating across the night sky to the rate at which it is looming, or seeming to get larger. The second question is tackled with a bit of impromptu algebra in which the time until impact is calculated from the ratio of the current size of the cloud to its rate of change…

David Lee realized in the 1970s that the brain can use the ratio of size to its rate of change, previously identified by Hoyle, to estimate the imminence of arrival. David Regan realized soon afterwards that the brain can use the ratio of lateral speed to looming rate to calculate where an object is travelling….

Since the early work of Lee and Regan, a considerable amount of research in areas including psychophysics, motor action, neurophysiology and computational modelling has followed (see D. Regan and R. Gray Trends in Cognitive Sciences 4, 99–107; 2000). The whole body of work that exists today can be traced back to a casual footnote and a couple of sketches in a science-fiction novel.
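Hoyle’s ratio is easy to verify with a little algebra. For an object of physical size S approaching at constant speed v from distance d, its angular size is roughly S/d and its looming rate is S·v/d², so the ratio of the two gives d/v – exactly the time remaining until contact. A toy sketch (the variable names are mine, not from the letter):

```python
def tau(theta, theta_dot):
    """Hoyle's ratio: angular size over its rate of change."""
    return theta / theta_dot

# Constant-speed approach: object of size S at distance d, closing at v
S, d, v = 10.0, 200.0, 20.0       # arbitrary units
theta = S / d                     # small-angle approximation to angular size
theta_dot = S * v / d ** 2        # rate of change of S/d as d shrinks at v
time_to_contact = tau(theta, theta_dot)
# time_to_contact equals d / v, the true time until impact
```

Note that the observer never needs to know S, d or v individually – the ratio of two retinal quantities is enough, which is what makes it plausible as a brain mechanism.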

Fred Hoyle was a professional astronomer at Cambridge University, so he knew plenty about mathematics, but he also wrote a number of notable science fiction novels during his lifetime.

The full letter is freely available at the following link.

Link to Nature letter ‘Hoyle’s observations were right on the ball’

After-effect illusions

There’s an illusion popular on youtube.com right now here. Have a look – it’s a motion after-effect illusion. These are discussed in the book (Hack #25). The basic story is the same for all after-effects – continuous exposure to something causes a shift in sensitivity. For continuous motion this means that the visual system shifts its baseline so that, subsequently, stillness looks like movement in the opposite direction to the adapted-to direction. The nice thing about this demo is that it shows that you can have separate motion after-effects in different parts of your visual field. My top tip is to look at your hand at the end of the video for an extra-weirdness effect.

Also today someone asked me how the moving green dot illusion works. Answer: again, I think, it is an after-effect. The purple dots create a colour after-effect, a green dot. All the separate after-effects are joined together by the phi-phenomenon (Hack #27) to give an illusion of one single, moving, green dot.

To understand why we get after-effects, check out Hack #26 (‘Get Adjusted’). Which makes this post the biggest plug for the book I’ve done in a long while!

3D rooms

Perception is a fundamentally underconstrained problem. You get information in through your senses, but not enough information to be absolutely sure of what is causing those sensations. A good example is perception of depth in vision. You get a pattern of light falling on your retinas (retinae?), in two dimensions, and from that you infer a three dimensional world, using various clever calculations of the visual system and some assumptions about what is likely. But because the process remains fundamentally underconstrained, there is always the possibility that you will see something that isn’t really there – that is, your visual system will take in a pattern of information and decide that it is more likely to be produced by a scenario different from the real one.

Which is all a long-winded way of saying: “Look, cool! Illusion rooms!” (thanks Yalda)

3d_room_01.jpg

They’re painted so that, from one particular angle, the shapes line up and your visual system flips into thinking it can see a flat 2D pattern when the reality is a disjoint 3D one. Awesome.

There’s plenty more here

Continue reading “3D rooms”

Misunderstanding mirrors

If I asked you to draw a full-size outline of your head on a flip chart, and then to draw the outline of your head as it appears in the mirror, would you draw the two outlines the same size? You shouldn’t, because the mirror image of your head (as it appears to you) is exactly half its true size, irrespective of how far you are from the mirror – a fact that few people realise. That’s according to a new study published in Cognition by Marco Bertamini and Theodore Parks at the Universities of Liverpool and California.

They also found that most people believe the mirror image of their own head will grow smaller as they move away from the mirror – it doesn’t, it stays the same. Yet most participants correctly realised that if they watched the mirror image of another person’s head, it would get smaller as that other person moved away from the mirror. Finally, only a minority of participants realised that the size of the mirror image of another person’s head would get bigger as they, the participant, moved away from the mirror. Confused? Me too.
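The half-size result falls out of similar triangles: your virtual image sits as far behind the mirror as you stand in front of it, so sight-lines from your eye to the image cross the mirror plane exactly halfway, scaling the outline by one half whatever your distance. A toy calculation (my own illustration, not from the paper):

```python
def outline_on_mirror(head_height, d):
    """Height of your head's outline traced on the mirror surface.

    Your virtual image sits a distance d behind the mirror, i.e. 2*d
    from your eye, so sight-lines to it cross the mirror plane halfway:
    the outline is scaled by d / (2*d) = 0.5, whatever d is.
    """
    image_distance = 2 * d
    return head_height * (d / image_distance)

# A 24 cm head viewed from 0.5 m, 1 m and 3 m: always a 12 cm outline
for d in (0.5, 1.0, 3.0):
    print(outline_on_mirror(0.24, d))
```

The distance d cancels out of the ratio entirely, which is why the outline on the glass never changes size however far you step back.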

Link to study abstract

Giant Squid – woah!

giantsquid.jpg

The giant squid has the largest eye in the natural world. Although squid eyes evolved on a separate branch of the tangled bank of life, they are remarkably like ours, except that they don’t have the blind spot that human eyes have (Hack #16). This picture is from a book ‘Extreme Nature’ by Mark Carwardine (which the Guardian Weekend ran a piece on two weeks ago). This immature female is 17 feet long, but they go up to 49 feet apparently.

Photo from from here, some more on Giant Squid here

Hack #103: See more with your eyes closed

A reader writes (thanks nick!)

Not gonna impress any girls with this one, but… I was looking at my mother’s ceiling fan the other day trying to determine how many blades it had. It was on its highest setting so it was nearly impossible to do. Until I blinked. If you blink rapidly, it disrupts the brain’s attempts at connecting frames of sight into continuous motion. Thus a whirling blur becomes a clear frame of sight, easily analyzed. Not sure where else this little trick could pay off. A nice illustration of the characteristics of our visual systems though.

Cool. Freed from the constraint of having to make sense of continuous input, your visual system can make sense of the single ‘frame’ of input it does have. An example of less is more? I noticed something similar when riding my bike. When I glance down at the front wheel, it appears blurred. But when I look back at the road, my visual system delivers me a snapshot of the wheel, unblurred. What is happening – I’m guessing – is that as I move from looking at the wheel to the road ahead there is a moment of saccadic suppression [Hack #17] when visual input is cut off. Into this gap the ‘frame’ of the wheel is resolved. Also lending a hand may be a neural mechanism which turns off saccadic suppression if the velocity of the eyes matches that of a moving object (with your eyes stationary a moving object is blurred, with your eyes moving a stationary object is blurred, but if your eyes move at the same speed as an object you can get a clear image). For this to work the object needs to be nicely textured, so your low-level visual apparatus can gauge its velocity. Which explains why I get the effect on my mountain bike, which has big treads on the tyres, but not on a road bike, which has smooth tyres.

Changing diet might allow you to see infrared

Thanks to Eric Lundquis for typing this up and putting it on the internet. It’s an experiment done by the army and cited by Rubin, M. L., and Walls, G. L. (1969). Fundamentals of visual science. Springfield, Ill.: Thomas, p. 546, which is in turn cited by Sekuler, R., and Blake, R. (1994). Perception (3rd ed.). Springfield, Ill.: Thomas, pp. 62-63:

The following story dramatizes how photopigments determine what one can see. During World War II, the United States Navy wanted its sailors to be able to see infrared signal lights that would be invisible to the enemy. Normally, it is impossible to see infrared radiation because, as pointed out earlier, the wavelengths are too long for human photopigments. In order for humans to see infrared, the spectral sensitivity of some human photopigment would have to be changed. Vision scientists knew that retinal, the derivative of vitamin A, was part of every photopigment molecule and that various forms of vitamin A existed. If the retina could be encouraged to use some alternative form of vitamin A in its manufacture of photopigments, the spectral sensitivity of those photopigments would be abnormal, perhaps extending into infrared radiation. Human volunteers were fed diets rich in an alternative form of vitamin A but deficient in the usual form. Over several months, the volunteers’ vision changed, giving them greater sensitivity to light of longer wavelengths. Though the experiment seemed to be working, it was aborted. The development of the “snooperscope,” an electronic device for seeing infrared radiation, made continuation of the experiment unnecessary (Rubin and Walls, 1969). Still, the experiment demonstrates that photopigments select what one can see; changing those photopigments would change one’s vision.

Another look at mindsight

Last year, psychologist Ronald Rensink at the University of British Columbia proposed that some people have an alternative mode of visual experience – one that involves sensing but not ‘seeing’ – what Rensink dubbed ‘mindsight’. Now his claims have been forcefully rebutted by Daniel Simons and colleagues who argue it’s far more mundane than that: it’s all to do with how cautious people are in deciding whether or not they’ve seen something.

Rensink had performed a kind of change blindness experiment (see Hack #40) that involved participants reporting when they spotted a subtle change between two pictures. He invited participants to press one key when they ‘sensed’ a change between the pictures and to press another key only when they could ‘see’ the change and knew where and what it was. Rensink reported in Psychological Science that a subset of participants (30%) showed evidence of what he dubbed ‘mindsight’: on a minority of trials they would report sensing the change at least a second earlier than they reported seeing it. “This mode of perception involves a conscious (or mental) experience without an accompanying visual experience”, Rensink explained. “The results presented here point towards a new mode of perceptual processing, one that is likely to provide new perspectives on the way that we experience our world”, he said.

But in this month’s issue of Psychological Science, Daniel Simons and colleagues at the University of Illinois dismiss Rensink’s findings. “Provocative claims merit rigorous scrutiny”, they said. “We rebut the existence of a mindsight mechanism by replicating Rensink’s core findings and arguing for a more mundane explanation…”.

Continue reading “Another look at mindsight”