Technology Review has an article on using humans as part of a digital face recognition system. Uniquely, you don’t have to take part in any deliberate recognition task: the system uses electrical readings to automatically measure the response of the brain – even if you’re not aware of it.
The system, developed by Microsoft Research, takes advantage of the fact that when we see something we recognise as a face, a specific electrical signal is generated by face-perception brain activity that can be picked up by electrodes.
Crucially, this brain activity happens automatically; we don’t have to make a special effort.
Last year, I wrote an article entitled ‘Hijacking Intelligence‘, noting that software is increasingly being designed to use humans as ‘biological subroutines’ for the things computers find most difficult.
Labelling pictures is one such task – it’s something humans find trivial, computers find difficult, and it’s needed in large numbers to create an index for image searches.
To get round this problem, Google designed an online game that involved labelling pictures. Humans play for fun, while Google gets the benefit of your intelligence for its database.
This new system takes it a step further, as you don’t have to be doing anything related to the task for it to take advantage of your ‘mental work’.
For example, a picture could flash up every time you hit save on a word processor, or every time you look at a certain website.
Each time your brain signals that you’ve seen a face, the system reads your recognition activity and sends it back to the main database to classify the image.
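The article doesn’t spell out the signal processing, but a minimal sketch of how such a classifier might work is easy to imagine: time-lock a short EEG epoch to each image onset, baseline-correct it, and flag the image as containing a face if the signal shows a face-sensitive negative deflection in roughly the 140–200 ms window (the N170 is one candidate component). Everything below – the sampling rate, the window, the threshold, and the function name – is a hypothetical illustration, not the researchers’ actual pipeline.

```python
import numpy as np

SAMPLE_RATE_HZ = 250          # assumed EEG sampling rate (hypothetical)
N170_WINDOW_S = (0.14, 0.20)  # ~140-200 ms post-stimulus, where a face-
                              # sensitive negative deflection typically appears
THRESHOLD_UV = -4.0           # hypothetical amplitude cutoff in microvolts

def epoch_contains_face(epoch_uv, baseline_s=0.1):
    """Return True if an EEG epoch shows an N170-like negative deflection.

    epoch_uv: 1-D array of voltages (microvolts) from a single electrode,
    starting `baseline_s` seconds before the image appeared on screen.
    """
    onset = int(baseline_s * SAMPLE_RATE_HZ)
    # Baseline-correct: subtract the mean pre-stimulus voltage
    corrected = epoch_uv - epoch_uv[:onset].mean()
    lo = onset + int(N170_WINDOW_S[0] * SAMPLE_RATE_HZ)
    hi = onset + int(N170_WINDOW_S[1] * SAMPLE_RATE_HZ)
    # Flag a face when the mean voltage in the window dips below
    # the (negative) threshold
    return corrected[lo:hi].mean() < THRESHOLD_UV
```

In a real system one epoch would be far too noisy to trust; the same image would be shown to many viewers (or many times) and the votes pooled before anything was written back to the database.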
This might be one way of sifting through security images to see which should be inspected in more detail.
As a substitute for advertising, maybe you’d be offered free internet access if you had the system installed. Your brain would pay the bills.
While the system has only been developed as a proof-of-concept, it’s interesting, if a little scary, to speculate how technology will harness our mental skills, even when we’re not aware of it.
Link to Technology Review article ‘Human-Aided Computing’.
Word has it that the same is being done by the military for the detection of conspicuous images in satellite data, using imagery experts. Given the debates about the FFA as either a face area or expertise area, I wonder whether they’re detecting the same waveforms.
The sort of thing the authors are talking about seems entirely impractical. To get it to work, you need to have the viewer wearing an EEG cap, which presumably is attached through wires to a computer, and depending on the setup you may also have to get wet gunk in their hair. This is clearly not something where they can take advantage of spare moments of people’s time in day-to-day life. And if you’re going to go to the trouble of getting volunteers to sit at your workstation with an EEG cap on, it would probably be a lot easier for both you and the volunteer if s/he just classified an image as face or non-face with a keypress. So until the days when everyone has wireless microelectrodes implanted in their brains, this stuff doesn’t seem all that useful.
chris: Most likely they are using the N170 waveform to indicate the presence of faces. The N170 is supposed to be specific to face detection, and I’m not aware of any controversy over its functional correlates analogous to the controversy over FFA.