Your ears emit sounds

There’s a fascinating article on the evolution of hearing in The Scientist that also contains an interesting gem – your ears produce sounds as well as perceiving them.

In addition to amplifying hair-cell activity, these active mechanisms manifest as spontaneous movements of the hearing organ, oscillating even in the absence of sound stimuli. Such spontaneous movements actually produce sound that is emitted through the middle ear to the outside world and can be measured in the ear canal.

I wondered whether this only applied to non-human animals – it’s not clear from the text – but a brief search brings up various studies on spontaneous otoacoustic emissions in humans. For example:

Spontaneous otoacoustic emissions were evaluated in 36 female and 40 male subjects. In agreement with the results of previous surveys, emissions were found to be more prevalent in female subjects and there was a tendency for the male subjects to have fewer emissions in their left ears.

It also turns out that spontaneous otoacoustic emissions are used to test hearing in newborn babies.

It’s an interesting problem, because hearing is normally tested by playing people various sounds, or silence, at certain time points; they have to signal whether they think they heard a sound, and correct decisions are counted.
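For the curious, here’s a minimal sketch, with entirely made-up trial data, of how such a yes/no detection task can be scored using the standard signal-detection measure d′ (a higher value means the listener distinguishes sound from silence more reliably):

```python
# Toy scoring of a yes/no hearing test: on each trial a tone is either
# played or not, the listener reports whether they heard it, and
# sensitivity is summarised as d' (hit rate vs false-alarm rate).
from statistics import NormalDist

# Hypothetical trial records: (tone_present, listener_said_yes)
trials = [
    (True, True), (True, True), (True, False), (True, True),
    (False, False), (False, True), (False, False), (False, False),
]

hits = sum(p and y for p, y in trials)
misses = sum(p and not y for p, y in trials)
false_alarms = sum(y and not p for p, y in trials)
correct_rejections = sum(not p and not y for p, y in trials)

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)

# d' is the separation between the noise and signal distributions,
# recovered from the two rates via the inverse normal CDF.
z = NormalDist().inv_cdf
print(f"hit rate {hit_rate:.2f}, false alarms {fa_rate:.2f}, "
      f"d' = {z(hit_rate) - z(fa_rate):.2f}")
```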

This clearly doesn’t work with babies, but one way of testing hearing is to measure the nerve responses in the pathways that connect to the auditory brainstem.

This training video shows how it’s done, but in the last section there’s a test that relies on detecting the ‘echoes’ created by otoacoustic emissions when a tone is played into the ear.

Link to The Scientist article on the evolution of hearing.

Hearing WiFi

New Scientist has a fascinating article on Frank Swain who has hacked his hearing aid to allow him to hear WiFi.

It’s a great idea and riffs on various attempts to ‘extend’ perception into the realm of being able to sense the usually unnoticed electromagnetic environment.

I am walking through my north London neighbourhood on an unseasonably warm day in late autumn. I can hear birds tweeting in the trees, traffic prowling the back roads, children playing in gardens and Wi-Fi leaching from their homes. Against the familiar sounds of suburban life, it is somehow incongruous and appropriate at the same time.

As I approach Turnpike Lane tube station and descend to the underground platform, I catch the now familiar gurgle of the public Wi-Fi hub, as well as the staff network beside it. On board the train, these sounds fade into silence as we burrow into the tunnels leading to central London.

I have been able to hear these fields since last week. This wasn’t the result of a sudden mutation or years of transcendental meditation, but an upgrade to my hearing aids. With a grant from Nesta, the UK innovation charity, sound artist Daniel Jones and I built Phantom Terrains, an experimental tool for making Wi-Fi fields audible.

Do also check out the fantastic radio documentary by Swain that we featured earlier this year, a brilliant auditory journey into the physics and hacking of hearing and hearing loss.

Link to NewSci article ‘From under-hearing to ultra-hearing’

From under-hearing to ultra-hearing

The BBC World Service has a fascinating radio programme on hearing loss and how it’s spurring the move towards auditory enhancement technology for everybody.

The documentary, called Hack My Hearing, was created by science writer Frank Swain, who is losing his hearing. He explores different forms of hearing disturbance and looks at technologies that aim to enhance hearing, and at how they might soon provide ‘superhuman’ auditory abilities.

One of the best things about the documentary is that it has been brilliantly engineered so you can experience what the various forms of hearing loss and hearing enhancement discussed in the programme actually sound like.

It is definitely one to be listened to on headphones and it sounds wonderful.

Sadly, it’s only available as streamed audio at the moment, but you can listen to the full programme at the link below.

Update: The programme is now also available from the BBC as a podcast – downloadable directly as an mp3.


Link to Hack My Hearing streamed audio.

Listening for the voices of the dead

I’ve got an article in The Observer about our tendency to perceive meaning where there is none and how this inadvertently popped up in one of the strangest episodes in the history of psychology.

The article discusses the work of psychologist Konstantīns Raudive, who began to believe that he could hear the voices of the dead amid the hiss of radio static – after, it must be said, much re-recording and amplification of the samples.

He wrote a 1971 book called Breakthrough in which he explained his technique, and which was even accompanied by a flexidisc of not very convincing examples of the dead speaking through noise. You can listen to it on YouTube if you’re so inclined.

He gained widespread media attention, but subsequent scientific studies found that everyone was hearing something different amid the static, making it one of the best-known examples of illusory meaning, or pareidolia, of its time.

The experience of illusory meaning has since been widely studied for its relationship to magical thinking and hallucination, and has even recently been deployed as a practical tool for the assessment of dementia.

More in the full article at the link below. It’s been given a somewhat odd title but hopefully it should be fairly self-explanatory.

Link to Observer article on illusory meaning.

With every language, a personality?

The medieval emperor Charlemagne famously said that “to have another language is to possess a second soul”, but the idea that we express different personality traits when we speak another language has usually remained anecdotal.

But The Economist takes this a step further and examines the science behind this idea – which may have more weight than we might first think.

It looks at the issue from lots of intriguing angles. Perhaps the most obvious is that bilingual speakers may have different associations with each language – for example, home and work – and so come to associate different sorts of social behaviours with each.

One of the most interesting is how different language structures might allow for different behaviours, although a grammatical explanation for the Greeks’ tendency to interrupt during conversation is given short shrift:

Is there something intrinsic to the Greek language that encourages Greeks to interrupt?…

In this case, Ms Chalari, a scholar, at least proposed a specific and plausible line of causation from grammar to personality: in Greek, the verb comes first, and it carries a lot of information, hence easy interrupting. The problem is that many unrelated languages all around the world put the verb at the beginning of sentences. Many languages all around the world are heavily inflected, encoding lots of information in verbs. It would be a striking finding if all of these unrelated languages had speakers more prone to interrupting each other. Welsh, for example, is also both verb-first and about as heavily inflected as Greek, but the Welsh are not known as pushy conversationalists.

There’s plenty more interesting analysis in the Economist article, and it turns out the magazine’s language blog, called Johnson (relax, Americans: it’s a reference to Samuel Johnson), is very good as a whole.

Link to ‘Do different languages confer different personalities?’

The deafening silence

All silences are not equal: some seem quieter than others. Why? It’s all to do with the way our brains adapt to the world around us, as Tom Stafford explains

A “deafening silence” is a striking absence of noise, so profound that it seems to have its own quality. Objectively it is impossible for one silence to be any different from another. But the way we use the phrase hints at a psychological truth.

The secret to a deafening silence is the period of intense noise that comes immediately before it. When this ends, the lack of sound appears quieter than silence. This sensation, as your mind tries to figure out what your ears are reporting, is what leads us to call a silence deafening.

What is happening here is the result of a process called adaptation, which describes the moving baseline against which new stimuli are judged. The brain tunes out any constant stimulation, allowing perception to focus on changes against this background rather than on absolute levels of stimulation. Turn your stereo up from four to five and it sounds louder, but as your memory of making the change rapidly fades, your mind adjusts and volume five becomes the new normal.
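Here’s a toy sketch of that idea (my own illustration, not a model from the column): a running average acts as the moving baseline, perception tracks the difference between the current input and that baseline, and the moment a long burst of noise stops, the perceived level dips below zero:

```python
# Toy model of adaptation: the baseline is a running average of recent
# stimulation, and perception is judged relative to that baseline.
loud, silence = 10.0, 0.0
stimulus = [loud] * 20 + [silence] * 10   # intense noise, then silence

baseline = 0.0
rate = 0.2   # how quickly the baseline drifts toward the current input
for t, level in enumerate(stimulus):
    perceived = level - baseline           # judged against the adapted baseline
    baseline += rate * (level - baseline)  # adaptation: baseline tracks input
    if t in (0, 19, 20, 29):
        print(f"t={t:2d}  stimulus={level:4.1f}  perceived={perceived:+6.2f}")

# At t=20 the noise stops and 'perceived' swings sharply negative:
# the silence registers as quieter than the resting baseline.
```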

Adaptation doesn’t just happen for hearing. The brain networks that process all other forms of sensory information also pull the same trick. Why can’t you see the stars during the daytime? They are still there, right? You can’t see them because your visual system has adapted to the light levels from the sun, making the tiny variation in light that a star makes against the background of deep space invisible. Only after dark does your visual system adapt to a baseline at which the light difference created by a star is meaningful.

Just as adaptation applies across different senses, so too does the after-effect, the phenomenon that follows it. Once the constant stimulation your brain has adapted to stops, there is a short period when new stimuli appear distorted in the opposite way from the stimulus you’ve just been experiencing. A favourite example is the waterfall illusion. If you stare at a waterfall (here’s one) for half a minute and then look away, stationary objects will appear to flow upwards. You can even pause a video and experience the illusion of the waterfall going into reverse.

This is a phenomenon called the motion after-effect. You can get after-effects for colour perception, or for simple lightness and darkness (which is why you sometimes see dark spots after you’ve looked at the sun or a camera flash).

After-effects also apply to hearing, which explains why a truly deafening silence comes immediately after the brain has become adapted to a high baseline of noise. We perceive this lack of sound as quieter than other silences for the same reason that the waterfall appears to suck itself upwards.

So while it is true that all silences are physically the same, perhaps Spinal Tap lead guitarist Nigel Tufnel was onto something with his amplifier dials that go up to 11. When it comes to the way we perceive volume, it is sometimes possible to drop below zero.

This was my BBC Future column from last weekend. The original is here.

The Mystery of The Cuckoo’s Calling

One of the computational linguists who applied forensic text analysis to JK Rowling’s books to uncover her as the author of The Cuckoo’s Calling describes the science behind his investigation in a post for Language Log.

It seems Rowling’s authorship was originally leaked by her law firm, and a UK newspaper turned to two academics who specialise in forensic text analysis to back up their suspicions.

One of those academics, computer scientist Patrick Juola, wrote a piece for Language Log describing how this sort of text analysis works.

Of the 11 sections of Cuckoo, six were closest (in distribution of word lengths) to Rowling, five to James. No one else got a mention.

Another feature I used were the 100 most common words. What percentage of the document were “the,” what were “of,” and so on. Again, a rich data set that is easy to extract by computer. Using an otherwise similar analysis (including cosine distance again), four of the sections were Rowling-like, four were McDermid-like, and the other three split between James and Rendell.

I ran two tests based on authorial vocabulary. The first was on the distribution of character 4-grams, groups of four adjacent characters. These could be words, parts of words (like four letters “nsid” that would be inside the word “inside”) or even parts of two words (like the four letters “n th” as part of the phrase “in the”)… I also ran on word bigrams, pairs of adjacent words, again a feature with a good track record.

The character 4-grams showed a preference for McDermid, with 8 sections close to her. Three were Rowling-like, and no one else was mentioned. The word pairs, on the other hand, were clearly Rowling-like (9 sections, against 2 by McDermid, no one else mentioned).
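As a rough illustration of the kind of features being compared, here’s a toy sketch in Python; the text snippets and ‘authors’ are invented, and a real analysis like Juola’s uses far more text, more features and more careful preprocessing:

```python
# Toy stylometry: compare a disputed text to candidate authors using
# character 4-grams and word frequencies, scored by cosine distance.
import math
from collections import Counter

def char_ngrams(text, n=4):
    """Counts of overlapping character n-grams, e.g. 'n th' in 'in the'."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def word_counts(text):
    """Simple word-frequency profile."""
    return Counter(text.lower().split())

def cosine_distance(a, b):
    """1 minus the cosine similarity of two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return 1 - dot / norm if norm else 1.0

# Invented snippets standing in for a disputed section and two
# candidate authors' known writing.
disputed = "the detective stood in the rain and watched the door"
author_a = "she stood in the rain and watched the door of the house"
author_b = "the quarterly spreadsheet of figures required careful audit"

for name, sample in [("A", author_a), ("B", author_b)]:
    d4 = cosine_distance(char_ngrams(disputed), char_ngrams(sample))
    dw = cosine_distance(word_counts(disputed), word_counts(sample))
    print(f"author {name}: 4-gram distance {d4:.3f}, word distance {dw:.3f}")
```

On these made-up snippets the disputed text comes out closer to author A on both measures; in the real analysis each feature type delivers a separate verdict, which is why Juola reports them section by section.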

If you want to play around with the technology behind Juola’s authorship attribution work, or that of Peter Millican – the other academic contacted by the press to do an analysis – you can actually download both programmes from the net.

Juola’s JGAAP programme is available here, while you can get Millican’s at this page.

Rumours that Mind Hacks is actually written by Natalie Portman will be strictly denied.

Link to Juola’s post on Language Log.