Nature Neuroscience has an intriguing fMRI brain scanning study in which the researchers could work out what sort of silent video clip the volunteers were watching by observing activity in the part of their brains specialised for perceiving sound.
Although silent, the video clips were all chosen to ‘imply’ sound by depicting things such as a howling dog, a piano key being struck or coins being dropped into a drinking glass. This reliably caused activity in the auditory cortex as the brain ‘simulated’ likely sounds.
The researchers used an analysis technique called ‘multivariate pattern analysis’ that can pick out brain activity patterns associated with different types of experience.
In this study, the analysis was set up as a ‘classifier’: the research team fed in both the brain scanning data and the categories of video clip the participants were viewing for part of the experiment, and the ‘classifier’ then tried to guess which type of clip was being viewed during the rest of the experiment using only the brain activity patterns it had learnt earlier.
After being trained on a sample of the data, the ‘classifier’ could identify which type of silent clip (animal, musical instrument or object) the person was watching by analysing the pattern of activity in the auditory part of the brain.
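To make the logic of this ‘train on some trials, guess the rest’ procedure concrete, here is a minimal sketch of a pattern classifier in the spirit of MVPA. This is not the authors’ actual analysis pipeline; it uses synthetic ‘voxel’ data and a simple nearest-centroid rule (one common MVPA approach), purely to illustrate how category labels can be decoded from activity patterns alone.

```python
import numpy as np

rng = np.random.default_rng(0)
categories = ["animal", "instrument", "object"]
n_train, n_test, n_voxels = 30, 10, 50

# Synthetic stand-in for fMRI data: each category gets its own mean
# "voxel" activity pattern (template), and each trial is that template
# plus noise.
templates = {c: rng.normal(size=n_voxels) for c in categories}

def make_trials(n_per_category):
    X, y = [], []
    for c in categories:
        X.append(templates[c] + 0.5 * rng.normal(size=(n_per_category, n_voxels)))
        y += [c] * n_per_category
    return np.vstack(X), y

X_train, y_train = make_trials(n_train)
X_test, y_test = make_trials(n_test)

# "Training": average the activity pattern over each category's trials.
centroids = {
    c: X_train[[i for i, label in enumerate(y_train) if label == c]].mean(axis=0)
    for c in categories
}

# "Testing": label each held-out trial with the category whose learnt
# pattern it most closely resembles.
def classify(trial):
    return min(categories, key=lambda c: np.linalg.norm(trial - centroids[c]))

predictions = [classify(trial) for trial in X_test]
accuracy = np.mean([p == t for p, t in zip(predictions, y_test)])
```

With distinct underlying patterns and modest noise, accuracy lands well above the one-in-three chance level, which is the shape of the decoding result reported in the study.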
This is a lovely demonstration of how the brain ‘simulates’ the type of neural activity that would normally be triggered by other senses to help flesh out what it is experiencing.
Perhaps the earliest demonstration of this came from several studies reporting that parts of the brain specialised for visual perception become active when you try to picture objects in your imagination.
These visualisation studies are an example of consciously trying to ‘simulate’ another sense through active imagination but, as this new study shows, this sensory ‘filling in’ by the brain also seems to happen automatically.
Studies on ‘implicit motion’ provide another demonstration of this. For example, being shown a still picture that implies movement (such as a ball being dropped) will cause activity in V5, an area specialised for motion perception.
Link to Nature Neuroscience study (seems to be open access).
Link to PubMed entry for same.
One thought on “The ‘sound’ of the silent howl”
Hey, do you think this might offer a good explanation for what occurs in (at least some forms of) synesthesia? That for some reason, for example, in certain people sounds might “imply” colors or motion might “imply” sound in the same way that the videos and pictures “implied” sound and motion? It would explain some of the phenomenal qualities of synesthesia– namely, that the synesthetic qualities of the sense are not as “strong” or vivid as normal sensory perception but are experienced as stronger than just regular semantic associations. I’m not sure it explains why synesthetes often report synesthetic properties as being “attached to” external objects & why synesthetes are subject to certain cognitive effects, i.e. Stroop effects in color-grapheme synesthesia, but maybe it’s just a matter of degree? Perhaps certain forms of synesthesia– motion –> sound, for example– are really just overactive variants of normal processing. If auditory areas in non-synesthetes are recruited when viewing silent videos, then it’s only a hop and a skip to those people whose auditory areas are used in understanding a larger variety of visual events (blinking lights, for example).
I’m a bit worried, though, about the authors’ claims about “experience” rather than perceptual processing. Sure, the auditory activation was correlated with the evocativeness of the pictures, but is there any reason to believe that the subjects were having auditory *experiences*? As you point out, unlike the imagination studies, the subjects weren’t asked to actively imagine or simulate the sound of the videos. Isn’t it premature to draw conclusions about experience? Another interpretation of the study is that brain activity is still primarily reflective of stimulation: it’s just that stimulation in a single mode is processed cross-modally anyway. The “filling in” as you called it is just a normal part of perception, though it’s probably later, “higher” processing.
As a last point of curiosity, I’m wondering what would happen to one’s understanding of visual events if a person was prevented from accessing their early auditory cortices, or had a lesion or abnormal development, etc. If the “filling in” process is a part of normal perception/ discriminatory abilities, you’d expect there to be a difference between impaired and non-impaired individuals on certain tasks (as for example, in motion-sound synesthesia, synesthetes are better at comparing visually presented rhythmic patterns). Does anyone know of anybody that’s done research relevant to this question (across any sensory modalities, not just vision and sound)?
Sorry to monopolize your comment space, cross-modal phenomena are just one of those areas I find fascinating.
(P.S. if this turns out as one big block of text, I apologize, there are supposed to be paragraph breaks in here)