The busy night

Two things I love are sleeping and data collection. Now, thanks to a new iPhone app, I can do both at once.

Sleep Cycle uses the accelerometer in the iPhone to record vibrations in your mattress caused by you moving in the night. In this way it acts as an actigraph, keeping a record of your body movement, which in this context reflects how deeply you are asleep.
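The signal processing behind an actigraph like this is simple in principle. Here's a hypothetical sketch (the function, sampling rate, and scoring rule are my own invention, not Sleep Cycle's) of turning raw accelerometer samples into a per-epoch movement score:

```python
import math

def movement_scores(samples, rate_hz=10, epoch_s=30):
    """Score movement per epoch from (x, y, z) accelerometer samples.

    At rest the acceleration magnitude is about 1 g (gravity alone),
    so a large deviation from 1 g within an epoch suggests the
    sleeper moved during it.
    """
    epoch_len = rate_hz * epoch_s
    scores = []
    for start in range(0, len(samples), epoch_len):
        epoch = samples[start:start + epoch_len]
        if not epoch:
            break
        # magnitude of each sample, in g
        mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in epoch]
        # total deviation from rest (1 g) = movement score for the epoch
        scores.append(sum(abs(m - 1.0) for m in mags))
    return scores
```

Plot those scores against time and you get a graph much like the one below: low scores for deep sleep, spikes where you tossed and turned.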

Here is the data from my last night’s kip. As you can see, I show a fairly typical pattern: deeper sleep in the first half of the night than the second, with alternating periods of deep and light sleep (although I seem to cycle through the stages of sleep every hour, rather than the typically quoted every one and a half hours).

The app also has an alarm which promises to wake you up during a lighter stage of sleep, so saving you the unpleasant sensation of being dragged out of deep sleep by your alarm. I’ve yet to try this out, but it sounds like a good thing, as long as avoiding the jarring sensation is worth forgoing the extra minutes of shut-eye!
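Given the movement scores, the wake-in-light-sleep trick needs only a few lines. This is a guess at the logic, not Sleep Cycle's actual algorithm; the window and threshold numbers are invented:

```python
def smart_wake(scores, alarm_epoch, window=60, threshold=5.0):
    """Fire the alarm at the first epoch in the pre-alarm window whose
    movement score crosses `threshold` (i.e. light sleep); otherwise
    fall back to the set alarm time."""
    start = max(0, alarm_epoch - window)
    for i in range(start, alarm_epoch):
        if scores[i] >= threshold:
            return i          # caught you stirring: wake now
    return alarm_epoch        # slept like a log: wake at the set time
```

The trade-off in the post is visible right in the code: the earlier in the window you stir, the more shut-eye the alarm costs you.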

Link to the Sleep Cycle app

Bang goes the bus stop and still no tickle

Last night, I walked past a bus stop adorned with a poster advertising the new BBC science programme Bang Goes the Theory, asking “Is it possible to tickle yourself?” and giving a number to text for an explanation.

Fantastic, I thought. Neuroscientist Sarah-Jayne Blakemore’s work on the role of action prediction in the sensory attenuation of self-produced actions summarised in 160 characters.

But here’s the response I got sent to my phone:

Your brain tells your body not to react when you tickle yourself hard, but skin with no hair is sensitive to a light touch. More at http://bbc.co.uk/bang

Admittedly, I was a little worse for wear last night, but even in the cold hard light of day, this doesn’t make a lot of sense.

The second bit (“skin with no hair is sensitive to a light touch”) just seems irrelevant to the question, the webpage has nothing more, and the actual explanation is kinda screwy.

Your brain is not telling your body not to react because, except for reflex actions (which are handled by reflex arcs and can be managed entirely by the spinal cord), sensory reactions are handled by the brain.

So if you’re taking this line, a more accurate description is that your brain is telling your brain not to react, but this still explains virtually nothing about why you can’t tickle yourself.

However, a scientific paper [pdf] entitled ‘Why can’t you tickle yourself?’ addresses exactly this question.

The science of this is quite well known (in fact, it was featured in the original Mind Hacks book as Hack #65) but in summary it seems that the brain simulates the outcomes of actions based on your intentions to move, because the actual sensory information from the body takes so long to arrive that we’d be dangerously slow if we relied only on this.

This slower information is used for periodic updates to keep everything grounded in reality, but it looks like most of our action is run off the simulation.

We can also use the simulation to distinguish between movements we cause ourselves and movements caused by other things, on the basis that if we are causing the movement, the prediction is going to be much more accurate.

If the prediction is accurate, the brain reduces the intensity of the sensations arising from the movement, perhaps for good safety reasons: we want to be more aware of contact from other things than of touches from ourselves.
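The comparator at the heart of this account can be caricatured in a few lines. A toy model, with made-up numbers, of the predict-then-attenuate idea:

```python
def felt_intensity(predicted, actual, attenuation=0.8, tolerance=0.1):
    """Toy forward-model comparator: if the sensory consequence we
    predicted from our own motor command matches what actually
    arrives, damp the sensation; if it is unexpected, feel it at
    full strength. All parameters are illustrative."""
    error = abs(actual - predicted)
    if error <= tolerance:
        # self-produced: the prediction was accurate, so attenuate
        return actual * (1 - attenuation)
    # externally caused (or a bad prediction): no attenuation
    return actual

# tickling yourself: accurate prediction, heavily damped
felt_intensity(predicted=1.0, actual=1.0)   # → 0.2 (approx)
# being tickled: no motor prediction to match, full strength
felt_intensity(predicted=0.0, actual=1.0)   # → 1.0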

So Aunty BBC, here’s one you can use for free:

Your brain predicts the effects of movement and reduces sensations if it guesses right. We guess our own actions better, so it tickles less. http://is.gd/2978A

The next one will cost you the 10p I spent texting Bang Goes the Theory for an inaccurate explanation.

Link to pdf of scientific paper ‘Why can’t you tickle yourself?’
Link to Hack #65 in Mind Hacks.

Taking pride in your posture

A simple but elegant study just published in the European Journal of Social Psychology found that getting people to generate words about pride caused them to unknowingly raise their posture, while asking them to generate words about disappointment led to an involuntary slouch.

The research team, led by psychologist Suzanne Oosterwijk, asked people to list words related to ‘pride’ and ‘disappointment’, and some emotionally neutral control categories of ‘kitchen’ and ‘bathroom’, while being secretly filmed.

‘Pride’ caused a slight increase in posture height, while ‘disappointment’ caused the participants to markedly slouch.

The researchers suggest that the activation of the concept of disappointment led to a spontaneous bodily simulation of the feeling. They link this to the idea of embodied cognition that suggests that our mental life is fundamentally connected to acting on the world.

As we discussed last year, research has suggested that bodily expressions of pride and shame are the same across cultures, indicating that this connection between action and emotion may be a core part of our emotional make-up.

Link to abstract of study (via the BPSRD).

Bionic arm technology reroutes nervous system

Damn this is cool. The New York Times has an article on an innovative technology that allows people to naturally use mechanical prosthetic arms.

While most of the media attention has been focused on implanting electrodes directly into the brain as a form of ‘neuroprosthetics’, this technology takes a novel and remarkably ingenious approach with impressive results.

The technique, called targeted muscle reinnervation, involves taking the nerves that remain after an arm is amputated and connecting them to another muscle in the body, often in the chest. Electrodes are placed over the chest muscles, acting as antennae. When the person wants to move the arm, the brain sends signals that first contract the chest muscles, which send an electrical signal to the prosthetic arm, instructing it to move. The process requires no more conscious effort than it would for a person who has a natural arm.
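In software terms the control loop described above is just a classifier over muscle signals. Here's a much-simplified, hypothetical sketch (the electrode site names, threshold, and winner-take-all rule are mine, not the actual system's):

```python
def decode_emg(channels, threshold=0.5):
    """Map smoothed EMG amplitudes from reinnervated chest muscles to
    prosthetic commands. The nerve that once closed the hand now
    twitches its own patch of chest muscle, so the electrode over
    that patch lights up when the person intends to close the hand.

    `channels` maps an electrode site name to its amplitude.
    All site names and command mappings here are illustrative."""
    site, amplitude = max(channels.items(), key=lambda kv: kv[1])
    if amplitude < threshold:
        return "rest"  # no muscle active enough: hold still
    commands = {
        "upper_pectoral": "bend_elbow",
        "lower_pectoral": "close_hand",
        "serratus": "rotate_wrist",
    }
    return commands.get(site, "rest")
```

The real system reported in JAMA decodes ten distinct movements, which would need a richer classifier than this winner-take-all rule, but the principle is the same: the hard pattern-separation work has already been done by routing different nerves to different muscles.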

Researchers reported Tuesday in the online edition of The Journal of the American Medical Association that they had taken the technique further, making it possible to perform 10 hand, wrist and elbow movements, a big improvement over the typical prosthetic repertoire of bending the elbow, turning the wrist, and opening and closing the hand.

It’s an inventive technique because it takes a whole chunk of the hard work away from the technology.

With neural implants, the major obstacle is developing the technology to reduce the noisy neural information into simpler signal channels. The patient then needs to be trained to generate the right brain activity to funnel the activity into the broad channels of the digital signal processor.

This technology takes advantage of existing healthy nerves, simply reassigning them to other muscles, and the activity in those muscles is then converted into mechanical actions.

Of course, it isn’t useful for people who are completely paralysed, but the results are quite spectacular.

The article has an embedded video which illustrates the remarkable dexterity that the woman with the prosthetic arm is able to achieve.

The scientific article describing the technology has just been published in the Journal of the American Medical Association and describes five prosthetic limb patients who were asked to complete a number of manual dexterity tests.

The study found that they completed tasks only marginally less well than comparison participants who had no damage and were using their original arms.

UPDATE: Mo has reminded me that Neurophilosophy covered a single case of the same procedure earlier in its development cycle. Mo also notes that the technology has the potential to feed back touch information to the phantom limb!

Link to NYT article ‘In New Procedure, Artificial Arm Listens to Brain’.
Link to scientific article.
Link to JAMA entry for same.

Mirror’s Edge as proprioception hack

Mirror’s Edge is a first person computer game in which you play an urban free-runner, leaping, sliding, and generally acting fly across the roofs of a dystopian city (see the trailer here). It looks good. In fact, it looks amazing. But, reportedly, to actually play it is even better, sickeningly better.

Clive Thompson, writing for wired.com, suggests that the total interactivity of the environment (if you can see something, you can jump on it, or off it) along with the visual cues about what your character’s arms and legs are doing (they appear in shot as you run and jump) makes the game a convincing proprioception hack. In other words, it remaps your body schema so that you feel more fully that you are the character in the game. When your character runs fast, you feel it is you running fast. When your character jumps between two buildings and you look down, you feel a moment of sickening vertigo.

Research into illusions of proprioception — your sense of where your body is in space — has shown that our body map is surprisingly flexible. It is possible to mislocate your hand, for example, coming to believe that it is directly in front of you when in fact it is out at the side, or behind you (see video here). Jaron Lanier has reported on an early virtual reality experience he had that made him feel like he had the body of a lobster, with six extra limbs. The important feature of all these illusions is that they rely on precisely timed visual feedback. Although visual input can reprogramme our body image, it only does so when there is a tight coupling between what we see and what we feel. What matters is not the level of detail in what we see, but the fluidity of the interaction. If Mirror’s Edge makes you feel like you really are doing parkour, it is because it has the correct kind of visual feedback (your limbs, in a fully interactive world) with the correct timing.

A final thought: if a computer game really is immersive for something as visceral as free-running, isn’t that kind of surprising, given how complex free-running is physically, and how simple the commands used to control a computer game are? Perhaps this is because when we automatise an action such as a run, a jump or a roll, part of the process of making it automatic is losing the experience of its component parts. So, when a computer game feels real, it is because real feels like nothing — we just ask our brains ‘jump’ and the motor system sorts out the details without any deep experience of how the jump is performed.

Link to Clive Thompson’s report on playing Mirror’s Edge
Link to YouTube trailer for the game

The common language of pride and shame

Wired Science covers an elegant study that suggests that spontaneous expressions of pride and shame are innate behaviours that are not significantly influenced by culture.

The researchers came up with the ingenious idea of comparing how judo competitors from the 2004 Olympics and blind judo competitors from the 2004 Paralympics celebrated and commiserated after their matches.

This allowed a cross cultural comparison, but it also allowed a comparison with blind athletes who have never seen another person in the same position to copy their behaviour.

The new research, however, distilled from high-resolution, high-speed photographic sequences of sighted and blind judo competitors at the 2004 Olympics and Paralympics, suggests that most nonverbal responses to wins and losses are almost universal.

No cultural differences were observed among competitors from different countries and, aside from the shaking of the fists after a loss, sighted and blind athletes displayed remarkably similar nonverbal behavior.

In other words, it made virtually no difference what culture each individual came from, or even whether the person had seen another wrestler at the end of a match or not – the expression of pride was indistinguishable, suggesting that this may be a common expression that we all share.

There was a slight effect of culture on the expression of shame – as the researchers note “it was less pronounced among individuals from highly individualistic, self-expression-valuing cultures, primarily in North America and West Eurasia”.

However, as there was no difference within cultures between sighted and blind individuals, they further suggest that both pride and shame are likely to be innate, but that shame display may be intentionally inhibited by some sighted individuals in accordance with cultural norms.

Link to Wired Science on elegant study.
Link to full text of paper.

Rock climbing hacks! (now with added speculation)

I’m going to tell you about an experience that I often have rock-climbing, and then I’m going to offer you some speculation as to the cognitive neuroscience behind it. If you rock-climb I’m sure you’ll find my description familiar. If you’re also into cognitive neuroscience, perhaps you can tell me if you think my speculation is plausible.

Rock-climbing is a sort of three-dimensional kinaesthetic puzzle. You’re on the side of a rock wall, and you have to go up (or down) by looking around you for somewhere to move your hands or feet. If you can’t see anything then you’re stuck, and just have to count the seconds before you run out of strength and fall off. What often happens to me when climbing is that I look as hard as I can for a hold to move my hand up to and I see nothing. Nothing I can easily reach, nothing I can nearly reach, and not even anything I might reach if I were just a bit taller or if I jumped. I feel utterly stuck and begin to contemplate the imminent defeat of falling off.

But then I remember to look for new footholds.

Sometimes I’ve already had a go at this and haven’t seen anything promising, but in desperation I move one foot to a new hold, perhaps one that is only an inch or so further up the wall. And this is when something magical happens. Although I am now only able to reach an inch further, I can suddenly see a new hold for my hand, something I’m able to grip firmly and use to pull myself to freedom and triumph (or at least somewhere higher up to get stuck). Even though I looked with all my desperation at the wall above me, this hold remained completely invisible until I moved my foot an inch — what a difference that inch made.

Psychologists have something they call affordances (Gibson, 1977, 1986), which are features of the environment that seem to ‘present themselves’ as available for certain actions. Chairs afford being sat on; hammers afford hitting things with. The term captures the observation that there is something very obviously action-orientated about perception. We don’t just see the world, we see the world full of possibilities. And this means that the affordances in the environment aren’t just there, they are there because we have some potential to act (Stoffregen, 2003). If you are frail and afraid of falling, a handrail will look very different from how it looks if you are a skateboarder, or a freerunner. Psychology typically divides the jobs the mind does into parcels: ‘perception’, (then) ‘decision making’, (then) ‘action’. But if you take the idea of affordances seriously, it gives the lie to this neat division, because affordances exist when action (the ‘last’ stage) affects perception (the ‘first’ stage).

Can we experimentally test this intuition — is there really an effect of action on perception? One good example is Oudejans et al (1996), who asked baseball fielders to judge where a ball would land, either just watching it fall or while running to catch it. A model of the mind that didn’t involve affordances might predict that it would be easier to judge where a ball would land if you were standing still; after all, it’s usually easier to do just one thing rather than two. This, however, would be wrong. The fielders were more accurate in their judgements — perceptual predictions, basically — when running to catch the ball, in effect when they could base their judgements on the affordances of the environment produced by their actions, rather than when passively observing the ball.

The connection with my rock-climbing experience is obvious: although I can see the wall ahead, I can only see the holds ahead which are actually within reach. Until I move my foot and bring a hold within range it is effectively invisible to my affordance-biased perception (there’s probably some attentional-narrowing occurring due to anxiety about falling off too, (Pijpers et al, 2006); so perhaps if I had a ladder and a gin and tonic I might be better at spotting potential holds which were out of reach).

There’s another element which I think is relevant to this story. Recently neuroscientists have discovered that the brain deals differently with perceptions occurring near body parts. They call the area around limbs ‘peripersonal space’ (for a review see Rizzolatti & Matelli, 2003). {footnote}. Surprisingly, this space is malleable, according to what we can affect — when we hold tools the area of peripersonal space expands from our hands to encompass the tools too (Maravita et al, 2003). Lots of research has addressed how sensory inputs from different modalities are integrated to construct our brain’s sense of peripersonal space. One delightful result showed that paying visual attention to an area of skin enhanced touch-perception there. The interaction between vision and touch was so strong that providing subjects with a magnifying glass improved their touch perception even more! (Kennett et al, 2001; discussed in Mind Hacks, hack #58). I couldn’t find any direct evidence that unimodal perceptual accuracy is enhanced in peripersonal space compared to just outside it (if you know of any, please let me know), but how’s this for a reasonable speculation — the same mechanisms which create peripersonal space are those which underlie the perception of affordances in our environment. If peripersonal space is defined as an area of cross-modal integration, and is also malleable according to action-possibilities, it isn’t unreasonable to assume that an action-orientated enhancement of perception will occur within this space.

What does this mean for the rock-climber? Well it explains my experience, whereby holds are ‘invisible’ until they are in reach. This suggests some advice to follow next time you are stuck halfway up a climb: You can’t just look with your eyes, you need to ‘look’ with your whole body; only by putting yourself in different positions will the different possibilities for action become clear.

(references and footnote below the fold)

Continue reading “Rock climbing hacks! (now with added speculation)”