On Walking

When I walk, it feels like a unified action. I mean this in contrast to something like climbing a ladder, where I am acutely aware of the left-right-left-right nature of the commands I must send to my limbs in order to achieve my ascent.

I was thus quite surprised to learn from a paper that appeared in Nature Neuroscience last year that there seem to be completely separate control mechanisms for operating each of one’s legs while walking1. I had (somewhat naively) assumed that my coherent ambulatory experience implied a single underlying motor program or brain circuit.

The authors of this paper showed that human beings have no trouble at all walking on a pair of treadmills (one for the left leg and one for the right) moving in opposite directions. Further, they had people abruptly switch between various combinations of directions (forward and forward, backward and forward, forward and backward, backward and backward) and speeds, with short periods (5-10 minutes) of readjustment. Because we essentially never encounter these kinds of situations in our everyday experience, and yet adapt to them very rapidly, the authors concluded (sensibly, I think) that we must have distinct regulators of leg movement for walking.

A figure from the paper mentioned above

On some level this is unsurprising; it is clearly possible to move one leg independently of the other. However, my assumption above is not completely without basis: it was Charles Sherrington who won a Nobel Prize for the discovery that cats can execute a walking motion using only the neurons in their spinal cord. In a series of experiments that is somewhat troubling to consider, he demonstrated that cats with severed spinal cords (no communication between spinal cord and brain), their weight mostly supported while their feet rest on a moving treadmill, can go for a rudimentary stroll. These cats were in effect walking reflexively.

What is intriguing about this whole situation is the degree to which our consciousness has access to what is going on in our neurons. Obviously we don’t have to determine individual muscle tensions or relationships between contraction and flexion when we move; instead we have ideas like “kick the ball” or “walk up the stairs,” and our subconscious translates those into motor output. But could it be possible to gain access to that information? Highly trained athletes and others who must be extremely in tune with their bodies probably have a much greater degree of control, but even they surely never feel a motor neuron’s spike rate change as they command it to apply more force. Thus, at some level, we simply do not have conscious control of our bodies.

This is a bit unsettling, but it is also exciting because it means that we really must reframe the way we think about the relationship between minds and brains. At least, I must not take for granted that my consciousness is a total reflection of what is happening in my brain.


1. Choi JT, Bastian AJ. (2007) Adaptation reveals independent control networks for human walking. Nat Neurosci. 10(8):1055-62.

On Active Perception

Perception is more than the passive response to stimuli. When you focus your visual attention on something, it looks different. This is not about gaze, the orientation of the most sensitive part of your retina (the fovea); rather, I mean that somewhat intangible ability we have to devote our mental processing to an object in, or area of, visual space without directly regarding it. Experimental data in the form of human verbal reports and the activity of single cells in the brains of monkeys demonstrate that this is quite concrete: visual attention makes you, and the cells in your brain, better able to distinguish a variety of properties such as color, the angle of lines, and small distances1.

This viewpoint, observation as both active and reflexive, highlights a dichotomy present in debates concerning brain function in general: the distinction between seeing such neural excitation as occurring “bottom-up,” driven by the responses to incoming stimuli, versus “top-down,” controlled by higher-level cognitive (dare I say conscious) processes.

Figure 1

For example, there is a Dalmatian (somewhat) hidden in figure 1. Without knowledge of its presence, it is perhaps more natural to simply see a collection of dots, but once you’ve found that dog, it’s impossible to miss. This demonstrates that your perception of the object is not purely bottom-up: the image impinging on your photoreceptors alone doesn’t necessarily lead to the experience of seeing the object. Other examples include seeing faces in clouds, a result of our overactive face-recognition areas, or hearing words in the sounds produced by a gaggle of geese. These are both top-down examples, your existing perceptive mechanisms imposing themselves on the incoming information.

Charles Gilbert, a professor of Neurobiology at Rockefeller University, has been researching visual perception by recording from single neurons in the brains of monkeys for quite some time. A recent paper from his laboratory was aimed at quantifying the role of attention in the perception and cortical processing of a specific visual stimulus: long contours made of small individual line segments3. Figure 2 contains examples of these contours made from more (A) to fewer (C) subsegments.

Figure 2

There are single neurons in your visual cortex (area V2) that become active when exposed to these larger, constructed contours in certain parts of the visual field. This response is built up from those of cells (in area V1) that are excited by exposure to small, continuous lines in particular places, like the ones making up the larger edges above. The reactions of these cells are in turn shaped by the combination of many small, pixel-like bits of information coming from the retina. This is the hierarchy of the visual system: the responses of neurons representing progressively more complex objects are formed from earlier, simpler patterns, until we finally end up with neurons that respond best to images of your grandmother, or Bill Clinton, or Jennifer Aniston2.
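To make the hierarchy concrete, here is a toy sketch in code. It is entirely my own illustration, not a model from any of the papers cited: the functions, thresholds, and the tiny one-row “image” are all invented for the example. “Pixel” inputs feed V1-like detectors for short line segments, and their pooled outputs drive a single V2-like contour cell.

```python
# Toy caricature of the visual hierarchy (my illustration, not the paper's
# model): retina-like pixels -> V1-like segment detectors -> V2-like contour cell.

def v1_response(image, row, cols):
    """A V1-like cell: active only if its short run of pixels is fully lit."""
    return 1.0 if all(image[row][c] for c in cols) else 0.0

def v2_response(image, row, segment_columns, threshold=0.75):
    """A V2-like cell: pools collinear V1-like cells and fires when
    enough of its constituent segments are active."""
    responses = [v1_response(image, row, cols) for cols in segment_columns]
    return sum(responses) / len(responses) >= threshold

# A 1x12 "image" containing a long horizontal contour built from 4 segments.
image = [[1] * 12]
segments = [range(0, 3), range(3, 6), range(6, 9), range(9, 12)]
print(v2_response(image, 0, segments))  # True: all four segments are present
```

With a large enough gap in the middle of the contour (say, two of the four segments missing), the pooled response falls below threshold and the V2-like cell stays silent.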

This hierarchy is important to the theme of top-down versus bottom-up because it informs us as to what is at the top and what is at the bottom. It also allows us to construct a simple example that we can apply to more complicated cases.

Figure 3

The shapes in figure 3 are called Kanizsa figures, though many scientists simply refer to them as pac-men, to which they bear more than a passing resemblance. It is hard to ignore the triangle that seems to be formed by this particular configuration of polygons, despite the fact that each edge is missing a large segment. What’s happening here is that enough cells in V1 are turned on, collinearly, by the partial edge of the invisible triangle to excite the V2 cell that would respond to a whole triangle edge in the same location. This is not the whole story, however: it turns out that in addition to the bottom-up connections mediating the hierarchy I described before, there are also extensive feedback projections from higher areas like V2 to lower ones like V1. Thus, when the cell in V2 relays information up to higher areas that there appears to be a line spanning two of the Kanizsa figures, it also informs all of the V1 cells that might be making up that line, including both the ones which are in this case actually being stimulated and the interstitial ones where there is no edge to detect. The cells not receiving any actual visual stimulus are activated, to a lesser extent than they might otherwise be, by the feedback, or top-down, signal.
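The feedback loop can be caricatured the same way. In this toy sketch (again my own invention; the feedback gain and threshold are arbitrary numbers, not measured quantities), a V2-like cell pools collinear V1-like cells and, when it fires, weakly excites every one of its inputs, including the interstitial cells receiving no direct stimulus:

```python
# Toy caricature of top-down feedback (my illustration): an active V2-like
# cell sends a weak signal back to all V1-like cells along its contour.

FEEDBACK_GAIN = 0.3  # assumed, arbitrary strength of the top-down signal

def perceive(stimulus, v2_threshold=0.5):
    # Feedforward: each V1-like cell is driven by its own bit of the edge.
    v1 = [1.0 if present else 0.0 for present in stimulus]
    # The V2-like cell fires if enough of the edge is actually present.
    v2 = sum(v1) / len(v1) >= v2_threshold
    # Feedback: the active V2 cell weakly excites all of its V1 inputs,
    # including the interstitial ones receiving no direct stimulus.
    if v2:
        v1 = [min(1.0, r + FEEDBACK_GAIN) for r in v1]
    return v1, v2

# A Kanizsa-like edge: real segments at the ends, a gap in the middle.
edge = [True, True, False, False, True, True]
v1_out, v2_fired = perceive(edge)
print(v2_fired)  # True: enough collinear evidence to trigger the contour cell
print(v1_out)    # the two interstitial cells are now weakly active
```

The point of the sketch is only the last line: after feedback, the cells covering the gap report weak activity even though nothing on the “retina” drove them, which is the signature of a top-down contribution.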

This then is the paradigm for top-down perception. Something like this is most likely happening in the Dalmatian example as well, with some high-level neuron that responds to dogs being activated and sending feedback signals down to all of the neurons that would normally activate the percept, facilitating the “segmentation” of the dog from the background.

What Gilbert and his colleagues did in order to more thoroughly understand this process was to engage a monkey in a task related to the perception of these incomplete contours while measuring the responses of neurons in its area V1.

The monkey was presented with two images like the ones in figure 2 simultaneously, one in which some (1-9) of the line segments were oriented to form a contour, and one in which their angles were random. Its task was simply to look at the one with the contour. Maybe because of something intrinsic to its visual system, or maybe just because it didn’t understand what the experimenters wanted it to do, the monkey required extensive training before it became proficient at this game.

Figure 4

Before the monkeys became experts at this chore, the cells in V1 that respond to the smaller constituent segments reacted with only a transient increase in their activity (figure 4, left). Once the monkeys had become skilled at the task, however, the transient was followed by a prolonged bout of activity whose amplitude was proportional to the number of collinear segments making up the larger contour (figure 4, right). Intriguingly, even after the training, if the monkeys were anesthetized and exposed to the images, the responses again showed only a transient increase, with no dependence on the number of line segments making up the contour. What this suggests is that the engaged, top-down process of performing the task and attending to the stimuli is what generated the difference in the responses: thus, active perception.

The idea of active perception brings to mind a question: what aspects of sensory experience are subject to this kind of cognitive control, and what are its limitations? Extreme cases are somewhat helpful: you can’t see a circle when presented with a square, although every circle you’ve ever seen on a computer monitor or television is simply a lot of pixels, and thus not really a circle. This sort of fuzziness is certainly present in somatosensation (touch). For example: if I arrange a situation where your hand is hidden, but there is a rubber hand approximately where yours might be, I can (with a bit of “training”) evoke a somatosensory experience in you by touching the rubber hand. Multiple pairings of poking your hidden hand while you simultaneously watch me poke the rubber stand-in will lead to the feeling that any touch of the rubber hand is a touch of your hand. Further, I have certainly had the experience of enjoying the taste of something before knowing what it was, implying that the mere knowledge of the thing to be tasted can modify the experience of tasting it.

There must be some evolutionary/developmental aspect to all of this. At least in the vision example, if our evolution didn’t provide us with feedback connections the likes of which I described above, there would be no anatomy to mediate the top-down control. Similarly, if our development didn’t equip our visual systems with automatic detection systems for faces and lines and circles, there would be no high-level percept to feed back down to lower systems, against which our brains might try to favorably compare incoming data.

The question then becomes: to what extent is this effect mediated by sheer brain circuitry, and to what extent by the nebulous mystery that is conscious experience? I would have to argue that our brain circuitry is the only basis for our conscious experience, and thus any effect that we might attribute nonspecifically to our mental being represents our lack of knowledge about the connectivity in our skulls. However, my mother taught me that it is a great thing to be wrong, because that means you’ve got something to learn. So in any case, I look forward to a deeper unraveling of these phenomena.


1. Maunsell JH, Treue S. (2006) Feature-based attention in visual cortex. Trends Neurosci. 29(6):317-22.
2. Quiroga RQ, Reddy L, Kreiman G, Koch C, Fried I. (2005) Invariant visual representation by single neurons in the human brain. Nature 435(7045):1102-7.
3. Li W, Piëch V, Gilbert CD. (2008) Learning to link visual contours. Neuron 57(3):442-51.

On Bodies and Brains

Being a dyed-in-the-wool materialist, I believe that the cellular material hidden in our skulls creates our conscious experience. I was reminded this week, however, of just how irrevocably the body is involved in the generative process as well.

I read a piece of work appearing in the journal Cell about a newly identified form of stimulated muscle contraction. Apparently, in the nematode Caenorhabditis elegans, the muscles involved in expelling digested foodstuffs from the body can be stimulated to do so directly by the intestinal tract1. Asim A. Beg et al., working in the lab of Erik M. Jorgensen, demonstrated that these muscles can be signaled that it’s time to go to work by a high proton concentration, i.e. an acid. Thus, as the gut works and the space between the intestine and the muscles becomes acidified, the muscles contract.

Normally, muscles are commanded to produce a force only by the release of neurotransmitter from neurons that specifically innervate these tissues. In other words, the nervous system must tell muscles when it’s time to act. Of course the heart represents a notable counter-example, but there one finds specialized muscle cells that endogenously signal the heart to beat at regular intervals: native activity, not a native response to an outside stimulus. The worm-gut case is unique because it is an example of the body bypassing the need for neural intervention.

I was further disabused of my cephalocentric ideology by listening to an old episode of RadioLab titled “Where am I?” That program contained several magnificent examples of the theme I’m expounding on, and one in particular that caught my attention.

The hosts of this not-to-be-ignored radiological phenomenon had as a guest the scientist and science writer Robert M. Sapolsky (amongst others). He commented on a theory concerning brain-body interactions first proposed by William James. It goes like this: not only do the bodily, physiological manifestations of emotional states precede conscious awareness of the source stimulus, but those responses can themselves cause emotional experiences. I found this fascinating, and being a student of the brain, I wanted a little more information than the show had to offer, so I got in touch with Professor Sapolsky (at Stanford), and he gave me the following distillation (of the first part):

“The basic story is that sensory information (with the exception of olfaction) gets to the amygdala by way of the usual projections to the cortex, where there is classical sensory cortical processing, and with information eventually passed on to the amygdala. But there is an alternative pathway going straight from the thalamus to the amygdala, bypassing cortex, so that information gets there sooner. So there’s the potential for amygdaloid activation in response to stimuli before there is conscious (i.e., cortical) awareness of the stimuli. So very fast, but because the cortex really does do all the important transformations of sensory information, this fast short-cut can be quite inaccurate.”

This explains how, by activating the amygdala, emotions and concomitant corporeal responses can occur before conscious awareness of their origins, but the bit about the body informing the emotional state goes further. The idea there is that even if you’ve decided on some rational level that there’s nothing to be upset about, the body’s state can convince the brain that there is.

I am not sure what mechanism the brain employs to read off the emotional state of the body, but this interplay between mind and body implies a couple of things: first, that our bodies have the ability to inform our brains of how we’re feeling, which is especially remarkable in light of the example of the body working independently of the brain; and second, that how we’re feeling comes before what we think.

I suppose my dream of some day existing as a brain floating in a tank of reddish liquid is going to turn out far duller than I had imagined.


1. Beg AA, Ernstrom GG, Nix P, Davis MW, Jorgensen EM. (2008) Protons Act as a Transmitter for Muscle Contraction in C. elegans. Cell 132(1):149-60.