
On Subconscious Action

from reference 1

The subconscious components of our minds are more powerful than many admit, or feel comfortable admitting. As we go about our lives, the subconscious learns about aspects of our existence that might otherwise clutter our thoughts with a distracting chatter of activity: what pressure I must apply to the coffee cup for it to remain in my grasp, what route I must take through the throng of commuters in Penn Station to avoid colliding with others, whether my blood sugar has dropped below the threshold at which I experience hunger, et cetera. In fact, to some extent, the subconscious mind has access to information that the conscious mind does not, as in the muscle-tension and blood-sugar examples. Understanding how these abilities are divided between the conscious and the unconscious, and to what extent that question even makes sense to ask at the behavioral and neurophysiological levels, is of fundamental interest for the understanding of consciousness and of human neurological function in general.

A recent study speaks to this topic by probing the extent to which the subconscious can learn about the association between briefly presented visual cues and a monetary reward1. Specifically, Chris Frith & colleagues had subjects play a game where the ability to win money in a given turn of the game was predicted by a visual cue which was presented too briefly to be consciously perceived (see instructions below2). The results of this study suggest that humans are reliably able to subconsciously learn the rewarding value of these visual cues. Importantly, in a control experiment, the researchers demonstrated that the subjects were unable to discriminate between the stimuli without the monetary reward/punishment scheme.

Given the abilities of humans (sketched above) to relegate processing to the subconscious, this finding isn’t that surprising. However, this paper demonstrates the importance of feedback (reward or punishment) for instructing the subconscious. Furthermore, the fact that something as arbitrary as the conscious perception of a promised financial reward can serve as the feedback signal suggests a fundamental role for this type of learning, one that isn’t limited to certain acts but might underlie the learning abilities of humans in general.
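To make the logic of the task concrete, here is a minimal sketch of how trial-by-trial reward feedback alone could teach a learner the value of cues it cannot otherwise tell apart. The delta-rule update, the softmax-like choice rule, and every name and parameter below are assumptions made for illustration; this is not the authors' model or analysis.

```python
import math
import random

# A minimal sketch (not the authors' model): a learner that cannot identify the
# masked cues directly, but updates a value estimate for each cue from the
# monetary feedback it receives whenever it chooses to press the button.

random.seed(0)
payoff = {"win_cue": +1.0, "lose_cue": -1.0}   # hypothetical hidden contingencies (£)
value = {"win_cue": 0.0, "lose_cue": 0.0}      # learned cue values
alpha, beta = 0.2, 3.0                          # learning rate and choice sensitivity (assumed)

def press_probability(v):
    """Probability of pressing, given the current value estimate for this cue."""
    return 1.0 / (1.0 + math.exp(-beta * v))

earnings = 0.0
for trial in range(200):
    cue = random.choice(list(payoff))            # a cue is flashed (unseen by the learner)
    if random.random() < press_probability(value[cue]):
        reward = payoff[cue]                     # the outcome is revealed only after a press
        value[cue] += alpha * (reward - value[cue])   # delta-rule update from feedback
        earnings += reward

print({k: round(v, 2) for k, v in value.items()}, "earnings:", round(earnings, 2))
```

Over a couple of hundred trials this toy learner comes to press mostly on winning cues and withhold on losing ones, purely on the strength of the feedback, which is the flavor of the result reported for the subjects.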

References/Notes:
1. Pessiglione M, Petrovic P, Daunizeau J, Palminteri S, Dolan RJ, Frith CD. Subliminal instrumental conditioning demonstrated in the human brain. Neuron, 59(4): 561-567, 2008.

2. “The aim of the game is to win money, by guessing the outcome of a button press.

At the beginning of each trial you must orient your gaze towards the central cross and pay attention to the masked cue. You will not be able to perceive the cue which is hidden behind the mask.

When the interrogation dot appears you have 3 seconds to make your choice between
– holding the button down
– leaving the button up
If you change your mind you can still release or press the button until the 3 seconds have elapsed.

‘GO!’ will be written in yellow if, at the end of the 3 seconds delay, the button is being pressed.

Then we will display the outcome of your choice. Not pressing the button is safe: you will always get a neutral outcome (£0). Pressing the button is of interest but risky: you can equally win £1, get nil (£0) or lose £1. This depends on which cue was hidden behind the mask.

There is no logical rule to find in this game. If you never press the button, or if you press it every trial, your overall payoff will be nil. To win money you must guess if the ongoing trial is a winning or a losing trial. Your choices should be improved trial after trial by your unconscious emotional reactions. Just follow your gut feelings and you will win, and avoid losing, a lot of pounds!”

On the Importance of Single Spikes

from reference 1

As mentioned numerous times before in this forum, neurons in the brain communicate by action potentials: pulses of voltage that usually propagate from the cell body (soma) down a specialized outcropping of membrane called the axon, which synapses onto other neurons. Usually, these synapses link pre-synaptic axons to post-synaptic dendrites, cellular structures specialized for receiving input.

Until recently, it was thought impossible for a single action potential, initiated in the soma, to cause a second, post-synaptic neuron to fire an action potential; rather, as has been extensively documented, single neurons require many simultaneous dendritic inputs, summed together, for an action potential to be initiated in the soma. Recent research, however, has identified a cell type in the cerebral cortex of human beings which seems to contradict this generalization. These neurons, termed “chandelier cells,” are able to cause a chain of post-synaptic events (action potentials in several cells) lasting, on average, 37 milliseconds, ten times longer than had previously been assumed possible1.

The article reporting these findings, published in the estimable journal PLoS Biology, describes one feature that the authors feel is of paramount importance to this phenomenon. Chandelier cells are much more likely than other cell types to make axo-axonic connections. That is, they send their pulses of activity not to dendrites, but to other axons. The reason this somewhat exotic type of connectivity matters is that chandelier cells normally turn off the output of other neurons by sending inhibitory signals that cancel action potentials being sent down the axons of the chandelier’s targets. It seems, then, that single chandelier cell action potentials inhibit other cells which are themselves inhibitory, indirectly exciting the targets of these secondary inhibitory cells.
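To make the disinhibition logic concrete, here is a toy rate-model sketch with entirely invented parameters: a brief "chandelier" pulse suppresses a tonically active inhibitory interneuron, which transiently releases that interneuron's target. It illustrates only the wiring logic described above, not the biophysics reported in the paper.

```python
import numpy as np

# Toy rate model of disinhibition: chandelier (C) inhibits interneuron (I),
# which inhibits a target cell (E). Parameters are invented for illustration.

dt = 1.0          # time step (ms)
T = 60            # number of steps (~60 ms)
tau = 5.0         # relaxation time constant (ms), assumed

C = np.zeros(T)   # chandelier activity
C[10:13] = 1.0    # a brief pulse around t = 10 ms

I_drive, E_drive = 1.0, 1.0    # tonic drives (arbitrary units)
w_CI, w_IE = 2.0, 1.0          # inhibitory weights: C -> I and I -> E

I = np.zeros(T)
E = np.zeros(T)
I[0] = I_drive                             # start at steady state
E[0] = max(E_drive - w_IE * I[0], 0.0)     # fully suppressed at baseline

for t in range(1, T):
    I_target = max(I_drive - w_CI * C[t-1], 0.0)   # chandelier suppresses the interneuron
    E_target = max(E_drive - w_IE * I[t-1], 0.0)   # interneuron suppresses the target
    I[t] = I[t-1] + dt * (I_target - I[t-1]) / tau
    E[t] = E[t-1] + dt * (E_target - E[t-1]) / tau

print("baseline target activity:", round(E[0], 3))
print("peak target activity after chandelier pulse:", round(E.max(), 3))
# The target's activity rises transiently even though the chandelier is itself
# inhibitory: inhibiting an inhibitor yields net excitation.
```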

The relevance of these findings to human cognition or consciousness is unclear, but this represents a significant advancement for our understanding of the functional connectivity of the human brain.

References:
1. Molnár G, Oláh S, Komlósi G, Füle M, Szabadics J, et al. (2008) Complex Events Initiated by Individual Spikes in the Human Cerebral Cortex. PLoS Biol 6(9): e222 doi:10.1371/journal.pbio.0060222

On Time Perception & Sleep

Courbet’s Sleep

Why is it that our perception of the passage of time changes around and during periods of sleep? While it is known that there are diurnal variations in time perception1, and that insomniacs have irregular perception of duration of sleep2, this basic question remains.

In an article concerning regular and pathological conscious perception of time, Oliver Sacks speculates that “visual perception might in a very real way be analogous to cinematography, taking in the visual environment in brief, instantaneous, static frames, or ‘stills,’ and then, under normal conditions, fusing these to give visual awareness its usual movements and continuity”3.

This suggests the possibility that our perception of time is a function of our ability to impose a sense of continuity on our own perceptions. Thus, in the absence of external stimuli for this continuity system to act on, we have no mechanism to calculate the passage of time, and instead estimate this variable in a noisy, post-hoc manner.

In any case, the fact that it’s possible to drastically misestimate how long a bout of sleep has lasted implies that there is something fundamental about the state of consciousness (wakefulness) and the judgement of elapsed time which remains to be understood.

References:

1. Pöppel E, Giedke H. (1970) Diurnal variation of time perception. Psychol Forsch. 34(2):182-98.
2. Knab B, Engel RR. (1988) Perception of waking and sleeping: possible implications for the evaluation of insomnia. Sleep. 11(3):265-72.
3. Sacks, O. (2004) In the River of Consciousness. The New York Review of Books. 51(1):

On Emergent Causation

Max Ernst – “L’invention” or “L’oiseau de L’infini”

I recently read a one-page book review of a text whose subject matter strides through consciousness, free will, and emergence1. The review, by Todd S. Ganson, focuses on how the book, Did My Neurons Make Me Do it?, contends with a classic problem in neuroscience and the philosophy of mind: how is it possible to attribute mental states exclusively to the brain while avoiding a completely determined (lacking in free will) existence2?

Ever since Descartes’ dualism (a separation of the material and the mental) and its problems became apparent, philosophers have been hard at work to find a middle ground between eliminating the mental and resorting to the supernatural. On the one hand, subjective conscious experiences cannot be denied, and it thus seems foolish to claim that they do not exist. On the other, there is as yet no hint of a description of how mentality might be caused by our biological apparatus, and it is thus somewhat attractive to attribute some other author to our cognitive being, leading some to invoke the supernatural.

It has been suggested that one way to illustrate the manufacture of subjective experience is to describe it as emergent. An example of an emergent property that I find particularly useful is the liquidity of water. A single molecule of H2O is not a fluid; rather, the quality of being liquid is predicated on the interactions among many molecules. It is a property that emerges from the collective. Another example might be sand dunes: the patterns present in large quantities of grains are a feature of their concert, not guaranteed by the individuals.

Emergence has been very helpful to some because it paints a picture in which consciousness is not a priori predictable from the actions of single neurons, and yet retains a tangible quality. It doesn’t explain how the cerebrum causes consciousness but it does assert a mode in which consciousness might stymie our current scientific attempts to understand it based on the actions of single brain cells.

This book takes the utility of emergence one step further by putting forth the idea that emergence might help us reconcile our personal feeling of responsibility for our actions with the materially deterministic substrate of the brain. The idea is that the complex system that is our emergent consciousness “can causally influence what bottom-level events occur by shaping the conditions that trigger these events.”2

An apt analogy here, again, is the sand dune. Its overall shape determines how the individual grains interact with forces such as the wind. If it forms a flatter dune, it will be less susceptible to the whims of the wind, while a tall structure will be more fragile. In this sense, the collective behavior can influence the actions of the individuals which make up the whole.

As mentioned previously, as an alternative to searching for ways in which our seemingly ephemeral consciousness can affect the matter in our heads, we can adopt the view that free will is an illusion: another mechanism of our brains that keeps us happy in the delusion that we’re in charge of our own actions.

In any case, the suggestion concerning emergent causation may not explain anything specific, but it does help to frame an alternative way to think about the relationship among the members of that vexing triad I mentioned at the top: free will, consciousness, and emergence.

Notes:

1. The interested reader might click here for RadioLab’s excellent show on the subject of emergence.
2. Ganson, T.S. (2008) Finding Freedom Through Complexity. Science; 319:104

On Walking

When I walk, it feels like a unified action. I mean this in contrast to something like climbing a ladder, during which I am extremely aware of the left-right-left-right nature of the commands I must send to my limbs in order to achieve my ascent.

I was thus quite surprised to learn from a paper appearing in Nature Neuroscience last year1 that there appear to be completely separate control mechanisms for operating each of one’s legs while walking. I had (somewhat naively) assumed that my coherent ambulatory experience implied a single underlying motor program or brain circuit.

The authors of this paper showed that human beings have no trouble at all walking on a pair of treadmills (one for the left leg and one for the right) moving in opposite directions. Further, they had people abruptly switch between various combinations of directions (forward and forward, backward and forward, forward and backward, backward and backward) and speeds, with short periods (5-10 minutes) of readjustment. Because we essentially never encounter these types of situations in our everyday experience, and yet adapt to them very rapidly, the authors concluded (sensibly, I think) that we must have distinct regulators of leg movement for walking.
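Here is a toy sketch of the inference, with made-up numbers and an update rule that is not the authors' model: two independent error-driven controllers, one per leg, can each converge on its own belt speed, while a single shared controller can only track the average and so fails when the belts run in opposite directions.

```python
# Toy comparison: per-leg adaptive controllers vs. one shared controller.
# Speeds, learning rate, and the update rule are invented for illustration.

belt_speed = {"left": +1.0, "right": -1.0}   # belts moving in opposite directions
lr = 0.2                                      # adaptation rate, assumed

# independent controllers: one internal speed estimate per leg
estimate = {"left": 0.0, "right": 0.0}
for trial in range(50):
    for leg in estimate:
        error = belt_speed[leg] - estimate[leg]
        estimate[leg] += lr * error           # each leg adapts to its own belt

# a single shared controller: one estimate applied to both legs
shared = 0.0
for trial in range(50):
    error = sum(belt_speed[leg] - shared for leg in belt_speed) / 2.0
    shared += lr * error                      # can only track the average (here, zero)

print("independent controllers:", {k: round(v, 2) for k, v in estimate.items()})
print("shared controller:", round(shared, 2))
```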

A figure from the paper mentioned above

On some level this is unsurprising; clearly it is possible to move one leg independently of the other. However, my assumption above is not completely without basis; it was Charles Sherrington who won a Nobel Prize for work showing that cats can execute a walking motion using only the neurons in their spinal cord. In a series of experiments that is somewhat troubling to consider, he demonstrated that cats with a severed spinal cord (no communication between spinal cord and brain), whose weight is mostly supported while their feet rest on a moving treadmill, can go for a rudimentary stroll. These cats were in effect walking reflexively.

What is intriguing about this whole situation is the degree to which our consciousness has access to what is going on in our neurons. Obviously we don’t have to determine individual muscle tensions or relationships between contraction and flexion when we move; instead we have ideas like “kick the ball” or “walk up the stairs” and our subconscious translates those into motor output. But could it be possible to gain access to that information? Highly trained athletes and others who must be extremely in tune with their bodies probably have a much greater degree of control, but they probably never feel a motor neuron’s spike rate change as they command it to apply more force. Thus, at some level, we simply do not have conscious control of our bodies.

This is a bit unsettling, but it is also exciting because it means that we really must reframe the way we think about the relationship between minds and brains. At least, I must not take for granted that my consciousness is a total reflection of what is happening in my brain.

References

1. Choi JT, Bastian AJ. (2007) Adaptation reveals independent control networks for human walking. Nat Neurosci. 10(8):1055-62.

On Working Out The Details

Those interested in figuring out how the brain works have collected data using techniques that probe progressively smaller structures: from observing the outward behavior of the whole animal, to the voltages generated by areas of the brain using EEG, down to the activity of single cells using electrophysiology. The last is the term for any measurement of the electrical responses (more often voltage, but current as well) of individual neurons, and it is the sine qua non of studying nervous system function in that it reflects the millisecond-by-millisecond goings-on of the brain’s most basic units.

Even at this level, however, there remains an essential ambiguity: while the spiking of the neurons in your eye definitely represents sensory information, and the voltage in motor neurons connected to the muscles in your arm reflects motor output, the electrical bustle of units in so-called “association areas” of the cerebral cortex is much more difficult to categorize in such unequivocal fashion. These regions respond, to varying degrees, to stimulation in many sensory modalities and during motor output.

The posterior parietal cortex (PPC) is one example which has been extensively examined. If a monkey is given a simple task which connects sensory input to motor output – say, reaching for or directing its gaze towards a target – the neurons in this area will light up. But it is unclear whether their vigor is a response to the stimulus, or is responsible for the invocation of the movement1.

At this point, it may be prudent to ask: couldn’t they be doing both? Yes. However, none of the single-cell electrophysiology experiments aimed at the PPC to date have been specific enough to discriminate among the possibilities that it is sensory, motor, or truly a combination of the two.

Enter Richard Andersen, a Caltech researcher who has been working in the field of motor planning for quite some time. His view stands in opposition to that of Columbia’s Mickey Goldberg, who holds that the PPC is about attention, not intention.

In Dr. Andersen’s recent work, meant to be the last word in their ongoing debate, and published in Neuron, he uses a new twist on old experiments: allowing the monkey, from whose brain the data are being gathered, to freely decide what type of movement he wants to execute2. The stimuli are always the same, either a red or green ball, and the monkey can choose whether to reach out and touch it, or simply move his gaze towards it (in the reaching case he must keep his gaze elsewhere). The intriguing finding is that there are cells which are selective for the type of movement but not the type of stimulus. It is in this sense that Dr. Andersen thinks he has demonstrated the motoric nature of the PPC.
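As a way of picturing what “selective for movement type but not stimulus type” could mean quantitatively, here is a hedged sketch that compares a hypothetical cell's trial-averaged firing rates across conditions with a simple contrast index. The rates and the index are invented for illustration and are not the analysis from the paper.

```python
import numpy as np

# Hypothetical trial-averaged firing rates (spikes/s) for one cell,
# indexed by (movement type, stimulus colour). Numbers are made up.
rates = {
    ("reach",   "red"):   42.0,
    ("reach",   "green"): 40.0,
    ("saccade", "red"):   12.0,
    ("saccade", "green"): 13.0,
}

def selectivity(rates, factor):
    """Contrast index in [0, 1]: |meanA - meanB| / (meanA + meanB) for one factor."""
    if factor == "movement":
        a = np.mean([r for (m, s), r in rates.items() if m == "reach"])
        b = np.mean([r for (m, s), r in rates.items() if m == "saccade"])
    else:  # "stimulus"
        a = np.mean([r for (m, s), r in rates.items() if s == "red"])
        b = np.mean([r for (m, s), r in rates.items() if s == "green"])
    return abs(a - b) / (a + b)

print("movement selectivity:", round(selectivity(rates, "movement"), 2))  # high
print("stimulus selectivity:", round(selectivity(rates, "stimulus"), 2))  # near zero
```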

The beauty of Dr. Andersen’s experiment is that this technique has been around for, say, 40 years now, and yet we are still able to learn much by savvy applications of it. Human ingenuity in experimental design has always been the primary driver of scientific discovery, for what good are tools if one doesn’t know how to use them? Don’t get me wrong, technological advancements are essential to scientific progress, but it is simply astounding how simple tweaks on old ideas can open up new avenues of research.

References

1. Colby, CL & Goldberg, ME (1999) Space and Attention in Parietal Cortex. Annu. Rev. Neurosci. 22:319–349

2. Cui H, & Andersen, RA (2007) Posterior Parietal Cortex Encodes Autonomously Selected Motor Plans. Neuron, V56:552-559

On Believing Yourself

I have always been quite troubled by the fact that I can remember things that never happened. If I am confident that a childhood friend’s name was Paul when it was actually Roger, how am I to be certain that I correctly remember how to perform the act of addition, or my distaste for the texture of most mushrooms?

Perhaps even more troubling is the fact that studies devoted to exploring the interplay between confidence and memory have found that, in general, the memories we’re most confident in are most likely to be authentic1 (see figure, below).

The paradox is fairly clear: how can we be confident in a false memory, if confidence correlates with accuracy?

The authors of a recent study suggest, and go some way towards demonstrating, that two distinct mechanisms are involved: one at work when we express confidence in veridical memories, and another when we express confidence in false recollections2.

Specifically, these authors use fMRI, and a categorized word recall task, to demonstrate that one set of brain areas is active when we’re sure of veracious retrospection and another when we’re confident in specious anamnesis. Based on the anatomy of the active sites revealed by the scan, the researchers speculate that the latter is due to the familiarity of certain events (see figure, below).

As a final note, the two areas identified in this study are quite far apart in brain terms, once again pointing to the notion that memory is physiologically and anatomically diffuse. So when you can’t remember your first pet’s name, don’t get too worried; your brain is a big place to search.

References

1. Lindsay DS, Read JD, Sharma K (1998) Accuracy and confidence in person identification: the relationship is strong when witnessing conditions vary widely. Psychol Sci 9:215–218.
2. Kim H, & Cabeza R (2007) Trusting Our Memories: Dissociating the Neural Correlates of Confidence in Veridical versus Illusory Memories. J. Neurosci 27(45):12190-12197

On Emotion and Memory

from reference 1

Why is it that experiences imbued with emotion crystallize into easily recollected memories? Our memory is quite limited, so we need a system for deciding what to remember and what to forget. Emotions may thus act as a filter, marking certain experiences as being of particular importance. In this way, we have templates of states we felt were positive or negative, examples of the consequences of our behaviors, with significantly happy or sad outcomes featuring as the most poignant reminders.

None of this gestalt psychological explanation is informative as to the neurophysiological mechanisms underlying the phenomenon. However, some recent research does address what mechanisms may be at work at the molecular level. Joseph Ledoux and Robert Malinow have been working on memory for quite a while, and they are the two most senior authors on a paper published in Cell concerning AMPA receptors, emotion, and memory (ref. 1). AMPA receptors are one of the major glutamatergic receptors in the brain. Glutamate is the neurotransmitter they recognize, and it is the major excitatory neurotransmitter in the brain. So if one neuron wants to send a signal to turn on another, it will almost invariably release glutamate at its axon terminal, and that glutamate will likely be recognized by an AMPA receptor on the post-synaptic cell (the target of the excitation). The major finding of this paper is that norepinephrine (also known as noradrenaline) facilitates the incorporation of AMPA receptors into the membranes of cells during periods of high activity.

It is commonly known that noradrenaline is released during times of emotional distress and happiness; these researchers have found that one specific effect of the noradrenaline is to increase the number of receptors being incorporated into a synapse, again during periods of high activity. Let’s imagine a scenario where this might apply. An animal is being chased by a predator. His motor planning and execution areas are blasting away action potentials; they’re highly activated. He makes a decision about some route to take during his escape, activating a specific subset of pathways. It is these connections that will be strengthened by the noradrenaline. Because more receptors are being integrated into the synapses in these circuits, they will be more likely to be activated the next time he is in the same situation. In this sense, he has formed a memory of the experience which is modulated by the amount of noradrenaline, and by extension the intensity of the emotion experienced.
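Here is a minimal sketch of that logic, and only of the logic: a synaptic weight that strengthens only when pre- and post-synaptic activity coincide, with the size of the change scaled by a "noradrenaline" factor standing in for emotional arousal. All names and numbers are invented; this is not the model or mechanism reported in the paper.

```python
# Toy activity-gated plasticity rule whose magnitude is scaled by arousal.
# Everything here is illustrative, not the authors' model.

def update_synapse(weight, pre_active, post_active, noradrenaline, lr=0.1):
    """Strengthen a synapse only when pre and post are co-active,
    with the size of the change scaled by the arousal signal."""
    if pre_active and post_active:
        weight += lr * noradrenaline      # more arousal -> bigger change
    return weight

w_calm, w_aroused = 1.0, 1.0
for _ in range(10):                        # ten co-active episodes
    w_calm = update_synapse(w_calm, True, True, noradrenaline=0.2)
    w_aroused = update_synapse(w_aroused, True, True, noradrenaline=1.0)

print("synapse strengthened under low arousal: ", round(w_calm, 2))     # 1.2
print("synapse strengthened under high arousal:", round(w_aroused, 2))  # 2.0
```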

This is essentially what these researchers observed. While it is impossible to directly modulate the emotional state of the animal, they can apply norepinephrine during a learning task. What they found was that animals that received larger doses of applied norepinephrine were more likely to remember the task. The figure at the top of this piece illustrates the finding. The authors compared genetically altered (GA) mice – which lack the effects of norepinephrine on AMPA receptor trafficking – to “normal” or wild-type (WT) mice. The graph on the left displays the responses of the WT mice, with the GA mice on the right. The key is that two data points are significantly different (marked with an asterisk) on the left, but not on the right.

While this work doesn’t do much to help us understand the biopsychological basis of Proust, it does illuminate one more minuscule thread in the web of conscious experience.

References

1. Hailan Hu, Eleonore Rea, Kogo Takamiya, Myoung-Goo Kang, Joseph Ledoux, Richard L. Huganir and Roberto Malinow, (2007) Emotion Enhances Learning via Norepinephrine Regulation of AMPA-Receptor Trafficking, Cell 131,1, pp 160-173 [doi:10.1016/j.cell.2007.09.017]

The Look of Touch

Consciousness feels whole. That is to say that the various sensory experiences that our brains process in parallel feel like one coherent thing, our own individual consciousness. However, the electrical activity generated by different sensory experiences is largely segregated to different parts of the brain, and it is possible to turn them off selectively. For instance, form and motion are represented by different parts of the visual cortex. By using a technique such as TMS (transcranial magnetic stimulation), it would be possible to eliminate sensations of motion in an image while retaining static vision. This would no doubt be a very strange state to be in. There are also many pathologies, induced by head injury or otherwise, that produce abnormal combinations of sensory data and qualities of consciousness in general (Dr. Oliver Sacks has written extensively on this topic).

On the other hand, different cortical sensory areas are highly connected to each other; this is at least partly why our sensations feel so unitary. This means that simply hearing something move or feeling the touch (ref. 1) of something moving can produce measurable responses in the parts of the visual cortex most sensitive to movement. Some recent research has gone further than this since, as the authors of this work (ref. 2) point out, it is no surprise that the feeling of something moving can elicit such a reaction, because merely imagining motion can have the same effect. These experiments demonstrate that a highly specialized area of the visual cortex called MST is sensitive even to “vibrotactile” stimuli: those incongruent with motion.

Because consciousness is often thought of as an emergent property of our massively interconnected system of neurons, understanding interactions between parts of the brain at many different scales (from single neurons to large collections or areas as in this case) is integral to understanding how this efflorescence works. The work highlighted here is one step in that direction.

References

  1. Hagen MC, Franzen O, McGlone F, Essick G, Dancer C, Pardo JV (2002) Tactile motion activates the human middle temporal/V5 (MT/V5) complex. Eur. J. Neurosci. 16:957–964.
  2. Beauchamp MS, Yasar NE, Kishan N, Ro T. (2007) Human MST but not MT responds to tactile stimulation. J. Neurosci. 27(31):8261-8267

The Feel of Space

(left:eRiK, right:me)

That’s my friend eRiK. My mother emphatically titles him “eRiK the Dane.” eRiK and I studied Physics and Math together as undergraduates at The University of California at Berkeley. We share a great love of understanding, and whenever something’s puzzling me, from Set Theory to counting cards in BlackJack, I turn to him.

He’s been singing the praises of the show RadioLab on NPR lately, and he was particularly struck by a comment made by the well-known Columbia University physicist Brian Greene. Dr. Greene was discussing the expansion of the universe, and, this is hearsay now, he said that there is no center to the expansion. No origin, no point away from which things are expanding. This is, as eRiK said, unsettling. If you were blowing a bubble with chewing gum, the bubble swelling from your mouth, the rate of expansion would be greatest at your lips, where the mass of sticky stuff was being stretched into a sheet. If you were pulling a rubber band with your index fingers, the rate of expansion would be highest near your digits and lower elsewhere. The point I’m trying to get across is that it’s difficult to think of examples of isotropic expansion of objects, meaning spatially and directionally uniform expansion. A pizza pie made from a lump of dough is expanded into a sheet in a roughly constant spatial manner, but the spread is not directionally uniform; it is expanded radially, out from the center. Dr. Greene’s comment means that there is no direction or origin associated with the universe’s growth. As astrophysicists and astronomers watch stars getting farther away from each other, it appears to be happening in the same way everywhere. At the very least this means that the universe itself must behave differently from every object in it.

There is one example that is a bit comforting: imagine that you lived on a line, confined to one dimension. Further, let’s say this line was connected at the ends, an endless hoop of 1D existence. If “something” caused this circle to grow radially out from its center (which wouldn’t be a part of the line itself, of course), then to those living on the line it would appear as if everything was expanding isotropically. We could extend this idea to a 4-dimensional space-time as a hoop embedded in a higher-dimensional space, but this is pure speculation.
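To make the hoop analogy concrete, here is a short numerical sketch with made-up positions: when the radius of the circle grows, every separation measured along the line grows by the same factor, no matter which inhabitant does the measuring, and the center of that growth lies off the line entirely.

```python
import numpy as np

# A few "galaxies" at fixed angles on a circle; the "something" grows the radius.
# All values are invented for illustration.
angles = np.array([0.1, 1.3, 2.8, 4.0, 5.5])   # angular positions (radians)
r_before, r_after = 1.0, 1.5                    # radius before and after the growth

def arc_separations(radius):
    """Arc-length distance from the first point to every other point along the hoop."""
    dtheta = np.abs(angles - angles[0])
    dtheta = np.minimum(dtheta, 2 * np.pi - dtheta)   # shorter way around the hoop
    return radius * dtheta

before = arc_separations(r_before)
after = arc_separations(r_after)
print("separation ratios:", np.round(after[1:] / before[1:], 3))  # all equal (1.5)
# Every distance measured within the 1D world grows by the same factor,
# so the expansion looks centerless and uniform to anyone living on the line.
```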

This brings us to the topic that motivated the title of this post. What would it feel like to come to the edge of a universe? I don’t know whether such a boundary exists, but we can certainly conceptualize a space like our own with well defined boundaries. This is not like being in a room with boundaries. The repulsive forces that we experience as a result of encountering walls are just that: fields of force. A boundary of space must be very different. There wouldn’t necessarily be any repulsive force; I imagine it more like asking a person to reach into the 14th dimension or backward in time. It doesn’t even make sense to try to conceptualize it. There is simply nothing to try or do, no direction to move in, no place to point to, nothing at all. This is the closest thing I can imagine to arriving at a boundary of space. Not only would there be nothing there, our perceptual abilities would probably be quite stymied by such a thing. Again I have found myself on the slippery slope of speculation, and I invite others to weigh in on this. I’m not sure that anybody has the required personal experience to comment, but I am sure that somebody could, in the great tradition of doing so in physics, suggest a thought experiment which would shed some light on the subject.