Category Archives: vision

Dynamic Color Stimuli

I’m currently running human psychophysics experiments using dynamic color stimuli that my PI Rich Krauzlis and I developed. The stimuli were inspired by a variant of Random Dot Kinematograms (RDKs), which were initially popularized by Bill Newsome. The variant was used by Rich and his former post-doc Alex Zénon in work that resulted in this paper. The underlying motivation for both is the use of stimuli that lend themselves to the application of Signal Detection Theory (SDT). Here’s a (somewhat low quality) example of the color version:

What’s going on here is that each check’s color is drawn from a Gaussian distribution in the DKL color plane. Further, each check lives for only 8 frames; once its lifetime is over, it is recolored with a new value drawn from the underlying distribution. Finally, halfway through the video, the mean saturation of the distribution increases.

In our experiments, we present these stimuli in the periphery while subjects fixate a spot at the center of the screen, and we ask them to report the changes in saturation. We vary the magnitude of the saturation change and measure how this affects their ability to detect it. Here’s the code I used to generate the sequence of images in MATLAB (note that it is optimized for a particular display; in a later post I’ll explain how to modify it for other displays).
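The MATLAB listing itself isn’t reproduced in this archive, but a minimal sketch of the same algorithm, here in Python with placeholder parameters (grid size, frame count, Gaussian spread) and the display-specific DKL-to-RGB conversion omitted, might look like this:

```python
import numpy as np

# Minimal sketch of the stimulus algorithm described above: an
# illustration, not the original MATLAB listing. Colors are drawn from
# a Gaussian in a 2-D DKL-like chromatic plane; the DKL-to-RGB
# conversion (which must be calibrated per display) is omitted.

def dkl_sample(mean_sat, sigma, rng):
    """One chromaticity from a Gaussian in the DKL plane, centered
    mean_sat away from the white point along an assumed hue axis."""
    return np.array([mean_sat, 0.0]) + rng.normal(0.0, sigma, size=2)

def make_movie(n_frames=120, grid=(10, 10), lifetime=8,
               base_sat=0.1, step_sat=0.2, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    colors = np.zeros(grid + (2,))
    age = rng.integers(1, lifetime + 1, size=grid)  # staggered lifetimes
    for i in range(grid[0]):                        # initial coloring
        for j in range(grid[1]):
            colors[i, j] = dkl_sample(base_sat, sigma, rng)
    frames = []
    for t in range(n_frames):
        # mean saturation steps up halfway through the movie
        sat = base_sat if t < n_frames // 2 else step_sat
        age -= 1
        for i, j in zip(*np.nonzero(age == 0)):     # expired checks get recolored
            colors[i, j] = dkl_sample(sat, sigma, rng)
            age[i, j] = lifetime
        frames.append(colors.copy())
    return frames  # each frame is a grid of DKL chromaticities

frames = make_movie()
```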

On The Brain In Your Face

Figure 1 (from reference 1) showing the employed stimuli (a) and a schematic of the model they used (b).

Commonly held wisdom says that processing of visual features such as lines, forms, and motion is limited to higher cortical areas (for example, the middle temporal area, MT). Recent research shows, however, that the retina itself can extract motion signals, underscoring the subtle computational prowess of the bit of your brain that lives in your eye.

References:
1. Baccus, S. A., Olveczky, B. P., Manu, M. & Meister, M. (2008) A Retinal Circuit That Computes Object Motion. The Journal of Neuroscience, 28(27):6807-6817

On Time Perception & Sleep

Courbet’s Sleep

Why is it that our perception of the passage of time changes around and during periods of sleep? While it is known that there are diurnal variations in time perception1, and that insomniacs have irregular perception of the duration of sleep2, this basic question remains unanswered.

In an article concerning regular and pathological conscious perception of time, Oliver Sacks speculates that “visual perception might in a very real way be analogous to cinematography, taking in the visual environment in brief, instantaneous, static frames, or ‘stills,’ and then, under normal conditions, fusing these to give visual awareness its usual movements and continuity”3.

This suggests the possibility that our perception of time is a function of our ability to impose a sense of continuity on our own perceptions. Thus, in the absence of external stimuli for this continuity system to act on, we have no mechanism to calculate the passage of time, and instead estimate this variable in a noisy, post-hoc manner.

In any case, the fact that it’s possible to drastically misestimate how long a bout of sleep has lasted implies that there is something fundamental about the state of consciousness (wakefulness) and the judgment of time which remains to be understood.

References:

1. Pöppel E, Giedke H. (1970) Diurnal variation of time perception. Psychol Forsch. 34(2):182-98.
2. Knab B, Engel RR. (1988) Perception of waking and sleeping: possible implications for the evaluation of insomnia. Sleep. 11(3):265-72.
3. Sacks, O. (2004) In the River of Consciousness. The New York Review of Books. 51(1).

On Recognizing Faces

An example of the constructed faces
used by HR Wilson in his research.
(b) is derived from (a).

Although faces look extremely different depending on the angle from which one views them, humans have no problem identifying them from almost any orientation. Despite our facility with this challenge, it is an extremely difficult problem to solve with computers. Given what we know about how vision is hierarchically organized, progressively building up complex forms starting from small line segments, this points to a formidable computational ability that our brains possess.

Hugh R. Wilson spoke on this issue on Monday (6/16/2008) at the SUNY College of Optometry on 42nd Street. He showed that humans compute facial orientation, as suggested by the fact that a subject’s sense of this orientation can be adapted by a brief (5 second) exposure to a face at a given angle. In other words, before adaptation, humans can fairly accurately identify facial orientations, and this sense is shifted by the adaptation procedure. This computed orientation is probably used to estimate what a face would look like from the front, thus making identification possible.

Of course this psychophysical experiment is only suggestive of such a mechanism, but it is an intriguing possibility.

On Visual Working Memory

Suppose you’re preparing dinner and you realize that you’d like to add some more tomatoes, onions, and mushrooms to the salad that you’re making. You have no trouble storing images of these items in your head and locating them in the fridge. Suppose instead that you’re eating dinner and you decide that you absolutely must have another bottle of red wine for your guests, but you really want the Côtes du Rhône, and not the boring Merlot. Again, you can quite easily go and grab the bottle based simply on its label, but in this latter situation the amount of visual information, the detail that must be stored for comparison upon reaching your liquor cabinet, is far greater.

In both cases, you’re employing some form of visual working memory. This form of memory is thought to be highly plastic, short-term storage. It is possible, however, that it might take different forms: (1) a set of fixed “slots,” each having a discrete capacity, or (2) a set of dynamic slots that can be tailored to the specific use.

The fixed-slots concept is similar to the pictures taken by current digital cameras: the images always contain the same number of (mega)pixels, no more, no less. The dynamic-slots concept is more akin to being able to vary the number of pixels in an image depending on demand. For instance, if I were taking a picture of a pure red wall, I wouldn’t need more than 1 pixel, because every part of the image would be identical. However, if you were taking a picture of a Jackson Pollock painting, you might want to combine several digital camera images to get a really accurate rendering of the details held therein, a more flexible allocation of working-memory resources.

The scenarios I’ve highlighted above don’t really serve to answer the question of which form of working memory our brains employ because in both cases, we can imagine either form working just fine (as long as we accept the notion that the complexity of the wine bottle label is within the capacity limits of the fixed-slots). However, a recent paper published in Nature claims to have employed a savvy enough experimental approach to disentangle the two possibilities.

The research, published by Steven J. Luck (who has done quite a bit of excellent work in the field of visual neuroscience in general) and a colleague, Weiwei Zhang, is sadly quite brief, having clearly been edited for length at Nature. In fact, this curtailed version is quite difficult to follow given the subtlety of the authors’ approach, but their essential point comes through.

The approach they take is a common one: measure human performance on a variety of working-memory tasks and attempt to fit the data to models with different assumptions. In this paper, they compare how well the data are fit by a fixed-slots model versus a dynamic-slots model. Although they conclude that the dynamic-slots model simply doesn’t explain the data as well, and they thus discard the notion of flexible resource allocation, one of the final sentences betrays what they must admit: “This model does not completely eliminate the concept of resources, because the slots themselves are a type of resource.” In other words, it is possible to allocate multiple slots to the same item, the object to be held in memory. However, it does appear that the slots themselves are of a fixed size.
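To give a flavor of the model-fitting approach, here is a rough Python sketch of the kind of mixture model commonly fit to continuous-report errors (my own toy version with made-up numbers, not the authors’ analysis code): every response is treated either as an in-memory report with some precision, or as a blind guess.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

# Sketch of a slots-style mixture model fit (an illustration, not the
# paper's code): each response error (in radians) is either an
# in-memory report with von Mises precision kappa, or a random guess.

def neg_log_lik(params, errors):
    p_mem, kappa = params
    like = (p_mem * vonmises.pdf(errors, kappa)
            + (1 - p_mem) / (2 * np.pi))        # uniform guessing floor
    return -np.sum(np.log(like))

def fit(errors):
    res = minimize(neg_log_lik, x0=[0.7, 5.0], args=(errors,),
                   bounds=[(1e-3, 1 - 1e-3), (1e-2, 100.0)])
    return res.x  # (probability item was in memory, precision)

# Fake data: 60% of trials remembered with kappa = 8, 40% pure guesses.
rng = np.random.default_rng(1)
remembered = rng.vonmises(0.0, 8.0, size=600)
guesses = rng.uniform(-np.pi, np.pi, size=400)
p_mem, kappa = fit(np.concatenate([remembered, guesses]))
print(f"p_mem ~ {p_mem:.2f}, kappa ~ {kappa:.1f}")
```

Roughly speaking, a fixed-slots account predicts that the fitted precision should plateau as the number of items grows while the in-memory probability falls, whereas a flexible-resource account predicts that precision should keep degrading.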

This result places limits on the possible anatomical underpinnings of working memory, and it makes predictions about how one might expect human beings to perform in other working-memory tasks. It will be interesting to see whether the conclusions these authors reached are borne out in future work.

On Reading Minds

Image from reference 1.

Who hasn’t had the desire to see through another’s eyes? Some researchers at Berkeley think they’ve taken the first steps towards achieving such a goal.

Jack L. Gallant and his lab-mates have managed the feat of decoding human fMRI measurements in such a way that they can infer the image that generated the recorded neuronal activity1. fMRI as a technique assesses brain activation indirectly, through blood flow. The degree of activation is clearly in some way related to the BOLD (blood-oxygen-level dependent) signal obtained, but the technique is a bit crude in the sense that it isn’t very spatially or temporally precise2: the data can pinpoint activity only to within a few cubic millimeters, and within a time window of about 6 seconds.

The paper detailing their results, appearing in Nature, describes how this remarkable trick was accomplished. First, the researchers collected fMRI signals from subjects viewing a wide variety of natural images. They correlated this information with the pixels in the pictures themselves, which allowed them to construct a model that predicts the pattern of blood flow one might observe with fMRI in response to an arbitrary image. Once this was done, they essentially turned the model on its head so that they could ascertain the viewed image from the fMRI data. In fact, at present it’s quite a brute-force approach: the scientist must have a set of candidate images, which are fed into the model to generate synthetic fMRI data to compare with the measured signals. However, it is possible that models of this form will eventually be sophisticated enough to avoid this.
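As a rough sketch of that identification step (my own toy version: a stand-in linear encoding model with random features and made-up sizes, not the paper’s receptive-field model):

```python
import numpy as np

# Toy sketch of image identification: a linear encoding model W
# predicts each voxel's response from image features; the candidate
# image whose predicted pattern best correlates with the measured
# pattern is declared the viewed image.

rng = np.random.default_rng(2)
n_voxels, n_features, n_candidates = 500, 100, 120

W = rng.normal(size=(n_voxels, n_features))               # fitted model (assumed)
candidates = rng.normal(size=(n_candidates, n_features))  # candidate image features

true_idx = 17
measured = W @ candidates[true_idx] + rng.normal(scale=5.0, size=n_voxels)

def zscore(a):
    """Normalize along the voxel axis so dot products become correlations."""
    return (a - a.mean(axis=-1, keepdims=True)) / a.std(axis=-1, keepdims=True)

predicted = candidates @ W.T                 # synthetic fMRI pattern per candidate
scores = zscore(predicted) @ zscore(measured) / n_voxels
print("identified image:", int(np.argmax(scores)), "(true:", true_idx, ")")
```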

If these techniques could, for example, be extended to other forms of mental reckoning, we might some day be able to see into the thoughts of those who are unable to communicate. Regardless of the practical applications, and however far we are from sneaking a peek at the richly textured visual experience we each have, this type of savvy utilization of data and modeling techniques is exciting because it tickles the basic desire we all have to know another’s being.

Notes/References:

1. Kay KN, Naselaris T, Prenger RJ, Gallant JL. (2008) Identifying natural images from human brain activity. Nature 452:352-355.
2. Other technologies sacrifice the large volume of brain space that fMRI can cover for spatial precision (over 10,000 times better, down to single cells) and temporal precision (over 100,000 times better, though that much is not necessary).

On Active Perception

Perception is more than the passive response to stimuli. When you focus your visual attention on something, it looks different. This is not about gaze, the orientation of the most sensitive part of your retina (the fovea); here I mean that somewhat intangible ability we have to devote our mental processing to an object in, or area of, visual space without directly regarding it. Experimental data in the form of human verbal reports and the activity of single cells in the brains of monkeys demonstrate that this is quite concrete: visual attention makes you, and the cells in your brain, better able to distinguish a variety of properties such as color, the angle of lines, and small distances1.

This viewpoint, observation as both active and reflexive, highlights a dichotomy present in debates concerning brain function in general: the distinction between seeing neural excitation as occurring “bottom-up,” driven by the responses to incoming stimuli, versus “top-down,” controlled by higher-level cognitive (dare I say conscious) processes.

Figure 1

For example, there is a Dalmatian (somewhat) hidden in figure 1. Without knowledge of its presence, it is perhaps more natural to simply see a collection of dots, but once you’ve found the dog, it’s impossible to miss. This demonstrates that your perception of the object is not purely bottom-up: the image impinging on your photoreceptors alone doesn’t necessarily lead to the experience of seeing the object. Other examples include seeing faces in clouds, a result of our overactive face-recognition areas, or hearing words in the sounds produced by a gaggle of geese. These are both top-down examples: your existing perceptive mechanisms imposing themselves on the incoming information.

Charles Gilbert, a professor of neurobiology at Rockefeller University, has been researching visual perception by recording from single neurons in the brains of monkeys for quite some time. A recent paper from his laboratory aimed to quantify the role of attention in the perception and cortical processing of a specific visual stimulus: long contours made of small individual line segments3. Figure 2 contains examples of these contours, made from more (A) to fewer (C) subsegments.

Figure 2

There are single neurons in your visual cortex (area V2) that become active when exposed to these larger, constructed contours in certain parts of the visual field. This response is built up from those of cells (in area V1) that are excited by exposure to small, continuous lines in particular places, like the ones making up the larger edges above. The reactions of these cells are in turn shaped from the combination of many small, pixel-like bits of information coming from the retina. This is the hierarchy of the visual system: the responses of neurons that represent progressively more complex objects are formed from earlier, simpler patterns, until finally we end up with neurons that respond best to images of your grandmother, or Bill Clinton, or Jennifer Aniston2.

This hierarchy is important to the theme of top-down versus bottom-up because it informs us as to what is at the top and what is at the bottom. It also allows us to construct a simple example that we can apply to more complicated cases.

Figure 3

The shapes in figure 3 are called Kanizsa figures, though many scientists simply refer to them as the pac-men they bear more than a passing resemblance to. It is hard to ignore the triangle that seems to be formed by this particular configuration of polygons, despite the fact that each edge is missing a large segment. What’s happening here is that enough cells in V1 are turned on, collinearly, by the partial edge of the invisible triangle to excite the V2 cell that would respond to a whole triangle edge in the same location. This is not the whole story, however: it turns out that in addition to the bottom-up connections mediating the hierarchy I described before, there are also extensive feedback projections from higher areas like V2 to lower ones like V1. Thus, when the cell in V2 relays information up to higher areas that there appears to be a line spanning two of the Kanizsa figures, it also informs all of the V1 cells that might be making up that line, including both the ones which are in this case actually being stimulated and the interstitial ones where there is no edge to detect. The cells not receiving any actual visual stimulus are activated, to a lesser extent than they might otherwise be, by the feedback, or top-down, signal.
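A toy model makes the arithmetic concrete (this is my own illustration, not a model from the paper):

```python
import numpy as np

# Toy model of the feedforward/feedback loop described above. A "V2"
# contour unit pools collinear "V1" edge units; if it crosses
# threshold, feedback weakly boosts every V1 unit along the contour,
# including the unstimulated, interstitial ones.

n_v1 = 7
# Kanizsa-style edge: the ends carry real contrast, the middle is blank.
stimulus = np.array([1., 1., 0., 0., 0., 1., 1.])

v1 = stimulus.copy()          # feedforward V1 responses
v2 = v1.sum()                 # V2 unit pools its collinear V1 inputs

threshold = 3.0               # assumed: enough collinear drive means "edge present"
if v2 > threshold:
    feedback_gain = 0.4       # assumed top-down gain
    v1 += feedback_gain * v2 / n_v1   # weak excitation along the whole contour

print(np.round(v1, 2))        # interstitial units are now weakly active:
                              # a toy version of the illusory contour
```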

This then is the paradigm for top-down perception. Something like this is most likely happening in the Dalmatian example as well, with some high-level neuron that responds to dogs being activated and sending feedback signals down to all of the neurons that would normally activate the percept, facilitating the “segmentation” of the dog from the background.

What Gilbert and his colleagues did in order to more thoroughly understand this process was to engage monkeys in a task related to the perception of these incomplete contours while measuring the responses of the neurons in their area V1.

The monkeys were presented with two images like the ones in figure 2 simultaneously, one in which some (1-9) of the line segments were oriented to form a contour, and one in which their angles were random. Their task was to simply look at the one with the contour. Maybe because of something intrinsic to their visual system, or maybe just because the monkeys didn’t understand what the experimenters wanted them to do, they required extensive training before they became proficient at this game.

Figure 4

Before the monkeys became experts at performing this chore, the cells in V1 that respond to the smaller constituent segments responded with a transient increase in their activity (figure 4, left). However, once the animals had become skilled at the task, the transient was followed by a prolonged bout of activity whose amplitude was proportional to the number of collinear segments making up the larger contour (figure 4, right). Intriguingly, even after the training, if the monkeys were anesthetized and exposed to the images, the responses again showed only a transient increase and no difference between the numbers of line segments making up the contour. What this suggests is that the engaged, top-down process of performing the task and attending to the stimuli is what generated the difference in the responses: thus, active perception.

The idea of active perception brings to mind a question: what aspects of sensory experience are subject to this kind of cognitive control, and what are its limitations? Extreme cases are somewhat helpful: you can’t see a circle when presented with a square, although every circle you’ve ever seen on a computer monitor or television is simply a lot of pixels, and thus not really a circle. This sort of fuzziness is certainly present in somatosensation (touch). An example: if I arrange a situation where your hand is hidden, but there is a rubber hand approximately where yours might be, I can (with a bit of “training”) evoke a somatosensory experience in you by touching the rubber hand. Multiple pairings of poking your hidden hand while simultaneously having you watch me poke the rubber stand-in will lead to the feeling that any touch of the rubber hand is a touch of your hand. Further, I have certainly had the experience of enjoying the taste of something before knowing what it was, implying that mere knowledge of the thing to be tasted can modify the experience of tasting it.

There must be some evolutionary/developmental aspect to all of this. At least in the vision example, if our evolution didn’t provide us with feedback connections like the ones I described above, there would be no anatomy to mediate the top-down control. Similarly, if our development didn’t equip our visual systems with automatic detection systems for faces and lines and circles, there would be no high-level percept to feed back down to lower systems, against which our brains might try to favorably compare incoming data.

The question then becomes, to what extent is this effect mediated by sheer brain circuitry and to what extent by the nebulous mystery that is conscious experience? I would have to argue that our brain circuitry is the only basis for our conscious experience, and thus any effect that we might attribute nonspecifically to our mental being represents our lack of knowledge about the connectivity in our skulls. However, my mother taught me that it is a great thing to be wrong, because that means you’ve got something to learn. So in any case, I look forward to deeper unravelling of these phenomena.

References

1. Maunsell JH, Treue S. (2006) Feature-based attention in visual cortex. Trends Neurosci. 29(6):317-22.
2. Quiroga RQ, Reddy L, Kreiman G, Koch C, Fried I. (2005) Invariant visual representation by single neurons in the human brain. Nature 435(7045):1102-7.
3. Li W, Piëch V, Gilbert CD. (2008) Learning to link visual contours. Neuron 57(3):442-51.

The Look of Touch

Consciousness feels whole. That is to say that the various sensory experiences our brains process in parallel feel like one coherent thing, our own individual consciousness. However, the electrical activity generated by different sensory experiences is largely segregated to different parts of the brain, and it is possible to turn them off selectively. For instance, form and motion are represented by different parts of the visual cortex. Using a technique such as TMS (transcranial magnetic stimulation), it would be possible to eliminate sensations of motion in an image while retaining static vision. This would no doubt be a very strange state to be in. There are also many pathologies, induced by head injury or otherwise, that produce abnormal combinations of sensory data and qualities of consciousness in general (Dr. Oliver Sacks has written extensively on this topic).

On the other hand, different cortical sensory areas are highly connected to each other; this is at least partly why our sensations feel so unitary. It means that simply hearing something move, or feeling the touch (ref. 1) of something moving, can produce measurable responses in the parts of the visual cortex most sensitive to movement. Some recent research has gone farther than this since, as the authors of this work (ref. 2) point out, it is no surprise that the feeling of something moving can elicit such a reaction, because merely imagining motion can have the same effect. These experiments demonstrate that a highly specialized area of the visual cortex called MST is sensitive to “vibrotactile” stimuli: ones that carry no motion signal at all.

Because consciousness is often thought of as an emergent property of our massively interconnected system of neurons, understanding interactions between parts of the brain at many different scales (from single neurons to large collections or areas as in this case) is integral to understanding how this efflorescence works. The work highlighted here is one step in that direction.

References

  1. Hagen MC, Franzen O, McGlone F, Essick G, Dancer C, Pardo JV (2002) Tactile motion activates the human middle temporal/V5 (MT/V5) complex. Eur. J. Neurosci. 16:957–964.
  2. Beauchamp MS, Yasar NE, Kishan N, Ro T. (2007) Human MST but not MT responds to tactile stimulation. J. Neurosci. 27(31):8261-8267

Miniature Eye Movements

Your brain doesn’t care about brightness; it likes contrast. In fact, by the time signals generated by light impinging on your retina propagate through its 10 layers of cells, brightness information has largely been discarded in favor of contrast, both spatial and temporal (see paragraph two for a description). A very simple example of this is demonstrated below. Initially the contrast (in time) of the two dots is the same because they are surrounded by the same brightness. When you click on the thin or thick surrounds button, the contrasts are inverted between the two, as evidenced by the change in percept. I guarantee that nothing about the dots themselves changes, only the surrounding area.

(Shapiro, A. G., D’Antona, A. D., Charles, J. P., Belano, L. A., Smith, J. B., & Shear-Heyman, M. (2004). Induced contrast asynchronies. Journal of Vision, 4(6):5, 459-468, http://journalofvision.org/4/6/5/ica.html)
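To make the arithmetic of the demo concrete, here’s a tiny sketch (my own numbers, not from the paper) of the Weber contrast of two identical dots on differing surrounds:

```python
# Sketch of the contrast arithmetic behind the demo: the two dots are
# physically identical; only their surrounds differ, and the perceived
# difference follows Weber contrast, not luminance.

dot = 0.5                        # both dots: the same fixed luminance
surrounds = {"same": (0.5, 0.5), "inverted": (0.8, 0.2)}

for condition, (s1, s2) in surrounds.items():
    c1, c2 = (dot - s1) / s1, (dot - s2) / s2   # Weber contrast of each dot
    print(f"{condition:9s} surrounds -> contrasts {c1:+.2f}, {c2:+.2f}")
```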

Now let me clarify a bit what is meant by temporal and spatial contrast. A painting, say Seurat’s Sunday Afternoon on the Island of La Grande Jatte, has plenty of spatial contrast, but because nothing changes in time, there is no temporal contrast. A movie screen filled with white which fades to black and back to white, oscillating, has plenty of temporal contrast and no spatial contrast. Now, if there is no temporal contrast at all in your visual field, the world will fade away: your visual system needs temporal contrast. This is one of the purposes of so-called fixational eye movements. These are small involuntary eye movements that you make between the large point-to-point movements, called saccades, that we use to change the direction of our gaze. So even if you were standing in front of a painting such that it filled your vision completely and you stared at only one point, your eyes would move slightly around your fixation target, preventing the image from fading away. If somebody drugged your eye muscles so that there was no way to execute these small movements and filled your vision with an image that had no temporal contrast, the world would fade away.

The idea that brains encode only change, and not static values of sensory data, is pretty ubiquitous, and there is a wealth of examples. What I’d like to continue with, however, is another function of fixational eye movements that has long been speculated about but only demonstrated of late. In a recent paper in Nature*, researchers discovered that these small eye movements serve to enhance our fine-scale spatial resolution. That is to say that without small eye movements, we are less able to detect the presence of, and report the properties of, fine-spatial-scale visual stimuli.

One very useful analogy is the way we run our fingers over something textured to better comprehend the shape of it. For example, suppose you were blindfolded and I put a piece of wood in your lap. I tell you that this piece of wood has some number of very small adjacent grooves cut into it at some particular position. If I asked you to find them and count them, I suspect that you would run your fingers across the wood until you found them and then rub your index finger over them a couple of times to determine the number. It seems a very natural way to do it, and this is exactly akin to making small eye movements to improve spatial resolution. Not making small eye movements like that would be akin to simply pressing your finger down straight on the grooves in an attempt to count them. Perhaps you could do alright at this if there were only one or two, or if they were very big, but as the task got more and more difficult you’d need to use the sliding technique in order to discriminate. The commonality here is that both your sense of touch and sense of sight are mediated by an array of detectors of fixed size and position, and some stimuli are simply too small and/or finely spaced to be accurately detected by the particular array of detectors you’ve got.

Here’s another example: suppose you were using a number of long, same-diameter, same-length rods to determine the topographical features of a small area of the bottom of a pool of water. One way to do this would be to take many rods in a bundle and push them each down (still in a bundle) until they stopped, recording each of their heights individually. The problem with this method is that the resolution of your image of the bottom is limited to the diameter of the rods. Assuming you can’t use ever-thinner rods (we can’t make the receptor size in our eyes or hands arbitrarily small), you can get a better-resolution image by running a single rod (or many rods) over the area to be mapped, continuously recording the height. In this way you have more information than if you simply assign each rod to a single point on the bottom, increasing your resolution.
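A toy simulation of the rod idea (again mine, not the paper’s) shows how movement converts fine spatial structure into a temporal signal that coarse detectors can report:

```python
import numpy as np

# Toy illustration: a coarse detector averages light over a window
# wider than a fine grating's period. A single static reading is just
# one number, equally consistent with a uniform patch of that mean
# brightness, but sliding the detector across the grating produces a
# temporal modulation that reveals the fine spatial detail.

n = 2000
x = np.linspace(0, 1, n, endpoint=False)
grating = 0.5 + 0.5 * np.sin(2 * np.pi * 100 * x)   # period: 20 samples
uniform = np.full(n, grating.mean())                # same average brightness

width = 25   # detector window, wider than one grating period

def sweep(image, n_steps=200):
    """Detector output as it slides one sample per time step."""
    return np.array([image[s:s + width].mean() for s in range(n_steps)])

for name, img in [("grating", grating), ("uniform", uniform)]:
    out = sweep(img)
    print(f"{name:8s} static reading: {out[0]:.3f}, "
          f"temporal modulation: {out.max() - out.min():.3f}")
```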

*Rucci, M., Iovin, R., Poletti, M. & Santini, F. (2007) Miniature eye movements enhance fine spatial detail. Nature 447:852-855.

Memory

I was riding the NYC subway listening to my iPod the other day when it ran out of batteries (hard to relate to such an experience, I know). I was a bit vexed because I had Massive Attack’s “Lately” stuck in my head and really wanted to scratch that itch. I realized that by focusing on the song, I was able to produce a damned good internal manifestation of it. I instantly tried to do the same with a visual image, Max Ernst’s The Elephant Celebes (1921), but I could only remember object positions and placements; if I focused, I could recall the pleasing quality of soft swaths of dark gray with silvery white punctuations that make up the central figure of the canvas, but never a detailed, full image. Perhaps some people can summon perfect pictures of a loved one’s face, but personally I’ve never been able to do that; only by relying on some other form of memory, like a happy event, am I able to better recall faces. I am explicitly trying to avoid such considerations, however, because this is one of the classic problems in confronting human memory: its capacity and quality are completely contingent on context. Despite all that is known and available to read on this subject, my inward exam led me to think about memories of unimodal (one sense at a time) sensory experiences in general.

I am really treading on thin philosophical and scientific ice by using introspection as my main mode of exploration, but this is meant to be neither philosophy nor science, merely thought provoking. Because this is such personal territory, it’s obvious that there will be some variation from person to person; in the extreme, a man blind from birth will find it decidedly impossible to recall any visual image, and can probably recall audio better than any person with sight. This person-to-person variation may have something to do with inherent differences in brain structure, including cases of completely lacking sensory apparatus. So before I do a little rundown of the various sensory systems, allow me a digression, starting from auditory stimuli, about brains that may facilitate the discussion to follow.

Music isn’t a very general example of an auditory stimulus, and this may have something to do with the fidelity of the remembered experience. A few factors immediately come to mind that might be relevant: (1) the amount of cortex devoted to representing the type of stimulus in question, (2) the involvement of mirror neurons, and (3) the temporal quality of music. The cerebral cortex, as it is “properly” referred to, is the outermost few millimeters of the brain of higher organisms. The wrinkled quality that a brain has (if you’ve ever seen an image of one) is thought to be a way to increase the amount of cortex. This is where the brain does its most complex information processing. It is here that one can find single neurons (brain cells) which respond* to the various senses in such complex ways that single cells will react best when you are looking at a picture of Bill Clinton versus, say, a car or your grandmother. Mirror neurons are wonderful little devices in your head which respond both when you perform an action and when you observe another individual performing the same action. For example, if you reach out and pick up an apple, the same mirror neuron will fire no matter how you do it, whether you use a set of tongs or daintily pick it up by the stem; the same is true of the observed act, so it seems that mirror neurons encode the intention of an action. They’re very important for social interaction and learning and a host of other things, and they probably deserve their own post, but for now they are at the service of my argument about music and paintings.

The amount of cortex devoted to vision far outweighs that devoted to any other sensory modality; certainly the visual cortex is larger than the primary auditory cortex. So it may simply be the case that it is more difficult for memories to light up all of the various parts of the visual cortex that are necessary to generate a truly accurate experience of sight.

When you hear somebody speak, mirror neurons potentiate (that is, make ready and facilitate the use of) the parts of your brain used for vocalising. This even goes so far as to provoke measurable electrical responses in the muscles of one’s throat. When you watch somebody prick themselves, it is thought that the mirror neuron system contributes to any sensation of pain or touch that you might experience as a result. It may thus be that when one is listening to music with singing, the mirror neuron system strengthens the auditory cortex’s memory-based activity.

As to the temporal quality of music, this just doesn’t seem that relevant: I’m no more likely to be able to remember a series of images (unless it’s the final frames of Truffaut’s “The 400 Blows”) than I am to remember a single image.

Now, let’s see if these two ideas tell us anything when we try to examine other senses. Consider the following 8 senses (What happened to five, you ask? We need all these categories because the last three don’t really fit into the first five.). They are organized roughly by the amount of cortex devoted to them.

  1. Vision
  2. Somatosensation (touch)
  3. Audition
  4. Proprioception (muscle movement, posture)
  5. Gustation (taste)
  6. Olfaction (smell)
  7. Vestibular (balance, orientation)
  8. Interoception (hunger, thirst, drowsiness, air hunger, etc.)

This seems to immediately invalidate the suggestion that the amount of cortex devoted to a modality is what’s relevant. I have a very difficult if not impossible time remembering the experience of eating duck at WD-50, and yet the gustatory and olfactory cortical areas combined are smaller than the primary auditory cortex. As to the involvement of mirror neurons, it is incredibly difficult to assess. This is because one can’t really activate the mirror neuron system except by the use of vision or audition, so its potential utility in enhancing other unimodal memories is essentially nil. It might, however, facilitate the memory of a great LeBron James dunk or a beautiful Alvin Ailey dance piece. Despite this difficulty, it still seems to me that there is something extremely special about music. If I try to remember a series of isolated noises that I’ve heard, it doesn’t even really make sense. I can think of specific sounds and noises: my fan blowing over my body on a hot summer night, a newer subway car’s increasing-frequency whine as it picks up speed out of the station, a fluorescent bulb’s hum in the lab where I work. The problem with all of these is that I am unable to call them up without the associated visual experience as well, and then we’re back to the context/multi-modal issue. We must consider the possibility that our ability both to hear and to make sounds facilitates a mirror-neuron-based enhancement of all music, vocal or otherwise; many instruments produce sounds well within the range of frequencies that we produce, even if we can’t match their spectral qualities. I would really like to know if anybody out there feels that they have some different experience of memory from the general one I’ve described here, or if they have theories of why we might perceive things in this way.

* I know respond is a weighted word, but I’ve got to cut this increasingly reductionist explanation off somewhere, if you’d like an explanation of what I mean by “respond” please email me and I’ll be happy to oblige.