
On Frequency Tuning

You are able to distinguish auditory frequencies smaller than the bandwidth (a measure of the range of frequencies to which a sensor will respond) of the cells in your inner ear that transduce sound from pressure waves into electrical impulses in your brain. As the authors of a recent paper in Nature report, this is probably achieved through population coding [1]. However, certain aspects of this phenomenon remain mysterious.

(Figure 1. Cartoon of what a sound sensor’s response might look like; these numbers are not physically realistic.)

Suppose we wanted to determine the bandwidth of a sensor having the response properties depicted above. A standard way to do so is the following: one measures the maximum response of the sensor (in this case, 10) and divides that value by two (thus, 5). Then one finds the smallest frequency that produces that half-max response of 5 (a bit less than 8) and the largest frequency that produces it (a bit greater than 12). The difference between the larger and smaller frequencies is termed the bandwidth. Measured this way, it’s also called the full-width-at-half-max, for somewhat obvious reasons. We would then describe this sensor as having a central frequency of 10Hz, a Gaussian profile, and a bandwidth of 4Hz.
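The half-max procedure is easy to carry out numerically. Here is a minimal sketch (my own toy model, not from the paper) using a Gaussian tuning curve with the cartoon’s numbers: peak response 10 at a central frequency of 10Hz, with the width parameter chosen so the full-width-at-half-max works out to 4Hz:

```python
import numpy as np

# Toy Gaussian tuning curve: peak response of 10 at 10 Hz.
# For a Gaussian, FWHM = 2*sqrt(2*ln 2)*sigma, so we pick sigma
# to give a 4 Hz bandwidth.
center_hz = 10.0
peak = 10.0
sigma = 4.0 / (2 * np.sqrt(2 * np.log(2)))

def response(freq_hz):
    return peak * np.exp(-(freq_hz - center_hz) ** 2 / (2 * sigma ** 2))

# Measure the bandwidth the standard way: find where the curve
# crosses half the maximum on either side of the peak.
freqs = np.linspace(0, 20, 20001)
r = response(freqs)
above = freqs[r >= peak / 2]
bandwidth = above.max() - above.min()
print(round(bandwidth, 2))  # ~4.0 Hz
```

The half-max crossings land at roughly 8Hz and 12Hz, matching the dotted lines in the cartoon.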

Now, let me reiterate: you are able to distinguish auditory frequencies smaller than the bandwidth (a measure of the range of frequencies to which a sensor will respond) of the cells that transduce sound from pressure waves into electrical impulses in your brain. This is odd for the following reason: suppose you were relying on the sensor above to tell you what frequencies (pitches) of sound you were hearing. If I played you a sound at 8Hz and another at 12Hz, the response of the sensor (as you can see by the dotted lines on the figure above) would be identical. That sensor is unable to distinguish between sounds at 8Hz and sounds at 12Hz, yet somehow, your brain can. The way it achieves this feat is through population coding. What this means is that the brain almost always pools the responses of many sensory neurons in creating the conscious representations of sensory data that we experience.

A brief aside: you may be asking why we don’t just have sensors with a different response profile, a linear one, say, like the figure below:

That would work nicely since the responses at 8Hz and 12Hz (and any other pair of frequencies for that matter) are distinct. However, it’s very difficult to build biological sensors that have this kind of response profile, and in the interest of steering clear of unwieldy posts, I’ll leave it at that.

Returning to population coding, I’ve said that the brain pools responses, but what does this mean exactly?

Let us now imagine that we examined the responses of two cells, with central frequencies of 9Hz and 11Hz, respectively. At 8Hz, cell 1’s response is ~8.5 and cell 2’s is ~2.5, while at 12Hz the situation is flipped, with cell 1’s response being ~2.5 and cell 2’s being ~8.5. This neat reversal of fortunes is not an inherent feature of the system; it is an artifact of my simplified illustration. Still, these two cells are able to achieve in concert what a sole actor cannot: tell the difference between two sounds separated by less than their individual bandwidths. All that is needed now is a further cell (in reality, another layer of cells) to read off this code. “Whenever cell 1 says ‘8.5’ and cell 2 says ‘2.5,’ I know that sound is being played at 8Hz,” this further cell says.
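The two-cell story can be made concrete in a few lines. Below is a hypothetical sketch: two Gaussian cells with 4Hz bandwidths centered at 9Hz and 11Hz, plus a simple template-matching “readout” that picks the frequency whose expected response pattern best matches what the pair reported (the exact numbers differ slightly from the ~8.5/~2.5 in my cartoon, but the flip is the same):

```python
import numpy as np

# Two hypothetical cells, same Gaussian profile and 4 Hz bandwidth,
# central frequencies 9 Hz and 11 Hz.
sigma = 4.0 / (2 * np.sqrt(2 * np.log(2)))
centers = np.array([9.0, 11.0])

def population_response(freq_hz):
    return 10.0 * np.exp(-(freq_hz - centers) ** 2 / (2 * sigma ** 2))

# Either cell alone confuses 8 Hz with 12 Hz, but the *pattern*
# across the pair flips between the two frequencies.
r8 = population_response(8.0)    # cell 1 large, cell 2 small
r12 = population_response(12.0)  # the pattern reverses

# A "readout" cell decodes by matching the observed pattern against
# a stored template for each candidate frequency.
candidates = np.linspace(5, 15, 101)
templates = np.array([population_response(f) for f in candidates])

def decode(pattern):
    return candidates[np.argmin(np.sum((templates - pattern) ** 2, axis=1))]

print(decode(r8), decode(r12))  # recovers 8.0 and 12.0
```

This is the pooling idea in miniature: no single cell can do the discrimination, but the population pattern disambiguates.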

This simplified view is not so far off from what we think is happening in the transformation of signals from the sensory periphery (your ear) to central processing areas (primary auditory cortex).

And now on to the mysterious facet mentioned earlier. These intrepid explorers of frequency tuning in primary auditory cortex found cells there with very small bandwidths compared to the sensory cells of the ear, implying that these cortical cells were performing a computation similar to the one I’ve outlined above. But in order to test this hypothesis, they had to employ a different strategy than the one used for building frequency tuning curves.

In constructing the auditory response profile of a single cell, one generally uses single-frequency sounds: pure tones. However, the brain was built to represent the real world, a place where single-frequency sounds are essentially never encountered, so defining a cell’s response in this manner is necessarily lacking. It is possible that one could predict a cell’s response to the simultaneous playback of 8Hz and 12Hz from a simple summation of the individual responses elicited by 8Hz and 12Hz alone, but there is no a priori reason to expect so. Further, the heuristic version of population coding that I presented does make exactly that prediction, so recording the responses of these single cells to complex sounds allowed these auditory neuroscientists to test their hypothesis concerning the underlying computation and the wiring of the brain.
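To spell out what the linear prediction is, here is a sketch using my toy Gaussian cell from earlier (these numbers are illustrative, not from the paper): under simple summation, the predicted response to the 8Hz-plus-12Hz mixture is just the sum of the two pure-tone responses.

```python
import numpy as np

# Toy Gaussian cell: peak 10 at 10 Hz, 4 Hz bandwidth (illustrative).
sigma = 4.0 / (2 * np.sqrt(2 * np.log(2)))

def response(freq_hz, center_hz=10.0):
    return 10.0 * np.exp(-(freq_hz - center_hz) ** 2 / (2 * sigma ** 2))

# The linear (simple-summation) model's prediction for a two-tone
# stimulus: add the responses to each tone played alone.
linear_prediction = response(8.0) + response(12.0)
print(linear_prediction)  # 5 + 5 = 10 for this toy cell

# The experiment amounts to comparing this prediction with the
# measured two-tone response; a mismatch means the computation
# performed by the cell is nonlinear.
```

The test is then a comparison: play the mixture, measure the actual response, and see whether it matches this sum.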

Before I conclude, I want to mention that this research is of a rare and important type: it was performed on humans. This is not some needless invasion; it is unfortunately necessary to probe the electrical responses of the brains of epilepsy patients in order to remove the small regions of tissue that cause their seizures.

It will probably come as no surprise that the responses predicted by the linear model I’ve discussed were quite distinct from those the researchers actually found. This is exciting because it means the brain has yet again provided a puzzle for us to solve: we know what the brain must be doing, but how it does it is the open question. Exploring such quandaries can yield results that expand our general knowledge, find application in other fields, and give us insight into the very nature of how we function. Such is the beauty of neuroscience.


1. Bitterman, Y., Mukamel, R., Malach, R., Fried, I. & Nelken, I. (2008) Ultra-fine frequency tuning revealed in single neurons of human auditory cortex. Nature 451, 197–201. doi:10.1038/nature06476

Axo-Axo-Somatic Inhibition

Neurons in the brain communicate with each other by chemical signals. They send out long ramifications called axons, which synapse (connect) with other neurons, generally on parts called dendritic trees. These connections are not physical in the sense that the cells do not share any inner-membrane space, but they do allow the communication of intercellular chemical signals with incredible speed. There is a small gap called the synaptic cleft, into which the signalling cell releases a chemical signal and at which the receiving cell has receptors specific to that signal.

Dendritic trees are places where signals from many other neurons are summed up. When a neuron receives enough signals from the cells that synapse onto it, it fires an action potential at its soma, or body. An action potential is a large, transient increase in the voltage of the cell (don’t forget, your brain is electric). That transient (called a spike) propagates down the axons of the cell, much as an electrical signal does in a telephone wire or a television cable, to the synapses it forms with other cells, where it triggers the release of the chemical signals used to communicate.

This is the story whether the signalling cell is telling its target to turn on (excitation) or turn off (inhibition). One difference is that excitatory signals generally go to the dendritic tree of another neuron, while inhibitory connections can go to the dendrites or to the soma. Somatic inhibition is useful because it acts like opening a voltage drain, preventing the action potential from building up where (as I mentioned) it generally does: at the soma. The specific signalling molecules and receptors determine whether inhibition or excitation is being transmitted, and in general a given cell sends one kind of signal or the other. So if an excitatory cell wants to inhibit another excitatory cell, it must excite an inhibitory cell, which in turn inhibits the target excitatory cell.
When might the brain want to do that? Well, lateral inhibition is a ubiquitous concept in brain circuitry. This is the idea that if I have a bunch of neurons, each designed to report something appearing in a different part of the visual field, it makes sense for them all to mutually inhibit each other. Because the areas of the visual field to which single neurons respond overlap a bit, a line might excite a somewhat fuzzy set of neurons in the cortex; if the neurons responding most strongly inhibit the ones responding only weakly, I can detect cleaner edges and have better spatial precision in general.
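The sharpening effect of lateral inhibition is easy to see in a toy model. Here is a minimal sketch (my own illustration, with made-up activity values): a fuzzy bump of activity across a row of neurons, where each unit subtracts a fraction of its immediate neighbors’ activity, so the weakly responding flanks get suppressed while the peak survives:

```python
import numpy as np

# A fuzzy bump of activity across seven neighboring neurons,
# as a line stimulus might produce (made-up numbers).
activity = np.array([0.0, 1.0, 3.0, 6.0, 3.0, 1.0, 0.0])

# Each unit is inhibited in proportion to its neighbors' activity;
# responses can't go below zero.
inhibition_strength = 0.4
neighbors = np.roll(activity, 1) + np.roll(activity, -1)
sharpened = np.clip(activity - inhibition_strength * neighbors, 0.0, None)

print(activity)   # broad bump
print(sharpened)  # flanks suppressed, peak survives: a cleaner edge
```

The strongly responding center neuron wins out, and the representation of the line becomes sharper than any single neuron’s receptive field would suggest.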

A new paper in Science presents evidence that inhibitory synapses can take another form [1]. In the image above you can see an illustration of each of these kinds of inhibitory synapses: the type I’ve already described (above) and the new kind (below). The difference is that in the new type, instead of having to send a signal to the inhibitory interneuron, which in turn inhibits the target excitatory cell, the first excitatory cell can hijack the inhibitory synapse of the interneuron to rapidly and directly inhibit the target cell!

Why is this so exciting? Well, any time we can figure out something new about how that intricate mass of electrified flesh in our heads might accomplish some of the seemingly miraculous feats it does, I get excited. I take particular pleasure in understanding the mechanistic underpinnings of consciousness, and being a materialist (in the philosophical sense), I think neuroscience is the way to do that. Beyond this, the specificity of this new mechanism is potentially greater than that of the earlier-described variant: an inhibitory cell projects to many other excitatory cells, so if it gets turned on, it will turn off many cells, which may not be “what the first excitatory cell wants.” The other reason I’m excited is that I see this as a way to modify the receptive field properties of a cell during processing, rather than through some longer-term “learning” or modification. Below is an illustration of the receptive field structure of a simple cell in the primary visual cortex.

These cells are the first place that visual information is represented in the visual cortex, and all visual percepts are built up from them. Consequently, it doesn’t make sense to change them much over time. It’s like a computer monitor: pixels are a good universal way to represent many different types of images, and you wouldn’t change the shape of the pixels depending on the type of image being displayed, because we don’t have technology that could do that quickly and efficiently enough. Similarly, the brain keeps the receptive fields of cells in the primary visual cortex constant and changes how that information is used at later stages. However, if there were some way to easily change the shape of pixels back and forth depending on the stimulus being displayed, it would be very useful. That’s one potential consequence of this finding: online modification of receptive field properties at a low level. We’ve got a lot to learn from the brain.


1. Ren, M., Yoshimura, Y., Takada, N., Horibe, S. & Komatsu, Y. (2007) Specialized Inhibitory Synaptic Actions Between Nearby Neocortical Pyramidal Neurons. Science 316, 758–761.