Category Archives: synapses

On the Importance of Single Spikes

[Image from reference 1]

As mentioned numerous times before in this forum, neurons in the brain communicate by action potentials: pulses of voltage that usually propagate from the cell body (soma) down a specialized outgrowth of membrane called the axon, which forms synapses onto other neurons. Usually, these synapses link pre-synaptic axons to post-synaptic dendrites, cellular structures specialized for receiving input.

Until recently, it was thought impossible for a single action potential, initiated in the soma, to cause a second, post-synaptic neuron to fire an action potential; rather, as has been extensively documented, single neurons require many simultaneous dendritic inputs, summed together, for an action potential to be initiated in the soma. Recent research, however, has identified a cell type in the cerebral cortex of human beings which seems to contradict this generalization. These neurons, termed “chandelier cells,” are able to cause a chain of post-synaptic events (action potentials in several cells) lasting, on average, 37 milliseconds, ten times longer than had previously been assumed possible [1].

The article reporting these findings, published in the estimable journal PLoS Biology, describes one feature that the authors feel is of paramount importance to this phenomenon. Apparently, chandelier cells are much more likely to make axo-axonic connections. That is, they send their pulses of activity not to dendrites, but to other axons. The reason for this somewhat exotic type of connectivity is that chandelier cells normally turn off the output of other neurons by sending inhibitory signals that cancel action potentials travelling down the axons of the chandelier’s targets. It seems, then, that single chandelier cell action potentials inhibit other cells which are themselves inhibitory, indirectly exciting the targets of these secondary inhibitory cells.
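To make that disinhibition chain concrete, here is a minimal sketch: a toy leaky integrate-and-fire model in which a chandelier spike silences an interneuron that tonically inhibits a third cell. Every parameter (the time constant, drive, threshold, and the 20 ms suppression window) is invented for illustration and is not taken from the paper.

```python
# Toy illustration of disinhibition: a chandelier cell silences an
# inhibitory interneuron, indirectly exciting the interneuron's target.
# All parameters are made up; this is not a model from Molnar et al.
import numpy as np

dt = 0.1                     # ms per simulation step
T = int(100 / dt)            # simulate 100 ms
tau = 10.0                   # membrane time constant (ms)
v_target = np.zeros(T)       # target cell membrane potential
drive = 1.2                  # constant excitatory drive to the target
inhibition = 1.0             # tonic inhibition from the interneuron
threshold = 1.0

chandelier_spike = int(40 / dt)   # chandelier fires at t = 40 ms
spikes = []

for t in range(1, T):
    # While the chandelier is active (40-60 ms), the interneuron is
    # suppressed and its inhibition of the target is removed.
    suppressed = chandelier_spike <= t < chandelier_spike + int(20 / dt)
    inh = 0.0 if suppressed else inhibition
    dv = (-v_target[t - 1] + drive - inh) / tau
    v_target[t] = v_target[t - 1] + dv * dt
    if v_target[t] >= threshold:   # threshold is reached only during
        spikes.append(t * dt)      # the disinhibited window
        v_target[t] = 0.0          # reset after a spike

print(f"target spike times (ms): {spikes}")
```

Running it, the target sits quietly below threshold until the chandelier spike removes the tonic inhibition, at which point it fires: one cell's single spike has excited another cell it never directly contacts.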

The relevance of these findings to human cognition or consciousness is unclear, but this represents a significant advance in our understanding of the functional connectivity of the human brain.

References:
1. Molnár G, Oláh S, Komlósi G, Füle M, Szabadics J, et al. (2008) Complex Events Initiated by Individual Spikes in the Human Cerebral Cortex. PLoS Biol 6(9): e222. doi:10.1371/journal.pbio.0060222

On The Wiring Diagram

[Image from reference 1]

The human brain has roughly 100 billion neurons, and each neuron has between 1,000 and 10,000 synapses (connections), giving approximately 500 trillion synapses in total. This makes the problem of determining the connectivity, or wiring diagram, of the brain absurdly complex. It is one of the most fundamental problems confronting neuroscientists today, because many questions about how the brain works would be far easier to answer if we simply knew the structure on which it is built.
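As a quick sanity check on that figure, take the midpoint of the quoted per-neuron range:

```python
# Back-of-envelope check of the synapse count quoted above.
neurons = 100e9            # ~10^11 neurons in the human brain
synapses_per_neuron = 5e3  # midpoint of the 1,000-10,000 range
total = neurons * synapses_per_neuron
print(f"{total:.0e} synapses")  # 5e+14, i.e. ~500 trillion
```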

A recent piece of computational research (published in a wonderful PLoS journal) suggests a novel statistical method to identify which synapses of a given neuron are active at a given time. The author of this study simulated the output of a single neuron while a particular subset of its synapses was active, characterizing the cell by the number of action potentials it fired in response to that specific set of inputs. Next, the author examined the changes in the output when a single additional synapse was activated along with the baseline subset. He found that if he repeated the addition of that one synapse roughly 80 times, he could measure significant changes in the output of the simulated neuron, such that it was possible in subsequent tests to reliably predict when this synapse was active.
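The statistical intuition can be shown in a few lines. Below is a toy version of the detection logic only: spike counts are drawn from a Poisson distribution, with an invented rate bump standing in for the extra synapse. Neither the rates nor the simple z-statistic come from Bhalla's actual model.

```python
# Toy version of the detection idea: does adding one synapse measurably
# shift a neuron's trial-by-trial spike count after ~80 repeats?
import numpy as np

rng = np.random.default_rng(0)
n_trials = 80
baseline_rate = 20.0   # mean spikes per trial, baseline synapse set
extra_rate = 22.0      # assumed modest bump from one extra synapse

baseline = rng.poisson(baseline_rate, n_trials)
with_extra = rng.poisson(extra_rate, n_trials)

# Two-sample z-statistic on the trial-averaged spike counts.
diff = with_extra.mean() - baseline.mean()
se = np.sqrt(baseline.var(ddof=1) / n_trials
             + with_extra.var(ddof=1) / n_trials)
print(f"mean shift = {diff:.2f} spikes, z = {diff / se:.2f}")
# With ~80 repeats a small shift can stand out; with only a handful of
# trials the same shift is lost in the Poisson variability.
```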

The author's suggestion is that taking this technique out of the computer and into the world of real brains (or small slices of brain, as is commonly employed) would facilitate the task of elucidating the numerous connections in the brain. While this is true, it must be said that the method is suited to asking a particular question: which neurons are connected to the one neuron that I know very well? In other words, somebody interested in applying this work would need one neuron of interest and would then have to stimulate every other neuron that might be connected to it in order to determine the connectivity. In this sense, the approach is a far cry from revealing the full wiring of the brain, but it certainly does help.

References:
1. Bhalla US. (2008) How to record a million synaptic weights in a hippocampal slice. PLoS Comput Biol 4(6): e1000098.

On Learning and Time-Scales

[Photo: Mu-ming Poo, PhD]

On Thursday (2/28/08), I attended a lecture, part of the continuing seminar series at Columbia University's Neurological Institute. The talk, titled Activity-Induced Modifications of Neural Circuits, was given by Mu-ming Poo of UC Berkeley, perhaps the most active researcher in this fascinating sub-field. The lecture hall, which has seats for perhaps 80 that are never filled at these events, was packed. Standing room was all that was available, with people spilling into the side aisles and a few intrepid souls (the principal investigator of the lab I work in amongst them) seating themselves in the center aisle as well. Eric R. Kandel, James H. Schwartz, and Thomas M. Jessell (the authors of the most widely used undergraduate and graduate textbook on neuroscience, and all professors at Columbia) were in attendance, as well as countless other researchers and graduate students like myself.

The talk was probably so well attended in part because of Dr. Poo's renown, but also because his work holds a universal intrigue, having relevance to all brain areas and a diverse set of research programs.

The title of the seminar refers to the property that individual neurons display of changing the strength of the connections (synapses) between each other in a way that depends on their relative activities. Specifically, if neuron A sends a connection to neuron B, the latter cell needs a way to update the importance of A's input. Neurons communicate by sending around spikes, large pulses of voltage, and it is generally not the case (in the mammalian brain) that a single presynaptic neuron (A) can cause a postsynaptic neuron (B) to fire (spike); rather, several hundred or thousand presynaptic neurons must spike at almost the same time to cause a postsynaptic neuron to fire. I say this because it makes clear the subtle specificity that is needed in synaptic modification: picking out which of these many inputs carry useful information for the cell.

The sensible mechanism, called spike-timing dependent plasticity (STDP), works by up-weighting, or strengthening, synapses from presynaptic neurons that spiked a short time (within roughly 20 milliseconds) before the postsynaptic neuron, and down-weighting those whose presynaptic spike came during a short period after the postsynaptic spike. The large-scale analogy (mentioned by Dr. Poo in his talk) is that of Pavlov's dog: the canine learned to associate the bell with the meat when the sound preceded the reward, but not when the order was reversed.
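For readers who like to see the rule written down, here is a minimal sketch of the standard pairwise STDP update with exponential timing windows. The time constants and amplitudes are typical textbook values, not numbers from Dr. Poo's talk.

```python
# Pairwise STDP: the weight change decays exponentially with the
# pre/post spike-time difference, and its sign depends on the order.
import math

TAU_PLUS = 20.0    # ms, potentiation window
TAU_MINUS = 20.0   # ms, depression window
A_PLUS = 0.01      # weight increase for pre-before-post
A_MINUS = 0.012    # weight decrease for post-before-pre

def stdp_update(w: float, t_pre: float, t_post: float) -> float:
    """Return the updated synaptic weight for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:      # pre fired first: strengthen (bell before meat)
        w += A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:    # post fired first: weaken (meat before bell)
        w -= A_MINUS * math.exp(dt / TAU_MINUS)
    return max(w, 0.0)   # keep the weight non-negative

print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # pre 5 ms early -> up
print(stdp_update(0.5, t_pre=15.0, t_post=10.0))  # pre 5 ms late -> down
```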

In his excellent lecture, Dr. Poo presented some convincing results concerning the molecular mechanisms that might be at work on a microscopic level to achieve this effect, but he concluded with a different and stimulating point.

He presented data from experiments conducted by a post-doctoral researcher in his lab, German Sumbre, using zebrafish. The results indicated that even very young members of this species are able to learn a predictable series of 60 or so light pulses, delivered every 0.5, 1, or 4 seconds, as indicated by their immaculately timed execution of an escape response called a tail-flick for two or so extra intervals beyond the conclusion of the series, right when the pulses would have arrived.

That the fish were able to do this is not incredibly surprising; many animals display this type of predictive behavior. However, it is entirely mysterious how this might be happening at a neuronal level. Dr. Poo has made magnificent progress in understanding how learning proceeds on very short time-scales (tens to hundreds of milliseconds), but it is still quite unclear how learning of longer-period phenomena might be achieved by the nervous system. In fact, Dr. Poo appealed to the audience, saying: “If anybody has any ideas how this might be studied, I am anxious to hear them.”

Mirroring the urgency apparent in Dr. Poo's request, a paper was published recently on this very topic showing that amoebae are capable of just this sort of interval learning [1]. No doubt this will be a hot topic of research in the near future. Further, this sort of example shows us both how far we've come in understanding brains, and the chasms yet to be bridged in moving forward.

References

1. Saigusa T, Tero A, Nakagaki T, Kuramoto Y. (2008) Amoebae anticipate periodic events. Phys Rev Lett 100(1): 018101.

Axo-Axo-Somatic Inhibition

Neurons communicate with each other in the brain by chemical signals. They send out long ramifications called axons, which synapse (connect) with other neurons, generally on structures called dendritic trees. These connections are not physical in the sense that the cells do not share any intracellular space, but they do allow the communication of intercellular chemical signals with incredible speed. There is a small gap called the synaptic cleft into which the signalling cell releases a chemical signal, and at which the receiving cell has receptors specific to that signal.

Dendritic trees are places where signals from many other neurons are summed up; when a neuron receives enough signals from the cells which synapse onto it, it fires an action potential at its soma, or body. An action potential is a large transient increase in the voltage of the cell (don’t forget your brain is electric). That transient (called a spike) propagates down the axon of the cell, much like an electrical signal does in a telephone wire or a television cable, to the synapses it forms with other cells, and triggers the release of the chemical signals used to communicate.

This is the story whether the signalling cell is telling its target to turn on (excitation) or turn off (inhibition). One difference is that excitatory signals generally go to the dendritic tree of another neuron, while inhibitory connections can go to the dendrites or the soma. This is useful because inhibition at the soma acts like opening a voltage drain, preventing the action potential from building up where (as I mentioned) it is generally initiated. The specifics of the signalling molecules and receptors determine whether inhibition or excitation is being transmitted, and in general a cell sends either one kind of signal or the other. So if an excitatory cell wants to inhibit another excitatory cell, it must excite an inhibitory cell, which in turn inhibits the target excitatory cell.

When might the brain want to do that? Well, lateral inhibition is a ubiquitous concept in brain circuitry. This is the idea that if I have a bunch of neurons all designed to report something appearing in different parts of the visual field, it makes sense for them all to mutually inhibit each other. Because the areas of the visual field to which single neurons respond overlap a bit, a line might excite a fuzzy set of neurons in the cortex; if the ones that are responding most strongly inhibit the ones that are only responding weakly, I can detect cleaner edges and have better spatial precision in general.
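That sharpening effect is easy to demonstrate numerically. Here is a toy one-dimensional sketch, with an arbitrary inhibition gain and nearest-neighbour coupling chosen purely for illustration:

```python
# Toy lateral inhibition: each unit subtracts a fraction of its
# neighbours' activity, sharpening a fuzzy activation profile.
import numpy as np

activity = np.array([0.1, 0.3, 0.8, 1.0, 0.8, 0.3, 0.1])  # fuzzy response
inhibition_gain = 0.4

sharpened = activity.copy()
for i in range(len(activity)):
    # Sum the (original) activity of the left and right neighbours.
    neighbours = activity[max(i - 1, 0):i].sum() + activity[i + 1:i + 2].sum()
    sharpened[i] = max(activity[i] - inhibition_gain * neighbours, 0.0)

print(np.round(sharpened, 2))  # [0. 0. 0.28 0.36 0.28 0. 0.]
# Strongly driven units survive; weakly driven neighbours are pushed to
# zero, so the population reports a crisper edge than the raw input.
```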

A new paper in Science presents evidence that inhibitory synapses can take another form [1]. In the image above you can see an illustration of each of these kinds of inhibitory synapse: both the type I’ve already described (above) and the new kind (below). The difference is that in the new type, instead of having to send a signal to the inhibitory interneuron, which in turn inhibits the target excitatory cell, the first excitatory cell can hijack the inhibitory synapse of the interneuron to rapidly and directly inhibit the target cell! Why is this so exciting? Well, any time we can figure out something new about how that intricate mass of electrified flesh in our heads might accomplish some of the seemingly miraculous feats it does, I get excited. I take particular pleasure in understanding the mechanistic underpinnings of consciousness, and being a materialist (in the philosophical sense), I think that neuroscience is the way to do that.

Beyond this, the specificity of this new mechanism is potentially greater than that of the earlier described variant. An inhibitory cell projects to many other excitatory cells, so if it gets turned on, it will turn off many cells at once, which may not be “what the first excitatory cell wants.” The other reason I’m excited is that I see this as a way to modify the receptive field properties of a cell during processing, rather than through some longer-term “learning” or modification in general. Below is an illustration of the receptive field structure of a simple cell in the primary visual cortex.

These cells are the first place that visual information is represented in the visual cortex, and all visual percepts are built up from them. Consequently, it doesn’t make sense to change them too much over time. It’s like a computer monitor: pixels are a good universal way to represent many different types of images. You wouldn’t change the shape of the pixels depending on the type of image you were about to view, because we don’t have technology that could do so quickly and efficiently enough. Similarly, the brain keeps the receptive fields of cells in the primary visual cortex constant and then changes how that information is used at later stages. However, if there were some way to easily change the shape of pixels back and forth depending on the stimulus being displayed, it would be very useful. That’s one consequence of this finding: online modification of receptive field properties at a low level. We’ve got a lot to learn from the brain.
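To make the "changing the pixels on the fly" idea concrete, here is a toy one-dimensional receptive field whose inhibitory flanks can be switched on and off by a fast gate, standing in for the direct inhibition described above. The weights and stimuli are invented; this is a sketch of the concept, not the circuit in the paper.

```python
# A crude 1-D "simple cell": excitatory centre plus inhibitory flanks
# that a fast gate can enable or disable, reshaping selectivity online.
import numpy as np

excitatory = np.array([0.0, 0.5, 1.0, 0.5, 0.0])   # central ON region
inhibitory = np.array([1.0, 0.0, 0.0, 0.0, 1.0])   # flanking OFF regions

def response(stimulus: np.ndarray, flank_gate: float) -> float:
    """Linear response with the inhibitory flank scaled by the gate."""
    rf = excitatory - flank_gate * inhibitory
    return float(rf @ stimulus)

bar = np.array([0.0, 1.0, 1.0, 1.0, 0.0])   # narrow bar of light
wide = np.ones(5)                            # full-field illumination

for gate in (0.0, 1.0):
    print(f"gate={gate}: bar -> {response(bar, gate):.1f}, "
          f"wide -> {response(wide, gate):.1f}")
# With the flank gated on, the full-field stimulus is cancelled (0.0)
# while the bar still drives the cell (2.0): same "pixel", different
# selectivity, switched without any slow synaptic learning.
```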

References

1. Ren M, Yoshimura Y, Takada N, Horibe S, Komatsu Y. (2007) Specialized Inhibitory Synaptic Actions Between Nearby Neocortical Pyramidal Neurons. Science 316: 758–761.