On Sticking Out Your Tongue

Muscles can only pull, not push, so how is it that you can stick out your tongue? In other words, since we have no muscle outside the mouth to pull the tongue out, the fact that muscles are incapable of pushing seems to imply that this movement relies on some mechanism that doesn’t fit our general conception of all other movements (arms, legs, eyes, peristalsis, breathing, et cetera).

I am currently (4/29/08) attending the 18th Annual Neural Control of Movement (NCM) meeting in Naples, Florida, and this tongue question was brought up as a reminder that we should be careful not to let dogma affect our thinking too much: adherence to well-established ideas can prevent us from reaching new ways of understanding and new modes of analysis.



The simple answer is that the tongue has constant volume, so if you sufficiently contract the muscles of your tongue laterally (from molar to molar), the tongue must lengthen longitudinally (from throat to lips). Notice also that to really stick your tongue out, you must protrude your mandible significantly.
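The constant-volume argument can be made concrete with a little arithmetic. Below is a toy sketch that models the tongue as a simple cylinder (volume = cross-section × length); all the numbers are made up for illustration, not measured anatomy.

```python
# Toy illustration of the constant-volume (muscular hydrostat) principle.
# The tongue is modeled as a cylinder: volume = pi * r^2 * length.
# All numbers here are invented for illustration.

import math

def length_for(volume, radius):
    """Length of a cylinder of fixed volume at a given radius."""
    return volume / (math.pi * radius ** 2)

volume = 50.0          # cm^3, fixed: muscle tissue is incompressible
relaxed_radius = 2.0   # cm
relaxed_length = length_for(volume, relaxed_radius)

# Contracting the transverse muscles narrows the tongue by 25%...
contracted_radius = 0.75 * relaxed_radius
protruded_length = length_for(volume, contracted_radius)

# ...which forces the length to grow by a factor of 1 / 0.75^2 ~ 1.78.
print(relaxed_length, protruded_length)
```

Because length scales as 1 over radius squared, even a modest lateral squeeze produces a disproportionately large protrusion.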

In any case, it was a perhaps frivolous but highly stimulating aside in a meeting otherwise devoted to the serious analysis of experimental data, and it kicked things off in a congenial tone.

Quote

ÖSTERREICHISCHE NATIONALBIBLIOTHEK

Personally, I see no contest between a belief in the existence of a deity and the study of science. Indeed, many of our greatest scientists have been strongly religious (Aristotle, Einstein, Newton, et cetera). Nonetheless, the worldwide conflict between the two rages on. Here I present an excerpt from an essay published in Nature about the very birth of science and its seemingly automatic perception as challenging religious faith.

“[William of Conches] argued that natural phenomena arise from forces that, although created by God, act under their own agency. William insisted, echoing Plato, that the divine system of nature is coherent and consistent, and therefore comprehensible: if we ask questions of nature, we can expect to get answers, and to be able to understand them.

That is a necessary belief for one even to imagine conducting science. If everything is subject to the whim of God, there is no guarantee that a phenomenon will happen tomorrow as it does today, therefore there is then no point in seeking any consistency in nature. But William of Conches could not countenance a Creator who was constantly intervening in the world. He saw the Universe as a divinely wrought mechanism: God simply set the wheels in motion. It is in the twelfth century that the first references to the Universe as machina begin to appear.

Some conservative theologians denounced this attempt to develop a Christian platonic natural philosophy. They felt that taking too strong an interest in nature as a physical entity was tantamount to second-guessing God’s plans. As everything was surely determined moment to moment by the will of God, it was futile and impious, they believed, to seek anything akin to what we now regard as physical law. The quest for laws of nature was also condemned because it seemed to limit God’s omnipotence. As the eleventh-century Italian cleric Peter Damian insisted, one could not know anything for certain, as God could alter it all in an instant.

Opposition to medieval rationalism was motivated in part by valid concerns about the dangers of bringing science into scripture. When, for example, William of Conches was denounced for seeking physical explanations for the creation of Eve from one of Adam’s ribs, conservatives were right to voice dismay at this apparent transformation of the Bible into a work of science. Read as a kind of moral mythology, holy books may have some social value. Deeming them sources of natural facts must lead to the absurdities of today’s creationism.

By making God a natural phenomenon, the medieval rationalists turned Him into an explicatory contingency for which there has since seemed ever less need. By degrees, such secular learning was found to have so much explanatory power that it rivalled, rather than rationalized, theology itself. The consequent rift between faith and reason has now left traditional religions so compromised they are susceptible to displacement by more naive and dogmatic varieties.”

from Triumph of the medieval mind by Philip Ball (Nature 452, 816-818 (17 April 2008) | doi:10.1038/452816a)

On Looking For Things That May Not Be There

The Flyer for the Series Mentioned Below

On Monday (4/14/08) I had occasion to attend an informal talk on dark matter and dark energy at the Picnic Café given by Alberto Nicolis as part of a continuing series called Café Science in collaboration with Columbia University. I think this sort of casual interaction between scientists and the general public is all too rare, and I was heartened by the lively discussion that resulted between the speaker and the capacity crowd of about 40 individuals spanning ages from 13 to 80. The talk itself lasted about 30 minutes, with Alberto taking sips of white wine from his stance in the middle of the room.

His talk was intended to provide, in broad strokes, a lay understanding of those mysterious terms I mentioned above, dark matter and dark energy. The reason for their recent explosion of use in the scientific literature and the news media is, in essence, a matter of length scales.

It goes like this: we have two ways of talking about gravity: Newton’s law of universal gravitation, and Einstein’s theory of general relativity. Within our solar system, they are essentially indistinguishable in that both are quite capable of explaining the orbits of the planets (although general relativistic corrections are required for the global positioning system, GPS). When astronomers observe galaxies – where the effects of gravity occur over much larger distances – however, things are different.
Allow me a brief dalliance for the purpose of pedagogy: it is possible to estimate the mass of a galaxy from the number of stars and planets it contains. Even the presence (and, indirectly, the mass) of black holes can be inferred by watching more distant stars pass behind and become occluded by them. In this way we can calculate the approximate mass of all the matter that we can see. This is where we run into a problem: based on these calculations, the stars in these galaxies are moving far too fast; there is not enough visible mass to explain their observed velocities. By rights, if these stars are moving this fast, they should be flung away from their galactic homes. This realization has led to the inference that there must be some additional mass in the galaxy that we cannot see, mass that simply doesn’t interact with light as all known matter does, yet creates enough force to bind these stellar conglomerations together. This is dark matter.
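The inference above can be sketched numerically. For a star on a circular orbit, gravity must supply the centripetal force, so the mass enclosed within the orbit follows from the observed speed: M = v²r/G. The orbital radius, speed, and visible-mass figure below are round illustrative values, not measurements from any particular galaxy.

```python
# Rough sketch of the dark-matter inference from a galactic rotation curve.
# Circular orbit: G * M * m / r^2 = m * v^2 / r  =>  M = v^2 * r / G.
# The specific numbers are illustrative, not measured values.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
KPC = 3.086e19         # one kiloparsec, in meters
M_SUN = 1.989e30       # one solar mass, in kg

r = 15 * KPC           # orbital radius of an outer star
v = 220e3              # observed orbital speed, m/s (flat rotation curve)

mass_required = v ** 2 * r / G   # mass needed to keep the star bound
mass_visible = 5e10 * M_SUN      # hypothetical tally of visible matter

print(mass_required / M_SUN)     # on the order of 1e11 solar masses
print(mass_required / mass_visible)
```

With these toy numbers the required mass comes out several times larger than the visible tally; the shortfall is what gets attributed to dark matter.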

A graphical representation of dark matter

Another astronomical observation, on a still larger length scale, leads to further worries. The universe is believed to be expanding because individual galaxies appear to be moving away from one another. This wouldn’t be troubling at all if their relative velocities were constant; that could be explained as a remnant of the outward momentum imparted by the big bang. But it isn’t: the rate of expansion seems to be increasing with time. Because all matter creates an attractive gravitational force, one cannot invoke dark matter here; that would only exacerbate the issue. To deal with this conundrum, we must instead postulate some other weird stuff that creates a repulsive gravitational force. This is where dark energy comes in. Because one well-known feature of Einstein’s general relativity is the equivalence of matter and energy (E = mc²), the theory accommodates the existence of some form of energy that generates the requisite force to produce the acceleration of the universe.
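The logic here can be caricatured in one dimension: with attractive gravity alone, two receding galaxies must slow down; only by adding a repulsive term that grows with separation (standing in for dark energy) does the recession speed up. The force law and all constants below are arbitrary toy choices, not cosmology.

```python
# Toy one-dimensional sketch of why an accelerating expansion demands
# something beyond ordinary, attractive gravity. Units are arbitrary;
# this is a cartoon, not a cosmological model.

def simulate(separation, velocity, repulsion, steps=1000, dt=0.01):
    """Integrate the separation of two 'galaxies' under a toy force law:
    an attractive 1/r^2 pull plus an optional repulsion growing with r."""
    for _ in range(steps):
        accel = -1.0 / separation ** 2 + repulsion * separation
        velocity += accel * dt
        separation += velocity * dt
    return velocity

# With gravity alone, the recession velocity always decays...
v_matter_only = simulate(separation=10.0, velocity=1.0, repulsion=0.0)

# ...but with a repulsive term, the recession accelerates.
v_dark_energy = simulate(separation=10.0, velocity=1.0, repulsion=0.05)

print(v_matter_only, v_dark_energy)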

These explanations are fairly tidy in the sense that they work well at patching up existing theories. However, in talking with Dr. Nicolis after the conclusion of his remarks and the Q&A session, a slightly different picture arose. I was questioning him about the recently raised possibility that the assumption of a homogeneous distribution of dark energy requires substantially different corrections to general relativity than some other, more exotic distributions might1. His sensible response was to point out that we have no reason, theoretical or otherwise, to favor any particular distribution of dark energy over any other (unlike dark matter, which must take on a very specific distribution in order to have the appropriate effect). In fact, his own work on gravitation goes even further, abandoning the concepts of dark matter and dark energy entirely in favor of a more fundamental reformulation of the laws of gravity which we hold so dear. This may sound radical, but it bears a resemblance to a similar sort of decision made by Einstein early in the last century.

I am referring here to Einstein’s rejection of the concept of the æther. This enigmatic substance was originally proposed by Isaac Newton in order to explain what we now know to be the effects of gravity on light. He observed that light from distant sources was diffracted or bent in proximity to certain heavenly bodies, and proposed that there must be some all-pervasive stuff collecting near heavy objects which was responsible for this effect. Further, and much later, it was suggested that the æther was needed to support the propagation of electromagnetic waves through space.

The short version of the story is that while some were performing painstaking and extremely sensitive experiments designed to distinguish between various theories of æther, Einstein was quietly developing his theory of special relativity, which did away with the need for such machinations entirely.

It is anybody’s guess what the outcome of all this dark matter and dark energy business will be, but one thing is sure: the most interesting science is born out of times like these, eras in which empirical observations challenge experimentalists and theoreticians alike to tinker with and explain things they don’t understand.

References:

1. Ellis, G. (2008) Cosmology: Patchy solutions. Nature 452, 158-161 | doi:10.1038/452158a

On STDP from Behavior

Figure 6 from ref. 1

I’ve written several times about spike-timing-dependent plasticity (STDP), one method by which individual neurons in the mammalian brain learn, changing their responses to the signals sent from other neurons.

It is believed that STDP is a major route of such learning, both during development and in the adult animal, potentially underlying, for instance, the associative conditioning famously demonstrated by Pavlov. Indeed, it is just this sort of patterned external sensory stimulus (bell, then food) that represents a candidate for learning through STDP. However, connecting the presence of structured environmental variables to underlying brain changes has proven a difficult experimental challenge.

A recent piece of research has achieved just such a feat in the optic tectum of a non-mammal, the developing frog Xenopus laevis1.

I found the figure above to be the most intriguing result from the paper summarizing these experiments, published in Nature Neuroscience. The image represents the finding that if the tadpoles are exposed to repetitive flashes of light with a specific time difference between them (top), the neurons in their optic tecta respond by adjusting the latency (time from stimulus to response) of their spiking reactions to single, isolated flashes (middle: neural activity; bottom: histograms of spike latencies).
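For readers new to STDP, the core of the rule is a temporally asymmetric learning window: a synapse strengthens when the presynaptic spike precedes the postsynaptic one, and weakens when the order is reversed. Here is a minimal sketch of the classic pair-based window; the amplitudes and time constant are generic illustrative values, not parameters from this study.

```python
# Minimal sketch of the classic pair-based STDP learning window.
# Parameter values are illustrative, not taken from the Xenopus paper.

import math

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Weight change for one pre/post spike pair.
    dt_ms = t_post - t_pre: positive (pre before post) potentiates,
    negative (post before pre) depresses, decaying with |dt|."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

# Pre fires 10 ms before post: the synapse strengthens.
print(stdp_dw(10.0) > 0)
# Post fires 10 ms before pre: the synapse weakens.
print(stdp_dw(-10.0) < 0)
```

Repeated stimuli with a fixed inter-flash interval fall on one side of this window again and again, which is why patterned input can systematically shift response properties such as spike latency.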

I am encouraged by this work because it bridges what is currently a rather formidable gap: that between understanding the brain at the level of single neurons and understanding the behavior of an animal as a whole.

References:

1. Pratt, KG, Dong, W & Aizenman, CD (2008) Development and spike timing–dependent plasticity of recurrent excitation in the Xenopus optic tectum Nature Neuroscience 11, 467-475 | doi:10.1038/nn2076

On Hippocampal Memory

The Hippocampus

Lydia Kibiuk, copyright © 2002 Lydia Kibiuk

The hippocampus is the area of the mammalian brain most closely associated with memory, particularly spatial memory, meaning internal maps. It is common for individuals with lesions of the hippocampus (and surrounding related regions) to have amnesia, as in the famous case of H.M. and, more recently, E.P. It has also been shown that London cab drivers, who presumably require large internal maps for navigation, have enlarged hippocampi1. However, it has become clear that the hippocampus isn’t required for all forms of memory. A set of looming questions, then: where are memories stored, how are they formed, and what roles do various structures in the brain play in these activities?

I attended a seminar (3/26/08) given by Larry Squire, who has been studying the role of the hippocampus in memory formation, retrieval, and storage for quite some time. He summarized results comparing normal patients to those with hippocampal lesions. In general, it seems as though this structure is required only for the formation of certain types of new memories, not for recall or storage, since the lesioned patients had no trouble remembering things from their past (including how to navigate their childhood neighborhoods). However, the truly fascinating result he presented was from an experiment designed to reveal what role the hippocampus plays in building new memories.

Performance Graphs

(from reference 2)

The figure above summarizes the data gathered from normal and hippocampal-lesion patients during a memory task. The task was quite straightforward: subjects were presented with 8 pairs of objects, one pair at a time, in which one member of each pair was always “correct.” In a given trial, the subject was presented with a pair and chose one object by reaching out and picking it up. On the underside of each object was a sticker that read either “correct” or “incorrect.” As represented in the left panel of the figure above, normal humans were able to reach perfect performance on this task through repeated presentation of these pairs3. In addition, these subjects were easily able to cope with a variation on the task: all 16 objects were presented at once, and the subject was asked to separate the correct from the incorrect items (indicated by the grey bar to the right of the trace).

The hippocampal-lesion patients, however, showed dramatically different results. They required 12 times as much training, never reached the same level of performance as the normal subjects, and were unable to perform the task variant. This is to say nothing of the fact that they didn’t recall ever having attempted the task before when queried during the 2nd through 36th sessions.

It is fascinating that these patients were able to learn this task at all, and it is clear that they were using some completely different strategy from the normal subjects (one which relies heavily on the paired-object context, as revealed by their confusion when presented with all the objects at once). This research raises several questions. First, if there is an alternative pathway for learning such tasks, how does the brain choose the hippocampal path to record a particular type of memory? One might suggest that the brain uses the hippocampus for all memories unless it is unusable, as in these patients, but we know that many types of motor learning, learning to play the piano for instance, do not require the hippocampus. A further set of interrelated questions concerns where and how such memories are stored, and how this differs in patients with intact hippocampi.

This type of research shows us definitively that we have the capacity for different types of memory, and that we have a ways to go in understanding how and where each operates.

Notes & References:

1. Maguire EA, Frackowiak RS, Frith CD. (1997) Recalling routes around London: activation of the right hippocampus in taxi drivers. J Neurosci 17(18):7103-10.
2. Bayley PJ, Frascino JC, Squire LR. (2005) Robust habit learning in the absence of awareness and independent of the medial temporal lobe. Nature; 436(7050):550-3.
3. A “session” in this experiment consisted of 40 trials, 5 presentations of each of the 8 pairs of objects.

Wellcome Images 2008

Fly on Sugar Crystals

The Wellcome Trust is the largest charity in the UK. They are well described by this quote from their website: “We fund innovative biomedical research, in the UK and internationally, spending around £650 million each year to support the brightest scientists with the best ideas.” Further, “Wellcome Images is one of the Wellcome Library’s major visual collections. Part of Wellcome Collection, a major new £30 million public venue developed by the Wellcome Trust, the Library has over 750 000 books and journals, an extensive range of manuscripts, archives and films, and more than 250 000 paintings, prints and drawings.” And, “The annual Wellcome Images awards (previously known as Biomedical Images Awards) reward contributors for their outstanding work and winners are chosen by a panel of experts. The resulting public exhibitions are always extremely popular and receive widespread acclaim.”

Above is one image from the 2008 awards collection; you can view the rest here, and many other images from their library here.

This sort of thing has actually become de rigueur for Nature & Science/NSF, who give out awards with a slightly larger scope, including interactive media, and who break imagery down into real and virtual categories.