I study saccade adaptation, the process by which our saccades (rapid, point-to-point eye movements) are kept accurate. I am proud to report that something I wrote (with a post-doc in the lab where I work) was published today in the Journal of Neuroscience. Take a look at the PDF if you’re so inclined.
An emerging idea in neuroscience is that noise is a good thing, in moderation. Neural activity is very noisy: there is a large degree of variability in the temporal and frequency domains of the spiking of brain cells. It is thought that this variability actually contributes to the robustness of the system. One concrete example is stochastic resonance, a phenomenon in which randomly perturbing neural activity can push it over some threshold such that a sensory event is detected, or an ambiguous decision is resolved one way or the other. It may be difficult to see this as beneficial, but because we are fantastic learning machines, simply making a decision, or having a perceptual event occur (even when there has been none), contributes to the system’s learning abilities far more than indecision or non-detection.
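To make the stochastic resonance intuition concrete, here is a minimal sketch (my own toy model, not taken from any of the papers discussed): a signal too weak to cross a detection threshold on its own becomes detectable when a moderate amount of noise is added, while very large noise drowns it out.

```python
import numpy as np

rng = np.random.default_rng(0)

SIGNAL = 0.8      # subthreshold input: never crosses threshold alone
THRESHOLD = 1.0   # detection threshold
N_TRIALS = 20000

def discriminability(noise_sd):
    """Hit rate (signal + noise crosses threshold) minus false-alarm rate (noise alone)."""
    hits = np.mean(SIGNAL + rng.normal(0.0, noise_sd, N_TRIALS) > THRESHOLD)
    false_alarms = np.mean(rng.normal(0.0, noise_sd, N_TRIALS) > THRESHOLD)
    return hits - false_alarms

for sd in (0.01, 0.3, 50.0):
    print(sd, discriminability(sd))
```

Discriminability is near zero for both tiny noise (nothing is ever detected) and huge noise (hits and false alarms are equally likely), and peaks at an intermediate noise level.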
In a more macroscopic example, a recent paper analyzing variability in brain activity across several age groups found that such variability increases with maturation and goes hand in hand with better performance: “Behaviorally, children showed slower, more variable response times (RT), and less accurate recognition than adults. However, brain signal variability increased with age, and showed strong negative correlations with intrasubject RT variability and positive correlations with accuracy. Thus, maturation appears to lead to a brain with greater functional variability, which is indicative of enhanced neural complexity. This variability may reflect a broader repertoire of metastable brain states and more fluid transitions among them that enable optimum responses. Our results suggest that the moment-to-moment variability in brain activity may be a critical index of the cognitive capacity of the brain.”1
1. McIntosh AR, Kovacevic N, Itier RJ. (2008) Increased brain signal variability accompanies lower behavioral variability in development. PLoS Comput Biol. 4(7):e1000106.
Does the language we speak influence our non-verbal thoughts? This question is a polarizing one: some think that language is synonymous with thought, while others consider it a component of our mental abilities or a type of output, no more representative of underlying cogitation than the way we walk or move our arms.
A paper published in the Proceedings of the National Academy of Sciences weighs in on this matter with experimental results indicating that individuals who speak very different languages (English, Turkish, Chinese, and Spanish) seem to represent events non-verbally in similar ways.
Specifically, in one task, the researchers asked their subjects to communicate an event such as “boy tilts glass” (read in each individual’s native language) with gestures. In a second task, subjects were asked to reconstruct an event using pictures. To quantify performance in these tasks, the researchers examined the order in which gestures or pictures were used. They reasoned that because grammatical structures dictate that words be ordered in potentially divergent ways depending on the language, this structure might extend to the ordering of gestures or pictures as well. Interestingly, they found no difference in these orderings between speakers of different languages, suggesting that there is a common underlying mode of event representation which is minimally influenced by spoken language.
1. Goldin-Meadow S, So WC, Ozyürek A, Mylander C. (2008) The natural order of events: how speakers of different languages represent events nonverbally. Proc Natl Acad Sci USA. 105(27):9163-8.
The human brain has roughly 100 billion neurons, and each neuron makes between 1,000 and 10,000 synapses (connections), for a total of approximately 500 trillion synapses. This makes the problem of determining the connectivity, or wiring diagram, of the brain absurdly complex. It is one of the most fundamental problems confronting neuroscientists today, because many questions about how the brain works would be much easier to answer if we simply knew the structure it is built on.
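The back-of-the-envelope arithmetic is easy to check:

```python
neurons = 100e9                 # ~10^11 neurons in the human brain
synapses_per_neuron = 5000      # midpoint of the 1,000-10,000 range
total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.0e}")  # 5e+14, i.e. ~500 trillion
```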
A recent piece of computational research (published in PLoS Computational Biology) suggests a novel statistical method to identify which synapses of a given neuron are active at a given time. The author of the study simulated the output of single neurons while a particular subset of their synapses was active, characterizing the number of action potentials the neuron fired in response to the activity of those specific synapses. Next, he examined the change in output when a single additional synapse was activated along with the baseline subset. He found that if he simulated the addition of one synapse ~80 times, he could measure significant changes in the output of the simulated neuron, such that in subsequent tests it was possible to reliably predict when that synapse was active.
The author’s suggestion is that taking this technique out of the computer and into the world of real brains (or small slices of brain, as is commonly employed) would facilitate the task of elucidating the numerous connections in the brain. While this is true, it must be said that the method is suited to asking one question in particular: which neurons are connected to the one neuron that I know very well? In other words, anyone applying this work would need a single neuron of interest and would then have to stimulate every other neuron that might be connected to it in order to map the connectivity. In this sense, the approach is a far cry from revealing the wiring of the whole brain, but it certainly does help.
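The logic of the detection scheme can be sketched as follows. This is my own toy version, not the author’s model: spike counts are drawn from a Poisson distribution, the numbers are invented, and the test is a simple comparison of a block of ~80 repeats against a calibrated baseline.

```python
import numpy as np

rng = np.random.default_rng(1)

BASELINE_RATE = 20.0   # mean spike count with the baseline synapse subset (toy value)
EXTRA = 2.0            # extra spikes contributed by the one added synapse (toy value)
N_REPEATS = 80         # repeats per test block, echoing the paper's ~80 activations

# Calibrate the baseline output distribution from many repeats.
calibration = rng.poisson(BASELINE_RATE, 5000)
mu = calibration.mean()
sem = calibration.std() / np.sqrt(N_REPEATS)

def synapse_active(block):
    """Declare the extra synapse active if the block's mean count is improbably high."""
    return block.mean() > mu + 2 * sem

active_block = rng.poisson(BASELINE_RATE + EXTRA, N_REPEATS)
silent_block = rng.poisson(BASELINE_RATE, N_REPEATS)
print(synapse_active(active_block), synapse_active(silent_block))
```

Averaging over ~80 repeats shrinks the variability of the mean enough that even a small per-trial contribution from one synapse becomes statistically visible.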
1. Bhalla US. (2008) How to record a million synaptic weights in a hippocampal slice. PLoS Comput Biol. 4(6):e1000098.
Commonly held wisdom says that processing of visual features such as lines, forms, and motion is limited to higher cortical areas (for example, the middle temporal area, MT). Recent research shows, however, that the retina itself can extract motion signals, underscoring the subtle computational prowess of the bit of your brain that lives in your eye.
1. Baccus SA, Olveczky BP, Manu M, Meister M. (2008) A retinal circuit that computes object motion. J Neurosci. 28(27):6807-6817.
Part of the paradox of free will is that it not only liberates us by giving us control over our own actions, it also requires us to take responsibility for decisions we make subconsciously. Thus, it is important to be vigilant in monitoring and understanding one’s own psychology, one’s implicit rationale and underlying systematic reasoning, so that taking responsibility for all acts is useful in correcting or maintaining patterns of behavior. However, we must also remember that there are things that will affect the way we interact with the world that may lie beyond our awareness. In this sense we must be able to forgive ourselves when those factors play a role in decisions we deem inappropriate in retrospect.

Metaphysics aside, there is a great deal of research documenting the effects of context and priming on human behavior. The latest is from a study on how voting location can affect the way people vote1. Specifically, psychology researchers found that those voting on an education tax increase initiative were significantly more likely to vote for the initiative if they were voting in a school. The values (in the table above) indicate that the differences were small, but the statistics indicate that these differences are real.
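To see why a small difference can still be statistically real, here is a back-of-the-envelope two-proportion z-test. The counts are hypothetical, not the paper’s data; the point is only that with large samples, a gap of a couple of percentage points produces a decisive test statistic.

```python
from math import sqrt

# Hypothetical counts (NOT the paper's data), chosen only for illustration.
school_yes, school_n = 5600, 10000   # 56% yes among voters assigned to schools
other_yes, other_n = 5350, 10000     # 53.5% yes among voters elsewhere

p1 = school_yes / school_n
p2 = other_yes / other_n
p_pool = (school_yes + other_yes) / (school_n + other_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / school_n + 1 / other_n))
z = (p1 - p2) / se
print(round(z, 2))  # ~3.55, well beyond the conventional 1.96 cutoff
```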
1. Berger J, Meredith M, Wheeler SC. (2008) Contextual priming: Where people vote affects how they vote. Proc Natl Acad Sci U S A. 105(26):8846-8849.
Using signals from the brain to control robotic arms is no longer cutting edge, having been achieved several times in the last decade. However, in the latest research into this topic, the authors present a novelty: rapidly training monkeys to control an anthropomorphic robotic arm (having a shoulder, elbow, and wrist joint as well as a gripper) in order to feed themselves1. This kind of technology promises to eventually revolutionize prosthetics and give untold freedom to those who can no longer use their own limbs but do retain the brain areas that generate the signals that once controlled them. For this to become feasible, the implants used to record brain activity must be vastly improved (at present they are reliable for only a matter of weeks or months), the processing hardware must be reduced in size (it presently requires several computers), and the process must be automated (the systems must at present be tuned online by a technician).
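For a sense of how movement direction can be read out from motor cortical activity, here is a sketch of the classic population-vector idea. This is a simplification, not the decoder used in the paper, and all numbers are toy values: each simulated unit is cosine-tuned around a preferred direction, and the decoded direction is the rate-weighted sum of those preferred directions.

```python
import numpy as np

# Toy cosine-tuned population: each unit fires most for its preferred direction.
N_UNITS = 64
preferred = np.linspace(0, 2 * np.pi, N_UNITS, endpoint=False)  # evenly spaced (rad)
BASELINE, DEPTH = 10.0, 8.0

def rates(direction):
    """Cosine tuning: rate = baseline + depth * cos(direction - preferred)."""
    return BASELINE + DEPTH * np.cos(direction - preferred)

def decode(r):
    """Population vector: weight each unit's preferred direction by its rate above baseline."""
    w = r - BASELINE
    return np.arctan2((w * np.sin(preferred)).sum(), (w * np.cos(preferred)).sum())

print(round(decode(rates(1.0)), 3))  # recovers the true direction, 1.0
```

With evenly spaced preferred directions the readout is exact; real recordings add noise, uneven sampling, and drift, which is part of why the online tuning mentioned above is still needed.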
1. Velliste M, Perel S, Spalding MC, Whitford AS, Schwartz AB. (2008) Cortical control of a prosthetic arm for self-feeding. Nature. 453(7198):1098-101.