Archive for February, 2011

Two Universes, Same Structure

February 21, 2011

This image is not of a neuron.

This image is of the other universe; the one outside our heads.

It depicts the “evolution of the matter distribution in a cubic region of the Universe over 2 billion light-years”, as computed by the Millennium Simulation. (Click the image above for a better view.)

The next image, of a neuron, is included for comparison.

It is tempting to wax philosophical on this structure equivalence. How is it that both the external and internal universes can have such similar structure, and at such vastly different physical scales?

If we choose to go philosophical, we may as well ponder something even more fundamental: Why is it that all complex systems seem to have a similar underlying network-like structure?

For illustration of this point, just take a glance at the front page of VisualComplexity.com (partially reproduced below).

These neural-network-like visual images represent complex systems and relations for domains as diverse as academic citations, the blogosphere, scientific knowledge, genealogy, iTunes music collections, and Italian wine production.

Does this imply some deep equivalence exists between all complex systems? Is it the nature of complex systems to be network-like?

Alternatively, this is perhaps simply how we, as neural networks, are able to conceptualize the external universe. Could it be that the external universe is vastly different in form from our internal universe, but we simply perceive that which happens to be compatible with our neural network knowledge structure?

It seems there are some situations where we have trouble representing reality for this reason. However, evolutionary pressures for survival likely drove the human brain to represent the world as accurately as possible. (Otherwise our ancestors might have believed, for example, that lions disappeared when they hid behind bushes; an obviously maladaptive representation of reality.) This suggests that even though our brains don't represent the world with complete accuracy, they are nonetheless quite accurate in most cases.

Ultimately I think the equivalence between complex systems is due to the underlying nature of such systems. They must all involve massive integrated differentiation. In other words, there must be many different things (nodes), with many different relations among them (links) for a system to be complex. Thus integrated differentiation, the very basis of complexity, is inherently network-like (i.e., has the equivalent of nodes and links).

It is compelling to consider if neural systems, with their numerous nodes (neurons) and links (synapses) providing integrated differentiation, might have evolved complexity in order to represent other complex systems. In other words, neural systems may have evolved in order to mirror the complexity presented by the external universe, which helped each organism adapt and survive in its environment.

Thus the similarity between the internal and external universes may be due not to coincidence, but to design.


Levels of Analysis and Emergence: The Neural Basis of Memory

February 12, 2011

Cognitive neuroscience constantly works to find the appropriate level of description (or, in the case of computational modeling, implementation) for the topic being studied.  The goal of this post is to elaborate on this point a bit and then illustrate it with an interesting recent example from neurophysiology.

As neuroscientists, we can often  choose to talk about the brain at any of a number of levels: atoms/molecules, ion channels and other proteins, cell compartments, neurons, networks, columns, modules, systems, dynamic equations, and algorithms.

However, a description at too low a level might be too detailed, causing one to lose the forest for the trees.  Alternatively, a description at too high a level might miss valuable information and is less likely to generalize to different situations.

For example, one might theorize that cars work by propelling gases from their exhaust pipes.  Although this might be consistent with all of the observed data, by looking “under the hood” one would find evidence that this model of a car’s function is incorrect.

On the other hand, a model may be formulated at too low a level.  For example, a description of the interactions between molecules of wood and atoms of metal is not essential for a complete, thorough understanding of how a door works.

Emergence

One particularly exciting aspect of multi-level research is when one synthesizes enough observations to move from one level to a higher one.  Emergence is a term used to describe what occurs when simpler rules interact to form complex behavior.   It’s when a particular combination of properties or (typically nonlinear) processes gives rise to something surprising and/or non-obvious.  To give a basic example, hydrogen and oxygen both support fire.  Surprisingly, their combination — water — puts fires out and expands when frozen.

An Example of Emergence: The Neural Basis of Memory

A recent article by Raymond and Redman (Journal of Neurophysiology, 2002) takes a close look at three separate subcellular mechanisms that appear to support LTP. (Reminder: LTP is long-term potentiation, one of the best candidates to date for the neural basis of memory.)

Raymond and Redman replicate the earlier finding that longer bouts of electrical stimulation can cause LTP to be more powerful (resulting in larger postsynaptic responses) and to last longer. They demonstrated three different levels of LTP in their experiment by using stimulation trains of three different lengths. This stimulation-dependent property of LTP has been taken as the basis for the synaptic modification rules used in neural network models (see the post Neural Network "Learning Rules" below).

Interestingly, the researchers then demonstrated that by blocking three different cellular mechanisms – ryanodine receptors, IP3 receptors, and L-type VDCCs, respectively – they were able to selectively block the LTP induced by the shortest, intermediate, or longest stimulation trains.

Taken together, these results suggest that the high-level phenomenon of LTP is actually composed of (at least) three separate underlying processes.  These separate processes appear to cover different timespans, contributing to an exponential curve relating LTP to the time and strength of neuronal activity.
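
To see how a single LTP curve could emerge from dissociable parts, here is a toy numerical sketch in Python. It is my own illustration, not a model from the paper: every threshold, amplitude, and time constant below is an invented number, chosen only to show how components recruited by different stimulation lengths can add up to one smooth-looking phenomenon.

import math

# Toy illustration only: measured LTP modeled as the sum of three dissociable
# components. The thresholds, amplitudes, and decay time constants are invented
# numbers, not values from Raymond and Redman (2002).
COMPONENTS = [
    # (min. train length in s to recruit, amplitude, decay time constant in s)
    (0.5, 0.2, 600.0),     # stands in for the ryanodine-receptor-dependent piece
    (2.0, 0.3, 3600.0),    # stands in for the IP3-receptor-dependent piece
    (5.0, 0.5, 86400.0),   # stands in for the L-type VDCC-dependent piece
]

def ltp_magnitude(train_length_s, time_after_induction_s):
    """Fractional increase in the postsynaptic response, as a sum of components."""
    total = 0.0
    for threshold, amplitude, tau in COMPONENTS:
        if train_length_s >= threshold:   # longer trains recruit more components
            total += amplitude * math.exp(-time_after_induction_s / tau)
    return total

# Longer trains recruit more (and slower-decaying) components, so the "single"
# LTP curve is larger and longer-lasting -- yet it is built from separable parts.
for train in (1.0, 3.0, 10.0):
    print(f"{train:4.1f} s train -> LTP {ltp_magnitude(train, 1800.0):.3f} at 30 min")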

Insight Gained

The study mentioned in this post contributes to the field by lending additional evidence to our current theoretical understanding of a mechanism that likely underpins memory. From a theoretical perspective, LTP appears to be a meaningful construct that emerges from multiple, dissociable subcellular processes.

More generally, the study is an excellent demonstration of emergence: three separate processes at a lower level (subcellular receptor proteins) appear to jointly support a single, more abstract process at a higher level (LTP in cellular electrophysiology). As a result, computational modelers can feel more comfortable building LTP-like assumptions into their simulations.

A final thought is that this type of research also clearly highlights the importance of interdisciplinary research in the neurosciences.

Neural Network “Learning Rules”

February 11, 2011

Most neurocomputational models are not hard-wired to perform a task. Instead, they are typically equipped with some kind of learning process. In this post, I'll introduce some notions of how neural networks can learn. Understanding learning processes is important for cognitive neuroscience because they may underlie the development of cognitive ability.

 

Let’s begin with a theoretical question that is of general interest to cognition: how can a neural system learn sequences, such as the actions required to reach a goal?

Consider a neuromodeler who hypothesizes that a particular kind of neural network can learn sequences. He might start his modeling study by “training” the network on a sequence. To do this, he stimulates (activates) some of its neurons in a particular order, representing objects on the way to the goal.

After the network has been trained through multiple exposures to the sequence, the modeler can then test his hypothesis by stimulating only the neurons from the beginning of the sequence and observing whether the neurons in the rest of the sequence activate in order to finish the sequence.

Successful learning in any neural network is dependent on how the connections between the neurons are allowed to change in response to activity. The manner of change is what the majority of researchers call a "learning rule". However, we will call it a "synaptic modification rule" because although the network learned the sequence, it is not clear that the *connections* between the neurons in the network "learned" anything in particular.

The particular synaptic modification rule selected is an important ingredient in neuromodeling because it may constrain the kinds of information the neural network can learn.

There are many categories of mathematical synaptic modification rules used to describe how synaptic strengths should be changed in a neural network. Some of these categories include: backpropagation of error, correlative Hebbian, and temporally-asymmetric Hebbian.

  • Backpropagation of error states that connection strengths should change throughout the entire network in order to minimize the difference between the actual activity and the "desired" activity at the "output" layer of the network.
  • Correlative Hebbian states that any two interconnected neurons that are active at the same time should strengthen their connections, so that if one of the neurons is activated again in the future the other is more likely to become activated too.
  • Temporally-asymmetric Hebbian is described in more detail in the example below, but essentially emphasizes the importance of causality: if a neuron reliably fires before another, its connection to the other neuron should be strengthened. Otherwise, it should be weakened.
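
As a rough sketch of how the two Hebbian variants differ, here is each written out for a single connection of strength W between 0 and 1. The function names and the learning-rate value are my own illustrative choices; the temporally-asymmetric form is the one elaborated in the example below.

# Minimal sketch of the two Hebbian variants for one connection of strength w
# (0 <= w <= 1). Activities are 0 or 1; learning_rate is an arbitrary small value.
learning_rate = 0.05

def correlative_hebbian(w, pre_now, post_now):
    # Strengthen whenever the two neurons are active at the same time.
    return w + learning_rate * pre_now * post_now

def temporally_asymmetric_hebbian(w, pre_past, post_now):
    # Strengthen if the presynaptic neuron fired just before the postsynaptic one
    # (pre_past = 1); weaken toward zero if it did not (pre_past = 0).
    return w + learning_rate * post_now * (pre_past - w)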

Why are there so many different rules?  Some synaptic modification rules are selected because they are mathematically convenient.  Others are selected because they are close to currently known biological reality.  Most of the informative neuromodeling is somewhere in between.

An Example

Let's look at an example of a learning rule used in a neural network model that I have worked with: imagine you have a network of interconnected neurons that can either be active or inactive. If a neuron is active, its value is 1; otherwise, its value is 0. (The use of 1 and 0 to represent simulated neuronal activity is only one of many ways to do so; this approach goes by the name "McCulloch-Pitts".)

Consider two neurons in the network, named PRE and POST, where the neuron PRE projects to neuron POST. A temporally-asymmetric Hebbian rule looks at a snapshot in time and says that the strength of the connection from PRE to POST, a value W between 0 and 1, should change according to:

W(future) = W(now) + learningRate x  POST(now)  x  [PRE(past) – W(now)]

This learning rule closely mimics known biological neuronal phenomena such as long-term potentiation (LTP) and spike-timing dependent plasticity (STDP), which are thought to underlie memory and will be subjects of future Neurevolution posts. (By the way, the learning rate is a small number less than 1 that allows the connection strengths to change gradually.)

Let’s take a quick look at what this synaptic modification rule actually means.  If the POST neuron is not active “now”, the connection strength W does not change in the future because everything after the first + becomes zero:

W(future) = W(now) + 0

Suppose on the other hand that the POST neuron is active now.  Then POST(now) equals 1.  To see what happens to the connection strength in this case, let’s assume the connection strength is 0.5 right now.

W(future) = 0.5 + learningRate x [PRE(past) – 0.5]

As you can see, two different things can happen: if the PRE was active, then PRE(past) = 1, and we will get a stronger connection in the future because we have:

W = 0.5 + learningRate x (1 – 0.5)

But if the PRE was inactive, then PRE(past) = 0, and we will get a weaker connection in the future because we have:

W = 0.5 + learningRate x (0 – 0.5)
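
To make the arithmetic above concrete, here is the same rule written as a few lines of Python. The learning rate of 0.1 is an arbitrary illustrative value, not one taken from the text.

learning_rate = 0.1  # arbitrary small value chosen for illustration

def update_weight(w_now, pre_past, post_now):
    # W(future) = W(now) + learningRate x POST(now) x [PRE(past) - W(now)]
    return w_now + learning_rate * post_now * (pre_past - w_now)

# The three cases walked through above: no change, strengthening, weakening.
for pre_past, post_now in [(1, 0), (1, 1), (0, 1)]:
    w_new = update_weight(0.5, pre_past, post_now)
    print(f"PRE(past)={pre_past}, POST(now)={post_now}: W goes from 0.5 to {w_new:.2f}")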

So this is all good — but what can this rule do in a network simulation?  It turns out that this kind of synaptic modification rule can do a whole lot.  Let’s review an experiment that shows one property of a network equipped with this learning mechanism.

Experiment: Can a Network of McCulloch-Pitts Neurons Learn a Sequence with this Rule?

Let’s review part of a simulation experiment we carried out a few years ago (Mitman, Laurent and Levy, 2003).  Imagine you connect 4000 McCulloch-Pitts neurons together randomly at 8% connectivity.  This means that each neuron has connections coming in from about 300 other neurons in this mess.

When a neuron is active, it passes this activity to other neurons through its weighted connections. The strengths of all the connections start out the same, but over time they change according to the temporally-asymmetric Hebbian rule discussed above. In order to keep all the neurons from being active at once, there's a cutoff so that only about 7% of the neurons are active (see the paper for full details).

To train the network, we then turned on groups of neurons in order: this was a sequence of 10 neurons at a time, turned on for 10 time ticks each.  This is like telling the network the sequence “A,B,C,D,E,F,G,H,I,J”.

Here's a picture showing what happens in the network at first, as we activated the blocks of neurons during training. In these figures, time moves from left to right. Each little black line means that a neuron was active at that time; to save space, only the part of the network where we "stimulated" is shown. The neurons that we weren't forcing to be active went on and off randomly because of the random connections.

After we exposed the network to the sequence several times, something interesting began to happen.  The firing at other times was no longer completely random.  In fact, it looked like the network was learning to anticipate that we were going to turn on the blocks of neurons:

Did the network learn the sequence?  Suppose we jumpstart it with “A”.  Will it remember “B”, “C”,… “J”?

Indeed it did!  A randomly-connected neural network equipped with this synaptic modification rule is able to learn sequences!
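
For readers who want to see the moving parts, here is a heavily stripped-down sketch in the spirit of that experiment. It is not the Mitman, Laurent and Levy (2003) code: the network is much smaller, the roughly 7% activity limit is replaced by a crude keep-the-top-k cutoff, and every parameter value is my own illustrative choice, so it will not reproduce the published figures exactly.

import numpy as np

rng = np.random.default_rng(0)

# Parameters -- scaled down so the sketch runs quickly. These values are my own
# choices and do not match the published simulation.
n_neurons = 400
connectivity = 0.08        # ~8% chance of a connection between any ordered pair
active_fraction = 0.07     # aim for roughly 7% of neurons active per time step
learning_rate = 0.05
block_size = 10            # neurons per sequence item ("A", "B", ...)
ticks_per_block = 10
n_blocks = 10              # the sequence A..J
n_training_passes = 20

k_active = int(active_fraction * n_neurons)
connected = rng.random((n_neurons, n_neurons)) < connectivity
np.fill_diagonal(connected, False)
weights = np.where(connected, 0.5, 0.0)   # every existing connection starts equal

def step(prev_activity, forced=None):
    """One time tick: sum weighted input from the previous tick, keep only the
    top k_active neurons (a crude stand-in for the paper's activity cutoff),
    then force any externally stimulated neurons on."""
    drive = weights.T @ prev_activity
    activity = np.zeros(n_neurons)
    activity[np.argsort(drive)[-k_active:]] = 1.0
    if forced is not None:
        activity[forced] = 1.0
    return activity

def learn(pre_past, post_now):
    """Temporally-asymmetric Hebbian rule from the text, applied to every
    existing connection: dW = rate * POST(now) * (PRE(past) - W)."""
    global weights
    dw = learning_rate * (np.outer(pre_past, post_now)
                          - post_now[np.newaxis, :] * weights)
    weights = np.where(connected, weights + dw, 0.0)

blocks = [np.arange(b * block_size, (b + 1) * block_size) for b in range(n_blocks)]

# Training: repeatedly activate the blocks in order ("A, B, C, ..., J").
activity = np.zeros(n_neurons)
for _ in range(n_training_passes):
    for block in blocks:
        for _ in range(ticks_per_block):
            new_activity = step(activity, forced=block)
            learn(activity, new_activity)
            activity = new_activity

# Test: cue only block "A" for a few ticks, then let the network run on its own
# and check whether the later blocks switch on in roughly the trained order.
activity = np.zeros(n_neurons)
for t in range(n_blocks * ticks_per_block):
    activity = step(activity, forced=blocks[0] if t < ticks_per_block else None)
    recalled = [i for i, block in enumerate(blocks) if activity[block].mean() > 0.5]
    print(f"tick {t:3d}: active blocks {recalled}")

The design choice worth noticing is that nothing sequence-specific is built into the network: the only thing that changes during training is the set of connection strengths, via the synaptic modification rule above.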

 

This was an example of a "learning rule" and just one function — the ability to learn sequences of patterns — that emerges in a network governed by this rule. In future posts, I'll talk more about these kinds of models and mechanisms, and emphasize their relevance to cognitive neuroscience.

Redefining Mirror Neurons

February 10, 2011

In 1992 Rizzolatti and his colleagues found a special kind of neuron in the premotor cortex of monkeys (Di Pellegrino et al., 1992).

These neurons, which respond to an action whether the monkey performs it itself or observes another monkey (or person) performing it, are called mirror neurons.


Many neuroscientists, such as V. S. Ramachandran, have seized upon mirror neurons as a potential explanatory ‘holy grail’ of human capabilities such as imitation, empathy, and language. However, to date there are no adequate models explaining exactly how such neurons would provide such amazing capabilities.

Perhaps related to the lack of any clear functional model, mirror neurons have another major problem: Their functional definition is too broad.

Typically, mirror neurons are defined as cells that respond selectively to an action both when the subject performs it and when that subject observes another performing it. A basic assumption is that any such neuron reflects a correspondence between self and other, and that such a correspondence can turn an observation into imitation (or empathy, or language).

However, there are several other reasons a neuron might respond both when an action is performed and observed.

First, there may be an abstract concept (e.g., open hand), which is involved in but not necessary for the action, the observation of the action, or any potential imitation of the action.

Next, there may be a purely sensory representation (e.g., of hands / objects opening) which becomes involved independently of action by an agent.

Finally, a neuron may respond to another subject’s action not because it is performing a mapping between self and other but because the other’s action is a cue to load up the same action plan. In this case the ‘mirror’ mapping is performed by another set of neurons, and this neuron is simply reflecting the action plan, regardless of where the idea to load that plan originated. For instance, a tasty piece of food may cause that neuron to fire because the same motor plan is loaded in anticipation of grasping it.

It is clear that mirror neurons, of the type first described by Rizzolatti et al., exist (how else could imitation occur?). However, the practical definition for these neurons is too broad.

How might we improve the definition of mirror neurons? Possibly by verifying that a given cell (or population of cells) responds only while observing a given action and while carrying out that same action.

Alternatively, subtractive methods may be more effective at defining mirror neurons than response properties. For instance, removing a mirror neuron population should make imitation less accurate or impossible. Using this kind of method avoids the possibility that a neuron could respond like a mirror neuron but not actually contribute to behavior thought to depend on mirror neurons.

Of course, the best approach would involve both observing response properties and using controlled lesions. Even better would be to do this with human mirror neurons using less invasive techniques (e.g., fMRI, MEG, TMS), since we are ultimately interested in how mirror neurons contribute to the higher-level behaviors most developed in Homo sapiens, such as imitation, empathy, and language.

 

REFERENCES:

Ferrari, P.F., Gallese, V., Rizzolatti, G., & Fogassi, L. (2003). Mirror Neurons Responding to the Observation of Ingestive and Communicative Mouth Actions in the Monkey Ventral Premotor Cortex. European Journal of Neuroscience, 17, 1703-1714.

Rizzolatti, G., & Craighero, L. (2004). The Mirror-Neuron System. Annual Review of Neuroscience, 27, 169-192.


The Neuroscience of Music

February 5, 2011

Why does music make us feel? On the one hand, music is a purely abstract art form, devoid of language or explicit ideas. The stories it tells are all subtlety and subtext. And yet, even though music says little, it still manages to touch us deep, to tickle some universal nerves. When listening to our favorite songs, our body betrays all the symptoms of emotional arousal. The pupils in our eyes dilate, our pulse and blood pressure rise, the electrical conductance of our skin is lowered, and the cerebellum, a brain region associated with bodily movement, becomes strangely active. Blood is even re-directed to the muscles in our legs. (Some speculate that this is why we begin tapping our feet.) In other words, sound stirs us at our biological roots. As Schopenhauer wrote, “It is we ourselves who are tortured by the strings.”

We can now begin to understand where these feelings come from, why a mass of vibrating air hurtling through space can trigger such intense states of excitement. A brand new paper in Nature Neuroscience by a team of Montreal researchers marks an important step in revealing the precise underpinnings of “the potent pleasurable stimulus” that is music. Although the study involves plenty of fancy technology, including fMRI and ligand-based positron emission tomography (PET) scanning, the experiment itself was rather straightforward. After screening 217 individuals who responded to advertisements requesting people that experience “chills to instrumental music,” the scientists narrowed down the subject pool to ten. (These were the lucky few who most reliably got chills.) The scientists then asked the subjects to bring in their playlist of favorite songs – virtually every genre was represented, from techno to tango – and played them the music while their brain activity was monitored.

Because the scientists were combining methodologies (PET and fMRI) they were able to obtain an impressively precise portrait of music in the brain. The first thing they discovered (using ligand-based PET) is that music triggers the release of dopamine in both the dorsal and ventral striatum. This isn’t particularly surprising: these regions have long been associated with the response to pleasurable stimuli. It doesn’t matter if we’re having sex or snorting cocaine or listening to Kanye: These things fill us with bliss because they tickle these cells. Happiness begins here.

The more interesting finding emerged from a close study of the timing of this response, as the scientists looked to see what was happening in the seconds before the subjects got the chills. I won’t go into the precise neural correlates – let’s just say that you should thank your right NAcc the next time you listen to your favorite song – but want to instead focus on an interesting distinction observed in the experiment:

In essence, the scientists found that our favorite moments in the music were preceded by a prolonged increase of activity in the caudate. They call this the "anticipatory phase" and argue that the purpose of this activity is to help us predict the arrival of our favorite part:

“Immediately before the climax of emotional responses there was evidence for relatively greater dopamine activity in the caudate. This subregion of the striatum is interconnected with sensory, motor and associative regions of the brain and has been typically implicated in learning of stimulus-response associations and in mediating the reinforcing qualities of rewarding stimuli such as food.”

In other words, the abstract pitches have become a primal reward cue, the cultural equivalent of a bell that makes us drool. Here is their summary:

“The anticipatory phase, set off by temporal cues signaling that a potentially pleasurable auditory sequence is coming, can trigger expectations of euphoric emotional states and create a sense of wanting and reward prediction. This reward is entirely abstract and may involve such factors as suspended expectations and a sense of resolution. Indeed, composers and performers frequently take advantage of such phenomena, and manipulate emotional arousal by violating expectations in certain ways or by delaying the predicted outcome (for example, by inserting unexpected notes or slowing tempo) before the resolution to heighten the motivation for completion. The peak emotional response evoked by hearing the desired sequence would represent the consummatory or liking phase, representing fulfilled expectations and accurate reward prediction. We propose that each of these phases may involve dopamine release, but in different subcircuits of the striatum, which have different connectivity and functional roles.”

The question, of course, is what all these dopamine neurons are up to. What aspects of music are they responding to? And why are they so active fifteen seconds before the acoustic climax? After all, we typically associate surges of dopamine with pleasure, with the processing of actual rewards. And yet, this cluster of cells in the caudate is most active when the chills have yet to arrive, when the melodic pattern is still unresolved.

One way to answer these questions is to zoom out, to look at the music and not the neuron. While music can often seem (at least to the outsider) like a labyrinth of intricate patterns – it’s art at its most mathematical – it turns out that the most important part of every song or symphony is when the patterns break down, when the sound becomes unpredictable. If the music is too obvious, it is annoyingly boring, like an alarm clock. (Numerous studies, after all, have demonstrated that dopamine neurons quickly adapt to predictable rewards. If we know what’s going to happen next, then we don’t get excited.) This is why composers introduce the tonic note in the beginning of the song and then studiously avoid it until the end. The longer we are denied the pattern we expect, the greater the emotional release when the pattern returns, safe and sound. That is when we get the chills.

To demonstrate this psychological principle, the musicologist Leonard Meyer, in his classic  book Emotion and Meaning in Music (1956), analyzed the 5th movement of Beethoven’s String Quartet in C-sharp minor, Op. 131. Meyer wanted to show how music is defined by its flirtation with – but not submission to – our expectations of order. To prove his point, Meyer dissected fifty measures of Beethoven’s masterpiece, showing how Beethoven begins with the clear statement of a rhythmic and harmonic pattern and then, in an intricate tonal dance, carefully avoids repeating it. What Beethoven does instead is suggest variations of the pattern. He is its evasive shadow. If E major is the tonic, Beethoven will play incomplete versions of the E major chord, always careful to avoid its straight expression. He wants to preserve an element of uncertainty in his music, making our brains beg for the one chord he refuses to give us. Beethoven saves that chord for the end.

According to Meyer, it is the suspenseful tension of music (arising out of our unfulfilled expectations) that is the source of the music’s feeling. While earlier theories of music focused on the way a noise can refer to the real world of images and experiences (its “connotative” meaning), Meyer argued that the emotions we find in music come from the unfolding events of the music itself.  This “embodied meaning” arises from the patterns the symphony invokes and then ignores, from the ambiguity it creates inside its own form. “For the human mind,” Meyer writes, “such states of doubt and confusion are abhorrent. When confronted with them, the mind attempts to resolve them into clarity and certainty.” And so we wait, expectantly, for the resolution of E major, for Beethoven’s established pattern to be completed. This nervous anticipation, says Meyer, “is the whole raison d’etre of the passage, for its purpose is precisely to delay the cadence in the tonic.” The uncertainty makes the feeling – it is what triggers that surge of dopamine in the caudate, as we struggle to figure out what will happen next. And so our neurons search for the undulating order, trying to make sense of this flurry of pitches. We can predict some of the notes, but we can’t predict them all, and that is what keeps us listening, waiting expectantly for our reward, for the errant pattern to be completed. Music is a form whose meaning depends upon its violation.