Archive

Posts Tagged ‘learning’

Scientists Map Process by Which Brain Cells Form Long-Term Memories

July 2, 2013 2 comments

Scientists at the Gladstone Institutes have deciphered how a protein called Arc regulates the activity of neurons – providing much-needed clues into the brain’s ability to form long-lasting memories.

These findings, reported in Nature Neuroscience, also offer newfound understanding as to what goes on at the molecular level when this process becomes disrupted.

Led by Gladstone senior investigator Steve Finkbeiner, MD, PhD, this research delved deep into the inner workings of synapses. Synapses are the highly specialized junctions that process and transmit information between neurons. Most of the synapses our brain will ever have are formed during early brain development, but throughout our lifetimes these synapses can be made, broken and strengthened. Synapses that are more active become stronger, a process that is essential for forming new memories.

However, this process is also dangerous, as it can overstimulate the neurons and lead to epileptic seizures. It must therefore be kept in check.

[Image: Arc immunohistochemical staining of a rat dentate gyrus, shown for illustrative purposes; it is not connected to the research.]

Neuroscientists recently discovered one important mechanism that the brain uses to maintain this balance: a process called “homeostatic scaling.” Homeostatic scaling allows individual neurons to strengthen the new synaptic connections they’ve made to form memories, while at the same time protecting the neurons from becoming overly excited. Exactly how the neurons pull this off has eluded researchers, but they suspected that the Arc protein played a key role.

“Scientists knew that Arc was involved in long-term memory, because mice lacking the Arc protein could learn new tasks, but failed to remember them the next day,” said Finkbeiner, who is also a professor of neurology and physiology at UC San Francisco, with which Gladstone is affiliated. “Because initial observations showed Arc accumulating at the synapses during learning, researchers thought that Arc’s presence at these synapses was driving the formation of long-lasting memories.”

But Finkbeiner and his team thought there was something else in play.

The Role of Arc in Homeostatic Scaling

In laboratory experiments, first in animal models and then in greater detail in the petri dish, the researchers tracked Arc’s movements. And what they found was surprising.

“When individual neurons are stimulated during learning, Arc begins to accumulate at the synapses – but what we discovered was that soon after, the majority of Arc gets shuttled into the nucleus,” said Erica Korb, PhD, the paper’s lead author who completed her graduate work at Gladstone and UCSF.

“A closer look revealed three regions within the Arc protein itself that direct its movements: one exports Arc from the nucleus, a second transports it into the nucleus, and a third keeps it there,” she said. “The presence of this complex and tightly regulated system is strong evidence that this process is biologically important.”

In fact, the team’s experiments revealed that Arc acted as a master regulator of the entire homeostatic scaling process. During memory formation, certain genes must be switched on and off at very specific times in order to generate proteins that help neurons lay down new memories. The authors found that Arc, working from inside the nucleus, directed this gene-expression program required for homeostatic scaling to occur. This strengthened the synaptic connections without overstimulating them – thus translating learning into long-term memories.

Implications for a Variety of Neurological Diseases

“This discovery is important not only because it solves a long-standing mystery on the role of Arc in long-term memory formation, but also gives new insight into the homeostatic scaling process itself – disruptions in which have already been implicated in a whole host of neurological diseases,” said Finkbeiner. “For example, scientists recently discovered that Arc is depleted in the hippocampus, the brain’s memory center, in Alzheimer’s disease patients. It’s possible that disruptions to the homeostatic scaling process may contribute to the learning and memory deficits seen in Alzheimer’s.”

Dysfunctions in Arc production and transport may also play a vital role in autism. For example, the genetic disorder Fragile X syndrome, a common cause of both mental retardation and autism, directly affects the production of Arc in neurons.

“In the future,” added Dr. Korb, “we hope further research into Arc’s role in human health and disease can provide even deeper insight into these and other disorders, and also lay the groundwork for new therapeutic strategies to fight them.”

Journal reference: Abstract for “Arc in the nucleus regulates PML-dependent GluA1 transcription and homeostatic plasticity” by Erica Korb, Carol L Wilkinson, Ryan N Delgado, Kathryn L Lovero and Steven Finkbeiner in Nature Neuroscience. Published online June 9, 2013. doi:10.1038/nn.3429

The above story is reprinted from materials provided in a UCSF press release.

Discovery of Gatekeeper Nerve Cells Explains the Effect of Nicotine on Learning and Memory

December 16, 2012 Leave a comment

Swedish researchers at Uppsala University have, together with Brazilian collaborators, discovered a new group of nerve cells that regulate processes of learning and memory. These cells act as gatekeepers and carry a receptor for nicotine, which can help explain our ability to remember and sort information.

The discovery of the gatekeeper cells, which are part of a memory network together with several other nerve cells in the hippocampus, reveals new fundamental knowledge about learning and memory. The study is published today in Nature Neuroscience.

The hippocampus is an area of the brain that is important for consolidation of information into memories and helps us to learn new things. The newly discovered gatekeeper nerve cells, also called OLM-alpha2 cells, provide an explanation of how the flow of information is controlled in the hippocampus. Read more…

Researchers Show that Memories Reside in Specific Brain Cells

December 16, 2012 Leave a comment

Simply activating a tiny number of neurons can conjure an entire memory.

Our fond or fearful memories — that first kiss or a bump in the night — leave memory traces that we may conjure up in the remembrance of things past, complete with time, place and all the sensations of the experience. Neuroscientists call these traces memory engrams.

But are engrams conceptual, or are they a physical network of neurons in the brain? In a new MIT study, researchers used optogenetics to show that memories really do reside in very specific brain cells, and that simply activating a tiny fraction of brain cells can recall an entire memory — explaining, for example, how Marcel Proust could recapitulate his childhood from the aroma of a once-beloved madeleine cookie.

“We demonstrate that behavior based on high-level cognition, such as the expression of a specific memory, can be generated in a mammal by highly specific physical activation of a specific small subpopulation of brain cells, in this case by light,” says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience at MIT and lead author of the study reported online today in the journal Nature. “This is the rigorously designed 21st-century test of Canadian neurosurgeon Wilder Penfield’s early-1900s accidental observation suggesting that mind is based on matter.” Read more…

Neural Networks Forget Information Quickly

December 16, 2012 Leave a comment

Researchers have determined the rate at which neural networks in the cerebral cortex can delete sensory information: one bit of information per active neuron per second. The activity patterns of the neural network models are deleted nearly as soon as they are passed on from sensory neurons.

For these calculations, the scientists used neural network models based on real neuronal properties for the first time. Figuring neuronal spike properties into the models also helped show that processing in the cerebral cortex is extremely chaotic.

Neural networks, and this type of research in general, are helping researchers better understand learning and memory processes. With better knowledge about learning and memory, researchers can work toward treatments for Alzheimer’s disease, dementia, learning disabilities, PTSD-related memory loss and many other problems.

More details are provided in the release below. Read more…

Memories May Skew Visual Perception

December 16, 2012 3 comments

Taking a trip down memory lane while you are driving could land you in a roadside ditch, new research indicates.

Vanderbilt University psychologists have found that our visual perception can be contaminated by memories of what we have recently seen, impairing our ability to properly understand and act on what we are currently seeing.

“This study shows that holding the memory of a visual event in our mind for a short period of time can ‘contaminate’ visual perception during the time that we’re remembering,” Randolph Blake, study co-author and Centennial Professor of Psychology, said.

“Our study represents the first conclusive evidence for such contamination, and the results strongly suggest that remembering and perceiving engage at least some of the same brain areas.” Read more…

Researchers Partially Control a Memory


Scripps Research Institute Team Wrests Partial Control of a Memory

The work advances understanding of how memories form and offers new insight into disorders such as schizophrenia and post traumatic stress disorder.

Scripps Research Institute scientists and their colleagues have successfully harnessed neurons in mouse brains, allowing them to at least partially control a specific memory. Though just an initial step, the researchers hope such work will eventually lead to better understanding of how memories form in the brain, and possibly even to ways to weaken harmful thoughts for those with conditions such as schizophrenia and post traumatic stress disorder.

The results are reported in the March 23, 2012 issue of the journal Science.

Researchers have known for decades that stimulating various regions of the brain can trigger behaviors and even memories. But understanding the way these brain functions develop and occur normally—effectively how we become who we are—has been a much more complex goal.

“The question we’re ultimately interested in is: How does the activity of the brain represent the world?” said Scripps Research neuroscientist Mark Mayford, who led the new study. “Understanding all this will help us understand what goes wrong in situations where you have inappropriate perceptions. It can also tell us where the brain changes with learning.”

On-Off Switches and a Hybrid Memory

As a first step toward that end, the team set out to manipulate specific memories by inserting two genes into mice. One gene produces receptors that researchers can chemically trigger to activate a neuron. They tied this gene to a natural gene that turns on only in active neurons, such as those involved in a particular memory as it forms, or as the memory is recalled. In other words, this technique allows the researchers to install on-off switches on only the neurons involved in the formation of specific memories.

For the study’s main experiment, the team triggered the “on” switch in neurons active as mice were learning about a new environment, Box A, with distinct colors, smells and textures.

Next the team placed the mice in a second distinct environment—Box B—after giving them the chemical that would turn on the neurons associated with the memory for Box A. The researchers found the mice behaved as if they were forming a sort of hybrid memory that was part Box A and part Box B. The chemical switch needed to be turned on while the mice were in Box B for them to demonstrate signs of recognition; neither being in Box B alone nor the chemical switch alone was enough to produce memory recall.

“We know from studies in both animals and humans that memories are not formed in isolation but are built up over years incorporating previously learned information,” Mayford said. “This study suggests that one way the brain performs this feat is to use the activity pattern of nerve cells from old memories and merge this with the activity produced during a new learning session.”

Future Manipulation of the Past

The team is now making progress toward more precise control that will allow the scientists to turn one memory on and off at will so effectively that a mouse will in fact perceive itself to be in Box A when it’s in Box B.

Once the processes are better understood, Mayford has ideas about how researchers might eventually target the perception process with drug treatments to address certain mental diseases such as schizophrenia and post traumatic stress disorder, in which patients’ brains produce false perceptions or disabling fears. Drug treatments might target the neurons that become active when a patient dwells on such a fear, turning them off and interfering with the disruptive thought patterns.

Notes about this memory research article

In addition to Mayford, other authors of the paper, “Generation of a Synthetic Memory Trace,” are Aleena Garner, Sang Youl Hwang, and Karsten Baumgaertel from Scripps Research, David Rowland and Cliff Kentros from the University of Oregon, Eugene, and Bryan Roth from the University of North Carolina (UNC), Chapel Hill.

This work is supported by the National Institute of Mental Health, the National Institute on Drug Abuse, the California Institute for Regenerative Medicine, and the Michael Hooker Distinguished Chair in Pharmacology at UNC.

Source: The Scripps Research Institute press release

Original Research: Abstract for “Generation of a Synthetic Memory Trace” by Aleena R. Garner, David C. Rowland, Sang Youl Hwang, Karsten Baumgaertel, Bryan L. Roth, Cliff Kentros & Mark Mayford in Science

Neural Network “Learning Rules”

February 11, 2011 1 comment

Most neurocomputational models are not hard-wired to perform a task. Instead, they are typically equipped with some kind of learning process. In this post, I’ll introduce some notions of how neural networks can learn. Understanding learning processes is important for cognitive neuroscience because they may underlie the development of cognitive ability.


Let’s begin with a theoretical question that is of general interest to cognition: how can a neural system learn sequences, such as the actions required to reach a goal?

Consider a neuromodeler who hypothesizes that a particular kind of neural network can learn sequences. He might start his modeling study by “training” the network on a sequence. To do this, he stimulates (activates) some of its neurons in a particular order, representing objects on the way to the goal.

After the network has been trained through multiple exposures to the sequence, the modeler can then test his hypothesis by stimulating only the neurons from the beginning of the sequence and observing whether the neurons in the rest of the sequence activate in order, completing the sequence.

Successful learning in any neural network is dependent on how the connections between the neurons are allowed to change in response to activity. The manner of change is what most researchers call a “learning rule”. However, we will call it a “synaptic modification rule” because although the network learned the sequence, it is not clear that the *connections* between the neurons in the network “learned” anything in particular.

The particular synaptic modification rule selected is an important ingredient in neuromodeling because it may constrain the kinds of information the neural network can learn.

There are many categories of mathematical synaptic modification rules used to describe how synaptic strengths should be changed in a neural network. Some of these categories include: backpropagation of error, correlative Hebbian, and temporally-asymmetric Hebbian.

  • Backpropagation of error states that connection strengths should change throughout the entire network in order to minimize the difference between the actual activity and the “desired” activity at the “output” layer of the network.
  • Correlative Hebbian states that any two interconnected neurons that are active at the same time should strengthen their connections, so that if one of the neurons is activated again in the future the other is more likely to become activated too (a minimal code sketch of this rule follows the list).
  • Temporally-asymmetric Hebbian is described in more detail in the example below, but essentially emphasizes the importance of causality: if a neuron reliably fires before another, its connection to the other neuron should be strengthened. Otherwise, it should be weakened.
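To make the correlative Hebbian idea concrete, here is a minimal Python sketch; the function name, the binary activity values and the learning rate of 0.1 are my own illustrative choices, not taken from any particular model:

```python
# Correlative Hebbian update: strengthen the connection between two
# neurons in proportion to how strongly they are active together.
def hebbian_update(w, pre, post, learning_rate=0.1):
    """Return the new weight given simultaneous pre/post activity (0 or 1)."""
    return w + learning_rate * pre * post

w = 0.5
w = hebbian_update(w, pre=1, post=1)  # both active together: w becomes 0.6
w = hebbian_update(w, pre=0, post=1)  # one silent: product is 0, w stays 0.6
```

In this plain form the weights only ever grow, which is why practical models usually add a decay term or an upper bound.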

Why are there so many different rules?  Some synaptic modification rules are selected because they are mathematically convenient.  Others are selected because they are close to currently known biological reality.  Most of the informative neuromodeling is somewhere in between.

An Example

Let’s look at an example of a learning rule used in a neural network model that I have worked with: imagine you have a network of interconnected neurons that can either be active or inactive. If a neuron is active, its value is 1; otherwise its value is 0. (The use of 1 and 0 to represent simulated neuronal activity is only one of many ways to do so; this approach goes by the name “McCulloch-Pitts”.)
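As a quick sketch, a McCulloch-Pitts neuron is just a thresholded weighted sum; the threshold and weights below are arbitrary values chosen for illustration:

```python
# A McCulloch-Pitts neuron: output 1 if the weighted sum of binary
# inputs exceeds a threshold, otherwise output 0.
def mcculloch_pitts(inputs, weights, threshold=0.5):
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum > threshold else 0

print(mcculloch_pitts([1, 0, 1], [0.4, 0.9, 0.3]))  # 1, since 0.4 + 0.3 > 0.5
print(mcculloch_pitts([0, 1, 0], [0.4, 0.2, 0.3]))  # 0, since 0.2 <= 0.5
```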

Consider two neurons in the network, named PRE and POST, where the neuron PRE projects to neuron POST. A temporally-asymmetric Hebbian rule looks at a snapshot in time and says that the strength of the connection from PRE to POST, a value W between 0 and 1, should change according to:

W(future) = W(now) + learningRate x POST(now) x [PRE(past) – W(now)]

This learning rule closely mimics known biological neuronal phenomena such as long-term potentiation (LTP) and spike-timing dependent plasticity (STDP), which are thought to underlie memory and will be subjects of future Neurevolution posts. (By the way, the learning rate is a small number less than 1 that allows the connection strengths to gradually change.)

Let’s take a quick look at what this synaptic modification rule actually means.  If the POST neuron is not active “now”, the connection strength W does not change in the future because everything after the first + becomes zero:

W(future) = W(now) + 0

Suppose on the other hand that the POST neuron is active now.  Then POST(now) equals 1.  To see what happens to the connection strength in this case, let’s assume the connection strength is 0.5 right now.

W(future) = 0.5 + learningRate x [PRE(past) – 0.5]

As you can see, two different things can happen: if the PRE was active, then PRE(past) = 1, and we will get a stronger connection in the future because we have:

W = 0.5 + learningRate x (1 – 0.5)

But if the PRE was inactive, then PRE(past) = 0, and we will get a weaker connection in the future because we have:

W = 0.5 + learningRate x (0 – 0.5)
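The same arithmetic is easy to check in code. Here is the rule as a small Python function (the names and the learning rate of 0.1 are mine), reproducing the cases worked out above:

```python
def ta_hebbian_update(w_now, pre_past, post_now, learning_rate=0.1):
    """W(future) = W(now) + learningRate x POST(now) x [PRE(past) - W(now)]"""
    return w_now + learning_rate * post_now * (pre_past - w_now)

# POST inactive now: everything after the first + is zero, so W is unchanged.
assert ta_hebbian_update(0.5, pre_past=1, post_now=0) == 0.5
# POST active now, PRE active in the past: W strengthens toward 1.
print(ta_hebbian_update(0.5, pre_past=1, post_now=1))  # 0.55
# POST active now, PRE silent in the past: W weakens toward 0.
print(ta_hebbian_update(0.5, pre_past=0, post_now=1))  # 0.45
```

Notice that whenever POST is active, W takes a small step toward PRE(past), so over many updates it comes to track how reliably PRE fired just before POST: the emphasis on causality mentioned earlier.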

So this is all good — but what can this rule do in a network simulation?  It turns out that this kind of synaptic modification rule can do a whole lot.  Let’s review an experiment that shows one property of a network equipped with this learning mechanism.

Experiment: Can a Network of McCulloch-Pitts Neurons Learn a Sequence with this Rule?

Let’s review part of a simulation experiment we carried out a few years ago (Mitman, Laurent and Levy, 2003).  Imagine you connect 4000 McCulloch-Pitts neurons together randomly at 8% connectivity.  This means that each neuron has connections coming in from about 300 other neurons in this mess.

When a neuron is active, it passes this activity to other neurons through its weighted connections. The strengths of all the connections start out the same, but over time they change according to the temporally-asymmetric Hebbian rule discussed above. In order to keep all the neurons from being active at once, there’s a cutoff so that only about 7% of the neurons are active (see the paper for full details).

To train the network, we then turned on groups of neurons in order: this was a sequence of 10 neurons at a time, turned on for 10 time ticks each.  This is like telling the network the sequence “A,B,C,D,E,F,G,H,I,J”.

Here’s a picture showing what happens in the network at first, as we activated blocks of neurons during training. In these figures, time moves from left to right. Each little black line means that a neuron was active at that time; to save space, only the part of the network where we “stimulated” is shown. The neurons that we weren’t forcing to be active went on and off randomly because of the random connections.

After we exposed the network to the sequence several times, something interesting began to happen.  The firing at other times was no longer completely random.  In fact, it looked like the network was learning to anticipate that we were going to turn on the blocks of neurons:

Did the network learn the sequence?  Suppose we jumpstart it with “A”.  Will it remember “B”, “C”,… “J”?

Indeed it did!  A randomly-connected neural network equipped with this synaptic modification rule is able to learn sequences!
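For readers who want to experiment, here is a compressed sketch of the setup in Python with NumPy. It follows the numbers in the text (8% connectivity, a cutoff of about 7% of neurons active, ten blocks of ten neurons, ten ticks each), but the learning rate, the update schedule and the competitive cutoff are my simplifications; this is not the code from Mitman, Laurent and Levy (2003), and the network is smaller so the sketch runs quickly:

```python
# Sketch: a random McCulloch-Pitts network trained on a sequence with the
# temporally-asymmetric Hebbian rule from earlier in this post.
import numpy as np

rng = np.random.default_rng(0)

N          = 400                  # neurons (4000 in the paper; fewer for speed)
P_CONNECT  = 0.08                 # 8% random connectivity
K_ACTIVE   = int(0.07 * N)        # cutoff: about 7% of neurons fire per tick
LEARN_RATE = 0.05                 # illustrative choice
BLOCK, TICKS, ITEMS = 10, 10, 10  # 10 neurons per item, 10 ticks, "A".."J"

connects = rng.random((N, N)) < P_CONNECT  # connects[i, j]: i projects to j
W = np.where(connects, 0.5, 0.0)           # all weights start out equal

def step(active_past, forced=None):
    """One time tick: integrate input, keep the K most-driven neurons,
    force any externally stimulated neurons on, then apply the rule."""
    global W
    drive = W.T @ active_past                      # input arriving at each neuron
    active_now = np.zeros(N)
    active_now[np.argsort(drive)[-K_ACTIVE:]] = 1  # competitive activity cutoff
    if forced is not None:
        active_now[forced] = 1
    # W(future) = W(now) + rate x POST(now) x [PRE(past) - W(now)], per synapse:
    delta = LEARN_RATE * active_now[None, :] * (active_past[:, None] - W)
    W += np.where(connects, delta, 0.0)
    return active_now

blocks = [np.arange(i * BLOCK, (i + 1) * BLOCK) for i in range(ITEMS)]

# Training: present the sequence "A,B,...,J" several times.
active = np.zeros(N)
for _ in range(20):
    for item in blocks:
        for _ in range(TICKS):
            active = step(active, forced=item)

# Test: jumpstart the network with "A" alone and watch what replays.
active = np.zeros(N)
active[blocks[0]] = 1
for t in range(ITEMS * TICKS):
    active = step(active)
    recalled = [chr(ord("A") + i) for i, b in enumerate(blocks)
                if active[b].mean() > 0.5]
    print(t, "".join(recalled))
```

With a rule of this form, the connections from each block to the next strengthen at every transition during training, which is what lets the cue “A” pull the rest of the sequence along.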


This was an example of a “learning rule” and just one function — the ability to learn sequences of patterns — that emerges in a network governed by this rule. In future posts, I’ll talk more about these kinds of models and mechanisms, and emphasize their relevance to cognitive neuroscience.