Archive

Posts Tagged ‘neural network’

Neural Networks Forget Information Quickly

December 16, 2012 Leave a comment

Researchers have determined how quickly neural networks in the cerebral cortex delete sensory information: one bit per active neuron per second. The activity patterns of the neural network models are deleted nearly as soon as they are passed on from sensory neurons.

For these calculations, the scientists used, for the first time, neural network models based on real neuronal properties. Neuronal spiking properties were incorporated into the models, which also helped show that cerebral cortex processes are extremely chaotic.

Neural network models and this type of research in general are helping researchers better understand learning and memory processes. With better knowledge about learning and memory, researchers can work toward treatments for Alzheimer’s disease, dementia, learning disabilities, PTSD-related memory loss and many other problems.

More details are provided in the release below.


A Glance at the Brain’s Circuit Diagram

December 16, 2012 3 comments

A new method facilitates the mapping of connections between neurons.

The human brain accomplishes its remarkable feats through the interplay of an unimaginable number of neurons that are interconnected in complex networks. A team of scientists from the Max Planck Institute for Dynamics and Self-Organization, the University of Göttingen and the Bernstein Center for Computational Neuroscience Göttingen has now developed a method for decoding neural circuit diagrams. Using measurements of total neuronal activity, they can determine the probability that two neurons are connected with each other.

The human brain consists of around 80 billion neurons, none of which lives or functions in isolation. The neurons form a tight-knit network that they use to exchange signals with each other. The arrangement of the connections between the neurons is far from arbitrary, and understanding which neurons connect with each other promises to provide valuable information about how the brain works. At this point, identifying the connection network directly from the tissue structure is practically impossible, even in cell cultures with only a few thousand neurons. In contrast, there are currently well-developed methods for recording dynamic neuronal activity patterns. Such patterns indicate which neuron transmitted a signal at what time, making them a kind of neuronal conversation log. The Göttingen-based team headed by Theo Geisel, Director at the Max Planck Institute for Dynamics and Self-Organization, has now made use of these activity patterns.
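
The article doesn’t spell out the team’s algorithm, but the general idea of reading connectivity out of a “neuronal conversation log” can be illustrated with a toy example: score each ordered pair of neurons by how often one’s firing precedes the other’s. The sketch below is purely illustrative; all names and parameters are mine, not the Göttingen method.

```python
# Purely illustrative sketch, not the Göttingen team's actual method:
# score each ordered pair of neurons by how often one's firing
# precedes the other's by `lag` time steps.
import numpy as np

def connection_scores(spikes, lag=1):
    """spikes: binary array of shape (n_neurons, n_timesteps).
    Returns an (n_neurons, n_neurons) array where entry (i, j) is the
    fraction of i's spikes that were followed by a spike from j."""
    pre = spikes[:, :-lag].astype(float)    # activity at time t
    post = spikes[:, lag:].astype(float)    # activity at time t + lag
    counts = pre @ post.T                   # co-occurrence counts per pair
    n_pre = pre.sum(axis=1, keepdims=True)  # total spikes per presynaptic cell
    return counts / np.maximum(n_pre, 1)

# Toy usage: 5 neurons, 1000 time steps of random activity
rng = np.random.default_rng(0)
spikes = (rng.random((5, 1000)) < 0.1).astype(int)
print(connection_scores(spikes).round(2))
```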

Two Universes, Same Structure

February 21, 2011 Leave a comment

This image is not of a neuron.

This image is of the other universe; the one outside our heads.

It depicts the “evolution of the matter distribution in a cubic region of the Universe over 2 billion light-years”, as computed by the Millennium Simulation.

The next image, of a neuron, is included for comparison.

It is tempting to wax philosophical on this structural equivalence. How is it that both the external and internal universes can have such similar structure, and at such vastly different physical scales?

If we choose to go philosophical, we may as well ponder something even more fundamental: Why is it that all complex systems seem to have a similar underlying network-like structure?

For illustration of this point, just take a glance at the front page of VisualComplexity.com (partially reproduced below).

These neural-network-like visual images represent complex systems and relations for domains as diverse as academic citations, the blogosphere, scientific knowledge, genealogy, iTunes music collections, and Italian wine production.

Does this imply some deep equivalence exists between all complex systems? Is it the nature of complex systems to be network-like?

Alternatively, this is perhaps simply how we, as neural networks, are able to conceptualize the external universe. Could it be that the external universe is vastly different in form from our internal universe, but we simply perceive that which happens to be compatible with our neural network knowledge structure?

It seems there are some situations where we have trouble representing reality for this reason. However, evolutionary pressures for survival likely drove the human brain to represent the world as accurately as possible. (Otherwise, our ancestors might have believed, for example, that lions disappeared when they hid behind bushes; an obviously maladaptive representation of reality.) This suggests that even though our brains don’t represent the world with complete accuracy, they are nonetheless quite accurate in most cases.

Ultimately I think the equivalence between complex systems is due to the underlying nature of such systems. They must all involve massive integrated differentiation. In other words, there must be many different things (nodes), with many different relations among them (links) for a system to be complex. Thus integrated differentiation, the very basis of complexity, is inherently network-like (i.e., has the equivalent of nodes and links).

It is compelling to consider whether neural systems, with their numerous nodes (neurons) and links (synapses) providing integrated differentiation, evolved their complexity in order to represent other complex systems. In other words, neural systems may have evolved to mirror the complexity presented by the external universe, which helped each organism adapt and survive in its environment.

Thus the similarity between the internal and external universes may be due not to coincidence, but to design.

Neural Network “Learning Rules”

February 11, 2011 1 comment

Most neurocomputational models are not hard-wired to perform a task. Instead, they are typically equipped with some kind of learning process. In this post, I’ll introduce some notions of how neural networks can learn. Understanding learning processes is important for cognitive neuroscience because they may underlie the development of cognitive ability.

 

Let’s begin with a theoretical question that is of general interest to cognition: how can a neural system learn sequences, such as the actions required to reach a goal?

Consider a neuromodeler who hypothesizes that a particular kind of neural network can learn sequences. He might start his modeling study by “training” the network on a sequence. To do this, he stimulates (activates) some of its neurons in a particular order, representing objects on the way to the goal.

After the network has been trained through multiple exposures to the sequence, the modeler can then test his hypothesis by stimulating only the neurons from the beginning of the sequence and observing whether the neurons in the rest of the sequence activate in order, completing the sequence.

Successful learning in any neural network depends on how the connections between the neurons are allowed to change in response to activity. The manner of change is what most researchers call a “learning rule”. However, we will call it a “synaptic modification rule” because although the network learned the sequence, it is not clear that the *connections* between the neurons “learned” anything in particular.

The particular synaptic modification rule selected is an important ingredient in neuromodeling because it may constrain the kinds of information the neural network can learn.

There are many categories of mathematical synaptic modification rules used to describe how synaptic strengths should be changed in a neural network. Some of these categories include: backpropagation of error, correlative Hebbian, and temporally-asymmetric Hebbian.

  • Backpropagation of error states that connection strengths should change throughout the entire network in order to minimize the difference between the actual activity and the “desired” activity at the “output” layer of the network.
  • Correlative Hebbian states that any two interconnected neurons that are active at the same time should strengthen their connections, so that if one of the neurons is activated again in the future the other is more likely to become activated too (see the sketch just after this list).
  • Temporally-asymmetric Hebbian is described in more detail in the example below, but essentially emphasizes the importance of causality: if a neuron reliably fires before another, its connection to the other neuron should be strengthened. Otherwise, it should be weakened.
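
To make the correlative rule concrete, here is a minimal sketch of a correlative Hebbian update; the function name and learning rate are my choices for illustration:

```python
# A minimal sketch of the correlative Hebbian rule: the connection
# strengthens only when both neurons are active at the same time.
# (Activities are 0 or 1; names and learning rate are illustrative.)
def correlative_hebbian(w, pre, post, learning_rate=0.1):
    return w + learning_rate * pre * post

w = 0.2
for _ in range(3):
    w = correlative_hebbian(w, pre=1, post=1)  # co-active: strengthen
print(round(w, 2))                             # 0.5
w = correlative_hebbian(w, pre=1, post=0)      # not co-active: no change
print(round(w, 2))                             # still 0.5
```

Note that in this bare form the weight can only grow; practical models usually add a cap, decay, or normalization to keep it bounded.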

Why are there so many different rules?  Some synaptic modification rules are selected because they are mathematically convenient.  Others are selected because they are close to currently known biological reality.  Most of the informative neuromodeling is somewhere in between.

An Example

Let’s look at an example of a learning rule used in a neural network model that I have worked with: imagine you have a network of interconnected neurons that can be either active or inactive. If a neuron is active, its value is 1; otherwise its value is 0. (The use of 1 and 0 to represent simulated neuronal activity is only one of many ways to do so; this approach goes by the name “McCulloch-Pitts”.)
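
As a side note, a McCulloch-Pitts style neuron itself takes only a few lines of code: it fires when the weighted sum of its inputs reaches a threshold. The weights and threshold below are arbitrary values chosen just for illustration:

```python
# A minimal McCulloch-Pitts style neuron: output is 1 if the weighted
# sum of inputs reaches the threshold, 0 otherwise. The weights and
# threshold are arbitrary illustration values.
def mcculloch_pitts(inputs, weights, threshold=0.5):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

weights = [0.4, 0.9, 0.3]
print(mcculloch_pitts([1, 0, 1], weights))  # 0.7 >= 0.5 -> 1 (active)
print(mcculloch_pitts([0, 1, 0], weights))  # 0.9 >= 0.5 -> 1 (active)
print(mcculloch_pitts([1, 0, 0], weights))  # 0.4 <  0.5 -> 0 (inactive)
```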

Consider two neurons in the network, named PRE and POST, where the neuron PRE projects to neuron POST. A temporally-asymmetric Hebbian rule looks at a snapshot in time and says that the strength of the connection from PRE to POST, a value W between 0 and 1, should change according to:

W(future) = W(now) + learningRate x POST(now) x [PRE(past) – W(now)]

This learning rule closely mimics known biological neuronal phenomena, such as long-term potentiation (LTP) and spike-timing dependent plasticity (STDP), which are thought to underlie memory and will be subjects of future Neurevolution posts. (By the way, the learning rate is a small number less than 1 that allows the connection strengths to change gradually.)

Let’s take a quick look at what this synaptic modification rule actually means.  If the POST neuron is not active “now”, the connection strength W does not change in the future because everything after the first + becomes zero:

W(future) = W(now) + 0

Suppose on the other hand that the POST neuron is active now.  Then POST(now) equals 1.  To see what happens to the connection strength in this case, let’s assume the connection strength is 0.5 right now.

W(future) = 0.5 + learningRate x [PRE(past) – 0.5]

As you can see, two different things can happen: if the PRE was active, then PRE(past) = 1, and we will get a stronger connection in the future because we have:

W = 0.5 + learningRate x (1 – 0.5)

But if the PRE was inactive, then PRE(past) = 0, and we will get a weaker connection in the future because we have:

W = 0.5 + learningRate x (0 – 0.5)
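
The rule is simple enough to translate directly into code. Here is a minimal sketch that reproduces the cases just worked through (the function and variable names are mine):

```python
# The temporally-asymmetric Hebbian rule from above, translated directly:
# W(future) = W(now) + learningRate * POST(now) * (PRE(past) - W(now))
def update_weight(w_now, post_now, pre_past, learning_rate=0.1):
    return w_now + learning_rate * post_now * (pre_past - w_now)

print(update_weight(0.5, post_now=1, pre_past=1))  # 0.55: strengthened
print(update_weight(0.5, post_now=1, pre_past=0))  # 0.45: weakened
print(update_weight(0.5, post_now=0, pre_past=1))  # 0.5: POST inactive, no change
```

Notice that whenever POST is active, the weight moves toward PRE(past): repeated pairings drive W toward 1, and repeated non-pairings drive it toward 0.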

So this is all good — but what can this rule do in a network simulation?  It turns out that this kind of synaptic modification rule can do a whole lot.  Let’s review an experiment that shows one property of a network equipped with this learning mechanism.

Experiment: Can a Network of McCulloch-Pitts Neurons Learn a Sequence with this Rule?

Let’s review part of a simulation experiment we carried out a few years ago (Mitman, Laurent and Levy, 2003).  Imagine you connect 4000 McCulloch-Pitts neurons together randomly at 8% connectivity.  This means that each neuron has connections coming in from about 300 other neurons in this mess.

When a neuron is active, it passes this activity to other neurons through its weighted connections. The strengths of all the connections start out the same, but over time they change according to the temporally-asymmetric Hebbian rule discussed above. To keep all the neurons from being active at once, there’s a cutoff so that only about 7% of the neurons are active at any given time (see the paper for full details).

To train the network, we then turned on groups of neurons in order: this was a sequence of 10 neurons at a time, turned on for 10 time ticks each.  This is like telling the network the sequence “A,B,C,D,E,F,G,H,I,J”.

Here’s a picture showing what happened in the network early in training, as we activated blocks of neurons. In these figures, time moves from left to right. Each little black line means that a neuron was active at that time; to save space, only the part of the network that we “stimulated” is shown. The neurons that we weren’t forcing to be active went on and off randomly because of the random connections.

After we exposed the network to the sequence several times, something interesting began to happen.  The firing at other times was no longer completely random.  In fact, it looked like the network was learning to anticipate that we were going to turn on the blocks of neurons:

Did the network learn the sequence?  Suppose we jumpstart it with “A”.  Will it remember “B”, “C”,… “J”?

Indeed it did!  A randomly-connected neural network equipped with this synaptic modification rule is able to learn sequences!
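
For readers who want to experiment, here is a heavily simplified sketch of this kind of simulation. The 8% connectivity, roughly 7% activity cutoff, and 10-neuron/10-tick training blocks come from the description above; everything else, including the scaled-down network size and the top-k implementation of the cutoff, is my own simplification rather than the actual code from Mitman, Laurent and Levy (2003):

```python
# Heavily simplified sketch of a sequence-learning simulation, not the
# actual code from Mitman, Laurent and Levy (2003). The network is
# scaled down from 4000 to 400 neurons so it runs quickly, and the
# activity cutoff is implemented as a simple "keep the top ~7%" rule.
import numpy as np

rng = np.random.default_rng(0)
N, P_CONNECT, ACTIVE_FRAC, RATE = 400, 0.08, 0.07, 0.05
BLOCK, TICKS, N_BLOCKS = 10, 10, 10   # 10 neurons per block, 10 ticks each

connected = rng.random((N, N)) < P_CONNECT           # fixed random wiring
W = np.where(connected, 0.3, 0.0)                    # equal starting strengths
active = (rng.random(N) < ACTIVE_FRAC).astype(float)

def tick(W, active, forced=None):
    """One time step: propagate activity, keep the ~7% most strongly
    driven neurons, force any stimulated block on, then apply the
    temporally-asymmetric Hebbian rule to the existing connections."""
    drive = W.T @ active                             # summed weighted input
    new = np.zeros(N)
    new[np.argsort(drive)[-int(ACTIVE_FRAC * N):]] = 1.0
    if forced is not None:
        new[forced] = 1.0
    # W(future) = W(now) + rate * POST(now) * (PRE(past) - W(now))
    W = W + RATE * new[None, :] * (active[:, None] - W) * connected
    return W, new

# Training: activate blocks "A" through "J" in order, several times over.
for _ in range(5):
    for b in range(N_BLOCKS):
        block = np.arange(b * BLOCK, (b + 1) * BLOCK)
        for _ in range(TICKS):
            W, active = tick(W, active, forced=block)

# Testing: jumpstart with block "A" only, then let the network run
# freely and check whether the later blocks reactivate in order.
for _ in range(TICKS):
    W, active = tick(W, active, forced=np.arange(BLOCK))
for b in range(1, N_BLOCKS):
    for _ in range(TICKS):
        W, active = tick(W, active)
    block = np.arange(b * BLOCK, (b + 1) * BLOCK)
    print(f"block {chr(ord('A') + b)}: {int(active[block].sum())}/{BLOCK} active")
```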

 

This was an example of a “learning rule” and just one function, the ability to learn sequences of patterns, that emerges in a network governed by this rule. In future posts, I’ll talk more about these kinds of models and mechanisms, and emphasize their relevance to cognitive neuroscience.