Archive

Posts Tagged ‘computational neuroscience’

Neural Networks Forget Information Quickly

December 16, 2012

Researchers have determined how quickly neural networks in the cerebral cortex can delete sensory information: about one bit per active neuron per second. The activity patterns of the neural network models are erased almost as soon as they are passed on from the sensory neurons.
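
To get a rough feel for what that rate implies, consider a back-of-the-envelope sketch; the function and numbers below are illustrative, not from the study. The more neurons are active, the faster a stored pattern is erased.

```python
# Hedged illustration of the quoted erasure rate: roughly one bit
# per active neuron per second. All numbers are invented for the example.

def pattern_lifetime_seconds(pattern_bits, active_neurons,
                             bits_per_neuron_per_s=1.0):
    """Rough lifetime of an activity pattern under the quoted rate."""
    erasure_rate = active_neurons * bits_per_neuron_per_s  # bits lost per second
    return pattern_bits / erasure_rate

# Example: a 10,000-bit sensory pattern held by 1,000 active neurons
print(pattern_lifetime_seconds(10_000, 1_000))  # -> 10.0 seconds
```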

For the first time, the scientists based these calculations on neural network models built from real neuronal properties. Incorporating neuronal spiking properties into the models also showed that processing in the cerebral cortex is extremely chaotic.

Neural networks, and this line of research in general, are helping researchers better understand learning and memory. With better knowledge of these processes, researchers can work toward treatments for Alzheimer’s disease, dementia, learning disabilities, PTSD-related memory loss, and many other conditions.

More details are provided in the release below.

A Glance at the Brain’s Circuit Diagram

December 16, 2012

A new method facilitates the mapping of connections between neurons.

The human brain accomplishes its remarkable feats through the interplay of an unimaginable number of neurons that are interconnected in complex networks. A team of scientists from the Max Planck Institute for Dynamics and Self-Organization, the University of Göttingen and the Bernstein Center for Computational Neuroscience Göttingen has now developed a method for decoding neural circuit diagrams. Using measurements of total neuronal activity, they can determine the probability that two neurons are connected with each other.

The human brain consists of around 80 billion neurons, none of which lives or functions in isolation. The neurons form a tight-knit network that they use to exchange signals with each other. The arrangement of the connections between the neurons is far from arbitrary, and understanding which neurons connect with each other promises to provide valuable information about how the brain works. At this point, identifying the connection network directly from the tissue structure is practically impossible, even in cell cultures with only a few thousand neurons. In contrast, there are currently well-developed methods for recording dynamic neuronal activity patterns. Such patterns indicate which neuron transmitted a signal at what time, making them a kind of neuronal conversation log. The Göttingen-based team headed by Theo Geisel, Director at the Max Planck Institute for Dynamics and Self-Organization, has now made use of these activity patterns.
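
For a concrete feel for the problem, here is a minimal sketch of one classic approach to inferring connections from activity logs: pairwise cross-correlation of spike trains. This is a generic illustration under our own assumptions, not necessarily the method the Göttingen team developed.

```python
# Score candidate connections by asking whether neuron A's spikes
# predict neuron B's spikes a few time bins later. Generic sketch,
# not the Göttingen method.
import numpy as np

def connection_score(spikes_a, spikes_b, max_lag=5):
    """Peak normalized cross-correlation at small positive lags.

    spikes_a, spikes_b: arrays of 0/1 spike indicators per time bin.
    """
    a = spikes_a - spikes_a.mean()
    b = spikes_b - spikes_b.mean()
    norm = len(a) * a.std() * b.std() + 1e-12
    scores = [np.dot(a[:-lag], b[lag:]) / norm for lag in range(1, max_lag + 1)]
    return max(scores)

# Toy data: neuron 1 tends to fire one bin after neuron 0.
rng = np.random.default_rng(0)
n0 = (rng.random(10_000) < 0.05).astype(float)
n1 = np.roll(n0, 1) * (rng.random(10_000) < 0.8)
print(connection_score(n0, n1))  # well above the score for unrelated pairs
```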

Human Thought Can Voluntarily Control Neurons in Brain


Neuroscience research involving epilepsy patients with electrodes surgically implanted in their medial temporal lobes shows that patients can learn to consciously control individual neurons deep in the brain with their thoughts.

Subjects learned to control mouse cursors, play video games, and alter the focus of digital images with their thoughts. Each patient used a brain–computer interface combining deep-brain electrodes with software designed for the research.

Controlling Individual Cortical Nerve Cells by Human Thought

Five years ago, neuroscientist Christof Koch of the California Institute of Technology (Caltech), neurosurgeon Itzhak Fried of UCLA, and their colleagues discovered that a single neuron in the human brain can function much like a sophisticated computer and recognize people, landmarks, and objects, suggesting that a consistent and explicit code may help transform complex visual representations into long-term and more abstract memories.

Now Koch and Fried, along with former Caltech graduate student and current postdoctoral fellow Moran Cerf, have found that individuals can exert conscious control over the firing of these single neurons—despite the neurons’ location in an area of the brain previously thought inaccessible to conscious control—and, in doing so, manipulate the behavior of an image on a computer screen.

The work, which appears in a paper in the October 28 issue of the journal Nature, shows that “individuals can rapidly, consciously, and voluntarily control neurons deep inside their head,” says Koch, the Lois and Victor Troendle Professor of Cognitive and Behavioral Biology and professor of computation and neural systems at Caltech.

The study was conducted on 12 epilepsy patients at the David Geffen School of Medicine at UCLA, where Fried directs the Epilepsy Surgery Program. All of the patients suffered from seizures that could not be controlled by medication. To help localize where their seizures were originating in preparation for possible later surgery, the patients were surgically implanted with electrodes deep within the centers of their brains. Cerf used these electrodes to record the activity, as indicated by spikes on a computer screen, of individual neurons in parts of the medial temporal lobe—a brain region that plays a major role in human memory and emotion.

Prior to recording the activity of the neurons, Cerf interviewed each of the patients to learn about their interests. “I wanted to see what they like—say, the band Guns N’ Roses, the TV show House, and the Red Sox,” he says. Using that information, he created for each patient a data set of around 100 images reflecting the things he or she cares about. The patients then viewed those images, one after another, as Cerf monitored their brain activity to look for the targeted firing of single neurons. “Of 100 pictures, maybe 10 will have a strong correlation to a neuron,” he says. “Those images might represent cached memories—things the patient has recently seen.”

The four most strongly responding neurons, representing four different images, were selected for further investigation. “The goal was to get patients to control things with their minds,” Cerf says. By thinking about the individual images—a picture of Marilyn Monroe, for example—the patients triggered the activity of their corresponding neurons, which was translated first into the movement of a cursor on a computer screen. In this way, patients trained themselves to move that cursor up and down, or even play a computer game.
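
The article does not spell out the decoder, but the mapping described here can be pictured with a very simple sketch; the names, baseline, and gain below are our assumptions. The cursor moves up when the selected neuron fires above its baseline rate and down when it fires below it.

```python
# Hypothetical rate-to-cursor mapping, for illustration only.

def cursor_velocity(firing_rate_hz, baseline_hz=5.0, gain=2.0):
    """Positive (upward) velocity when the neuron fires above baseline."""
    return gain * (firing_rate_hz - baseline_hz)

y = 0.0
for rate_hz in [5.0, 9.0, 12.0, 4.0]:  # rates measured on successive frames
    y += cursor_velocity(rate_hz)
print(y)  # 0 + 8 + 14 - 2 = 20.0, a net upward drift
```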

But, says Cerf, “we wanted to take it one step further than just brain–machine interfaces and tap into the competition for attention between thoughts that race through our mind.”

To do that, the team arranged for a situation in which two concepts competed for dominance in the mind of the patient. “We had patients sit in front of a blank screen and asked them to think of one of the target images,” Cerf explains. As they thought of the image, and the related neuron fired, “we made the image appear on the screen,” he says. That image is the “target.” Then one of the other three images is introduced, to serve as the “distractor.”

“The patient starts with a 50/50 image, a hybrid, representing the ‘marriage’ of the two images,” Cerf says, and then has to make the target image fade in—just using his or her mind—and the distractor fade out. During the tests, the patients came up with their own personal strategies for making the right images appear; some simply thought of the picture, while others repeated the name of the image out loud or focused their gaze on a particular aspect of the image. Regardless of their tactics, the subjects quickly got the hang of the task, and they were successful in around 70 percent of trials.
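
A hedged sketch of that feedback loop (the update rule and gain are our invention; the paper’s actual decoder may differ): the fraction of the target image shown rises when the target neuron out-fires the distractor neuron, and falls otherwise.

```python
# Illustrative feedback rule for the fading task described above.

def update_blend(alpha, target_rate, distractor_rate, gain=0.05):
    """alpha = fraction of the target image shown (0.5 = 50/50 hybrid)."""
    alpha += gain * (target_rate - distractor_rate)
    return min(1.0, max(0.0, alpha))  # clamp to [0, 1]

alpha = 0.5  # each trial starts with the 50/50 hybrid
for t_rate, d_rate in [(8, 4), (9, 3), (10, 2)]:  # target neuron dominates
    alpha = update_blend(alpha, t_rate, d_rate)
print(alpha)  # climbs toward 1.0: the target image fades in
```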

“The patients clearly found this task to be incredibly fun as they started to feel that they control things in the environment purely with their thought,” says Cerf. “They were highly enthusiastic to try new things and see the boundaries of ‘thoughts’ that still allow them to activate things in the environment.”

Notably, even in cases where the patients were on the verge of failure—with, say, the distractor image representing 90 percent of the composite picture, so that it was essentially all the patients saw—”they were able to pull it back,” Cerf says. Imagine, for example, that the target image is Bill Clinton and the distractor George Bush. When the patient is “failing” the task, the George Bush image will dominate. “The patient will see George Bush, but they’re supposed to be thinking about Bill Clinton. So they shut off Bush—somehow figuring out how to control the flow of that information in their brain—and make other information appear. The imagery in their brain,” he says, “is stronger than the hybrid image on the screen.”

According to Koch, what is most exciting “is the discovery that the part of the brain that stores the instruction ‘think of Clinton’ reaches into the medial temporal lobe and excites the set of neurons responding to Clinton, simultaneously suppressing the population of neurons representing Bush, while leaving the vast majority of cells representing other concepts or familiar persons untouched.”

The work in the paper, “On-line voluntary control of human temporal lobe neurons,” is part of a decade-long collaboration between the Fried and Koch groups, funded by the National Institute of Neurological Disorders and Stroke, the National Institute of Mental Health, the G. Harold & Leila Y. Mathers Charitable Foundation, and Korea’s World Class University program.

Source: California Institute of Technology (Caltech)

Research suggests humans can learn to consciously control individual neurons in the brain. Image credit: Moran Cerf and Maria Moon/Caltech

Technique for Letting Brain Talk to Computers Now Tunes in Speech


Patients with a temporary surgical implant have used regions of the brain that control speech to “talk” to a computer for the first time, manipulating a cursor on a computer screen simply by saying or thinking of a particular sound.

“There are many directions we could take this, including development of technology to restore communication for patients who have lost speech due to brain injury or damage to their vocal cords or airway,” says author Eric C. Leuthardt, MD, of Washington University School of Medicine in St. Louis.

Scientists have typically programmed the temporary implants, known as brain-computer interfaces, to detect activity in the brain’s motor networks, which control muscle movements.

“That makes sense when you’re trying to use these devices to restore lost mobility — the user can potentially engage the implant to move a robotic arm through the same brain areas he or she once used to move an arm disabled by injury,” says Leuthardt, assistant professor of neurosurgery, of biomedical engineering and of neurobiology. “But that has the potential to be inefficient for restoration of a loss of communication.”

Patients might be able to learn to think about moving their arms in a particular way to say hello via a computer speaker, Leuthardt explains. But it would be much easier if they could say hello by using the same brain areas they once engaged to use their own voices.

The research appears April 7 in the Journal of Neural Engineering.

Scientists at Washington University School of Medicine in St. Louis have adapted brain-computer interfaces like the one shown above to listen to regions of the brain that control speech. The development may help restore capabilities lost to brain injury or disability. Credit: Eric Leuthardt, MD, permission by Michael Purdy

The devices under study are temporarily installed directly on the surface of the brain in epilepsy patients. Surgeons like Leuthardt use them to identify the source of persistent, medication-resistant seizures and map those regions for surgical removal. Researchers hope one day to install the implants permanently to restore capabilities lost to injury and disease.

Leuthardt and his colleagues have recently revealed that the implants can be used to analyze the frequency of brain wave activity, allowing them to make finer distinctions about what the brain is doing. For the new study, Leuthardt and others applied this technique to detect when patients say or think of four sounds:

  • oo, as in few
  • e, as in see
  • a, as in say
  • a, as in hat

Once the scientists had identified the brain-wave patterns that represented these sounds and programmed the interface to recognize them, patients quickly learned to control a computer cursor by thinking or saying the appropriate sound.
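
The article does not give implementation details, but the general technique, extracting frequency-band power from an electrode signal and matching it against per-sound templates, can be sketched as follows. The band edges, toy signals, and template matching here are our assumptions, not the study’s actual pipeline.

```python
# Toy band-power classifier for the four-sound task described above.
import numpy as np

def band_powers(signal, fs, bands=((8, 12), (12, 30), (30, 70), (70, 170))):
    """Mean spectral power of `signal` in each frequency band (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return np.array([power[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in bands])

def classify(signal, fs, templates):
    """Return the sound whose stored feature template is closest."""
    feats = band_powers(signal, fs)
    return min(templates, key=lambda s: np.linalg.norm(feats - templates[s]))

# Toy demo: stand-in signals, one narrowband tone per sound (frequencies invented).
fs = 1000  # Hz, assumed sampling rate
t = np.arange(fs) / fs
tones = {"oo": 10, "e": 20, "a (say)": 40, "a (hat)": 90}
toy = {name: np.sin(2 * np.pi * f * t) for name, f in tones.items()}
templates = {name: band_powers(sig, fs) for name, sig in toy.items()}
print(classify(toy["e"] + 0.1 * np.random.randn(fs), fs, templates))  # -> "e"
```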

In the future, interfaces could be tuned to listen to just speech networks or both motor and speech networks, Leuthardt says. As an example, he suggests that it might one day be possible to let a disabled patient both use his or her motor regions to control a cursor on a computer screen and imagine saying “click” when he or she wants to click on the screen.

“We can distinguish both spoken sounds and the patient imagining saying a sound, so that means we are truly starting to read the language of thought,” he says. “This is one of the earliest examples, to a very, very small extent, of what is called ‘reading minds’ — detecting what people are saying to themselves in their internal dialogue.”

The next step, which Leuthardt and his colleagues are working on, is to find ways to distinguish what they call “higher levels of conceptual information.”

“We want to see if we can not just detect when you’re saying dog, tree, tool or some other word, but also learn what the pure idea of that looks like in your mind,” he says. “It’s exciting and a little scary to think of reading minds, but it has incredible potential for people who can’t communicate or are suffering from other disabilities.”

 

Notes about this brain-computer interface research article

A portion of this research was conducted in collaboration with Gerwin Schalk, PhD, of the New York State Department of Health’s Wadsworth Center. The goal is to explore additional potential uses of this technology in people who are disabled and in those who are not.

Leuthardt et al. (2011) Journal of Neural Engineering 8: 036004. Online at: http://iopscience.iop.org/1741-2552/8/3/036004

Funding from the National Institutes of Health and the Department of Defense supported this research.

Contact: Michael Purdy – Senior Medical Science Writer

Source: Washington University in St. Louis Newsroom article – permission given by Michael Purdy

Image Source: Image adapted from image in article above. Permission given by Michael Purdy