Archive for April, 2011

Dorsoventral vs. Septotemporal hippocampus


Everybody knows what the hippocampus is for: memory. And…maybe something about anxiety or depression? Yes – over the last 10 years or so, many studies have shown that the hippocampus has these two roles, and that its mnemonic and emotional functions are associated with its septal (dorsal) and temporal (ventral) ends, respectively. This new knowledge means we've had to reorient our perspective: what we see when we consider the septal hippocampus may not hold if we only consider its temporal end. My goal here is not to review the memory vs. emotional functions of the hippocampus (btw, this dichotomy is a vast oversimplification). Instead, I'd like to talk about how people have differentiated these two ends of the hippocampus in their analyses. I'm also happy to showcase a bunch of pretty anatomical images that will probably never be published in a traditional journal article.

Some studies showing different functions of septal and temporal hippocampus

  • Some of the best reviews of the topic are by Bannerman et al. from 2004 and 2011.
  • A recent and free review article by Fanselow and Dong.
  • Classic Moser papers showing that spatial memory depends more on the dorsal hippocampus and anxiety/fear behavior on the ventral hippocampus.
  • A recent paper suggesting that spatial processing in the septal hippocampus meets the behavioral-control functions of the temporal hippocampus to enable rapid spatial learning.

History of neurogenesis quantification. So, back in the day, before I even knew what a neuron was, and before it was well established that there is functional differentiation along the hippocampal axis, people would pick a few sections from the dorsal hippocampus (it's much more photogenic, gets all the glory), count new neurons, and report a density measurement. Then the stereology police arrived (seriously, that's what they're called) and pointed out that changes in tissue volume or cell packing could alter density measurements without any change in the actual number of cells (sketched below). Stereological analyses also prevent biases that might arise from drawing arbitrary boundaries when examining only part of the hippocampus. And so people started doing stereological counts, which require systematic quantification throughout the entire hippocampus. My guess is that this probably delayed the appreciation that neurogenesis could vary in magnitude and function along the hippocampal axis. Now that we know that stereology is pointless we can get back to business (this is a joke – please don't arrest me).
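As a concrete (and entirely hypothetical) contrast between the two approaches, here is a minimal Python sketch: the optical-fractionator estimate scales sampled counts by the sampling fractions and never touches volume, while a density measure moves whenever the reference volume does. The numbers are invented; only the fractionator relation itself is standard.

```python
# Illustrative only: made-up numbers, not data from any real experiment.
# Optical fractionator: N = sum(Q) / (ssf * asf * tsf), where ssf, asf
# and tsf are the section, area and thickness sampling fractions.

def fractionator_estimate(cells_counted, ssf, asf, tsf):
    """Estimate total cell number from systematically sampled counts."""
    return cells_counted / (ssf * asf * tsf)

# Count 250 new neurons in every 6th section (ssf = 1/6), sampling 10%
# of each section's area (asf = 0.1) and 80% of its thickness (tsf = 0.8):
total = fractionator_estimate(250, ssf=1 / 6, asf=0.1, tsf=0.8)
print(f"Estimated total: {total:.0f} cells")  # 18750

# A density measure divides by a reference volume instead, so the same
# number of cells in a shrunken dentate gyrus reads as an "increase":
cells, volume_healthy, volume_shrunken = 18750, 1.00, 0.80  # mm^3, invented
print(cells / volume_healthy, "vs", cells / volume_shrunken)  # 25% apart
```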

Difficulty of quantifying subregions due to curvature of the hippocampus. One of the reasons the hippocampus is such a popular neurobiological model is its anatomy – the dentate gyrus, CA3 and CA1 subfields are all composed of tightly packed cells that are easy to identify. Along its long axis, one end of the hippocampus projects to the septum and the other abuts the temporal lobe, hence "septotemporal" is technically the most accurate way to refer to its two ends. The hippocampus is curved in such a way that you can actually cut along any of the 3 spatial planes (coronal, horizontal, sagittal) and hit the hippocampus perpendicular to the septotemporal axis somewhere, revealing the classic trisynaptic circuit. However, because of this same curvature, sectioning the brain in only one of the three planes means that some portion of the hippocampus will not be cut perpendicular to the long axis, producing sections in which septotemporal coordinates are hard to define (a toy illustration of the geometry follows).
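To put the curvature problem in numbers, here is a toy Python calculation under invented geometry: stand in for the long axis with a 3D curve and measure the angle between each point's local axis direction and the normal of a coronal cutting plane. The curve is not real anatomy; it only captures the caudal-then-ventral sweep described above.

```python
# Toy illustration of the curvature problem, NOT real anatomy. An
# invented 3D curve stands in for the dentate gyrus's long axis,
# sweeping caudally, then ventrally, while drifting laterally.
import numpy as np

t = np.linspace(0, np.pi / 2, 5)   # position along the long (septotemporal) axis
curve = np.stack([np.sin(t),       # x: rostrocaudal
                  1 - np.cos(t),   # y: dorsoventral
                  0.5 * t],        # z: mediolateral
                 axis=1)

tangents = np.gradient(curve, axis=0)
tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

coronal_normal = np.array([1.0, 0.0, 0.0])  # coronal planes face rostrocaudally
angles = np.degrees(np.arccos(np.clip(tangents @ coronal_normal, -1, 1)))
print(angles.round(1))  # 0 deg = section cut perpendicular to the long axis;
                        # the angle grows toward 90 deg along the curve
```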

The 3D nature of the hippocampus, illustrated with images from the Allen Brain Explorer:

Figure 1: The dentate gyrus subfield of the hippocampus (i.e. green banana), from its septal pole, extends caudally and laterally and then ventrally. Green axis=dorsoventral, red=rostrocaudal, yellow=mediolateral.

Figure 2: A relatively caudal coronal section with the 3D dentate gyrus shown in the left panel, for comparison. This section contains ventral dentate gyrus (at the bottom, by “temporal”) but, at the top of the section, it also contains a portion of the dentate gyrus that is very dorsal, despite being far from the septal pole.

Figure 3: This section is more caudal than the previous example, yet the dentate granule cells (white patches within the bright green region) do not extend as far in the ventral direction. So, more caudal ≠ more ventral.

Others on the curvature problem:

Schlessinger et al., 1975: Since the dentate gyrus follows the general curvature of the hippocampal formation, it is difficult to apply the usual topographical terms to its various parts. The rostral third or half of the gyrus is more-or-less horizontally disposed within the cerebral hemisphere…At about the junction of its rostral and caudal halves the gyrus is sharply flexed upon itself, and comes to be vertically disposed….Again, because of the flexure of the hippocampal formation, it is inappropriate to refer to the dentate gyrus as having a dorsal (or rostral) and a ventral (or caudal) part. Following Gottlieb and Cowan (’73) we shall refer to the long axis of the gyrus, extending from the temporal pole of the hemisphere to just behind the septal region, as its temporalseptal axis.

Amaral & Witter, 1989: Because of its complex three-dimensional shape, normal sections of the hippocampus, i.e. those oriented perpendicular to the long axis, are obtained for only a small part of its septotemporal extent in standard coronal or horizontal sections. This situation severely complicates the analysis of the connections within the hippocampal formation.

De Hoz et al., 2003: In discussing different regions of the hippocampus, we use the terms “septal” and “temporal” to refer to the rostralmost and the ventralmost poles of the longitudinal axis, respectively, because this terminology allows an even division of this axis into septal and temporal halves. The terms “dorsal” and “ventral” are sometimes used to refer to the same areas; the dorsal hippocampus is, however, more extensive than the ventral.

So how can we divide the hippocampus? Many people work with coronal sections. Can we delineate boundaries between different hippocampal subregions in coronal sections? Banasr et al. described a reproducible method for separating dorsal from ventral hippocampus in coronal sections. Here, the dorsal regions would contain a fair bit of mid-septotemporal hippocampus but, indeed, only the dorsal sections would contain septal hippocampus and only the ventral sections would contain temporal hippocampus:

Figure 4: Separating dorsal and ventral hippocampus in coronal sections

Jayatissa et al. sectioned the rat brain horizontally and then used anatomical coordinates to divide dorsal from ventral. This seems to be a good way to isolate pure septal hippocampus, but dorsal measures would again blur together the septal and mid-septal regions.

What if we wanted to separate the septal and temporal ends of the hippocampus? One method, described in Amaral & Witter (1989), offers a solution:

We have adopted a strategy first described by Gaarskjaer that obviates this problem. In short…the fixed hippocampal formation is dissected from the brain and gently extended before histological processing. In this way the extended hippocampus can be positioned such that normal sections are obtained from much of the septotemporal extent of the structure.

A similar approach has been used by others (see here and here). One drawback is that you ruin much of the rest of the brain during the dissection process (insert but-who-cares-about-the-rest-of-the-brain joke here). Here's a figure from my thesis that illustrates the similar-shaped hippocampal slices obtained with this method:

Figure 5: DAPI counterstained sections, evenly spaced across the septotemporal axis. Sampling scheme illustrated at the top. Shaded regions indicate how different septotemporal regions could be binned. S=suprapyramidal blade of the dentate gyrus, I=infrapyramidal blade, DG=dentate gyrus.

Another strategy isn't too different from the method of Banasr, above. To get at the septal hippocampus you're just being a bit more selective, examining only the portions of the dorsal hippocampus that extend quite far rostrally. For the caudal sections that contain both dorsal and ventral hippocampus, the rhinal fissure seems like a good guide – anything falling on the ventral side I'm counting as ventral (a toy sketch of such a binning scheme follows).
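As a sketch of how such a binning scheme might be coded, the snippet below captures the decision structure: far-rostral sections count as septal, and caudal sections are split at the rhinal fissure. The bregma cutoffs are placeholders I made up for illustration, not published coordinates.

```python
# Hypothetical sketch of the binning strategy described above. The
# bregma cutoffs are placeholders, NOT published coordinates; only the
# decision structure matters here.

SEPTAL_CUTOFF_MM = -2.5   # placeholder: sections rostral to this = septal
CAUDAL_ONSET_MM = -4.5    # placeholder: sections caudal to this hold both

def classify(bregma_mm, ventral_to_rhinal_fissure=False):
    """Assign a section (or a labeled cell within it) to a bin."""
    if bregma_mm >= SEPTAL_CUTOFF_MM:
        return "septal"                # far-rostral dorsal sections only
    if bregma_mm > CAUDAL_ONSET_MM:
        return "mid-septotemporal"     # dorsal, but not purely septal
    # Caudal sections contain dorsal and ventral hippocampus; the rhinal
    # fissure is the guide: anything on its ventral side counts as ventral.
    return "ventral" if ventral_to_rhinal_fissure else "dorsal"

print(classify(-1.8))                                   # septal
print(classify(-5.5, ventral_to_rhinal_fissure=True))   # ventral
```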

But if you’re lazy…

A fast, revolutionary new method for examining the hippocampus along its full septotemporal axis in a single section! It almost sounds too good to be true. In fact, it is. But it provides some interesting pictures for those of you who have stuck with me this far.

Recently, we irradiated a lot of rats to eliminate adult neurogenesis. Before coming to any conclusions about the behavioral data we needed to know whether neurogenesis was completely blocked AND whether it was blocked throughout the entire dentate gyrus. Too lazy to cut hundreds of sections for each rat, we extracted the hippocampus but, instead of sectioning perpendicular to its septotemporal axis, sectioned it parallel to, or along, its septotemporal axis by flattening and freezing it on the microtome stage. With this approach the entire dentate gyrus could be cut in about 30 sections, some of which contained the entire septotemporal length of the dentate gyrus. Sections were then stained for NeuN and DCX to visualize mature and immature neurons, respectively. I think every other section was stained; one example is shown below.

Figure 6: Hippocampal sections stained for NeuN and DCX. The dentate gyrus can be identified as the layer of tightly-packed orange cells on the left, that are bordered by green DCX+ cells. Sections were cut from the side of the infrapyramidal blade towards the suprapyramidal blade (direction of cutting = section 1→9). Images were taken with a 20x objective and subsequently stitched together.

Is it really necessary to divide septotemporally? I guess it depends. Many studies that have focused more on dorsal vs. ventral have made significant findings. If the anatomical method is well described and reproducible, what more could you ask for? It's possible, however, that combining different septotemporal regions in the same analysis could obscure a result. For example, when we examined the activation of new neurons after water maze training, we found a steadily increasing amount of activation going from septal to temporal (see Figure 7). Had the 2 septal quartiles been pooled together and the 2 temporal quartiles pooled together, the observed difference would have been much smaller than when comparing the septalmost quartile with the temporalmost quartile (the arithmetic below makes this concrete).

Figure 7: The density of 'activated' new neurons (i.e. PSA-NCAM+ and Fos+) increased from septal to temporal. Note that the mid-septal and mid-temporal regions were similar. Also note that I used D and V nomenclature, for 'dorsal' and 'ventral', despite repeatedly emphasizing in this post that 'septal' and 'temporal' are more accurate.
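Since the pooling point is just arithmetic, here it is in a few lines of Python, with invented quartile values that mimic the gradient in Figure 7:

```python
# Invented activation densities for the four septotemporal quartiles,
# increasing from septal to temporal (shape only, not the real data).
septal, mid_septal, mid_temporal, temporal = 1.0, 2.0, 3.0, 4.0

# Extreme quartiles: a 4-fold difference.
print(temporal / septal)                       # 4.0

# Pooling halves blurs the gradient to a ~2.3-fold difference.
septal_half = (septal + mid_septal) / 2        # 1.5
temporal_half = (mid_temporal + temporal) / 2  # 3.5
print(temporal_half / septal_half)             # 2.33...
```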

——————————————————————————-

and now…

Pretty pictures from these sections!

Messy.

Just a nice example of some DCX dendrites.

DCX labeling outside of the dentate gyrus. I think this was in the subiculum but who can say for sure with these weird sections.

Septotemporal sample #1

Septotemporal sample #2

Septotemporal sample #3

Septotemporal sample #4

CRAB

ALLIGATOR

PUPPY / BIRDIE

Special thanks and credit to http://www.functionalneurogenesis.com and to Sarah Ferrante for sectioning, staining and imaging the tissue.


Evolution of Human ‘Super-Brain’ Tied to Development of Bipedalism, Tool-Making


Scientists seeking to understand the origin of the human mind may want to look to honeybees — not ancestral apes — for at least some of the answers, according to a University of Colorado Boulder archaeologist.

(left) CU-Boulder researcher John Hoffecker, shown here working at a site in Russia dating back 45,000 years, believes there is mounting archaeological evidence for the evolution of a human "super-brain" no later than 75,000 years ago that spurred a modern capacity for novelty and invention. (Credit: Vance T. Holliday, University of Arizona)

CU-Boulder Research Associate John Hoffecker said there is abundant fossil and archaeological evidence for the evolution of the human mind, including its unique power to create a potentially infinite variety of thoughts expressed in the form of sentences, art and technologies. He attributes the evolving power of the mind to the formation of what he calls the “super-brain,” or collective mind, an event that took place in Africa no later than 75,000 years ago.

An internationally known archaeologist who has worked at sites in Europe and the Arctic, Hoffecker said the formation of the super-brain was a consequence of a rare ability to share complex thoughts among individual brains. Among other creatures on Earth, the honeybee may be the best example of an organism that has mastered the trick of communicating complex information — including maps of food locations and information on potential nest sites — from one brain to another, using its intricate "waggle dance."

“Humans obviously evolved a much wider range of communication tools to express their thoughts, the most important being language,” said Hoffecker, a fellow at CU’s Institute of Arctic and Alpine Research. “Individual human brains within social groups became integrated into a neurologic Internet of sorts, giving birth to the mind.”

While anatomical fossil evidence for the capability of speech is controversial, the archaeological discovery of symbols coincides with a creative explosion in the making of many kinds of artifacts. Abstract designs scratched on mineral pigment show up in Africa about 75,000 years ago and are widely accepted by archaeologists as evidence for symbolism and language. "From this point onward there is a growing variety of new types of artifacts that indicates a thoroughly modern capacity for novelty and invention," he said.

The roots of the mind and the super-brain lie deep in our past and are likely tied to fundamental aspects of our evolution like bipedalism and making stone tools, he said. It was from the making of tools that early humans first developed their ability to project complex thoughts or mental representations outside the individual brain — our own version of the honeybee waggle dance, Hoffecker said.

While crude stone tools crafted by human ancestors beginning about 2.5 million years ago likely were an indirect consequence of bipedalism — which freed up the hands for new functions — the first inklings of a developing super-brain likely began about 1.6 million years ago when early humans began crafting stone hand axes, thought by Hoffecker and others to be one of the first external representations of internal thought.

Ancient hand axes achieved “exalted status” as mental representations since they bear little resemblance to the natural objects they were made from — generally cobbles or rock fragments. “They reflect a design or mental template stored in the nerve cells of the brain and imposed on the rock, and they seemed to have emerged from a strong feedback relationship among the hands, eyes, brains and the tools themselves,” he said.

The emerging modern mind in Africa was marked by a three-fold increase in brain size over 3-million-year-old human ancestors like Lucy, thought by some to be the matriarch of modern humans. Humans were producing perforated shell ornaments, polished bone awls and simple geometric designs incised into lumps of red ochre by 75,000 years ago. “With the appearance of symbols and language — and the consequent integration of brains into a super-brain — the human mind seems to have taken off as a potentially unlimited creative force,” he said.

The dispersal of modern humans from Africa to Europe some 50,000 to 60,000 years ago provides a “minimum date” for the development of language, Hoffecker speculated. “Since all languages have basically the same structure, it is inconceivable to me that they could have evolved independently at different times and places.”

A 2007 study led by Hoffecker and colleagues at the Russian Academy of Sciences pinpointed the earliest evidence of modern humans in Europe, dating back 45,000 years. Located on the Don River 250 miles south of Moscow, the multiple sites, collectively known as Kostenki, also yielded ancient bone and ivory needles complete with eyelets, showing the inhabitants tailored furs to survive the harsh winters.

The team also discovered a carved piece of mammoth ivory that appears to be the head of a small figurine dating to more than 40,000 years ago. “If that turns out to be the case, it would be the oldest piece of figurative art ever discovered,” said Hoffecker, whose research at Kostenki is funded in part by the National Science Foundation.

The finds from Kostenki illustrate the impact of the creative mind of modern humans as they spread out of Africa into places that were sometimes cold and lean in resources, Hoffecker said. “Fresh from the tropics, they adapted to ice age environments in the central plain of Russia through creative innovations in technology.”

Ancient musical instruments and figurative art discovered in caves in France and Germany date to before 30,000 years ago, he said. “Humans have the ability to imagine something in the brain that doesn’t exist and then create it,” he said. “Whether it’s a hand axe, a flute or a Chevrolet, humans are continually recombining bits of information into novel forms, and the variations are potentially infinite.”

While the concept of a human super-brain is analogous to social insects like bees and ants that collectively behave as a super-organism by gathering, processing and sharing information about their environment, there is one important difference, Hoffecker said. "Human societies are not super-organisms — they are composed of people who are, for the most part, unrelated, societies filled with competing individuals and families."

Since the emergence of the modern industrial world beginning roughly 500 years ago, creativity driven by the human super-brain has grown by leaps and bounds, from the invention of mechanical clocks to space shuttles. Powerful artificial intelligence could blur the differences between humans and computers in the coming centuries, he said.

Hoffecker is the author of an upcoming book, titled “Landscape of the Mind: Human Evolution and the Archaeology of Thought,” to be published by Columbia University Press in May. For more information on Hoffecker’s book visit http://cup.columbia.edu/book/978-0-231-14704-0/landscape-of-the-mind.

Story Source:

The above story is reprinted from materials provided by University of Colorado at Boulder.

New Malaria Vaccine Depends on… Mosquito Bites?


The same menace that spreads malaria — the mosquito bite — could help wipe out the deadly disease, according to researchers working on a new vaccine at Tulane University.

(left) Nirbhay Kumar, professor of tropical medicine at Tulane University, is working on a vaccine that aims to wipe out malaria using the same menace that spreads it — the mosquito bite. (Credit: Image courtesy of Tulane University)


The PATH Malaria Vaccine Initiative (MVI), established in 1999 through a grant from the Bill & Melinda Gates Foundation, announced February 15 a collaboration with Tulane University School of Public Health and Tropical Medicine and India’s Gennova Biopharmaceuticals Ltd. to produce and test a novel vaccine that aims to inoculate mosquitoes when they bite people.

The vaccine would work by triggering an immune response in people so they produce antibodies that target a protein the malaria parasite needs to reproduce within a mosquito.

Malaria, which kills nearly 800,000 people every year worldwide, is caused by a microscopic parasite that alternates between human and mosquito hosts at various stages of its lifecycle. Once a mosquito bites a vaccinated person, the antibodies would neutralize the protein essential for the malaria parasite's reproduction, effectively blocking the parasite's — and the mosquito's — ability to infect others.

The vaccine relies on a protein — known as Pfs48/45 — which is very difficult to synthetically produce, says Nirbhay Kumar, professor of tropical medicine at Tulane.

"With MVI's support we can now work with Gennova to produce sufficient quantities of the protein and develop a variety of vaccine formulations that can be tested in animals to determine which one gives us the strongest immune response," Kumar says.

Such transmission-blocking vaccines, though not yet widely tested in humans, are attracting widespread interest due to their potential to be used in conjunction with more traditional malaria vaccines and other interventions — such as malaria drugs and bed nets — to make gradual elimination, and even eradication, of the disease a reality.

“We’re investing in developing transmission blocking malaria vaccines to support two long-term goals: introducing an 80 percent efficacious malaria vaccine by the year 2025 and eventually eradicating malaria altogether,” says Dr. Christian Loucq, director of MVI. “A vaccine that breaks the cycle of malaria transmission will be important to our success.”

Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Tulane University.

Breakthrough in Malaria Treatment in the Run-Up to World Malaria Day


Ahead of World Malaria Day (25 April), EU-funded researchers have discovered that drugs originally designed to inhibit the growth of cancer cells can also kill the parasite that causes malaria. They believe this discovery could open up a new strategy for combating this deadly disease, which, according to World Health Organisation statistics, infected around 225 million and killed nearly 800,000 people worldwide in 2009.

Efforts to find a treatment have so far been hampered by the parasite’s ability to quickly develop drug resistance. The research involved four projects funded by the EU (ANTIMAL, BIOMALPAR, MALSIG and EVIMALAR) and was led by laboratories in the UK, France and Switzerland with partners from Belgium, Germany, Denmark, Greece, Spain, Italy, Netherlands, Portugal, and Sweden, along with many developing nations severely affected by malaria.

Research, Innovation and Science Commissioner Máire Geoghegan-Quinn said: “This discovery could lead to an effective anti-malaria treatment that would save millions of lives and transform countless others. This demonstrates yet again the added value both of EU-funded research and innovation in general and of collaboration with researchers in developing countries in particular. The ultimate goal is the complete eradication of the global scourge of malaria and collaborative work across many borders is the only way of confronting such global challenges effectively.”

Cancer drugs to kill malaria parasite

Malaria is caused by a parasite called Plasmodium, which is transmitted via the bites of infected mosquitoes. In the human body, the parasites reproduce in the liver, and then infect and multiply in red blood cells. Joint research led by EU-funded laboratories at the Inserm-EPFL Joint Laboratory, Lausanne (Switzerland/France), the Wellcome Trust Centre for Molecular Parasitology, University of Glasgow (Scotland), and Bern University (Switzerland) showed that, in order to proliferate, the malaria parasite depends upon a signalling pathway present in the host's liver cells and in red blood cells. They demonstrated that the parasite hijacks the kinases (enzymes) that are active in human cells to serve its own purposes. When the research team used cancer chemotherapy drugs called kinase inhibitors to treat red blood cells infected with malaria, the parasite was stopped in its tracks.

A new strategy opens up

Until now the malaria parasite has managed to avoid control by rapidly developing drug resistance through mutations and hiding from the immune system inside liver and red blood cells in the body of the host, where it proliferates. The discovery that the parasite needs to hijack some enzymes from the cell it lives in opens up a whole new strategy for fighting the disease. Instead of targeting the parasite itself, the idea is to make the host cell environment useless to it, by blocking the kinases in the cell. This strategy deprives the parasite of a major modus operandi for development of drug resistance.

Several kinase-inhibiting chemotherapy drugs are already used clinically in cancer therapy, and many more have already passed phase I and phase II clinical trials. Even though these drugs have toxic side-effects, they are still being used over extended periods for cancer treatment. In the case of malaria, which would require a shorter treatment period, the problem of toxicity would be less acute. The researchers therefore propose that these drugs be evaluated immediately for anti-malarial properties, drastically reducing the time and cost required to put this new malaria-fighting strategy into practice.

The next steps will include mobilising public and industrial partners to verify the efficacy of kinase inhibitors in malaria patients and to adjust the dose through clinical trials, before the new treatments can be authorised and made available to malaria patients worldwide.

Journal Reference:

  1. Audrey Sicard, Jean-Philippe Semblat, Caroline Doerig, Romain Hamelin, Marc Moniatte, Dominique Dorin-Semblat, Julie A. Spicer, Anubhav Srivastava, Silke Retzlaff, Volker Heussler, Andrew P. Waters, Christian Doerig. Activation of a PAK-MEK signalling pathway in malaria parasite-infected erythrocytes. Cellular Microbiology, 2011; DOI: 10.1111/j.1462-5822.2011.01582.x
  2. The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by European Commission, Research & Innovation DG, via AlphaGalileo.

Malaria’s Weakest Link: Class of Chemotherapy Drugs Also Kills the Parasite That Causes Malaria


A group of researchers from EPFL’s Global Health Institute (GHI) and Inserm (Institut National de la Santé et de la Recherche Médicale, the French government agency for biomedical research) has discovered that a class of chemotherapy drugs originally designed to inhibit key signaling pathways in cancer cells also kills the parasite that causes malaria. The discovery could quickly open up a whole new strategy for combating this deadly disease.

The research, published online in the journal Cellular Microbiology, shows that the malaria parasite depends upon a signaling pathway present in the host — initially in liver cells, and then in red blood cells — in order to proliferate. The enzymes active in the signaling pathway are not encoded by the parasite, but rather hijacked by the parasite to serve its own purposes. These same pathways are targeted by a new class of molecules developed for cancer chemotherapy known as kinase inhibitors. When the GHI/Inserm team treated red blood cells infected with malaria with the chemotherapy drug, the parasite was stopped in its tracks.

Professor Christian Doerig and his colleagues tested red blood cells infected with Plasmodium falciparum parasites and showed that the specific PAK-MEK signaling pathway was more highly activated in infected cells than in uninfected cells. When they disabled the pathway pharmacologically, the parasite was unable to proliferate and died. Applied in vitro, the chemotherapy drug also killed a rodent version of malaria (P. berghei), in both liver cells and red blood cells. This indicates that hijacking the host cell’s signaling pathway is a generalized strategy used by malaria, and thus disabling that pathway would likely be an effective strategy in combating the many strains of the parasite known to infect humans.

Malaria infects 250 million and kills 1-3 million people every year worldwide. Efforts to find a treatment have been marred by the propensity of the parasite to quickly develop drug resistance through selection of mutations. Once in the body, it hides from the immune system inside liver and blood cells, where it proliferates. The discovery that the parasite hijacks a signaling pathway in the host cell opens up a whole new strategy for fighting the disease. Instead of targeting the parasite itself, we could make the host cell environment useless to it, thus putting an end to the deadly cycle. Because this strategy uniquely targets host cell enzymes, the parasite will be deprived of a major modus operandi for development of drug resistance — selection of mutations in the drug target.

Several kinase-inhibiting chemotherapy drugs are already used clinically, and many more have passed phase I and phase II clinical trials. Even though these drugs have toxic effects, they are still being used or considered for use over extended periods for cancer treatment. Using them to combat malaria would involve a much shorter treatment period, making the problem of toxicity less acute. The authors of the study suggest evaluating these drugs for antimalarial properties, thus drastically reducing the time and cost required to put this new malaria-fighting strategy into practice.

Journal Reference:

  1. Audrey Sicard, Jean-Philippe Semblat, Caroline Doerig, Romain Hamelin, Marc Moniatte, Dominique Dorin-Semblat, Julie A. Spicer, Anubhav Srivastava, Silke Retzlaff, Volker Heussler, Andrew P. Waters, Christian Doerig. Activation of a PAK-MEK signalling pathway in malaria parasite-infected erythrocytes. Cellular Microbiology, 2011; DOI: 10.1111/j.1462-5822.2011.01582.x
  2. The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Ecole Polytechnique Federale de Lausanne (EPFL), via AlphaGalileo.

Technique for Letting Brain Talk to Computers Now Tunes in Speech


Patients with a temporary surgical implant have used regions of the brain that control speech to “talk” to a computer for the first time, manipulating a cursor on a computer screen simply by saying or thinking of a particular sound.

“There are many directions we could take this, including development of technology to restore communication for patients who have lost speech due to brain injury or damage to their vocal cords or airway,” says author Eric C. Leuthardt, MD, of Washington University School of Medicine in St. Louis.

Scientists have typically programmed the temporary implants, known as brain-computer interfaces, to detect activity in the brain’s motor networks, which control muscle movements.

"That makes sense when you're trying to use these devices to restore lost mobility — the user can potentially engage the implant to move a robotic arm through the same brain areas he or she once used to move an arm disabled by injury," says Leuthardt, assistant professor of neurosurgery, of biomedical engineering and of neurobiology. "But that has the potential to be inefficient for restoration of a loss of communication."

Patients might be able to learn to think about moving their arms in a particular way to say hello via a computer speaker, Leuthardt explains. But it would be much easier if they could say hello by using the same brain areas they once engaged to use their own voices.

The research appears April 7 in The Journal of Neural Engineering.

(above) Scientists at Washington University School of Medicine in St. Louis have adapted brain-computer interfaces like the one shown above to listen to regions of the brain that control speech. The development may help restore capabilities lost to brain injury or disability. Credit: Eric Leuthardt, MD, permission by Michael Purdy

The devices under study are temporarily installed directly on the surface of the brain in epilepsy patients. Surgeons like Leuthardt use them to identify the source of persistent, medication-resistant seizures and map those regions for surgical removal. Researchers hope one day to install the implants permanently to restore capabilities lost to injury and disease.

Leuthardt and his colleagues have recently revealed that the implants can be used to analyze the frequency of brain wave activity, allowing them to make finer distinctions about what the brain is doing. For the new study, Leuthardt and others applied this technique to detect when patients say or think of four sounds:

  • oo, as in few
  • e, as in see
  • a, as in say
  • a, as in hat

When scientists identified the brainwave patterns that represented these sounds and programmed the interface to recognize them, patients could quickly learn to control a computer cursor by thinking or saying the appropriate sound.
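Purely as an illustration of the general recipe implied here (extract frequency-band power from each recorded epoch, then train a classifier on the four sound labels), a toy Python sketch might look like the following. The sampling rate, frequency band, channel count and data are all invented; this is not the study's actual pipeline.

```python
# Toy sketch of frequency-based decoding, NOT the study's pipeline.
# Assumptions: 1-second epochs sampled at 1000 Hz, high-gamma band power
# (70-150 Hz) per channel as the feature, and four labels for the four
# sounds listed above. Real ECoG decoding involves far more care.
import numpy as np
from sklearn.linear_model import LogisticRegression

FS = 1000  # Hz, assumed sampling rate

def band_power(epoch, lo=70.0, hi=150.0):
    """Mean spectral power in [lo, hi] Hz per channel of a (chan, time) epoch."""
    freqs = np.fft.rfftfreq(epoch.shape[-1], d=1 / FS)
    power = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return power[..., band].mean(axis=-1)

# Fake data standing in for recordings: 80 epochs, 8 channels, 4 classes.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((80, 8, FS))
labels = rng.integers(0, 4, size=80)

features = np.array([band_power(ep) for ep in epochs])
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("accuracy on the training noise:", clf.score(features, labels))
```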

In the future, interfaces could be tuned to listen to just speech networks or both motor and speech networks, Leuthardt says. As an example, he suggests that it might one day be possible to let a disabled patient both use his or her motor regions to control a cursor on a computer screen and imagine saying “click” when he or she wants to click on the screen.

“We can distinguish both spoken sounds and the patient imagining saying a sound, so that means we are truly starting to read the language of thought,” he says. “This is one of the earliest examples, to a very, very small extent, of what is called ‘reading minds’ — detecting what people are saying to themselves in their internal dialogue.”

The next step, which Leuthardt and his colleagues are working on, is to find ways to distinguish what they call “higher levels of conceptual information.”

“We want to see if we can not just detect when you’re saying dog, tree, tool or some other word, but also learn what the pure idea of that looks like in your mind,” he says. “It’s exciting and a little scary to think of reading minds, but it has incredible potential for people who can’t communicate or are suffering from other disabilities.”

 

Notes about this brain-computer interface research article

A portion of this research was funded in collaboration with Gerwin Schalk, PhD, of the New York State Department of Health’s Wadsworth Center. The goal is to explore additional potential uses of this technology in people who are disabled and in those who are not.

Leuthardt et al. (2011) Journal of Neural Engineering 8: 036004. Online at: http://iopscience.iop.org/1741-2552/8/3/036004

Funding from the National Institutes of Health and the Department of Defense supported this research.

Contact: Michael Purdy – Senior Medical Science Writer

Source:  Washington University in St. Louis Newsroom article – permission given by Michael Purdy

Image Source: Image adapted from image in article above. Permission given by Michael Purdy

 

‘Virus-eater’ discovered in Antarctic lake


The first of these parasites of parasites to be discovered in a natural environment points to hidden diversity.

(left) Viruses from Organic Lake, including the virophage (bottom left) and its prey (top). From reference [1].

 

A genomic survey of the microbial life in an Antarctic lake has revealed a new virophage — a virus that attacks viruses. The discovery suggests that these life forms are more common, and have a larger role in the environment, than was once thought.

An Australian research team found the virophage while surveying the extremely salty Organic Lake in eastern Antarctica. While sequencing the collective genome of microbes living in the surface waters, they discovered the virus, which they dubbed the Organic Lake Virophage (OLV).

The OLV genome was identified nestling within the sequences of phycodnaviruses — a group of giant viruses that attack algae. Evidence of gene exchange, and possible co-evolution, between the two suggests that OLV preys on the phycodnavirus. Although OLV is the dominant virophage in the lake, the work suggests others might be present.

By killing phycodnaviruses, the OLV might allow algae to thrive. Ricardo Cavicchioli, a microbiologist at the University of New South Wales in Sydney, Australia, and his colleagues found that mathematical models of the Organic Lake system that took account of the virophage’s toll on its host showed lower algal mortality and more blooms during the lake’s two ice-free summer months.
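As a hedged illustration of that qualitative logic (a virophage that suppresses the algal virus relieves pressure on the algae), here is a minimal Lotka-Volterra-style sketch in Python with invented parameters; it is not the model the team used.

```python
# Minimal tri-trophic sketch: algae (A), algal virus (V), virophage (P).
# Every parameter is invented to illustrate the qualitative claim only;
# this is NOT the model used in the paper.
from scipy.integrate import solve_ivp

def lake(t, y, virophage_on):
    A, V, P = y
    dA = 0.5 * A * (1 - A / 100) - 0.02 * A * V          # growth - viral kill
    dV = 0.004 * A * V - 0.1 * V - virophage_on * 0.03 * V * P
    dP = virophage_on * 0.006 * V * P - 0.05 * P
    return [dA, dV, dP]

y0 = [50.0, 5.0, 1.0]
with_p = solve_ivp(lake, (0, 500), y0, args=(1.0,))
without_p = solve_ivp(lake, (0, 500), y0, args=(0.0,))

# With the virophage suppressing the algal virus, algae should settle at
# a higher level (lower algal mortality, more blooms) than without it.
print("algae with virophage:   ", round(with_p.y[0, -1], 1))
print("algae without virophage:", round(without_p.y[0, -1], 1))
```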

"Our work reveals not only an amazing diversity in microbial life in this lake, but also how little we understand about the complexity of the biological functions at work," says Cavicchioli. The findings are published in the Proceedings of the National Academy of Sciences [1].

Giant killer

Another virophage described this month has similar ecological effects. The marine Mavirus attacks the giant Cafeteria roenbergensis virus, which preys on Cafeteria roenbergensis, one of the world's most widespread species of zooplankton [2].

“The Mavirus is able to rescue the infected zooplankton — which, in a way, confers immunity from infection,” says Curtis Suttle, a marine microbiologist at the University of British Columbia in Vancouver, Canada, and leader of the team that discovered the Mavirus.

“We unknowingly had Mavirus in culture with our Cafeteria system since the early 1990s,” says Suttle. But the virophage was not identified until the Cafeteria genome was sequenced.

The Mavirus genome is similar to DNA sequences called eukaryotic transposons, which insert themselves within the genomes of multicellular organisms such as plants and animals. These ‘jumping genes’ may be descended from a virophage, says Suttle. “One can imagine evolutionary pressure for hosts to somehow cultivate virophages to protect themselves from infection by giant viruses,” he says.

French Sputnik

The first virophage, dubbed Sputnik, was discovered in a water-cooling tower in Paris in 2008 [3].

"We have been waiting for others to find virophages, to confirm our discovery wasn't an artefact," says Christelle Desnues, a microbiologist at the National Centre for Scientific Research in Marseilles, France, and a member of the team that described Sputnik. She now anticipates "an exponential discovery of virophages".

The hosts of all three known virophages belong to a group of giant viruses known as nucleocytoplasmic large DNA viruses (NCLDV). “NCLDV viruses have large and complex genomes that allow them to incorporate the smaller virophages, something smaller viruses may not be able to do,” says Desnues.

The OLV was discovered when Cavicchioli’s graduate student, Sheree Yau, noticed that some of the sequences from microbes in Organic Lake were similar to those encoding Sputnik’s protein shell. Mavirus has similar sequences, so the trend might help to identify other virophages.

OLV, or virophages like it, may be widespread. The gene for its protein shell matches sequences already found in a host of other aquatic environments, including nearby Ace Lake in Antarctica, a saline lagoon in the Galapagos, an oceanic upwelling zone near the Galapagos, an estuary in New Jersey, and a freshwater lake in Panama.

The high number of matches reflects the fact that the OLV is the first virophage to be found in its natural environment, says Federico Lauro, also a molecular biologist at the University of New South Wales and a co-author of the paper.

Organic Lake, formed 6,000 years ago when sea levels were higher, is a natural laboratory, says Lauro. “These marine-derived lakes are great labs to work in because they are isolated, yet dynamic systems.”

 

References

1. Yau, S. et al. Proc. Natl Acad. Sci. USA, advance online publication, doi:10.1073/pnas.1018221108 (2011).

2. Fischer, M. G. et al. Science, advance online publication, doi:10.1126/science.1199412 (2011).

3. La Scola, B. et al. Nature 455, 100-104 (2008).