Archive for September, 2010

Interaction With Neighbors: Neuronal Field Simulates Brain Activity

September 30, 2010

Voltage-sensitive dye imaging across the surface of visual cortex revealed propagating activity waves which may be conveyed by long horizontal neuronal connections. (Credit: Image courtesy of Ruhr-Universitaet-Bochum)

The appearance of a spot of light on the retina activates millions of neurons in the brain within tens of milliseconds. At the first cortical processing stage, the primary visual cortex, each neuron receives thousands of inputs from both close neighbors and more distant neurons, and sends out a comparable number of outputs in return. In recent decades, the characteristics of these widespread network connections and the transfer properties of single neurons have been described in great detail. What is still lacking, however, is a coherent population model that provides an overall picture of the functional dynamics, subsuming the interactions across all these individual channels.

RUB scientists of the Bernstein Group for Computational Neuroscience have developed a computational model that provides a mathematical description of far-reaching interactions between cortical neurons. The results are published in the open-access journal PLoS Computational Biology.

Cortical activity waves and their possible consequences for visual perception

Using a fluorescent dye that reports voltage changes across neuronal membranes, it has been shown that a small spot of light presented in the visual field leads to an initially local brain activation, followed by waves of activity that travel far across the cortex. At first these waves remain sub-threshold and hence cannot be perceived consciously. However, an elongated bar stimulus presented shortly afterwards facilitates the activity wave that is already under way. Instead of perceiving the bar at once in its full length, observers see it being drawn out from the location of the previously flashed spot. In psychology this phenomenon has been named the ‘line-motion illusion’, since motion is perceived even though both stimuli are stationary. Thus, brain processes that initiate widespread activity propagation may be partly responsible for this motion illusion.

Neural Fields

RUB scientists around Dr. Dirk Jancke, Institut für Neuroinformatik, have now successfully implemented these complex interaction dynamics within a computational model. They used a so-called neural field, in which the impact of each model neuron is defined by a distance-dependent interaction profile: close neighbors are strongly coupled, and more distant neurons interact progressively less. Two layers, one excitatory and one inhibitory, are recurrently connected, such that a local input produces transient activity that emerges focally and is followed by propagating activity. The dynamics of the entire field are therefore no longer determined by the sensory input alone but are governed to a large extent by the interaction profile across the neural field. Consequently, within such a model, the overall activity pattern is characterized by interactions that facilitate pre-activation at locations far away from any local input.
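To make these ingredients concrete, here is a minimal one-dimensional sketch in Python of a two-layer neural field with Gaussian, distance-dependent coupling and a brief localized input. The kernel widths, gains and time constants are illustrative assumptions, not the parameters of the published model.

```python
import numpy as np

# Minimal 1-D two-layer (excitatory/inhibitory) neural field sketch.
# All parameters below are illustrative guesses, not those of the study.

N, dx, dt = 200, 0.1, 0.5            # grid points, spacing (mm), time step (ms)
x = np.arange(N) * dx

def gaussian_kernel(sigma):
    """Distance-dependent coupling: strong for close neighbors, weaker farther away."""
    d = np.abs(x[:, None] - x[None, :])
    k = np.exp(-d**2 / (2 * sigma**2))
    return k / k.sum(axis=1, keepdims=True)

W_ee = 1.5 * gaussian_kernel(0.5)    # excitatory -> excitatory (narrow)
W_ie = 1.3 * gaussian_kernel(1.5)    # excitatory -> inhibitory (broad)
W_ei = 1.2 * gaussian_kernel(1.0)    # inhibitory -> excitatory

f = lambda u: 1.0 / (1.0 + np.exp(-4.0 * (u - 0.5)))   # sigmoid firing-rate function

u_e = np.zeros(N)                    # excitatory layer activity
u_i = np.zeros(N)                    # inhibitory layer activity
tau_e, tau_i = 10.0, 20.0            # time constants (ms)

for step in range(400):
    stim = np.zeros(N)
    if step < 40:                    # brief, spatially localized input (the "spot of light")
        stim[N // 2 - 3:N // 2 + 3] = 1.0
    du_e = (-u_e + W_ee @ f(u_e) - W_ei @ f(u_i) + stim) / tau_e
    du_i = (-u_i + W_ie @ f(u_e)) / tau_i
    u_e += dt * du_e
    u_i += dt * du_i

# With suitable parameters, u_e rises focally at the stimulated site and then
# spreads outward; whether the spread persists as a traveling wave or decays
# depends on the interaction profile, not on the input itself.
```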

Such pre-activation may play an important role in the processing of moving objects. Given that processing takes time, starting at the retina, the brain receives information about the external world with a permanent delay. In order to counterbalance such delays, pre-activation may serve as a “forewarning” for neurons that represent locations ahead of an object’s trajectory and thus enable a more rapid crossing of firing thresholds, saving valuable processing time.
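A rough back-of-the-envelope calculation shows why such a compensation mechanism would matter; the speed and latency below are round illustrative numbers, not values from the study.

```python
# By the time a moving object has been processed, it has already moved on:
# the positional lag is roughly speed * processing delay.
v = 10.0       # m/s, e.g. a thrown ball
delay = 0.07   # s, an order-of-magnitude retina-to-cortex processing latency
print(f"perceived position lags reality by ~{v * delay:.2f} m")   # ~0.70 m
```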

What can we generally learn from such a field model about brain function? Neural fields provide a mathematical framework for describing how the brain operates beyond a simple passive mapping of external events, conducting inter-“active” information processing that leads, in limiting cases, to what we call illusions. The future challenge will be to implement neural fields for more complex visual stimulus scenarios. Here, it may be an important advantage that this model class abstracts away from single-neuron activity and provides a mathematically tractable description of interactive cortical network function.

Provided by Ruhr-Universitaet-Bochum, via AlphaGalileo.

Journal Reference:

Valentin Markounikau, Christian Igel, Amiram Grinvald, Dirk Jancke. A Dynamic Neural Field Model of Mesoscopic Cortical Activity Captured with Voltage-Sensitive Dye Imaging. PLoS Computational Biology, 2010; 6 (9): e1000919. DOI: 10.1371/journal.pcbi.1000919

Watching a Living Brain in the Act of Seeing – With Single-Synapse Resolution

September 29, 2010

Dendrites of a nerve cell in the brain appear like the branches of a tree. Left: A patch-clamp pipette injects fluorescent dye into the cell. (Credit: Image courtesy of Technische Universitaet Muenchen)


Pioneering a novel microscopy method, neuroscientist Arthur Konnerth and colleagues from the Technische Universitaet Muenchen (TUM) have shown that individual neurons carry out significant aspects of sensory processing: specifically, in this case, determining which direction an object in the field of view is moving. Their method makes it possible for the first time to observe individual synapses, nerve contact sites that are just one micrometer in size, on a single neuron in a living mammalian brain.

Focusing on neurons known to play a role in processing visual signals related to movement, Konnerth’s team discovered that an individual neuron integrates inputs it receives via many synapses at once into a single output signal — a decision, in essence, made by a single nerve cell. The scientists report these results in the latest issue of the journal Nature. Looking ahead, they say their method opens a new avenue for exploration of how learning functions at the level of the individual neuron.

When light falls on the retina of the human eye, it hits 126 million sensory cells, which transform it into electrical signals. Even the smallest unit of light, a photon, can stimulate one of these sensory cells. As a result, enormous amounts of data have to be processed for us to be able to see. While the processing of visual data starts in the retina, the finished image only arises in the brain or, to be more precise, in the visual cortex at the back of the cerebrum. Scientists working with Arthur Konnerth — professor of neurophysiology at TUM and Carl von Linde Senior Fellow at the TUM Institute for Advanced Study — are interested in a certain kind of neuron in the visual cortex that fires electrical signals when an object moves in front of our eyes — or the eyes of a mouse.

When a mouse is shown a horizontal bar pattern in motion, specific neurons in its visual cortex consistently respond, depending on whether the movement is from bottom to top or from right to left. The impulse response pattern of these “orientation” neurons is already well known. What was not previously known, however, is what the input signal looks like in detail. This was not easy to establish, as each of the neurons has a whole tree of tiny, branched antennae, known as dendrites, at which hundreds of other neurons “dock” with their synapses.

To find out more about the input signal, Konnerth and his colleagues observed a mouse in the act of seeing, with resolution that goes beyond a single nerve cell to a single synapse. They refined a method called two-photon fluorescence microscopy, which makes it possible to look up to half a millimeter into brain tissue and view not only an individual cell, but even its fine dendrites. Together with this microscopic probe, they recorded electrical signals from the same neuron using tiny glass pipettes (the patch-clamp technique). “Up to now, similar experiments have only been carried out on cultured neurons in Petri dishes,” Konnerth says. “The intact brain is far more complex. Because it moves slightly all the time, resolving individual synaptic input sites on dendrites was extremely difficult.”

The effort has already rewarded the team with a discovery. They found that in response to differently oriented motions of a bar pattern in the mouse’s field of vision, an individual orientation neuron receives input signals from a number of differently oriented nerve cells in its network of connections but sends only one kind of output signal. “And this,” Konnerth says, “is where things get really exciting.” The orientation neuron only sends output signals when, for example, the bar pattern moves from bottom to top. Evidently the neuron weighs the various input signals against each other and thus reduces the glut of incoming data to the most essential information needed for clear perception of motion.
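As a rough picture of how such weighing of inputs can yield a single, sharply tuned output, here is a toy Python calculation (not the authors' analysis): synaptic inputs tuned to many motion directions are summed with unequal weights, and the cell fires only when the weighted sum crosses a threshold. All numbers are invented for illustration.

```python
import numpy as np

# Toy version of "weigh many inputs, fire for one direction".  The weights
# (strongest for inputs preferring 90 degrees), tuning widths, and threshold
# are invented for illustration and are not taken from the study.

directions = np.arange(0, 360, 45)              # preferred directions of the inputs (deg)
pref = np.repeat(directions, 25)                # preferred direction of each of 200 synapses

weights = 0.2 + 0.8 * np.exp(-((pref - 90) / 40.0) ** 2)   # unequal synaptic weights

def synaptic_drive(stim_dir):
    """Each synapse responds most when the stimulus matches its preferred direction."""
    delta = np.minimum(np.abs(pref - stim_dir), 360 - np.abs(pref - stim_dir))
    return np.exp(-(delta / 30.0) ** 2)

threshold = 20.0   # hand-picked so that only the near-preferred stimulus crosses it

for stim_dir in directions:
    summed = float(weights @ synaptic_drive(stim_dir))     # simplified dendritic summation
    verdict = "spikes" if summed > threshold else "stays silent"
    print(f"bar moving toward {stim_dir:3d} deg -> summed input {summed:5.1f} -> cell {verdict}")
```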

In the future, Konnerth would like to extend this research approach to observation of the learning process in an individual neuron. Neuroscientists speculate that a neuron might be caught in the act of learning a new orientation. Many nerve endings practically never send signals to the dendritic tree of an orientation neuron. Presented with visual input signals that represent an unfamiliar kind of movement, formerly silent nerve endings may become active. This might alter the way the neuron weighs and processes inputs, in such a way that it would change its preferred orientation; and the mouse might learn to discern certain movements better or more rapidly. “Because our method enables us to observe, down to the level of a single synapse, how an individual neuron in the living brain is networked with others and how it behaves, we should be able to make a fundamental contribution to understanding the learning process,” Konnerth asserts. “Furthermore, because here at TUM we work closely with physicists and engineers, we have the best possible prospects for improving the spatial and temporal resolution of the images.”

This work was supported by grants from Deutsche Forschungsgemeinschaft (DFG) and Friedrich-Schiedel-Stiftung.

Provided by Technische Universitaet Muenchen.

Journal Reference:

Hongbo Jia, Nathalie L. Rochefort, Xiaowei Chen, Arthur Konnerth. Dendritic organization of sensory input to cortical neurons in vivo. Nature, 2010; 464 (7293): 1307 DOI: 10.1038/nature08947

The empathetic vegetarian brain

September 28, 2010

It is often the case that meatless lifestyles are chosen for ethical reasons related to valuing animal rights. As a consequence of their food choices, vegetarians and vegans are often accused of and taunted for loving animals more than people. But do most vegetarians care less for fellow humans than animals, care for humans and animals equally, or care more for humans than animals but still care more for animals than omnivores do?

A study published yesterday in PLoS ONE has attempted to parse out differences among omnivores, vegetarians and vegans in brain responses to human and animal suffering. The three groups were first given the Empathy Quotient questionnaire, and it was determined that vegans and vegetarians scored significantly higher in empathy than omnivores. Next, the subjects had their brains scanned with fMRI as they viewed images of human suffering, animal suffering and “neutral” natural landscapes. Many differences were found among the brains of those with different feeding habits.

Firstly, while observing both human and animal suffering, vegetarians and vegans showed greater engagement than omnivores of “empathy-related areas” such as the anterior cingulate cortex (ACC) and the left inferior frontal gyrus (IFG). This seems to suggest a neural basis for the greater empathy for all living beings reported by those with meatless lifestyles.

However, when viewing animal suffering but not human suffering, the meat-free subjects recruited additional empathy-related regions in prefrontal and visual cortex and showed reduced right amygdala activity. This may be interpreted as evidence that vegetarians and vegans care more about the emotions of animals than those of humans. It is important to consider how the study was conducted, though, before reaching such a conclusion.

The authors themselves note a couple of weaknesses in their design. The subjects’ brain activity while viewing human or animal suffering was compared to a baseline/control condition of “neutral” scenes that contained no living beings, no faces, and no suffering of any kind, all factors that should ideally have been controlled for. The subjects were also simply asked to look at the images in the different conditions without being asked about their thoughts or feelings, so it is impossible to confidently attribute their brain responses to specific emotions. And even if the demonstrated brain activity did represent empathy, there is the possibility that the subjects were desensitized to the images of human suffering that appear daily on the news, and desensitization to an image does not necessarily tell us anything about empathetic feelings toward fellow humans. So a claim that vegetarians/vegans love animals more than humans because they show more empathetic neural activity while viewing suffering animals than suffering humans is unsubstantiated at this point.

Another major finding in this study was differences in neural representations of cognitive empathy between vegetarians and vegans. All of these subjects had chosen not to eat meat for ethical reasons, but the authors suggest that these differences in vegan and vegetarian brain responses indicate that the groups experience empathy for suffering differently, possibly due to differences in reasons for their diet choices. Again, these results should be taken as preliminary because of weaknesses in the study’s design.

Overall the study is interesting, but it remains to be seen whether this will spark further research that will ultimately demonstrate findings significant enough to affect public policy and animal cruelty regulations. For now, we have a bit of a clearer picture of the brain’s representation of empathy and a lot of extra material for the never-ending ethical debate over man’s right to meat.

Reference:

Filippi M, Riccitelli G, Falini A, Di Salle F, Vuilleumier P, Comi G, & Rocca MA (2010). The Brain Functional Networks Associated to Human and Animal Suffering Differ among Omnivores, Vegetarians and Vegans. PLoS ONE.

Your brain on shrooms

September 27, 2010

For the first time, people under the influence of psilocybin, the psychoactive ingredient in magic mushrooms, lay down in what appeared to be an fMRI brain scanner.

However, unlike an fMRI machine, the device didn’t generate any magnetic fields. In fact the device didn’t even generate an image of the brain or measure brain activity at all. The device was made out of wood.

In a study on the safety of administering psilocybin intravenously and conducting an fMRI scan, nine subjects who had previous experience with hallucinogenic drugs were injected with 2 milligrams of psilocybin and were then asked to lie down in the wooden mock-fMRI setting. The researchers determined that this dose of psilocybin should be considered tolerable and safe for conducting a brain scan.

It was important that this study be conducted before any real fMRI study on psilocybin because psychedelic drug experiences tend to be sensitive to the surrounding environment of the treated individual. Furthermore, it is difficult to get good data out of fMRI. The subject has to keep their head as still as possible for the duration of the scan, since slight movements can ruin the quality of the acquired data. The subjects in the mock-fMRI scanner were able to keep very still despite reporting that they were strongly affected by the drug.
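For readers curious what “ruining the data” looks like in practice, one standard post-hoc motion metric is framewise displacement, sketched below in Python on a simulated motion trace. The numbers and the 0.5 mm threshold are illustrative conventions, and this is not an analysis from the psilocybin study.

```python
import numpy as np

# Framewise displacement (FD): the volume-to-volume change summed across the
# six rigid-body motion parameters, with rotations converted to millimetres
# on a 50 mm sphere.  The motion trace here is simulated, not real scan data.

rng = np.random.default_rng(0)
n_vols = 200
motion = np.cumsum(rng.normal(0.0, 0.02, size=(n_vols, 6)), axis=0)  # 3 translations (mm), 3 rotations (rad)

diffs = np.abs(np.diff(motion, axis=0))
diffs[:, 3:] *= 50.0                 # radians -> mm of arc on a 50 mm sphere
fd = diffs.sum(axis=1)               # framewise displacement for each volume

print(f"mean FD = {fd.mean():.3f} mm; volumes exceeding 0.5 mm: {(fd > 0.5).sum()}")
# Frames with large FD are often discarded ("scrubbed"), because the signal
# change caused by even sub-millimetre head motion can swamp the small
# activation-related signal changes fMRI is trying to detect.
```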

Research on psilocybin has been gaining a respectable reputation in scientific and medical communities, as outlined in a New York Times article. Guidelines for safety in human hallucinogen research already exist, and the findings from this pilot study on mock-fMRI will build upon these guidelines. With fMRI studies, the reputation of psilocybin in research will likely improve, as will our understanding of how the drug exerts its baffling effects. There are currently two ongoing studies investigating whether psilocybin can ease psychological suffering associated with cancer. If there is an effect on mental well-being, studies of the brain could help us uncover the mechanism. And of course, news agencies will likely jump on the opportunity to describe the mystical experiences associated with psilocybin use as a simple product of neural patterns.

As in all aspects of neuroscience, however, fMRI will not tell us the whole story. The cellular and molecular level of psilocybin’s effects should be considered in conjunction with information obtained from macro-level brain activity studies.

It is also important to realize that just because psilocybin is being taken seriously in research, this does not justify irresponsible use of the drug. Whenever a research study identifies a positive effect of cannabis or another illicit substance, proponents of that drug often blow the findings out of proportion and take them out of context. Learning how psilocybin works may help us understand how best to use it, but the harmful effects, as well as the limitations of research studies, should always be considered.

Expect to hear a lot more about psilocybin brain scans in the near future.

Reference:
Carhart-Harris RL, Williams TM, Sessa B, Tyacke RJ, Rich AS, Feilding A, & Nutt DJ (2010). The administration of psilocybin to healthy, hallucinogen-experienced volunteers in a mock-functional magnetic resonance imaging environment: a preliminary investigation of tolerability. Journal of Psychopharmacology. PMID: 20395317

For neurons to work as a team, it helps to have a beat

September 22, 2010

This is an illustration of how brain rhythms organize distributed groups of neurons into functional cell assemblies. The colors represent different cell assemblies. Neurons in widely separated brain areas often need to work together without interfering with other, spatially overlapping groups. Each assembly is sensitive to different frequencies, producing independent patterns of coordinated neural activity, depicted as color traces to the right of each network. Credit: Ryan Canolty, UC Berkeley

When it comes to conducting complex tasks, it turns out that the brain needs rhythm, according to researchers at the University of California, Berkeley.

Specifically, cortical rhythms, or oscillations, can effectively rally groups of neurons in widely dispersed regions of the brain to engage in coordinated activity, much like a conductor will summon up various sections of an orchestra in a symphony.

Even the simple act of catching a ball necessitates an impressive coordination of multiple groups of neurons to perceive the object, judge its speed and trajectory, decide when it’s time to catch it and then direct the muscles in the body to grasp it before it whizzes by or drops to the ground.

Until now, neuroscientists had not fully understood how these neuron groups in widely dispersed regions of the brain first get linked together so they can work in concert for such complex tasks.

The UC Berkeley findings are to be published the week of Sept. 20 in the online early edition of the journal Proceedings of the National Academy of Sciences.

“One of the key problems in neuroscience right now is how you go from billions of diverse and independent neurons, on the one hand, to a unified brain able to act and survive in a complex world, on the other,” said principal investigator Jose Carmena, UC Berkeley assistant professor at the Department of Electrical Engineering and Computer Sciences, the Program in Cognitive Science, and the Helen Wills Neuroscience Institute. “Evidence from this study supports the idea that neuronal oscillations are a critical mechanism for organizing the activity of individual neurons into larger functional groups.”

The idea behind anatomically dispersed but functionally related groups of neurons is credited to neuroscientist Donald Hebb, who put forward the concept in his 1949 book “The Organization of Behavior.”

“Hebb basically said that single neurons weren’t the most important unit of brain operation, and that it’s really the cell assembly that matters,” said study lead author Ryan Canolty, a UC Berkeley postdoctoral fellow in the Carmena lab.

It took decades after Hebb’s book for scientists to start unraveling how groups of neurons dynamically assemble. Not only do neuron groups need to work together for the task of perception – such as following the course of a baseball as it makes its way through the air – but they then need to join forces with groups of neurons in other parts of the brain, such as in regions responsible for cognition and body control.

At UC Berkeley, neuroscientists examined existing data recorded over the past four years from four macaque monkeys. Half of the subjects were engaged in brain-machine interface tasks, and the other half were participating in working memory tasks. The researchers looked at how the timing of electrical spikes – or action potentials – emitted by nerve cells was related to rhythms occurring in multiple areas across the brain.

Among the squiggly lines, patterns emerged that give literal meaning to the phrase “tuned in.” The timing of when individual neurons spiked was synchronized with brain rhythms occurring in distinct frequency bands in other regions of the brain. For example, the high-beta band – 25 to 40 hertz (cycles per second) – was especially important for brain areas involved in motor control and planning.

“Many neurons are thought to respond to a receptive field, so that if I look at one motor neuron as I move my hand to the left, I’ll see it fire more often, but if I move my hand to the right, the neuron fires less often,” said Carmena. “What we’ve shown here is that, in addition to these traditional ‘external’ receptive fields, many neurons also respond to ‘internal’ receptive fields. Those internal fields focus on large-scale patterns of synchronization involving distinct cortical areas within a larger functional network.”

The researchers expressed surprise that this spike dependence was not restricted to the neuron’s local environment. It turns out that this local-to-global connection is vital for organizing spatially distributed neuronal groups.

“If neurons only cared about what was happening in their local environment, then it would be difficult to get neurons to work together if they happened to be in different cortical areas,” said Canolty. “But when multiple neurons spread all over the brain are tuned in to a specific pattern of electrical activity at a specific frequency, then whenever that global activity pattern occurs, those neurons can act as a coordinated assembly.”

The researchers pointed out that this mechanism of cell assembly formation via oscillatory phase coupling is selective. Two neurons that are sensitive to different frequencies or to different spatial coupling patterns will exhibit independent activity, no matter how close they are spatially, and will not be part of the same assembly. Conversely, two neurons that prefer a similar pattern of coupling will exhibit similar spiking activity over time, even if they are widely separated or in different brain areas.
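A small simulation makes the idea of frequency-selective “tuning in” concrete. In the sketch below (purely illustrative, not the analysis used in the study), two model neurons each fire preferentially at a particular phase of a different rhythm, and a phase-locking value shows that each one is coupled to its own frequency while remaining essentially independent of the other.

```python
import numpy as np

# Two simulated neurons, each phase-locked to a different rhythm (10 Hz vs.
# 30 Hz; frequencies chosen arbitrarily).  The phase-locking value (PLV) of
# each spike train to each rhythm shows that the coupling is frequency-specific.

rng = np.random.default_rng(1)
fs, T = 1000, 20.0                      # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)

phase_10 = 2 * np.pi * 10 * t           # phase of a 10 Hz rhythm
phase_30 = 2 * np.pi * 30 * t           # phase of a 30 Hz rhythm

def spikes_locked_to(phase, base_rate=5.0, depth=0.9):
    """Poisson-like spikes whose rate peaks at phase = 0 of the given rhythm."""
    rate = base_rate * (1 + depth * np.cos(phase))      # spikes per second
    return rng.random(t.size) < rate / fs               # boolean spike train

def plv(spike_mask, phase):
    """Phase-locking value: length of the mean phase vector at spike times."""
    return np.abs(np.mean(np.exp(1j * phase[spike_mask])))

neuron_a = spikes_locked_to(phase_10)   # "tuned in" to the 10 Hz pattern
neuron_b = spikes_locked_to(phase_30)   # "tuned in" to the 30 Hz pattern

print("neuron A: PLV to 10 Hz =", round(plv(neuron_a, phase_10), 2),
      "| PLV to 30 Hz =", round(plv(neuron_a, phase_30), 2))
print("neuron B: PLV to 10 Hz =", round(plv(neuron_b, phase_10), 2),
      "| PLV to 30 Hz =", round(plv(neuron_b, phase_30), 2))
```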

“It is like the radio communication between emergency first responders at an earthquake,” Canolty said. “You have many people spread out over a large area, and the police need to be able to talk to each other on the radio to coordinate their action without interfering with the firefighters, and the firefighters need to be able to communicate without disrupting the EMTs. So each group tunes into and uses a different radio frequency, providing each group with an independent channel of communication despite the fact that they are spatially spread out and overlapping.”

The authors noted that this local-to-global relationship in brain activity may prove useful for improving the performance of brain-machine interfaces, or lead to novel strategies for regulating dysfunctional brain networks through electrical stimulation. Treatment of movement disorders through deep brain stimulation, for example, usually targets a single area. This study suggests that gentler rhythmic stimulation in several areas at once may also prove effective, the authors said.

Provided by University of California – Berkeley

PhysOrg.com http://www.physorg.com/news204220208.html

Categories: Neuroscience

A Single Neuron Can Change the Activity of the Whole Brain

September 21, 2010

The pulsing of a single neuron can switch a brain’s waves from the equivalent of a big ocean swell to ripples on a pond, according to new research from Howard Hughes Medical Institute investigator Yang Dan of the University of California, Berkeley.


The study reveals important new information about how the brain controls large-scale activity patterns and suggests that an individual cell has more influence than previously thought. The findings, published in the May 1, 2009, issue of the journal Science, could ultimately shed light on how chaotic brain patterns can lead to sleep disorders such as sleepwalking.

Brain cells use electrical pulses to talk with one another and guide functions ranging from heart rate and breathing to decision-making and navigation. Like the din of a crowd, the chatter of 100 billion neuronal cells in the human brain creates larger patterns of activity commonly called brain waves.

These patterns reveal the brain’s general state of arousal. For instance, large, slow brain waves that are synchronized throughout the brain are indicative of deep sleep. “Many neurons are doing the same thing at the same time,” says Dan. During so-called rapid eye movement (REM) sleep, on the other hand, different brain areas are less synchronized, firing in smaller and more frequent oscillations. And in an awake person, the brain broadcasts a rapid, uncoordinated pattern.

Dan and her colleagues wanted to understand how large-scale wave patterns influence the connection between two neurons. They knew that neuronal connections could strengthen or weaken over time, and these changes seem to underlie learning and memory. They wondered whether the overall pattern of brain activity altered nerve cells’ ability to change their connection strength.

Studying anesthetized rats, they used one electrode to spur a neuron to fire rapidly and used another electrode nearby to activate the local neuronal connections. A third electrode was used to pick up the larger pattern emitted by all the neurons in the area. They wanted the overall brain state to remain constant during the experiment, but instead found that tickling one neuron could cause the entire brain state to change.

“Initially, this was very inconvenient,” says Dan. But then the researchers realized that the phenomenon deserved more attention. Looking more closely, they verified that a neuron firing at high frequency could switch the brain from a “non-REM pattern” of activity to a “REM pattern,” and vice versa.

The result was counterintuitive. “Every neuron makes connections to roughly 1,000 other neurons, but most of those are quite weak,” says Dan. A target cell won’t respond unless many, many neurons that connect to it fire at the same time, and therefore, she says, it is surprising that a single neuron could change the activity of the whole brain. “Single neurons have more weight than we used to think,” she says.
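A back-of-the-envelope calculation shows why this is surprising: with weak synapses, a single input barely moves the membrane potential, so dozens of coincident inputs are normally required to make a cell fire. The values below are generic textbook-style numbers, not measurements from Dan’s experiments.

```python
# How many weak, synchronous inputs does it take to fire a typical cortical cell?
v_rest, v_thresh = -70.0, -55.0   # resting and threshold membrane potential (mV)
epsp = 0.5                        # depolarization per weak synaptic input (mV)

needed = (v_thresh - v_rest) / epsp
print(f"~{needed:.0f} synchronous weak inputs are needed to reach threshold")
# Inputs that arrive out of sync decay away before they can sum, which is why a
# lone presynaptic neuron normally cannot make its target spike on its own.
```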

Dan doesn’t yet know how one cell could exert such power. The researchers had to repeatedly and rapidly fire a cell to cause the pattern to switch, so they might be emulating the effect of many cells firing at once. A neuron doesn’t normally fire in that way, so it is an open question whether the activity of a single neuron could change overall brain pattern under normal circumstances.

The findings add a new twist to how brain patterns are established. Researchers know that certain brain structures, such as the hypothalamus and the brain stem, play a part in setting the pace of global brain activity. In this study, Dan and her team were tickling brain cells in a different area: the cortex, the thin sheet of neurons on the surface of the brain involved in such abilities as moving and seeing.

Dan isn’t certain how cells in the cortex might control brain state, but she posits that signaling there could link back to the thalamus and spur it to set up a new pattern. “We know that a lot of circuits are involved in controlling brain state,” says Dan. “We’re saying that cortex is also part of that loop.”

By providing new information about how brain states are controlled, the study might ultimately lead to new knowledge about what causes certain sleep disorders. “In sleepwalking, there is a mixed-up boundary between slow-wave sleep and the awake state,” says Dan. “Your muscles move, but you aren’t consciously aware of your surroundings.” Understanding the circuitry that establishes brain states could ultimately reveal how that mixed-up situation is established.

Next, Dan wants to study animals that are naturally awake or sleeping, rather than anesthetized, to see if under normal conditions, a single neuron or a few neurons really can turn the tide on the entire brain.

Provided by Howard Hughes Medical Institute

PhysOrg.com http://www.physorg.com/news160407260.html

Categories: Neuroscience

The neural basis of déjà vu

September 20, 2010

Reading a book for the second time can be an enlightening experience. At the same time, aspects of this experience can be confusing. On a second visit to a work previously read, I suspect that everyone plays, to some degree, a game of trying to determine which parts of the story they remember well and which parts they completely forgot. But there are also parts that lie somewhere in the middle; it is these parts that boggle our minds by leaving us uncertain of whether or not the details are familiar to us. Perhaps a nuance in the storyline that strikes you as familiar is something you actually skimmed over and ignored the first time you read the book. You have some awareness of your ignorance and begin to question your feeling of familiarity.

This clash of evaluations lies at the heart of déjà vu, the experience of recognizing a situation as familiar while simultaneously being aware that the situation may not have actually occurred in the past. Chris Moulin and Akira O’Connor, researchers who have attempted to study déjà vu in their laboratory, have recently published a paper outlining the current state and challenges of scientific research on this inherently subjective phenomenon. They discuss two broad categories of recent research: déjà vu in clinical populations (e.g. associations with epilepsy and dementia), and déjà vu in nonclinical populations.

Importantly, Moulin and O’Connor point out that these categories may be distinct, and that caution should be exercised when making comparisons between the two. A lot of research on déjà vu in clinical populations is not actually a study of déjà vu, but of a slightly different experience called déjà vécu (also known as ‘recollective confabulation’) in older adults with dementia. Instances of déjà vécu involve inappropriate feelings of familiarity, as in déjà vu, but the feelings are not necessarily accompanied by an awareness that the familiarity is inappropriate. The validity of extending evidence from studies of déjà vécu to the casual experience of déjà vu is questionable.

However, there has been a movement in experimental cognitive psychology toward studying déjà vu by generating the phenomenon in nonclinical populations. These studies use techniques such as hypnotic suggestion and familiarity questionnaires about images that are previously shown or not shown to subjects. There are few studies using these techniques, and the applicability to déjà vu experiences in everyday life is still being questioned.

So a solid scientific theory of déjà vu is still nonexistent. But there have been some recent neuroscientific investigations that have shed light on the neural basis of déjà vu. Deep brain stimulation and brain lesion studies both implicate areas in the mesial temporal cortex in the generation of déjà vu. Moulin and O’Connor argue that this doesn’t necessarily mean we can label this region as the ‘déjà vu cortex’ of the brain; rather, if we are to make progress in understanding déjà vu in the brain, we should examine how mesial temporal structures interact with whole neural networks during instances of déjà vu. For example, the authors hypothesize that “mesial temporal structures may aberrantly indicate a sensation of familiarity despite the rest of the hippocampo-cortical network indicating the overarching nonrecognition state that ultimately presides.”

In other words, when you’re re-reading a book or article that was edited with a minor detail after you read it the first time, your mesial temporal regions are telling you that the minor detail is familiar, but the rest of your brain is telling you that you never read that minor detail the first time. What’s happening in the rest of the brain, and why the brain decides to confuse you like this in the first place are questions for further research. That means that in the labs of Moulin, O’Connor and déjà vu researchers alike, it will be, in the famous words of Yogi Berra, “déjà vu all over again.”