Archive

Posts Tagged ‘vision’

Memories May Skew Visual Perception

December 16, 2012 3 comments

Taking a trip down memory lane while you are driving could land you in a roadside ditch, new research indicates.

Vanderbilt University psychologists have found that our visual perception can be contaminated by memories of what we have recently seen, impairing our ability to properly understand and act on what we are currently seeing.

“This study shows that holding the memory of a visual event in our mind for a short period of time can ‘contaminate’ visual perception during the time that we’re remembering,” Randolph Blake, study co-author and Centennial Professor of Psychology, said.

“Our study represents the first conclusive evidence for such contamination, and the results strongly suggest that remembering and perceiving engage at least some of the same brain areas.” Read more…

Researchers Explore How the Brain Perceives Direction and Location

December 16, 2012 Leave a comment

The Who asked “who are you?” but Dartmouth neurobiologist Jeffrey Taube asks “where are you?” and “where are you going?” Taube is not asking philosophical or theological questions. Rather, he is investigating nerve cells in the brain that function in establishing one’s location and direction.

Taube, a professor in the Department of Psychological and Brain Sciences, is using microelectrodes to record the activity of cells in a rat’s brain that make possible spatial navigation—how the rat gets from one place to another—from “here” to “there.” But before embarking to go “there,” you must first define “here.”

Survival Value

“Knowing what direction you are facing, where you are, and how to navigate are really fundamental to your survival,” says Taube. “For any animal that is preyed upon, you’d better know where your hole in the ground is and how you are going to get there quickly. And you also need to know direction and location to find food resources, water resources, and the like.”

Not only is this information fundamental to your survival; knowing your spatial orientation at a given moment is important in other ways as well. Taube points out that it is a sense, or skill, that you tend to take for granted because you keep track of it subconsciously. “It only comes to your attention when something goes wrong, like when you look for your car at the end of the day and you can’t find it in the parking lot,” says Taube. Read more…

The Impossible Staircase in Our Heads: How We Visualize the World Around Us

March 28, 2012 1 comment

Our interpretation of the world around us may have more in common with the impossible staircase illusion than with the real world, according to research published today in the open access journal PLoS ONE.

A ‘Penrose stairs’ optical illusion, or impossible staircase. Image adapted from public domain image shared by Sakurambo on Wikimedia

The study, which was funded by the Wellcome Trust, suggests that we do not hold a three-dimensional representation of our surroundings in our heads as was previously thought.

Artists, such as Escher, have often exploited the paradoxes that emerge when a 3D scene is depicted by means of a flat, two-dimensional picture. In Escher’s famous picture ‘Waterfall’, for example, it is impossible to tell whether the start of the waterfall is above or below its base.

Paradoxes like this can be generated in a drawing, but no such structure can actually be built in three dimensions. The illusion is possible because drawings of 3D scenes are inherently ambiguous: there is no one-to-one relationship between points in the picture and 3D locations in space.
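To see why a flat picture is inherently ambiguous, consider pinhole (perspective) projection: every point along a viewing ray lands on the same image point, so depth is lost. A minimal Python sketch with made-up coordinates and focal length:

```python
# Pinhole projection maps many 3D points to one 2D image point.
# Every point along a viewing ray (here, scalar multiples of one
# point) lands on the same pixel, which is why a flat drawing
# cannot pin down depth uniquely.

def project(point, focal_length=1.0):
    """Perspective projection of a 3D point (x, y, z) onto the image plane."""
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

near = (1.0, 2.0, 4.0)
far = (2.0, 4.0, 8.0)   # twice as far along the same ray from the eye

print(project(near))  # (0.25, 0.5)
print(project(far))   # (0.25, 0.5) -- same image point, different depths
```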

Most theories of 3D vision and how we represent space in our visual system assume that we generate a one-to-one 3D model of space in our brains, where each point in real space maps to a unique point in our model. However, there is an ongoing debate about whether this is really the case.

To test this idea, researchers at the University of Reading placed participants wearing a virtual reality headset in a virtual room in which they had to judge which of two objects was the nearest. On some occasions, the size of the room was increased four-fold – previous research by the team showed that participants fail to notice this expansion.

In this new study, the researchers found that people’s judgement of the relative depth of objects depended on the order in which the objects were compared. Although the results are readily explained by the expansion of the room, the participants had no idea that the room had changed at any stage during the experiment; it was the properties of this seemingly stable perception that the experiment tested.

Dr Andrew Glennerster from the University of Reading, who led the study, explains: “In the impossible staircase illusion, you cannot tell whether the back corner is higher or lower than the front one as it depends which route you take to get there. The same is true, we find, in our task. This means that our own internal representations of space must be rather like Escher’s paradoxes, with no one-to-one relationship to real space.”

“Even when the size of the room increases four-fold, people think they are in a stable room throughout the experiment. Their interpretation of the room does not update itself when the room itself changes.

“Does it make sense for their representation of the room to have 3D coordinates, as a proper staircase would? No – there is no way to write down the coordinates of the objects that could explain the judgements people made. Visual space – the internal representation – is much more like the paradoxical staircase than a physically realisable model.”
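One way to appreciate the claim that no set of coordinates could explain the judgements: any consistent 3D model would require the “nearer than” relation between objects to be free of cycles. The toy Python sketch below, using hypothetical judgements rather than the study’s data, checks exactly that:

```python
# If "A is nearer than B" judgments contain a cycle, no assignment of
# depth coordinates can explain them -- the hallmark of a "broken",
# Escher-like internal representation of space.

def consistent_depths(judgments):
    """Return True if the 'nearer than' pairs admit a consistent depth
    ordering, i.e. the directed graph they form is acyclic."""
    graph = {}
    for nearer, farther in judgments:
        graph.setdefault(nearer, set()).add(farther)
        graph.setdefault(farther, set())
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / in progress / done
    state = {node: WHITE for node in graph}

    def has_cycle(node):                  # depth-first search
        state[node] = GREY
        for nxt in graph[node]:
            if state[nxt] == GREY or (state[nxt] == WHITE and has_cycle(nxt)):
                return True
        state[node] = BLACK
        return False

    return not any(state[n] == WHITE and has_cycle(n) for n in graph)

# Order-dependent judgments like those in the study can close a loop:
judgments = [("A", "B"), ("B", "C"), ("C", "A")]  # A < B, B < C, yet C < A
print(consistent_depths(judgments))  # False: no coordinates fit
```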

Source:

Wellcome Trust press release
Image Source: NeuroscienceNews.com image adapted from a public domain image credited to Sakurambo on Wikimedia.
Original Research: “A demonstration of ‘broken’ visual space” by Svarverud E et al., to appear in PLoS ONE, 2012.

The Neural Basis of Synesthesia

October 18, 2010 Leave a comment

Wikipedia has a page on the neural basis of synesthesia, but a new study in press from Vilayanur S. Ramachandran’s group, not yet described there, provides interesting insights.

Synesthesia is a neurological condition in which affected individuals experience one sense (e.g. hearing) as another sense (e.g. visual colours). Ramachandran’s latest study investigated grapheme-colour synesthetes who experience specific colours when they view specific graphemes (i.e., letters and numbers). The results demonstrate that two brain areas – for grapheme and colour representation respectively – are activated at virtually the same time in the brains of synesthetes who are viewing letters and numbers. On the other hand, normal controls viewing the same thing exhibit activity in the grapheme region but not the colour region.

This is the first study of synesthesia to demonstrate simultaneous activation of the two brain areas, known as the posterior temporal grapheme area (PTGA) and colour area V4. The finding was made possible because the researchers used a neuroimaging technique called magnetoencephalography (MEG) to measure the weak magnetic fields emitted by specific areas of the brain while the subjects viewed graphemes. Compared to other neuroimaging techniques, such as fMRI and EEG, MEG offers the best combination of temporal and spatial precision in measuring brain activation.
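The logic of the MEG result is essentially a latency comparison: estimate when each area’s response first rises above its baseline noise, then ask whether the two onsets coincide. A minimal Python illustration with simulated time courses standing in for the real MEG data:

```python
# Latency comparison on simulated (not real) MEG responses: estimate
# when each area's signal first exceeds baseline mean + 5 SD, then
# compare the onsets of the grapheme area and colour area V4.

import numpy as np

def onset_latency_ms(signal, times_ms, n_baseline, n_sd=5.0):
    """First time the signal exceeds baseline mean + n_sd * baseline SD."""
    base = signal[:n_baseline]
    threshold = base.mean() + n_sd * base.std()
    above = np.nonzero(signal > threshold)[0]
    return times_ms[above[0]] if above.size else None

times = np.arange(-100, 400)        # ms relative to grapheme onset
rng = np.random.default_rng(0)

def fake_response(peak_ms):         # toy evoked response plus noise
    bump = np.exp(-((times - peak_ms) / 30.0) ** 2)
    return bump + 0.02 * rng.standard_normal(times.size)

grapheme_area = fake_response(140)
v4 = fake_response(145)             # near-simultaneous, as in synesthetes

n_base = 100                        # samples before stimulus onset
print(onset_latency_ms(grapheme_area, times, n_base),
      onset_latency_ms(v4, times, n_base))
```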

If you read the Wikipedia page, you know that there are two main theories that attempt to explain how synesthesia occurs in the brain: the cross-activation theory and the disinhibited feedback theory. Let’s call them Theory 1 and Theory 2 for simplicity. Theory 1 posits that the grapheme and colour brain areas are ‘hyper-connected’, such that activity in the grapheme area evoked by viewing a letter or number immediately leads to activity in the colour area and conscious perception of colour. Theory 2 maintains that there are ‘executive’ brain areas that control the communication between the grapheme and colour areas, and that in synesthetes this control is disrupted. To reiterate, Theory 1 says that synesthete brains are anatomically different from normal brains, whereas Theory 2 says that the two brains are anatomically the same but act differently.

The results of Ramachandran’s group support Theory 1, the cross-activation theory, since this model predicts that the colour and grapheme areas should be activated at roughly the same time in synesthetes looking at graphemes.

This is perhaps the strongest evidence for the cross-activation theory of synesthesia to date. But to complicate things, Ramachandran’s group proposed a new theory called the ‘cascaded cross-tuning model’, which is essentially a refinement of the cross-activation model (let’s call it Theory 1.1).

According to Theory 1.1, when a synesthete views a number, a cascade of activations leads to perception of a colour. First, a subcomponent of the grapheme area responds to features of the number (e.g. the “o” that makes up the top of the number 9). This leads to activity in other subcomponents of the grapheme area representing the possible numbers that the feature could be part of (e.g. the “o” could be a component of the numbers 6, 8, or 9), as well as in the colour area V4. At this point, however, colour is not yet consciously perceived. Next, when the grapheme area settles on the number 9 (based on monitoring by other brain areas), the corresponding activity in V4 is triggered, leading to conscious perception of the colour associated with the number 9.
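As a toy illustration of this two-stage idea (my own sketch, with hypothetical grapheme-colour bindings, not the authors’ model):

```python
# Cascaded cross-tuning, schematically: a visual feature first activates
# every candidate grapheme and weakly primes their colours in V4, but a
# colour becomes conscious only once a single grapheme wins.

FEATURE_TO_CANDIDATES = {"top_loop": {"6", "8", "9"}}  # the "o" atop a 9
GRAPHEME_TO_COLOUR = {"6": "green", "8": "blue", "9": "red"}  # hypothetical

def perceive(feature, resolved_grapheme):
    candidates = FEATURE_TO_CANDIDATES[feature]
    # Stage 1: all candidate graphemes and their V4 colours co-activate,
    # but nothing is consciously perceived yet.
    primed_colours = {GRAPHEME_TO_COLOUR[g] for g in candidates}
    # Stage 2: other brain areas settle on one grapheme; its colour wins.
    if resolved_grapheme in candidates:
        return GRAPHEME_TO_COLOUR[resolved_grapheme], primed_colours
    return None, primed_colours

conscious, primed = perceive("top_loop", "9")
print(conscious, primed)  # red, with green/blue/red transiently primed
```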

Cool theory? Cool theory.

Note, however, that it only applies to ‘projector’ synesthetes who see colours in the outside world when they see numbers, but not ‘associator’ synesthetes who perceive the colours in the “mind’s eye.” Also, it doesn’t yet apply to other forms of synesthesia, such as acquired synesthesias (e.g. synesthesia for pain).

Yeah, it’s only a matter of time before Theory 1.2 takes over.

Reference:
Brang D, Hubbard EM, Coulson S, Huang M, & Ramachandran VS (2010). Magnetoencephalography reveals early activation of V4 in grapheme-color synesthesia. NeuroImage. PMID: 20547226

Watching a Living Brain in the Act of Seeing – With Single-Synapse Resolution

September 29, 2010 Leave a comment

Dendrites of a nerve cell in the brain branch like a tree. Left: a patch-clamp pipette injects fluorescent dye into the cell. (Credit: Image courtesy of Technische Universitaet Muenchen)

Pioneering a novel microscopy method, neuroscientist Arthur Konnerth and colleagues from the Technische Universitaet Muenchen (TUM) have shown that individual neurons carry out significant aspects of sensory processing: specifically, in this case, determining which direction an object in the field of view is moving. Their method makes it possible for the first time to observe individual synapses, nerve contact sites that are just one micrometer in size, on a single neuron in a living mammalian brain.

Focusing on neurons known to play a role in processing visual signals related to movement, Konnerth’s team discovered that an individual neuron integrates inputs it receives via many synapses at once into a single output signal — a decision, in essence, made by a single nerve cell. The scientists report these results in the latest issue of the journal Nature. Looking ahead, they say their method opens a new avenue for exploration of how learning functions at the level of the individual neuron.

When light falls on the retina of the human eye, it hits 126 million sensory cells, which transform it into electrical signals. Even the smallest unit of light, a photon, can stimulate one of these sensory cells. As a result, enormous amounts of data have to be processed for us to be able to see. While the processing of visual data starts in the retina, the finished image only arises in the brain or, to be more precise, in the visual cortex at the back of the cerebrum. Scientists working with Arthur Konnerth — professor of neurophysiology at TUM and Carl von Linde Senior Fellow at the TUM Institute for Advanced Study — are interested in a certain kind of neuron in the visual cortex that fires electrical signals when an object moves in front of our eyes — or the eyes of a mouse.

When a mouse is shown a horizontal bar pattern in motion, specific neurons in its visual cortex consistently respond, depending on whether the movement is from bottom to top or from right to left. The impulse response pattern of these “orientation” neurons is already well known. What was not previously known, however, is what the input signal looks like in detail. This was not easy to establish, as each of the neurons has a whole tree of tiny, branched antennae, known as dendrites, at which hundreds of other neurons “dock” with their synapses.

To find out more about the input signal, Konnerth and his colleagues observed a mouse in the act of seeing, with resolution that goes beyond a single nerve cell to a single synapse. They refined a method called two-photon fluorescence microscopy, which makes it possible to look up to half a millimeter into brain tissue and view not only an individual cell, but even its fine dendrites. In tandem with this microscopic probe, they recorded electrical signals from individual dendrites of the same neuron using tiny glass pipettes (the patch-clamp technique). “Up to now, similar experiments have only been carried out on cultured neurons in Petri dishes,” Konnerth says. “The intact brain is far more complex. Because it moves slightly all the time, resolving individual synaptic input sites on dendrites was extremely difficult.”

The effort has already rewarded the team with a discovery. They found that in response to differently oriented motions of a bar pattern in the mouse’s field of vision, an individual orientation neuron receives input signals from a number of differently oriented nerve cells in its network of connections but sends only one kind of output signal. “And this,” Konnerth says, “is where things get really exciting.” The orientation neuron only sends output signals when, for example, the bar pattern moves from bottom to top. Evidently the neuron weighs the various input signals against each other and thus reduces the glut of incoming data to the most essential information needed for clear perception of motion.
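A toy sketch of that weighing-up, assuming simple rectified cosine tuning and a made-up threshold (a schematic, not the authors’ model):

```python
# A neuron receives synaptic inputs tuned to many motion directions but
# fires only when the summed drive crosses threshold -- which, with these
# (made-up) weights, happens only for the preferred direction.

import math

SYNAPSE_TUNINGS = [0.0, 45.0, 90.0, 90.0, 135.0, 180.0]  # degrees; biased to 90
THRESHOLD = 2.0                                          # arbitrary units

def summed_drive(stimulus_direction):
    """Sum the inputs; each synapse responds by how well the stimulus
    matches its own tuning (rectified cosine tuning)."""
    total = 0.0
    for tuning in SYNAPSE_TUNINGS:
        response = math.cos(math.radians(stimulus_direction - tuning))
        total += max(0.0, response)
    return total

for direction in (0.0, 90.0, 180.0):
    drive = summed_drive(direction)
    print(f"{direction:5.1f} deg: drive = {drive:.2f}, spikes = {drive >= THRESHOLD}")
```

With these numbers only the 90-degree stimulus drives the cell past threshold (drive of about 3.41 versus about 1.71 for the other directions), mirroring how the orientation neuron reduces many mixed inputs to one clear output.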

In the future, Konnerth would like to extend this research approach to observation of the learning process in an individual neuron. Neuroscientists speculate that a neuron might be caught in the act of learning a new orientation. Many nerve endings practically never send signals to the dendritic tree of an orientation neuron. Presented with visual input signals that represent an unfamiliar kind of movement, formerly silent nerve endings may become active. This might alter the way the neuron weighs and processes inputs, in such a way that it would change its preferred orientation; and the mouse might learn to discern certain movements better or more rapidly. “Because our method enables us to observe, down to the level of a single synapse, how an individual neuron in the living brain is networked with others and how it behaves, we should be able to make a fundamental contribution to understanding the learning process,” Konnerth asserts. “Furthermore, because here at TUM we work closely with physicists and engineers, we have the best possible prospects for improving the spatial and temporal resolution of the images.”

This work was supported by grants from Deutsche Forschungsgemeinschaft (DFG) and Friedrich-Schiedel-Stiftung.

Provided by Technische Universitaet Muenchen.

Journal Reference:

Hongbo Jia, Nathalie L. Rochefort, Xiaowei Chen, Arthur Konnerth. Dendritic organization of sensory input to cortical neurons in vivo. Nature, 2010; 464 (7293): 1307 DOI: 10.1038/nature08947