Scientific Frontline: "At a Glance" Summary
- Main Discovery: Visual motion patterns generated by eye movements are actively used by the brain to perceive depth and 3D space, contradicting the long-held belief that this motion is mere "noise" the brain must subtract.
- Methodology: Researchers formulated a theoretical framework predicting human perception during eye movements and validated it using 3D virtual reality tasks in which participants estimated the direction and depth of moving objects while maintaining fixation on a single point.
- Key Data: Experimental results showed participants committed consistent, predictable patterns of errors in depth and motion estimation that closely matched the researchers' theoretical model, confirming that the brain processes rather than ignores this visual input.
- Significance: This finding fundamentally shifts the understanding of visual processing by demonstrating that the brain analyzes global image motion patterns to infer eye position relative to the environment and interpret spatial structure.
- Future Application: Findings could enhance virtual reality (VR) technology by factoring in how the eyes move relative to the scene, potentially reducing motion sickness caused by mismatches between displayed images and the brain's expectations.
- Branch of Science: Neuroscience, Visual Science, and Biomedical Engineering.
Contrary to long-standing beliefs, motion from eye movements helps the brain perceive depth—a finding that could enhance virtual reality.
When you go for a walk, how does your brain know the difference between a parked car and a moving car? This seemingly simple distinction is challenging because eye movements, such as the ones we make when watching a car pass by, make even stationary objects move across the retina—motion that has long been thought of as visual “noise” the brain must subtract out.
Now, researchers at the University of Rochester have discovered that instead of being meaningless interference, the visual motion of an image caused by eye movements helps us understand the world. The specific patterns of visual motion created by eye movements are useful to the brain for figuring out how objects move and where they are located in 3D space.
“The conventional idea has been that the brain needs to somehow discount, or subtract off, the image motion that is produced by eye movements, as this motion has been thought to be a nuisance,” says Greg DeAngelis, George Eastman Professor; professor in the Departments of Brain and Cognitive Sciences, Neuroscience, and Biomedical Engineering and in the Center for Visual Science; member of the Del Monte Institute for Neuroscience; and senior author of the new research, published in Nature Communications. “But we found that the visual motion produced by our eye movements is not just a nuisance variable to be subtracted off; rather, our brains analyze these global patterns of image motion and use this to infer how our eyes have moved relative to the world.”
The research team developed a new theoretical framework to predict how humans should perceive an object’s motion and depth during different types of eye movements. They tested these predictions by having participants view 3D virtual environments in which a target object moved through a scene while the participants kept their eyes fixed on a single point. In one task, participants estimated the target object’s direction of motion by rotating a dial to match a second object’s motion to the target’s. In a second task, which measured depth perception, participants reported whether the target object appeared nearer or farther than the fixation point. Across both tasks, the researchers found consistent, predictable patterns of errors that closely matched the theoretical predictions.
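To make the underlying geometry concrete, the following minimal Python sketch is illustrative only; it is not the authors' model. It uses the standard pinhole motion-field equations from the computational vision literature (Longuet-Higgins and Prazdny, 1980) to show why the global flow pattern is informative: the component of image motion produced by eye rotation is the same at every depth, while the component produced by translation scales with inverse depth. The function name and all numbers below are hypothetical.

def retinal_motion_field(x, y, Z, T, omega):
    """Image velocity (u, v) at normalized image point (x, y) for a scene
    point at depth Z, given eye translation T = (Tx, Ty, Tz) and eye
    rotation omega = (wx, wy, wz); focal length normalized to 1."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    # Translational part: scales with 1/Z, so it carries depth information.
    u_t = (Tz * x - Tx) / Z
    v_t = (Tz * y - Ty) / Z
    # Rotational part: independent of Z, so a pure eye rotation imposes a
    # global motion pattern determined by the rotation alone.
    u_r = wx * x * y - wy * (1.0 + x ** 2) + wz * y
    v_r = wx * (1.0 + y ** 2) - wy * x * y - wz * x
    return u_t + u_r, v_t + v_r

# A pursuit-like eye rotation with no translation: the image motion is
# identical at every depth -- the "noise" the brain was assumed to discard.
for Z in (1.0, 2.0, 10.0):
    print(Z, retinal_motion_field(0.1, 0.0, Z, T=(0.0, 0.0, 0.0), omega=(0.0, 0.05, 0.0)))

Because only the translational component depends on depth, a visual system that can infer the rotational pattern, and hence how the eyes moved relative to the world, can recover 3D structure from what remains.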
“We show that the brain considers many pieces of information to understand the 3D structure of the world through vision, including the patterns of image motion caused by eye movements,” says DeAngelis. “Contrary to conventional ideas, the brain doesn’t ignore or suppress image motion produced by eye movement. Instead, it uses this image motion to understand a scene and accurately estimate an object’s motion and depth.”
This research has important implications for understanding visual perception, which underpins everyday activities like reading and recognizing faces. But it could also yield insights and new applications for visual technologies, such as virtual reality headsets.
“VR headsets don’t factor in how the eyes are moving relative to the scene when they compute the images to show to each eye. There may be a stark mismatch between the image motion that is shown to the observer in VR and what the brain is expecting to receive based on the eye movements that the observer is making,” says DeAngelis. This could be what causes some people to experience motion sickness while using a VR headset.
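As a purely illustrative back-of-the-envelope sketch (the velocities below are invented, not drawn from the study): during smooth pursuit, a stationary background should sweep across the retina at roughly the eye's rotational speed in the opposite direction, so a headset that renders the scene without accounting for that eye movement can present image motion the brain does not expect.

def expected_background_slip(pursuit_speed_deg_s):
    # During pursuit of a target, a stationary background should slip
    # across the retina opposite to the eye rotation.
    return -pursuit_speed_deg_s

pursuit_speed = 10.0                                # hypothetical eye velocity, deg/s
expected = expected_background_slip(pursuit_speed)  # -10 deg/s on the retina
displayed = -8.0                                    # hypothetical rendered slip, deg/s
print(f"mismatch: {displayed - expected:+.1f} deg/s of unexpected image motion")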
Funding: The National Institutes of Health supported this research.
Published in journal: Nature Communications
Title: Flexible computation of object motion and depth based on viewing geometry inferred from optic flow
Authors: Zhe-Xin Xu, Jiayi Pang, Akiyuki Anzai, and Gregory C. DeAngelis
Source/Credit: University of Rochester | Kelsie Smith Hayduk
Reference Number: ns020426_01
