You finally spot it: your phone peeking out from under a pillow on the couch across the room. Within a fraction of a second, you can pretty accurately pinpoint where it is in relation to other objects in the space.
New research by psychologists at the University of Toronto has determined the visual features your brain relies on to make these calculations.
“With both behavioural and neuroimaging findings, we find that people prioritize sharp edges and object boundaries more than other features like textures and surface information to process orientation,” says Seohee Han, a PhD student in the Department of Psychology.
These insights hold lessons for A.I. and computer-based vision systems, like those that power self-driving cars. They could lead to more robust, human-like visual processing systems that remain effective even in weather conditions that currently challenge these systems, such as rain, snow or fog.
Previous studies have examined how the visual system processes orientation by using simple stimuli like bars or stripe-like patterns in very controlled settings.
“But that’s not actually what we see in real life,” Han says. “We see very complex scenes in our surroundings. They are very cluttered, and we see multiple objects at a time.”
For the study, Han asked participants to judge the average orientation of patches of real-world photographs. She compared their judgements with computational estimates of orientation from contour-based and filter-based models, and found that participants’ judgements consistently matched the contour-based models, which prioritize object contours and boundaries.
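The contrast between the two kinds of models can be illustrated with a toy sketch. The code below is not the study’s actual models; it simply shows one way a "contour-weighted" orientation estimate can differ from a uniform one: both derive local orientation from image gradients, but the contour-weighted version lets strong edges dominate the average, loosely mimicking a model that prioritizes object boundaries.

```python
import numpy as np

def average_orientation(image, contour_weighted=True):
    """Estimate the dominant local orientation of an image patch, in degrees.

    Toy illustration (not the models from the study): local orientation
    comes from image gradients. With contour_weighted=True, pixels with
    large gradient magnitude (sharp edges) dominate the average, loosely
    mimicking a contour-based model; with False, every pixel counts
    equally, standing in for a filter-based model.
    """
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx)  # gradient direction at each pixel
    weights = magnitude if contour_weighted else np.ones_like(magnitude)
    # Orientation is 180-degree periodic, so average on the doubled angle.
    s = np.sum(weights * np.sin(2 * theta))
    c = np.sum(weights * np.cos(2 * theta))
    return np.degrees(0.5 * np.arctan2(s, c)) % 180

# A synthetic patch with one sharp vertical edge: the gradient points
# horizontally, so the estimated gradient orientation is 0 degrees.
patch = np.zeros((64, 64))
patch[:, 32:] = 1.0
print(average_orientation(patch))  # -> 0.0
```

In cluttered real-world photographs the two estimates diverge: textured regions contribute many weak, inconsistent orientations that pull an unweighted average around, while the contour-weighted estimate stays anchored to object boundaries.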
This finding is consistent with fMRI data from the Natural Scenes Dataset, which Han found to be better explained by models that prioritize contour structure in visual processing.
Han’s work is detailed in “Contours drive distinct orientation selectivity in the human visual system,” published in Scientific Reports with Dirk Bernhardt-Walther, a professor in the Faculty of Arts and Science.
This research can also help guide studies that look further along the visual pathway.
“Orientation is like a building block,” says Han. “For human vision, we need orientation information to go to higher levels of visual processing and compute more advanced features, like symmetry.”
Funding Acknowledgement
This work was supported by a Connaught International Scholarship and a Natural Sciences and Engineering Research Council (NSERC) Discovery Grant.
More Information
To learn more about this study or to speak to its authors, please contact:
Michael Pereira
Communications Officer, Department of Psychology, University of Toronto
psy.communications.officer@utoronto.ca