Vision and attention in the real world

Colloquium
Date: 20 April 2011
Begin time: 18:00
End time: 19:30
Room: Q2-101

Abstract

Human sensory systems seem particularly well adapted to processing real-world input. Yet most physiological and psychophysical studies employ simplified, and thus well-controllable, stimuli. I will present several lines of evidence that exemplify the usefulness of psychophysical data obtained under natural conditions for understanding human perception and performance in the real world.
In natural scenes, higher-order scene structure and objects, rather than low-level features, dominate stimulus-driven gaze allocation. In turn, standard bottom-up attention models predict performance in rapid object-detection tasks. Together, these findings demonstrate that attention and object recognition are tightly and bi-directionally coupled.
Laboratory measurements have some, but limited, predictive power for gaze allocation during free behavior in the real world. Experiments with a fully mobile, wearable eye-tracking device allow us to quantify various contributions to the differences between the laboratory and the real world – in particular head and body movements, implicit task sets (e.g., navigating difficult terrain), and the temporal continuity of natural visual input.
Since all perception requires inferring real-world sources from underconstrained input, we use multi-stable stimuli as a model for real-world perception under ambiguity. We argue that perception and decision-making recruit overlapping neural circuitry. In a similar vein, we demonstrate that action has a direct effect on perceptual representations, even when the stimulus remains unchanged.
In sum, our work stresses the necessity of using unbiased real-world data to adequately capture the interactions between action, cognition, and perception.