What makes a landmark a landmark? - How active vision strategies help bees to process salient visual features for spatial learning

Research Areas: 

For visual homing, bees use salient objects that serve as landmarks. A behavioral approach indicates an active vision strategy that shapes their flight structure. To find out how landmarks are represented in the bee’s brain, the activity of single neurons in the visual motion pathway is recorded. First results indicate that landmarks affect the neuronal responses. The texture of landmarks seems to have a considerably smaller influence on these neurons than the position of the landmarks relative to the bee.


Methods and Research Questions: 

How can active sensing strategies simplify visual navigation, and what are the salient visual features that bees use for homing? It is clear that distinct objects in the vicinity of the goal location can act as landmarks and guide the insect’s return path. It is still unknown, however, which visual cues define a landmark under natural conditions and how landmarks are represented in the brain.

The structure of bee flights includes fast head and body turns (saccades), which largely confine rotational components of the optic flow pattern to brief intervals; between saccades, mainly translational optic flow components are perceived. In these phases distance estimation from motion parallax is possible – a prerequisite for visual navigation.
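The geometry behind this can be sketched in a few lines (a minimal illustration, not the project’s analysis code; the function name and numbers are assumptions): during pure translation at speed v, an object at azimuth θ relative to the direction of travel induces a retinal angular velocity ω = (v/d)·sin(θ), so its distance d can be recovered as v·sin(θ)/ω. During rotation this relation breaks down, which is why removing rotational flow matters.

```python
import math

def distance_from_parallax(v, theta, omega):
    """Estimate object distance from translational optic flow (motion parallax).

    v:     translation speed of the bee (m/s)
    theta: azimuth of the object relative to the flight direction (rad)
    omega: measured retinal angular velocity of the object (rad/s)

    For pure translation, omega = (v / d) * sin(theta); solve for d.
    """
    return v * math.sin(theta) / omega

# Example: flying at 0.5 m/s, an object abeam (theta = 90 deg) that
# drifts across the retina at 1 rad/s is 0.5 m away.
d = distance_from_parallax(v=0.5, theta=math.pi / 2, omega=1.0)  # d = 0.5
```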

In so-called learning flights, bees actively enhance the amount of pure translational optic flow to solve navigational tasks guided by salient objects that serve as landmarks. Our working hypothesis is that, by using such a flight and gaze strategy, bees are able to reduce the computational load on their nervous system.

The specific goal of the project is to understand how neuronal computations in the bee’s brain can form the basis of spatial learning and navigation and how this is linked to the bee’s behavior.

Shedding light on this intricate issue is likely to be relevant for the development of biologically inspired technical solutions in artificial autonomous navigation systems, which are confronted with the task of selecting salient and reliable visual features of the goal environment as landmarks.

Bumblebees are filmed at high temporal and spatial resolution while solving navigational tasks in an experimenter-defined environment. The camera images allow us to resolve the head movements of the bee. Using virtual reality software, the optic flow sequences are reconstructed as experienced from the ego-perspective of the bees during their visual navigation tasks.
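The core of such a reconstruction can be sketched with the standard spherical optic-flow model (a simplified illustration under stated assumptions; the project’s actual virtual-reality software is not shown here). For a viewing direction d on the unit sphere, the translational flow component scales with the inverse distance to the surface, while the rotational component does not – which is why pure rotation carries no distance information.

```python
import numpy as np

def optic_flow(d, T, omega, dist):
    """Optic flow vector at viewing direction d (unit vector) on the eye sphere.

    d:     viewing direction (unit 3-vector, bee-centered coordinates)
    T:     translational velocity of the bee (m/s)
    omega: angular velocity of the bee (rad/s)
    dist:  distance to the surface seen along d (m)

    Spherical flow model: the translational part is the component of -T
    perpendicular to d, scaled by 1/dist; the rotational part is -omega x d.
    """
    d = np.asarray(d, float)
    trans = -(np.asarray(T, float) - np.dot(T, d) * d) / dist
    rot = -np.cross(omega, d)
    return trans + rot

# Pure rotation (as during a saccade): the flow does not depend on distance,
# so nothing about the 3-D layout can be read from it.
f_near = optic_flow([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], 1.0)
f_far = optic_flow([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], 10.0)
```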

These optic flow sequences are then played back to tethered bumblebees while recording the intracellular activity of single neurons. This is done using an LED-based panoramic stimulus device.

In addition to the replay of original flight sequences, we present optic flow sequences based on manipulations of the original environment, for instance by changing the texture of environmental objects. In this way the significance of landmark features for the neuronal responses can be identified.



The fine structure of head and body saccades during learning flights is characterized by a large proportion of translatory sideward movements, which are especially important for distance estimation and thus for navigation. Additionally, the presence of objects guides the way to a learned location.
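Separating saccades from the intersaccadic, translation-dominated phases is the first step of such an analysis. A common approach is a simple angular-velocity threshold on the head yaw trace (an illustrative sketch; the function name and the threshold value are assumptions, not taken from the study):

```python
import numpy as np

def segment_saccades(yaw, dt, threshold=300.0):
    """Label each sample of a yaw-angle trace as saccadic or intersaccadic.

    yaw:       head yaw angle over time (degrees)
    dt:        sampling interval (s)
    threshold: yaw-velocity threshold (deg/s) above which a sample
               counts as saccadic (illustrative value)

    Returns a boolean array, True where the absolute yaw velocity
    exceeds the threshold.
    """
    vel = np.gradient(yaw) / dt  # yaw velocity in deg/s
    return np.abs(vel) > threshold

# Synthetic trace: straight flight, a fast 90-degree turn, straight again.
yaw = np.concatenate([np.zeros(10), np.linspace(0.0, 90.0, 5), np.full(10, 90.0)])
mask = segment_saccades(yaw, dt=0.01)  # True only around the turn
```

Everything outside the mask is an intersaccadic interval, i.e. the phase in which the optic flow is mainly translational and distance estimation is possible.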

Recordings of neuronal activity show that these landmarks also affect the responses of motion-processing neurons in the bee’s lobula. Strong rotations as experienced during saccadic head movements are represented in the neuronal signal as well. During phases in which the bee mainly experiences translational optic flow, object texture seems to have a minor influence on the neuronal representation of object features. This finding is in agreement with recent behavioral experiments in a parallel CITEC project (BeeNaFF).