Visual Robot Navigation with Topological Maps

Acronym: 
VIRONA
Term: 
2008-05-01 to 2012-10-30
Research Areas: 
A
Abstract: 

Within this project, we develop navigation strategies based on a topological representation of space. Known places are characterized by a panoramic snapshot collected when the robot visited the place for the first time. For navigation between adjacent places, local navigation strategies are used. Depending on the robot's task, the topological map is enriched with task-relevant sensor information or an estimate of the robot's position. The developed strategies are tested in the context of visual navigation of an autonomous cleaning robot.


Methods and Research Questions: 

The main goal of this project is to develop parsimonious methods for visual long-range navigation of autonomous mobile robots. To this end, local navigation strategies are embedded into a framework for topological navigation. The topological representation of space can be enriched with further task-relevant information (e.g., position information or non-visual sensor data).

Topological maps are graph-like representations of the robot's environment. Places are stored as nodes of the graph; interrelations between places are represented by links. In purely topological maps, places are characterized solely by the sensory information (in our case a panoramic camera image with a full 360° horizontal field of view) collected when the robot visited the place for the first time. Depending on the application, task-relevant sensor information (such as obstacle or free-space information) or an estimate of the robot's current position can be attached to the nodes. Route planning can be achieved by standard graph-search algorithms: the route is decomposed into a chain of intermediate places, and the robot relies on local navigation strategies (developed in the companion project LOVIHO) to navigate between consecutive places along the chain. We consider this approach to have large potential, especially for mobile robot applications with limited on-board computation power.
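To make the map structure and the route-planning step concrete, the following Python sketch shows one possible realization. It is not the project's implementation: the class TopologicalMap, its method names, and the use of breadth-first search as the standard graph-search algorithm are illustrative assumptions.

    from collections import deque

    class TopologicalMap:
        """Minimal topological map: places are nodes, traversable
        transitions between adjacent places are undirected links."""

        def __init__(self):
            self.snapshots = {}   # place id -> panoramic snapshot taken at first visit
            self.links = {}       # place id -> set of adjacent place ids

        def add_place(self, place_id, snapshot):
            # Store the panoramic image captured when the place was first visited.
            self.snapshots[place_id] = snapshot
            self.links.setdefault(place_id, set())

        def add_link(self, a, b):
            # Record that local navigation between places a and b is possible.
            self.links[a].add(b)
            self.links[b].add(a)

        def plan_route(self, start, goal):
            """Standard graph search (here BFS) returning a chain of intermediate
            places; local navigation is then used between consecutive places."""
            queue = deque([start])
            parent = {start: None}
            while queue:
                node = queue.popleft()
                if node == goal:
                    route = []
                    while node is not None:
                        route.append(node)
                        node = parent[node]
                    return route[::-1]
                for neighbour in self.links[node]:
                    if neighbour not in parent:
                        parent[neighbour] = node
                        queue.append(neighbour)
            return None  # goal not reachable in the current map

    # Example: three places in a row, route from A to C via B.
    m = TopologicalMap()
    for pid in "ABC":
        m.add_place(pid, snapshot=None)  # snapshots omitted in this toy example
    m.add_link("A", "B")
    m.add_link("B", "C")
    print(m.plan_route("A", "C"))  # ['A', 'B', 'C']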

As a testbed, we apply the developed navigation methods to the vision-based navigation of an autonomous cleaning robot. For this task, it is essential to completely cover the accessible workspace while avoiding repeated coverage. The developed representations of space therefore serve as a basis for higher-level strategies for (i) planning and traveling along a lane parallel to its predecessor, (ii) planning a new segment of lanes at the frontier of an already explored area, (iii) detecting and avoiding obstacles, and (iv) planning and following paths to places in the map. A sketch of the underlying lane pattern is given below.
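As an illustration of strategy (i), the following sketch generates the start and end points of parallel lanes over an idealized rectangular, obstacle-free workspace. The function and parameter names (lane_waypoints, lane_width) are illustrative assumptions, not part of the project's software.

    def lane_waypoints(width, height, lane_width):
        """Start/end points of parallel lanes covering an idealized rectangular,
        obstacle-free workspace (a meander pattern). Each lane runs parallel to
        its predecessor, laterally offset by lane_width."""
        waypoints = []
        x = lane_width / 2.0          # centre line of the first lane
        going_up = True
        while x <= width:
            y_start, y_end = (0.0, height) if going_up else (height, 0.0)
            waypoints.append(((x, y_start), (x, y_end)))
            x += lane_width           # shift sideways onto the next parallel lane
            going_up = not going_up   # reverse driving direction on each lane
        return waypoints

    # Example: a 2 m x 3 m area cleaned with 0.5 m wide lanes.
    for start, end in lane_waypoints(2.0, 3.0, 0.5):
        print(start, "->", end)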

Beyond the navigation of an autonomous cleaning robot, the methods developed within this project can also be applied to the navigation of the walking robot HECTOR (http://www.cit-ec.de/research/MULERO).

Outcomes: 

In the course of this project, we investigated topological maps with partial and complete pose information as a spatial representation for covering a rectangular area by parallel lanes. The topological map with partial pose information contains an estimate of the robot's current orientation and an estimate of its current distance to a previous lane. For the topological map with complete pose information, the robot's position and orientation with respect to an external reference frame are estimated. The latter approach achieves better results than our method based on partial pose estimation and is therefore used as a building block for cleaning strategies that allow the robot to completely cover more complex workspaces. In order to avoid overlap between adjacent cleaning segments, we are currently investigating approaches to visual loop-closure detection, i.e., detecting whether the robot has already visited its current location. These approaches are based on comparisons of global image descriptors derived from panoramic images and on pixel-by-pixel comparisons of panoramic images.
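The pixel-by-pixel variant of such a comparison can be sketched as follows; the function names, the mean-squared-difference measure, and the fixed threshold are illustrative assumptions rather than the project's method. Because the images are panoramic, the comparison minimizes over horizontal (column) shifts to compensate for the unknown robot heading; the global-descriptor variant is not shown.

    import numpy as np

    def panorama_distance(img_a, img_b):
        """Smallest mean squared pixel difference over all horizontal shifts of a
        panoramic image; the column shift compensates for the unknown heading."""
        best = np.inf
        for shift in range(img_b.shape[1]):
            d = np.mean((img_a - np.roll(img_b, shift, axis=1)) ** 2)
            best = min(best, d)
        return best

    def detect_loop_closure(query, stored_snapshots, threshold):
        """Return the id of the stored place whose snapshot matches the query
        image, or None. 'threshold' is an illustrative tuning parameter."""
        best_id, best_d = None, np.inf
        for place_id, snapshot in stored_snapshots.items():
            d = panorama_distance(query.astype(float), snapshot.astype(float))
            if d < best_d:
                best_id, best_d = place_id, d
        return best_id if best_d < threshold else None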


Publications: