Building a manual interaction database populated with physics-based models

Acronym: 
MINDA
Term: 
2008-03 to 2012-10
Research Areas: 
A
Abstract: 

MINDA is creating an incrementally growing database of manual interactions to put manual intelligence research on a firmer empirical basis. This involves studying manual interactions in humans using a multi-sensing approach. The database contains geometry information, tactile sensor information, vision information, and sound information. Combining these multimodal information sources allows us to build models that can help robots carry out complex tasks of the kind that humans perform with ease.
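
To make the multimodal contents more concrete, the sketch below shows one possible layout for a single database record. All names and array shapes here are illustrative assumptions, not MINDA's actual schema.

```python
from dataclasses import dataclass, field

import numpy as np

# Hypothetical layout for one captured manual interaction.
# Field names and shapes are illustrative assumptions only.
@dataclass
class InteractionRecord:
    subject_id: str              # anonymised participant identifier
    task_label: str              # e.g. "grasp-cup" or "open-jar"
    hand_poses: np.ndarray       # (T, J, 3): J hand joints tracked over T frames
    tactile: np.ndarray          # (T, S): readings from S tactile sensor cells
    video_frames: list = field(default_factory=list)  # per-frame image file paths
    audio_path: str = ""         # synchronised sound recording
    object_mesh: str = ""        # geometry of the manipulated object
```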


Methods and Research Questions: 

Deciding on the structure of the database involves answering several important scientific questions: How should manual interactions be represented for storage, comparison, and retrieval? What are suitable similarity measures for manual interactions? What are the elementary building blocks of a manual interaction? How do manual interactions motivated on the perceptual, control, and task levels differ? Answering these questions requires skills from both psychology and computer science.
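
To illustrate what one candidate similarity measure could look like (a sketch only, since choosing an appropriate measure is precisely one of the open questions above), dynamic time warping (DTW) compares two recordings of different lengths by optimally aligning them in time:

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two trajectories of shape (T, D).

    A frame could be, for example, a vector of joint angles; DTW is
    shown here only as one plausible candidate measure.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])    # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],       # skip a frame of a
                                 cost[i, j - 1],       # skip a frame of b
                                 cost[i - 1, j - 1])   # align the two frames
    return float(cost[n, m])
```

Richer measures would additionally have to weight the different modalities against each other, which is part of what makes the question non-trivial.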

While at the physics level almost all laws involved in manual interaction situations are known, this knowledge alone is sufficient to derive meaningful grasping and interaction strategies only in highly simplified and constrained situations. The following analogy with language is revealing: the physics of the vocal tract and its interaction with the surrounding air permits the implementation of a huge variety of languages. However, the commonalities of their structure cannot be derived from the physics alone; they require a deeper understanding of linguistic phenomena that become observable and meaningful only at higher levels of abstraction. A research agenda for studying manual intelligence therefore has to observe manual actions at a hierarchy of levels and analyse the resulting data in order to arrive at a comprehensive picture of the cognition enabling manual interaction and manual intelligence. An important entry point is the construction of a comprehensive database of manual interaction patterns in a variety of situations. While linguistic databases today exist in great variety, the construction of databases for manual interaction patterns is still largely in its infancy. We are currently developing a comprehensive and versatile database as a major cornerstone for manual interaction research.

The construction of such a database is intimately connected with the creation of sophisticated motion-capture facilities to observe manual interactions at high spatio-temporal resolution. This involves numerous technical challenges regarding data acquisition, the integration of different input channels, and the calibration and mutual registration of the involved modalities. Many of the associated questions turn out to be inseparable from key research questions connected with the observation and identification of highly articulated movements in the presence of occlusion and noise.
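
One standard building block for such mutual registration is a least-squares rigid alignment between two sensors' coordinate frames, assuming point correspondences such as calibration markers visible to both devices. The following sketch uses the well-known Kabsch/SVD method; it is an illustration, not the project's actual calibration pipeline.

```python
import numpy as np

def rigid_registration(p: np.ndarray, q: np.ndarray):
    """Least-squares rigid transform (R, t) mapping points p onto q.

    p, q: (N, 3) corresponding points observed by two sensors,
    e.g. marker positions in the mocap frame and the camera frame.
    """
    pc, qc = p.mean(axis=0), q.mean(axis=0)   # centroids
    H = (p - pc).T @ (q - qc)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # optimal rotation
    t = qc - R @ pc                           # optimal translation
    return R, t
```

Applying R and t to every point recorded by the first sensor expresses both data streams in a common coordinate frame, which is a precondition for fusing the modalities in the database.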


Outcomes: 

Connecting current research in robotics and cognitive science on the control of manual actions reveals mutually complementary ideas about the role of basic action units and their embedding into an overarching computational-cognitive architecture for synthesising complex manual actions. Pursuing this further requires complementing the current, strongly control- and physics-based approach to the synthesis of robot manual actions with an observation-driven approach, combining modern capture technology with advanced analysis methods to enable rich, multimodal recordings of human manual actions and to refine these into highly organised, multi-level representations. A database along these lines is an important step towards mapping the large body of interaction knowledge underlying and enabling the "manual intelligence" exhibited in human manual actions, and would constitute a valuable basis for shaping robot manual actions more closely according to our own abilities. Data captured during the course of the project is starting to be made available to the public and can be found here: http://opensource.cit-ec.de/projects/virtual-and-real-grasping.


Publications: