May I guide you? – Context-Aware Embodied Cooperative Systems in Virtual Environments

Acronym: 
MIGY
Term: 
2009-05 to 2012-10
Research Areas: 
C
Abstract: 

The project investigates how awareness and memory of goals and executed actions can improve the behavior and assistance of virtual humanoid agents in the collaborative exploration of information-enriched virtual environments. Its main setting places an embodied system together with a human in a virtual environment and lets them reach a goal in cooperation. The system takes the role of an assisting partner: it keeps track of the human's and its own movements and actions in the environment, it can recommend actions, and it can be asked about past events.

 

Methods and Research Questions: 

How can awareness and memory of goals and executed actions improve the behavior and assistance of virtual agents in collaborative exploration of information-enriched virtual environments?

Virtual humans have long been a topic in situated communication research, yet there is still a strong need to equip such embodied systems with cognitive faculties such as self-perception, self-knowledge, and memory in order to make them more person-like cooperation partners. While other research aims to enhance virtual agents with web-based, more general knowledge, the MIGY project provides cognitive guidance based on situated memory and goal awareness. The underlying assumption is that integrating context awareness with a memory of the agent's own movements and actions makes such embodied systems more human-like and improves their acceptance as cooperative partners. The project therefore investigates how awareness and memory of goals and executed actions can improve the behavior and assistance of virtual humanoid agents in the collaborative exploration of information-enriched virtual environments. In its main setting, an embodied system and a human are placed together in a virtual environment and have to reach a goal in cooperation; the system acts as an assisting partner that keeps track of the human's and its own movements and actions, can recommend actions, and can be asked about past events.

Steps toward a context-aware virtual assistant with a goal and action memory are the design and implementation of dynamic memory data structures and symbolic representations of events and visited places in a spatially arranged knowledge resource. On top of this memory, cognitive mechanisms for a basic sensing of the human's intentions are developed; they provide situated guidance in exploring the knowledge resources and adapt the cooperative agent's natural language interface to the special needs of understanding the human's vague requests. The resulting system, in its role as an assisting partner, should be able to keep track of the human's and its own movements and actions, recommend actions and directions, and answer questions about past events and locations.
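
As an illustration of the kind of dynamic memory data structures described above, the following sketch shows one possible symbolic representation of events and visited places in a spatially arranged knowledge resource. It is a minimal Python mock-up: all class, field, and method names are assumptions made for illustration, not the project's actual implementation.

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class Place:
        """A visited location in the spatially arranged knowledge resource."""
        name: str                      # symbolic label, e.g. "exhibit_hall_3"
        position: Tuple[float, float]  # coordinates in the virtual environment

    @dataclass
    class Event:
        """A symbolic record of one observed action of the human or the agent."""
        timestamp: float
        actor: str                   # "human" or "agent"
        action: str                  # e.g. "move_to", "inspect", "ask"
        place: Place
        goal: Optional[str] = None   # goal that was active when the action occurred

    @dataclass
    class GoalActionMemory:
        """Dynamic memory that keeps track of movements, actions, and goals."""
        events: List[Event] = field(default_factory=list)

        def record(self, event: Event) -> None:
            self.events.append(event)

        def visited_places(self) -> List[Place]:
            """Places in the order they were first visited."""
            seen, ordered = set(), []
            for e in self.events:
                if e.place.name not in seen:
                    seen.add(e.place.name)
                    ordered.append(e.place)
            return ordered

        def events_at(self, place_name: str) -> List[Event]:
            """Answer a simple question about past events at a given place."""
            return [e for e in self.events if e.place.name == place_name]

Such a memory could, for example, back a recommendation step that suggests places the human has not yet visited, or answer a question like "What did we do in the exhibit hall?" by filtering the recorded events.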

 

Outcomes: 

The project extends the cognitive architecture of a virtual agent with an episodic memory, in which a joint experience is conceptualized as an event and a sequence of events forms an episode. The major result of the project is a gain in cognitive interaction technology through embodied cooperative systems that are context-aware and remember goal histories and executed actions. The envisioned virtual assistant is expected to positively influence the human's trust if it acts naturally and can support the human by sharing his or her intentions. Technical results of the project are, on the one hand, re-usable tools and frameworks for cognitive interaction in virtual environments and, on the other hand, visualizations of interrelated information and knowledge resources.
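
To make the event/episode distinction concrete, the following sketch models an episodic memory in which a joint experience is stored as an event and a sequence of events forms an episode, together with a simple keyword-based recall. It is a hypothetical illustration; the names and the retrieval mechanism are assumptions, not the architecture actually used in the project.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class JointEvent:
        """One jointly experienced event, e.g. both partners reaching a landmark."""
        description: str
        participants: List[str]
        time: float

    @dataclass
    class Episode:
        """An episode is a temporally ordered sequence of joint events."""
        label: str
        events: List[JointEvent] = field(default_factory=list)

    class EpisodicMemory:
        """Stores episodes and supports simple queries about past experience."""

        def __init__(self) -> None:
            self.episodes: List[Episode] = []

        def add_episode(self, episode: Episode) -> None:
            self.episodes.append(episode)

        def recall(self, keyword: str) -> List[JointEvent]:
            """Retrieve past events whose description mentions the keyword."""
            return [e for ep in self.episodes for e in ep.events
                    if keyword.lower() in e.description.lower()]

In this reading, asking the assistant "Where did we see the map?" amounts to a recall over past joint events, while the episode structure preserves the order in which the shared experience unfolded.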

Publications: