Talking Stick – sensorimotor grounding of language

Acronym: Talking Stick
Term: 2008-03 to 2012-10
Research Areas: A, D
Abstract: 

The goal of the Talking Stick project is to operationalise hypothetical mechanisms responsible for cognitive behavior. In the project, we want to understand these mechanisms by implementing them in a minimal cognitive system, making it possible to evaluate their contribution to cognition, to investigate interactions between the proposed mechanisms, and, in a next step, to study their connection to language. One fundamental mechanism appears to be the recruitment of internal models in motor control, perception, planning ahead, and communication.


Methods and Research Questions: 

How can a behavior, which is defined by a multitude of sensorimotor processes, be linked to a condensed description like a single word?

Can seemingly cognitive properties be controlled by reactive structures? How is cognition grounded in reactive behavior? From our point of view, cognitive behavior arises from recruiting already present mechanisms and putting them together in a flexible way. Higher-level functions such as planning ahead are assumed to reuse existing embodied internal models in a form of internal simulation. Internal simulation thus allows a behavior to be imagined without actually being performed, so its consequences can be tested before it is carried out. The same form of mental processing is assumed to underlie making sense of one’s own perceptions by mapping them onto one’s own motor system, which constitutes an internal model of the body.
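
The sketch below illustrates this idea of planning through internal simulation in its simplest form: a forward model of the body is queried with candidate behaviors, and the behavior whose predicted outcome best matches the goal is selected for execution. The function and parameter names are assumptions made for this illustration, not part of the project's implementation.

```python
# Minimal sketch of planning by internal simulation (illustrative only):
# candidate behaviors are "tried out" on a forward model instead of the body,
# and the behavior with the best predicted outcome is selected for execution.

def plan_by_internal_simulation(state, goal, candidate_behaviors, forward_model, cost):
    """Mentally simulate each candidate behavior and return the most promising one."""
    best_behavior, best_cost = None, float("inf")
    for behavior in candidate_behaviors:
        predicted_state = forward_model(state, behavior)  # imagined outcome, not executed
        c = cost(predicted_state, goal)                   # evaluate the imagined outcome
        if c < best_cost:
            best_behavior, best_cost = behavior, c
    return best_behavior
```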

Related to this question is how the conceptual system is organized and how it can interact with or make use of such internal simulations. How are behaviors related to their sensory context, and how does this organization allow a behavior to be modulated or adapted to changing conditions? While this organization might at first be a self-organizing process driven from the bottom up, it is also interesting how top-down influences can guide conceptualization, since in humans most knowledge is not experienced directly, but is given through communication.

We want to understand hypothetical cognitive mechanisms by implementing them in a minimal cognitive system, making it possible to evaluate their contribution to cognition and to investigate interactions between the proposed mechanisms and their connection to language.

The system is built in a bottom-up way, with the lower, reactive level based on the behavior-based approach Walknet. Walknet is a biologically inspired network controlling a six-legged walking machine which, on a reactive level, is able to move around, coordinate its legs, and cope with simple tasks or disturbances (climbing over obstacles, losing a leg, ...).
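
As a rough illustration of what such decentralized, behavior-based control looks like, the sketch below lets six leg controllers decide locally when to lift off, using only the state of their neighbouring legs. The single rule used here (swing only if the neighbours are in stance) is a simplified stand-in for Walknet's actual coordination rules; all names and parameters are illustrative.

```python
# Caricature of decentralized leg control: each leg alternates between stance
# and swing and may only lift off if its neighbouring legs currently support
# the body. This is a stand-in for Walknet's coordination rules, not a
# reimplementation of them.

LEGS = ["L1", "L2", "L3", "R1", "R2", "R3"]          # left/right front, middle, hind legs
NEIGHBOURS = {                                        # ipsilateral and contralateral neighbours
    "L1": ["L2", "R1"], "L2": ["L1", "L3", "R2"], "L3": ["L2", "R3"],
    "R1": ["R2", "L1"], "R2": ["R1", "R3", "L2"], "R3": ["R2", "L3"],
}

def step(stance_time, in_swing):
    """Advance all legs by one time step using only local information."""
    for leg in LEGS:
        if in_swing[leg]:
            in_swing[leg] = False                     # touch down again after one swing step
        else:
            stance_time[leg] += 1
            wants_to_swing = stance_time[leg] > 3
            neighbours_grounded = all(not in_swing[n] for n in NEIGHBOURS[leg])
            if wants_to_swing and neighbours_grounded:  # local rule: swing only with support
                in_swing[leg] = True
                stance_time[leg] = 0
    return stance_time, in_swing

stance_time = {leg: i for i, leg in enumerate(LEGS)}  # stagger legs so they do not lift together
in_swing = {leg: False for leg in LEGS}
for _ in range(10):
    stance_time, in_swing = step(stance_time, in_swing)
```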

Already in the context of such simple behaviors, the need for internal models arises, especially for a model of the body, which is assumed to have coevolved in the service of action. This internal model can be decoupled from the body and used for planning ahead through internal simulation. Recruitment of the internal model in internal simulation is also assumed to allow understanding what one perceives or hears in communication.

A central part of the extended system, now termed reaCog, is the conceptual system, which is realized as a recurrent neural network.

Outcomes: 

Insect navigation, often considered to require a “cognitive map”, can be understood on a reactive level. The reactive control network has been extended with an internal body model which can be applied in mental simulation. The conceptual system has been extended, and a neural scheme has been introduced which allows trying out variations of the already present behaviors and afterwards incorporating these into the conceptual system. On a higher level, the organization of the conceptual system allows relations between concepts and features to be established through learning.
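
As a toy illustration of how such relations between concepts and features might be stored in a recurrent network, the following sketch uses simple Hebbian learning: co-active units become connected, and a partial cue then recalls the associated units by spreading activation. The unit labels and learning rule are assumptions for this example and do not describe the reaCog conceptual system itself.

```python
# Minimal sketch: Hebbian learning of concept-feature relations in a small
# recurrent network, with recall by spreading activation from a partial cue.
import numpy as np

units = ["walk", "swing", "stance", "obstacle", "climb"]   # illustrative unit labels
n = len(units)
W = np.zeros((n, n))                                       # recurrent weight matrix

def learn(pattern):
    """Hebbian update: strengthen connections between co-active units."""
    global W
    x = np.array(pattern, dtype=float)
    W += np.outer(x, x)
    np.fill_diagonal(W, 0.0)

def recall(cue, steps=5):
    """Spread activation from a partial cue through the recurrent weights."""
    x = np.array(cue, dtype=float)
    for _ in range(steps):
        x = np.clip(x + 0.1 * W @ x, 0.0, 1.0)
    return dict(zip(units, np.round(x, 2)))

learn([1, 0, 0, 1, 1])          # an episode relating "walk", "obstacle" and "climb"
print(recall([0, 0, 0, 1, 0]))  # cueing "obstacle" also activates "walk" and "climb"
```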

The internal model has also been applied in perception and in a communicative scenario. In language games on body postures between two agents, the model mediates between the visual appearance of a posture as seen by the observing agent and the proprioceptive information used by the performing agent during motor control. The agents can establish a shared vocabulary that is grounded in multimodal bodily representations.
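
A strongly simplified sketch of such a language game is given below: the speaker performs a posture and utters its preferred word for it, the hearer interprets the observed posture via its own body model and guesses, and both agents adjust their word scores depending on success. The protocol, data structures, and names are assumptions for this illustration, not the setup used in the project.

```python
# Minimal naming-game sketch on body postures (illustrative protocol only).
import random

POSTURES = ["arms_up", "lean_left", "crouch"]          # illustrative posture labels

class Agent:
    def __init__(self):
        self.lexicon = {}                              # (posture, word) -> score

    def word_for(self, posture):
        known = {w: s for (p, w), s in self.lexicon.items() if p == posture}
        if not known:                                  # invent a new word if none is known
            return "w%04d" % random.randrange(10000)
        return max(known, key=known.get)

    def posture_for(self, word):
        known = {p: s for (p, w), s in self.lexicon.items() if w == word}
        return max(known, key=known.get) if known else None

    def update(self, posture, word, success):
        key = (posture, word)
        self.lexicon[key] = self.lexicon.get(key, 0.0) + (1.0 if success else -0.2)

def play_round(speaker, hearer):
    posture = random.choice(POSTURES)                  # speaker performs this posture
    word = speaker.word_for(posture)
    guess = hearer.posture_for(word)                   # hearer interprets via its own body model
    success = (guess == posture)
    speaker.update(posture, word, success)
    hearer.update(posture, word, True)                 # hearer adopts the word for the observed posture
    return success

a, b = Agent(), Agent()
wins = sum(play_round(*random.sample([a, b], 2)) for _ in range(200))
print(wins, "successful rounds out of 200")            # success rate rises as the vocabulary aligns
```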

More higher-level connections and influences will be introduced in the future, as we have started, in cooperation with the ICSI in Berkeley, to extend the model towards language representation and to use it in an interactive communicative task.

On the left, the stick insect, the biological model, is shown; in the middle, a picture of the dynamic simulation used to test the control system; on the right, the robot Hector, which is currently being developed in the working group of Axel Schneider in the Mulero project and which will be controlled by the reaCog system.

More information:
reaCog architecture overview – shown for one leg.
Schematic overview of relations between different levels of the controller structure.

Publications: