Robots and Avatars Learn Gesticulation

When people communicate, they also use body language. Robots and avatars, in contrast, hardly move their hands at all, or their body language is at odds with what these technical assistants are saying. A research team led by Professor Dr Stefan Kopp and Dr Kirsten Bergmann of the Faculty of Technology of Bielefeld University has now developed a system that supports verbal communication with meaningful gestures. This system is one research outcome of the Collaborative Research Centre “Alignment in Communication” (SFB 673) at Bielefeld University.

‘Gestures are important for the organisation of spoken content,’ explains Professor Kopp, who, in addition to leading a research group at the Cluster of Excellence Cognitive Interaction Technology (CITEC), shares responsibility for the subproject of SFB 673. When people describe their flat, for instance, their words are often automatically accompanied by gestures. They may draw an imaginary rectangular shape representing a picture frame, for example, or point to the positions of furniture and other objects. ‘Directions on how to get somewhere are often too complex to verbalise. Gestures support these kinds of descriptions,’ says Kopp. ‘Gestures often contain important information that is not present in spoken language. They reduce the mental effort of the speaker.’

In their system, the CITEC researchers have modelled the inner cognitive processes that occur when people speak and gesticulate. ‘The cognitive model can predict which gestures fit with a planned verbal expression,’ says Professor Kopp. At the same time, the new software takes into account how people gesticulate under different conditions. The language spoken, for example, influences the gestures: if someone describes something in another language, the concepts that form in the mind change as a result of the different terms available, and the accompanying gesture changes in turn. Time also plays an important part in this process. ‘If we give the system more time to think, it produces gestures that are better suited to the speech than if less time were available.’

The computer program is able to produce iconic gestures, i.e. figurative hand movements that trace shapes such as circles, cubes or lines. ‘Much of spoken language can be accompanied by these gestures,’ says Kopp. The system is suited, amongst other uses, to ‘teaching’ robots gestures. It can also be used for avatars; Stefan Kopp and his team are developing such virtual assistants. By incorporating gestures, communication with these everyday assistants should become more robust and feel more natural to the user. The researchers presented their system at the International Conference on Intelligent Virtual Agents, held in August in Edinburgh, Scotland, where their contribution received an award.

Since 2006, the Collaborative Research Centre “Alignment in Communication” (SFB 673) at Bielefeld University has been researching the processes of alignment in communication as an alternative to previously accepted theories of human communication. At its core lie the varied verbal and non-verbal mechanisms that enable reciprocal agreement and mutual alignment when people communicate with one another. The researchers at Bielefeld University employ an interdisciplinary approach in which linguistic and technical research groups collaborate. The research group led by Stefan Kopp is working on the subproject “Speech-Gesture Alignment.”

Publication:
Kirsten Bergmann, Sebastian Kahl and Stefan Kopp (2013): Modeling the semantic coordination of speech and gesture under cognitive and linguistic constraints. In: Intelligent Virtual Agents, Lecture Notes in Artificial Intelligence. Berlin/Heidelberg: Springer.

More information on the Internet at:
www.sfb673.org/projects/B1

Contact:
apl. Prof. Dr.-Ing. Stefan Kopp, Universität Bielefeld
Cluster of Excellence Cognitive Interaction Technology (CITEC)
Phone: +49 (0) 521 106-12144
Email: skopp@techfak.uni-bielefeld.de