Computational pragmatics - multimodal intention processing system

1 March 2014 to 31 December 2018

To enable users to interact intuitively with a robotic or virtual agent, the system requires a deep computational model of the user and the scene they are in. This project will develop a model that allows the agent to recognise the user's communicative intention from their behaviour and respond appropriately. In particular, we focus on the time course of the user's behaviour and incrementally combine sensor data from several modalities (e.g., speech and gesture recognition) with prior knowledge about the scenario to identify the user's communicative intentions as early as possible. The agent will then respond to the user's intention through its own expressive modalities, including speech, gestures, and facial expressions. We will conduct empirical and computational experiments to incorporate and advance existing work (e.g., De Ruiter & Cummins, 2012; Kopp, Bergmann, Buschmeier, & Sadeghipour, 2009).
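The incremental, multimodal combination of evidence described above can be sketched as a Bayesian belief update over a set of candidate intentions: each modality contributes a likelihood that reweights the current belief, and the agent may commit to an interpretation as soon as one intention clearly dominates. The intentions, probabilities, and decision threshold below are purely illustrative assumptions for exposition, not the project's actual model.

```python
def normalise(belief):
    """Rescale a belief distribution so its probabilities sum to 1."""
    total = sum(belief.values())
    return {i: p / total for i, p in belief.items()}

def update(belief, likelihood):
    """One incremental Bayesian update: multiply belief by the
    per-modality likelihood P(observation | intention), then renormalise."""
    return normalise({i: belief[i] * likelihood[i] for i in belief})

# Hypothetical candidate intentions with a scenario-based prior
prior = {"request_object": 0.5, "point_at_location": 0.3, "greet": 0.2}

# Hypothetical likelihoods from two modalities, arriving as a stream
speech_likelihood = {"request_object": 0.7, "point_at_location": 0.2, "greet": 0.1}
gesture_likelihood = {"request_object": 0.3, "point_at_location": 0.6, "greet": 0.1}

belief = dict(prior)
for evidence in (speech_likelihood, gesture_likelihood):
    belief = update(belief, evidence)
    # Early commitment: stop as soon as one intention dominates,
    # so the agent can respond before all modalities have been observed
    top, p = max(belief.items(), key=lambda kv: kv[1])
    if p > 0.8:
        break
```

In this toy run, the speech evidence alone already pushes one intention past the threshold, illustrating how an incremental model can settle on an interpretation before all sensor input has arrived.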