Machine-Mediated Human-Human Interaction

Acronym: M2H2I
Research Areas: C, B, A
Abstract: 

Strategies for human-human interaction are not static: humans adapt their strategies to new situations and conditions. This aspect has gained much relevance in the past decades because much human interaction is now machine-mediated, e.g. by telephone, email, chat, video conference, or discussion boards. Each of these technologies influences human interaction, as each offers certain advantages and imposes restrictions to which users have to adapt. This project aims to understand, model, and reproduce strategies of human interaction and adaptation by observing and comparing interaction strategies, and by identifying adaptations of those strategies, under different experimental conditions. The modes of machine-mediation used in these experiments will include brain-computer interfacing, which, for most users, constitutes an unfamiliar mode of machine-mediation that imposes its own limitations to adapt to.

Methods and Research Questions: 

This project aims to do research in the fields of: (i) machine-mediation of human-human interaction and joint (competitive and collaborative) task solving; (ii) mechanisms to implement joint attention and turn-taking in machine-mediated interaction; (iii) brain-machine interfaces.

Various types of machine-mediation techniques have emerged in the past decades, ranging from text-based chat messages to video-based meetings in a virtual space. These techniques differ in the affordances they provide in interaction situations (such as gestures, facial expressions, tone of voice, and language) and in the social presence they allow. For instance, a text-only chat system allows for a lower social presence of the interactants than a full-duplex video/audio channel. These differences have been shown to alter the content of an interaction and, when joint task solving is required, its success. In this project we aim to examine another aspect of machine-mediation: every type of machine-mediation imposes certain limitations on the interaction (e.g. regarding affordances), and we aim to examine how humans adapt to these limitations. We therefore plan to confront test subjects with entirely novel, unfamiliar modes of machine-mediation (namely brain-computer interfaces complemented by eye-tracking techniques), which entail their own limitations and advantages, and observe how subjects adapt their strategies.
The interaction will take place in a shared space in which users can influence a series of robotic devices. The experiment design will ensure that the robots reflect the mutual interests of the subjects in the interaction. Subjects will have a common task to solve in a game-like interaction scenario; this joint task solving can be either cooperative or competitive, depending on the individual experiment. We plan to conduct experimental scenarios with two different user interfaces, of which one is familiar to the subject (e.g. keyboard or joystick input) and one is unfamiliar (brain-computer-interface-based input). By comparing both settings we hope to gain insight into human interaction strategies and their adaptation to novel requirements.
Brain-computer interfaces (BCIs) have been studied for several decades, and rapid progress has been made recently. In the past, the main application of such interfaces was as assistive systems for severely handicapped patients. Improvements in reliability, speed, and usability have led researchers to consider other possible applications; for instance, several gaming applications and various studies of brain-robot interfaces (BRIs) have been presented recently. However, this study aims neither to improve the performance of brain-machine interfaces (BMIs, a general term including BCIs and BRIs) nor to propose novel applications for them. We use brain-machine interfaces as a means to study human interaction, and thereby aim to contribute to a better understanding of the chances and limitations of this technique in everyday life. We plan to employ P300 and ERD brain activity patterns for the BMI. However, we are also considering some monitoring of the subjects' mental state (error potentials, decision making, surprise) to enrich the interaction scenarios.
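The P300 paradigm mentioned above typically relies on averaging stimulus-locked EEG epochs and looking for a positive deflection roughly 300 ms after an attended stimulus. The sketch below illustrates that idea on synthetic single-channel data; the sampling rate, window, and amplitudes are assumptions for illustration, not the project's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

FS = 250                                     # sampling rate in Hz (assumed)
EPOCH_S = 0.8                                # epoch length after stimulus onset
N_SAMPLES = int(FS * EPOCH_S)
P300_WIN = (int(0.25 * FS), int(0.50 * FS))  # 250-500 ms analysis window

def simulate_epoch(target: bool) -> np.ndarray:
    """Simulate one single-channel EEG epoch; targets carry a P300-like bump."""
    epoch = rng.normal(0.0, 1.0, N_SAMPLES)          # background noise
    if target:
        t = np.arange(N_SAMPLES) / FS
        # positive deflection centered near 350 ms after the stimulus
        epoch += 2.0 * np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))
    return epoch

def p300_score(epochs: np.ndarray) -> float:
    """Average epochs over trials, then take the mean amplitude in the P300 window."""
    avg = epochs.mean(axis=0)
    return float(avg[P300_WIN[0]:P300_WIN[1]].mean())

# Averaging over repeated presentations suppresses the noise, so the
# attended (target) stimulus yields the larger score.
targets = np.stack([simulate_epoch(True) for _ in range(15)])
nontargets = np.stack([simulate_epoch(False) for _ in range(15)])
print(p300_score(targets) > p300_score(nontargets))  # -> True
```

A real system would of course classify single sequences of epochs online (often with a linear classifier) rather than compare two labeled averages, but the averaging-and-windowing step shown here is the core of the paradigm.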


Research questions:

  • How do humans adapt their interaction strategies to the limitations imposed by machine-mediation?
  • Are changes in strategies more distinct for cooperative or competitive scenarios?
  • Does machine-mediation allow for strategies which are impossible in normal interaction?
  • How fast can humans adapt to a novel mode of machine-mediation (such as BCIs) which imposes rather strict limitations?
  • Which affordances aid in establishing joint attention and in turn taking?

We plan to conduct different experiments in which subjects perform machine-mediated collaborative/competitive task solving in a shared space. These interactive scenarios have in common that subjects need to influence or control robotic devices to solve the given task. In order to study adaptation to machine-mediation, subjects will use different modes of machine-mediation, including brain-computer interfaces. Subjects who are unfamiliar with the use of BCIs will allow us to observe how humans adapt to new modes of machine-mediation. The analysis of oculomotor data can provide clues to the strategies that interaction partners use to signal a turn-taking event. Furthermore, transferring gaze positions between interaction partners, or the gaze-contingent display of foci of attention, produces additional interaction cues and should significantly affect adaptation processes in robot-mediated human-human interaction.
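A common first step in analyzing oculomotor data of this kind is segmenting the gaze stream into fixations and saccades, e.g. with dispersion-threshold (I-DT) fixation detection. The sketch below runs on synthetic gaze samples; the sampling rate and thresholds are illustrative assumptions, not values from this project:

```python
import numpy as np

def detect_fixations(x, y, fs=60.0, max_disp=1.0, min_dur=0.1):
    """Dispersion-threshold (I-DT) fixation detection.

    x, y     : gaze coordinates (e.g. degrees of visual angle), one per sample
    fs       : sampling rate in Hz (assumed value)
    max_disp : maximum dispersion (x-range + y-range) within a fixation
    min_dur  : minimum fixation duration in seconds
    Returns a list of (start, end) sample-index pairs, end exclusive.
    """
    min_len = int(min_dur * fs)
    disp = lambda a, b: (x[a:b].max() - x[a:b].min()) + (y[a:b].max() - y[a:b].min())
    fixations, i, n = [], 0, len(x)
    while i + min_len <= n:
        j = i + min_len
        if disp(i, j) <= max_disp:        # window compact enough: a fixation starts
            while j < n and disp(i, j + 1) <= max_disp:
                j += 1                    # grow the window while dispersion stays low
            fixations.append((i, j))
            i = j
        else:
            i += 1                        # slide the window past a saccade sample
    return fixations

# Two synthetic fixations separated by an instantaneous saccade:
x = np.array([0.0] * 30 + [10.0] * 30)
y = np.array([0.0] * 30 + [10.0] * 30)
print(detect_fixations(x, y))  # -> [(0, 30), (30, 60)]
```

Fixation onsets and offsets segmented this way can then be time-aligned with the interaction record to look for gaze patterns that precede turn-taking events.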

Outcomes: 

This PhD project is based on a master's thesis which aimed at controlling the humanoid iCub robot using multiple brain activity patterns (BAPs). A demo video of the system can be found below. This project uses that system to conduct the experiments described above but, as it is still at an early stage, only a few pretests have been conducted so far.

Publications: