Language-mediated and joint attention in human-robot interaction
Speaker: Maria Staudte
Institution: Saarland University
Abstract:
Psycholinguistic studies of situated language processing have revealed that gaze in the visual environment is tightly coupled with both spoken language comprehension and production. It has also been established that interlocutors monitor the gaze of their partners, so-called "joint attention", as a further means for facilitating mutual understanding. It is therefore plausible to hypothesise that human-robot spoken interaction would similarly benefit when the robot's language-related gaze behaviour is similar to that of people, potentially providing information about the robot's intended meaning or successful understanding.
In my talk I will report findings from an eye-tracking experiment which investigated this hypothesis in the case of robot speech production. Human participants were eye-tracked while observing the robot and were instructed to judge the correctness of the robot's statements. Specifically, we examined human behaviour in response to incongruent robot gaze and/or errors in the statements' logical truth.
We found clear evidence for (robot) utterance-mediated gaze as well as for gaze-mediated joint attention in human-robot interaction. Our results suggest that this kind of human-like robot gaze is useful in spoken HRI and that humans react to robots in a manner typical of HHI.