Computational Linguistics & Phonetics, Fachrichtung 4.7, Universität des Saarlandes
Multimodal Interaction with Intelligent Agents: Topics and Readings

Below you will find the proposed topics and the associated reading material.


Gaze-following and recognising intentions from gaze
  • Meltzoff, A. N. and Brooks, R. (2007). Eyes wide shut: The importance of eyes in infant gaze-following and understanding of other minds. In Flom, R., Lee, K., and Muir, D. (editors), Gaze-Following: Its Development and Significance, pages 217-241. Lawrence Erlbaum Associates, Publishers, Mahwah, NJ. [pdf]

  • Becchio, C., Bertone, C., and Castiello, U. (2008). How the gaze of others influences object processing. Trends in Cognitive Sciences, 12, pages 254-258. [pdf]


What is joint attention and how is it established?
  • Kaplan, F. and Hafner, V. (2006). The challenges of joint attention. Interaction Studies, 7(2), pages 135-169. [pdf]


Referential Gestures: Attention-directing cues and their use
  • Bangerter, A. (2004). Using Pointing and Describing to Achieve Joint Focus of Attention in Dialogue. Psychological Science, 15(6), pages 415-419. [pdf]

  • Langton, S. and Bruce, V. (2000). You must see the point: Automatic processing of cues to the direction of social attention. Journal of Experimental Psychology: Human Perception and Performance, 26(2), pages 747-757. [pdf]

  • Morrel-Samuels, P. and Krauss, R. M. (1992). Word familiarity predicts temporal asynchrony of hand gestures and speech. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, pages 615-622. [pdf]


Gaze as a means to coordinate conversation in human-human and human-machine interaction
  • Cassell, J., Torres, O., and Prevost, S. (1999). Turn Taking vs. Discourse Structure: How Best to Model Multimodal Conversation. Machine Conversations, pages 143-154. [pdf]

  • Mutlu, B., Hodgins, J., and Forlizzi, J. (2006). A Storytelling Robot: Modeling and Evaluation of Human-like Gaze Behavior. In Proceedings of the IEEE-RAS International Conference on Humanoid Robots (HUMANOIDS'06), Genova, Italy. [pdf]


Intentional gaze grounds referring expressions in situated human-human interaction
  • Hanna, J. and Brennan, S. (2007). Speakers' eye gaze disambiguates referring expressions early during face-to-face conversation. Journal of Memory and Language, 57, pages 596-615. [pdf]


General non-verbal behaviour in human-machine interaction
  • Breazeal, C., Kidd, C., Thomaz, A., Hoffman, G., and Berlin, M. (2005). Effects of nonverbal communication on efficiency and robustness in human-robot teamwork. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'05), pages 708-713. [pdf]

  • Sidner, C. L., Lee, C., Kidd, C., Lesh, N., and Rich, C. (2005). Explorations in engagement for humans and robots. Artificial Intelligence, 166(1-2), pages 140-164. [pdf]


Intelligence and Identity: How does an agent's appearance influence how it is perceived?
  • Hegel, F., Krach, S., Kircher, T., Wrede, B., and Sagerer, G. (2008). Theory of mind (ToM) on robots: a functional neuroimaging study. In Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction (HRI'08), pages 335-342. ACM. [pdf]

  • Groom, V., Takayama, L., Ochi, P., and Nass, C. (2009). I Am My Robot: The Impact of Robot-building and Robot Form on Operators. In Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction (HRI'09), pages 31-36. ACM. [pdf]


Intelligence and Identity: How does an agent's motion and behaviour influence how it is perceived?
  • Heider, F. and Simmel, M. (1944). An experimental study of apparent behavior. American Journal of Psychology, 57, pages 243-259. [pdf]

  • Crick, C. and Scassellati, B. (2010). Controlling a Robot with Intention Derived from Motion. Topics in Cognitive Science, 2, pages 114-126. [pdf]

  • Nagai, Y. (2005). The Role of Motion Information in Learning Human-Robot Joint Attention. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA'05), pages 2081-2086. [pdf]


The Uncanny Valley: How human-like should an agent look and behave?
  • Minato, T., Shimada, M., Ishiguro, H., and Itakura, S. (2004). Development of an Android Robot for Studying Human-Robot Interaction. Innovations in Applied Artificial Intelligence, pages 424-434. [pdf]

  • Tinwell, A. and Grimshaw, M. (2009). Bridging the Uncanny: An Impossible Traverse? In Proceedings of the 13th International MindTrek Conference: Everyday Life in the Ubiquitous Era, pages 66-73. [pdf]