Computational Linguistics & Phonetics, Fachrichtung 4.7, Universität des Saarlandes

Computational Linguistics Colloquium

25 June 2015, 16:15
Conference Room, Building C7.4

Dialogue Processing in Time and Space

David Schlangen
Faculty of Linguistics and Literary Studies, Bielefeld University, Germany

Dialogue settings can be characterised along many dimensions: for example, according to the familiarity between the participants (between strangers, between family), according to the immediacy of the medium of exchange (exchanging letters, or speaking in a free conversation), or according to the immediacy of contact between participants (on the phone, or face to face). Current spoken dialogue systems mostly occupy an odd spot in that multi-dimensional space: Siri, for example, is an "assistant" with very limited familiarity with the person it is assisting; it converses via speech, but with chat-like fixed turn-taking; and it knows where "here" is, but not what else is "here" besides its user.

I will talk about work we have done in my group on modelling conversational situations that are somewhat closer to natural face-to-face dialogue. I will briefly introduce the "incremental units" model of dialogue processing (Schlangen & Skantze; EACL 2009, Dialogue & Discourse 2011), which we use as the basis for our work on situated dialogue. I will show how we used it to realise fast turn-taking in an implemented dialogue system (Skantze & Schlangen, EACL 2009). I will then discuss our work on statistical incremental language understanding / grounded semantics, which widens the scope to include reference to objects in the physically shared space (Kennington & Schlangen; SIGdial 2012, Computer Speech & Language 2014), non-linguistic information about the speaker such as gaze and gesture (Kennington, Kousidis & Schlangen; SIGdial 2013, Coling 2014), and, more recently, real-time computer-vision processing (Kennington & Schlangen, IWCS and ACL 2015).

If you would like to meet with the speaker, please contact Volha Petukhova.