Multimodal Dialogue Systems
We develop embedded multimodal dialogue systems with a resource-adaptive, distributed client/server architecture for two applications: a mobile pedestrian navigation system (REAL) and an intelligent museum guide (PEACH).
Our focal research areas are dialogue strategies adapted to varying profiles of interest and to different user models (young adults versus the elderly). We use a corpus of speech data from elderly speakers to build age-specific acoustic user models. These models improve recognition performance and enable differentiated dialogue strategies.
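The idea of differentiated dialogue strategies can be sketched as follows. This is a minimal illustration, not the REAL/PEACH implementation: the class, field names, and strategy parameters are hypothetical, chosen only to show how a user model (age group plus an interest profile) might drive strategy selection and content ranking.

```python
from dataclasses import dataclass, field

@dataclass
class UserModel:
    # Hypothetical user profile; field names are illustrative only.
    age_group: str                      # e.g. "young_adult" or "elderly"
    interest_profile: dict = field(default_factory=dict)  # topic -> weight in [0, 1]

def select_dialogue_strategy(user: UserModel) -> dict:
    """Pick presentation parameters from the user model.

    In this sketch, elderly users get slower speech output and
    explicit confirmations; younger users get terser, faster dialogue.
    """
    if user.age_group == "elderly":
        return {"speech_rate": 0.85, "confirmations": "explicit", "verbosity": "high"}
    return {"speech_rate": 1.0, "confirmations": "implicit", "verbosity": "low"}

def rank_topics(user: UserModel, topics: list) -> list:
    # Order presentation content by the user's interest profile
    # (topics absent from the profile sort last).
    return sorted(topics, key=lambda t: -user.interest_profile.get(t, 0.0))

user = UserModel("elderly", {"renaissance_art": 0.9, "architecture": 0.4})
strategy = select_dialogue_strategy(user)
print(strategy["confirmations"])  # explicit
print(rank_topics(user, ["architecture", "renaissance_art"]))
```

A real system would derive these parameters from the acoustic user models and dialogue history rather than a single age flag, but the selection logic follows the same pattern.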