Panelist: James Allen
Title: UpHill Battles: Language Grounded in Reasoning Agents

Humans readily understand language across shifting topics, domains, and tasks, and effortlessly identify the intended interpretations of language as appropriate to their current task and conversational state. One of the key assumptions of early work in NLU was that language understanding would be understood and facilitated in terms of the participating agents' cognitive states. With the arrival of the corpus-based, statistical, machine-learning approaches that have led to great advances in robust language processing, such goals were lost. A key reason is that there were no corpora that could encode language at the appropriate level, and this remains true today. In the past decade there have been a number of efforts to develop deeper language understanding using corpus-based methods, but these are inevitably grounded in a very narrow, specific task (like data queries) or a specific domain (like instructing a robot). Models developed for such domain-specific tasks have proven not to transfer to new domains and tasks. So we remain in our infancy in developing general-purpose, domain-independent, deep language understanding technology, i.e., language grounded in an agent's cognitive state and usable across any domain, even in learning about new tasks and new domains.