Computational Linguistics & Phonetics, Fachrichtung 4.7, Universität des Saarlandes

Computational Linguistics Colloquium

Thursday, 30 January 2014, 16:15
Conference Room, Building C7.4

Rational Mechanisms for Language Processing

Rick Lewis
Department of Psychology
Language and Cognitive Architecture lab
University of Michigan

Characterizing sentence processing as rational probabilistic inference has yielded a number of theoretical insights into human language comprehension. For example, surprisal theory (Hale, Levy) represents a simple and powerful formalization of incremental processing that has met with notable empirical success (and some challenges) in accounting for word-by-word reading times. A major theoretical issue now facing the field is how to integrate rational theories with bounded computational/cognitive mechanisms. The standard approach in cognitive science (Marr and others) is to posit (bounded) mechanisms that approximate functions specified at a rational analysis level. I will discuss an alternative approach, computational rationality, that incorporates assumptions about bounds into the rational analyses themselves. In general, the approach admits two kinds of analysis: derivations of architectural mechanisms that are optimal across a broad range of tasks, and derivations of policies or programs that are optimal for specific task settings and demands. As an instance of the first kind of analysis, we consider the derivation of an optimal short-term memory system, given quite general assumptions about noisy representations. We show how such a system provides principled explanations of a range of interference and speed-accuracy trade-offs in sentence processing. As an instance of the second kind of analysis, we consider the derivation of optimal eye-movement policies, given quite general assumptions about perceptual noise and oculomotor architecture. We show how such analyses provide principled explanations of task and payoff effects in simple reading tasks. Finally, we look at preliminary work on how these components might be combined to yield rational parsing mechanisms that provide integrated accounts of surprisal, memory, and task effects. This is joint work with Mike Shvartsman, Satinder Singh, and Andrew Howes.
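
(For orientation: the surprisal of a word is its negative log-probability given the preceding words, and higher surprisal is predicted to correspond to longer reading times. The following minimal Python sketch, not part of the talk and using made-up conditional probabilities purely for illustration, shows the computation.)

    import math

    def surprisal(p_word_given_context):
        """Surprisal in bits: -log2 P(word | preceding words)."""
        return -math.log2(p_word_given_context)

    # Hypothetical conditional probabilities for the next word after
    # "the horse raced past the ..." (values invented for illustration).
    candidates = {"barn": 0.60, "fence": 0.30, "fell": 0.01}
    for word, p in candidates.items():
        print(f"{word}: {surprisal(p):.2f} bits")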

If you would like to meet with the speaker, please contact Vera Demberg.