Computational Linguistics & Phonetics · Fachrichtung 4.7 · Universität des Saarlandes · Computational Psycholinguistics

ALPHA: Adaptive Mechanisms for Human Language Processing

Incrementality, Adaptation, and Anticipation

It is widely accepted in the psycholinguistics community that human sentence comprehension is incremental, and evidence is mounting that it is also anticipatory: not only do people develop and revise their understanding of an utterance with each new word they encounter, they also actively anticipate the words to come. Research in our group has shown that people do so by exploiting diverse sources of information such as morphosyntax, lexical knowledge, world knowledge, and visual context.

To better understand the adaptive mechanisms that allow people to integrate this variety of information so readily, our group has been using eye-tracking techniques to investigate more closely:

  • Structure bias: People demonstrate a clear preference for interpreting the initial NP as the Subject, and begin searching for the Object even before the verb is reached (Knoeferle, Crocker, Scheepers, and Pickering, 2003).
  • Syntactic Priming: The SVO bias in comprehension is also subject to syntactic priming: listeners are more likely to interpret an ambiguous sentence-initial NP as the Object if they have just read an unambiguous OVS priming sentence, and vice versa (Scheepers and Crocker, in press).
  • Depicted events: People were able to use depicted Agent-Action-Patient information to immediately resolve role and grammatical-function ambiguity (Knoeferle, Crocker, Scheepers, and Pickering, accepted). These findings were replicated in an English study of the reduced relative clause ambiguity (Knoeferle et al., 2003).
  • Prosody: The SVO bias in comprehension is modulated by the intonation contour of the utterance, with an OVS intonation leading to increased anticipation of a Subject, rather than an Object (Weber, Grice, and Crocker, 2003).

Computational Modelling

Another important focus of our research is the development of computational models that exhibit human-like behavior of the kind revealed by the empirical evidence above.

Die Prinzessin wäscht gleich der Pirat (OVS: 'The pirate will soon wash the princess')
INSOMNet reads in a sentence one word at a time and produces a set of frames on an output grid that can be decoded into an equivalent semantic dependency graph, as illustrated above with indexed predicate-logic terms. The visual scene is compressed and fed to the network as additional input (Mayberry and Crocker, 2004).

To model these findings, a recent connectionist architecture is being adapted to accept additional modalities of input, such as the visual scene or prosody. The model, INSOMNet, has been shown to scale up to a medium-sized corpus of deep semantic dependency graphs (Mayberry and Miikkulainen, 2003). Like most subsymbolic models, INSOMNet processes sentences incrementally and develops defaults and expectations through training; unlike most other connectionist systems, however, it generates explicit structural representations of the sentence interpretation, which makes the model more transparent.
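
To make this processing regime concrete, the following is a minimal sketch of an incremental, scene-augmented recurrent loop in Python. It is not the published INSOMNet implementation: all names, dimensions, and the random weights are illustrative assumptions. It only shows the general idea of folding in one word at a time, concatenating a compressed scene vector with each input, and reading out a frame estimate at every word.

import numpy as np

# Minimal illustrative sketch (not the published INSOMNet code): an incremental
# recurrent encoder that consumes one word embedding at a time, concatenates a
# compressed scene vector with each input, and emits an output "frame" vector
# per word. Dimensions and weights are placeholder assumptions.
rng = np.random.default_rng(0)
WORD_DIM, SCENE_DIM, HIDDEN_DIM, FRAME_DIM = 16, 8, 32, 12

# Randomly initialised weights stand in for trained parameters.
W_in = rng.normal(scale=0.1, size=(HIDDEN_DIM, WORD_DIM + SCENE_DIM + HIDDEN_DIM))
W_out = rng.normal(scale=0.1, size=(FRAME_DIM, HIDDEN_DIM))

def step(word_vec, scene_vec, hidden):
    """One incremental update: fold the next word (plus scene context) into the
    hidden state, then read out the current frame estimate."""
    x = np.concatenate([word_vec, scene_vec, hidden])
    hidden = np.tanh(W_in @ x)
    frame = np.tanh(W_out @ hidden)   # would be decoded downstream into graph nodes
    return hidden, frame

# Toy run over the example sentence from the figure above.
sentence = ["Die", "Prinzessin", "wäscht", "gleich", "der", "Pirat"]
scene = rng.normal(size=SCENE_DIM)       # stands in for the compressed scene input
hidden = np.zeros(HIDDEN_DIM)
for word in sentence:
    word_vec = rng.normal(size=WORD_DIM)  # placeholder for a learned word embedding
    hidden, frame = step(word_vec, scene, hidden)
    print(f"after '{word}': frame estimate norm = {np.linalg.norm(frame):.3f}")

In the actual model, the output frames are trained to encode the semantic dependency graph, so the expectations the network has developed after each word can be inspected directly.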