Course: Connectionist Language Processing

Lectures: Tues 14-16 in Psycholinguistics Meeting Room (C7.1 top floor)
Tutorials: Thurs 14-16 in Psycholinguistics Meeting Room (C7.1 top floor)
Begin: Tues 05.05.2020
Exam: Tues, July 21, 2020 @ 14-16 in Conference Room of C7.4 (to be confirmed)

If you plan to participate in the course, please send an e-mail from your preferred address to:
crocker at coli.uni-sb.de

** Course Organisation for Summer 2020 **
  • This course is taking place on-line, as follows:
  • Lectures will be pre-recorded, and made available for streaming by noon on Tuesdays
  • Tutorial sheets will be posted here after each lecture: please get as far as you can before the tutorial
  • Tutorials usually take place on Thursdays at 14:15 via Zoom, and you are expected to participate in all sessions
  • Tutorials will provide an opportunity for Q&A about the lectures, followed by assistance in progressing with the tutorial sheets
  • Completed tutorial sheets should be submitted by midnight, before the next lecture, to: aurnhammer at coli.uni-saarland.de

The course will use Microsoft Teams:
HERE. All course lectures, links, downloads, and assignments will be distributed there.


Topic: This course will examine neurocomputational (or connectionist) models of human language processing. We will start from biological neurons and show how their processing behaviour can be modelled mathematically. The resulting artificial neurons will then be wired together to form artificial neural networks, and we will discuss how such networks can be applied to build neurocomputational models of language learning and language processing. It will be shown that such models effectively all share the same computational principles, and that any differences in their behaviour are driven by differences in the representations that they process and construct. Near the end of the course, we will use the accumulated knowledge to construct a psychologically plausible neurocomputational model of incremental (word-by-word) language comprehension that constructs a rich utterance representation, going beyond a simple syntactic derivation or semantic formula.
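
For orientation, the core computation of a single artificial neuron can be sketched as follows. This is an illustrative example only (not part of the course materials): the weights, bias, and logistic activation shown are generic assumptions rather than the specific formulation used in the lectures.

    import math

    def sigmoid(x):
        # Logistic activation: squashes the net input into the range (0, 1)
        return 1.0 / (1.0 + math.exp(-x))

    def unit_output(inputs, weights, bias):
        # Weighted sum of the inputs plus a bias, passed through the activation
        net = sum(w * i for w, i in zip(weights, inputs)) + bias
        return sigmoid(net)

    # Example: a unit with two inputs
    print(unit_output([1.0, 0.0], [0.8, -0.4], -0.2))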

The Basics:
  • Modelling neural information processing (Connectionism)
  • Two-layer neural networks and their properties (The Perceptron)
  • Multi-layer neural networks: Towards internal representations (Multi-layer Perceptrons)
  • Neural information encoding: Localist versus Distributed schemes (Representations)
Models of Language:
  • Modelling the acquisition of the English past-tense and reading aloud
  • Processing sequences: Simple Recurrent Networks (SRNs)
  • Modelling the acquisition of hierarchical syntactic knowledge
Advanced topics:
  • Richer representations for sentence understanding
  • Neurobiological plausibility of connectionism
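
As a small preview of the "Processing sequences: Simple Recurrent Networks (SRNs)" topic above: the defining idea of an Elman-style SRN is that the hidden layer receives the current input together with a copy of its own previous state. The sketch below is illustrative only; the layer sizes, tanh activation, and random weights are assumptions made for the example, and the output layer and training procedure are omitted.

    import numpy as np

    def srn_step(x, h_prev, W_xh, W_hh, b_h):
        # The new hidden state combines the current input with a copy of the
        # previous hidden state (the Elman-style "context" units)
        return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

    # Toy dimensions: 4-dimensional input (e.g. a localist word code), 3 hidden units
    rng = np.random.default_rng(0)
    W_xh = rng.normal(scale=0.5, size=(3, 4))
    W_hh = rng.normal(scale=0.5, size=(3, 3))
    b_h = np.zeros(3)

    h = np.zeros(3)                 # initial (empty) context
    for word in np.eye(4):          # a toy "sentence" of four one-hot words
        h = srn_step(word, h, W_xh, W_hh, b_h)
        print(h)
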
On-line Course Schedule (Materials are on the Teams page, here):
  • May 5 – Lecture 1: Intro to connectionism and the brain
  • May 7 – Tutorial 1: Intro to neural networks in MESH
  • May 12 – Lecture 2: Learning in Single-layer Networks
  • May 14 – Lecture 3: Training Multi-layer Networks
  • May 19 – Tutorial 2: Training Multi-layer Networks
  • May 21 – Holiday
  • May 26 – Lecture 4: Reading Aloud
  • May 28 – Tutorial 3: Reading Aloud
  • June 2 – Lecture 5: English Past Tense
  • June 4 – Lecture 6: Simple Recurrent Networks 1
  • June 9 – Tutorial 4: English Past Tense
  • June 11 – Holiday
  • June 16 – Lecture 7: Simple Recurrent Networks 2
  • June 18 – Tutorial 5: Simple Recurrent Networks
  • June 23 – Lecture 8: SRN State of the Art (Christoph)
  • June 25 – Lecture 9: Modeling the Electrophysiology of Language Comprehension (Harm)
  • June 30 – Lecture 10: Situation Modeling using Microworlds (Harm)
  • July 2 – Tutorial 6: Expectation-based Comprehension 1 (Harm)
  • July 7 – Lecture 11: Surprisal and the Electrophysiology of Language Comprehension (Harm)
  • July 9 – Tutorial 7: Expectation-based Comprehension 2 (Harm)
  • July 14 – Course summary, Q&A
  • July 21 – Exam: 2-4pm, Conference Room, Building C7.4


Requirements: While the course does not involve programming, students should be comfortable with basic concepts of linear algebra.

Literature:

P. McLeod, K. Plunkett and E. T. Rolls (1998). Introduction to Connectionist Modelling of Cognitive Processes. Oxford University Press. Chapters: 1-5, 7, 9.
K. Plunkett and J. Elman (1997). Exercises in Rethinking Innateness: A Handbook for Connectionist Simulations. MIT Press. Chapters: 1-8, 11, 12.
J. Elman (1990). Finding Structure in Time. Cognitive Science, 14: 179-211.
J. Elman (1991). Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning, 7: 195-225.


Additional readings:

N. Chater and M. Christiansen (1999). Connectionism and natural language processing. Chapter 8 of Garrod and Pickering (eds.): Language Processing. Psychology Press.
M. Christiansen and N. Chater (1999). Connectionist Natural Language Processing: The State of the Art. Cognitive Science, 23(4): 417-437.
J. Elman et al. (1996). Chapter 2: Why Connectionism? In: Rethinking Innateness. MIT Press.
J. Elman (1993). Learning and development in neural networks: The importance of starting small. Cognition, 48: 71-99.
M. Seidenberg and M. MacDonald (1999). A Probabilistic Constraints Approach to Language Acquisition and Processing. Cognitive Science, 23(4): 569-588.
M. Steedman (1999). Connectionist Sentence Processing in Perspective. Cognitive Science, 23(4): 615-634.