Language and Computation
INDUCTIVE LANGUAGE LEARNING
Introductory course

ANTAL VAN DEN BOSCH

ILK / Computational Linguistics, Tilburg University

Second week
Antal.vdnBosch@kub.nl
Course description

Inductive language learning (ILL) (a.k.a. Machine Learning of Natural Language, MLNL) investigates the efficacy of learning language tasks with inductive learning algorithms. The course is aimed at providing introductions into the following aspects of ILL:

  • historical roots of ILL
  • Chomskyan linguistics contra ILL
  • integrating disciplines: machine-learning and ILL
  • methods and techniques
  • overview of empirical studies in ILL

The roots of ILL can be traced back to De Saussure (1916) and Bloomfield (1933): language tasks can be learned and performed by employing analogy and induction on relations between language elements. Chomsky strongly criticised the bluntness of the analogy/induction approach, capitalising on its inability to capture relations involving meaning. In ILL, the pre-Chomskyan ideas on analogy and induction are implemented on present-day computer technology, using general-purpose inductive-learning tools developed in machine learning to explore the range of language tasks that can be learned successfully. After giving the historical background and an introduction to supervised inductive machine learning, the course will provide an overview of methods, techniques, and empirical results showing that ILL is successful in morpho-phonology, and (more surprisingly) successful in several higher-level language tasks (e.g., POS tagging, PP-attachment).
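The idea of learning a language task by analogy on stored instances can be made concrete in a few lines. The following is a minimal sketch, not part of the course material: a 1-nearest-neighbour classifier that predicts the past-tense suffix of an English verb from its final letters. The data, feature encoding, and function names are invented here purely for illustration.

```python
def features(verb):
    """Encode a verb as its last three letters (padded with '_')."""
    padded = ("___" + verb)[-3:]
    return tuple(padded)

def overlap(a, b):
    """Count matching feature positions: a crude similarity metric."""
    return sum(1 for x, y in zip(a, b) if x == y)

def train(examples):
    """Inductive 'training' here is simply storing labelled instances."""
    return [(features(verb), label) for verb, label in examples]

def classify(memory, verb):
    """Label a new verb by analogy to its most similar stored instance."""
    target = features(verb)
    return max(memory, key=lambda item: overlap(item[0], target))[1]

# Toy training data: verb -> past-tense suffix class (illustrative only).
data = [("walk", "-ed"), ("play", "-ed"), ("talk", "-ed"),
        ("want", "-ed"), ("stay", "-ed"), ("bake", "-d")]

memory = train(data)
print(classify(memory, "stalk"))  # generalises by analogy to "walk"/"talk"
```

The novel verb "stalk" is classified correctly because it overlaps in its final letters with stored instances such as "walk" and "talk"; no rule is ever formulated, which is the gist of the analogy/induction position sketched above.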

Prerequisites
The course will assume novice knowledge of machine learning at most, and will partly build on previous ESSLLI contributions on machine learning of natural language (MLNL) by Walter Daelemans (Tilburg University), and the ESSLLI-97 course on Statistical Methods in CL by Brigitte Krenn and Christer Samuelsson.
Literature

ILL is a relatively young area lacking, as yet, a critical amount of standard texts. Historical background can be found in De Saussure (1916) and Bloomfield (1933). Recent books of relevance: Skousen (1989); Charniak (1993); Wermter, Riloff, and Scheler (1996). A good starting point for machine learning is Shavlik and Dietterich (1990). Recent books on machine learning: Langley (1996) and Mitchell (1997).

  • Bloomfield, L. (1933). Language. New York: Holt, Rinehart and Winston.
  • Charniak, E. (1993). Statistical language learning. Cambridge, MA: MIT Press/Bradford Books.
  • De Saussure, F. (1916). Cours de linguistique générale. Paris: Payot. Edited posthumously by C. Bally and A. Riedlinger.
  • Langley, P. (1996). Elements of machine learning. San Mateo, CA: Morgan Kaufmann.
  • Mitchell, T. (1997). Machine learning. New York, NY: McGraw Hill.
  • Shavlik, J. W. and Dietterich, T. G. (eds.) (1990). Readings in Machine Learning. San Mateo, CA: Morgan Kaufmann.
  • Skousen, R. (1989). Analogical modeling of language. Dordrecht: Kluwer Academic Publishers.
  • Wermter, S., Riloff, E., and Scheler, G. (eds.) (1996). Connectionist, statistical and symbolic approaches to learning for natural language processing. Berlin: Springer-Verlag.

For further information on the course please check this page:

http://ilk.kub.nl/~antalb/esslli
