The most central fact about natural language is that it has meaning. Semantics is the study of meaning. In formal semantics, we conduct this study in a formal manner. In computational semantics, we're additionally interested in using the results of our study when we implement programs that process natural language. This is what we will be concerned with in this course.

Course Schedule

This chapter gives a short overview of the story told in this course.

First-Order Logic

In this course, first-order logic is our preferred formalism for representing the meaning of natural language sentences. In the following section we'll review its syntax and semantics. We'll discuss vocabularies, first-order models and first-order languages, bring them together in a single framework using what we call the *satisfaction* definition, define the key logical concepts of *validity* and *logical consequence*, and conclude with a short discussion of *equality*.

Prolog and First-Order Logic

We would like to use first-order logic as a semantic representation formalism, and we want to deal with it in Prolog. So obviously the first thing we need is a way of representing in Prolog the basic entities that play a role in first-order logic. We've seen that these are first-order models, first-order formulae and vocabularies. The first part of this lecture discusses the Prolog representations we will choose for them. Then, we turn to the notion of truth in a model. We approach this notion in Prolog by implementing a simple first-order model checker (or semantic evaluator). The model checker takes a first-order formula and an (exact) first-order model as input and checks whether the formula is true in the model.
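
To make the idea concrete, here is a minimal sketch of such a checker. All names (`fact/2`, `satisfy/2`, `model1`) are illustrative, not the code developed in the course: a model is a collection of Prolog facts, a formula is an ordinary Prolog term, and the checker recurses over the formula's structure.

```prolog
% An example "model": the domain is implicit in the facts.
fact(model1, love(john, mary)).
fact(model1, woman(mary)).
fact(model1, man(john)).

% satisfy(+Model, +Formula): Formula is true in Model.
satisfy(Model, and(F, G))  :- satisfy(Model, F), satisfy(Model, G).
satisfy(Model, or(F, _))   :- satisfy(Model, F).
satisfy(Model, or(_, G))   :- satisfy(Model, G).
satisfy(Model, not(F))     :- \+ satisfy(Model, F).
satisfy(Model, some(X, F)) :- entity(Model, X), satisfy(Model, F).
satisfy(Model, Atom)       :- fact(Model, Atom).   % atomic predication

% An entity is anything that occurs as an argument of some fact.
entity(Model, X) :- fact(Model, Fact), Fact =.. [_|Args], member(X, Args).
```

For instance, `?- satisfy(model1, some(X, love(X, mary))).` succeeds with `X = john`.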

Lambda Calculus

Now that we've learned something about first-order logic and know how to work with it in Prolog, it is time to have a look at the first major issue of this course: *How can we automate the process of associating semantic representations with natural language expressions?* In the following lecture we'll explore this issue concretely: first we'll try to write a simple Prolog program that performs this task - in a limited way, though. We'll observe where it commits errors, and why. This experiment will lead us naturally to formulating a version of λ-calculus. λ-calculus is a tool that allows us to control the process of making substitutions. With its help, we will be able to describe concisely how semantic representations should be built. It is one of the main tools used in this course, and by the end of the lecture we will have seen why it is so useful to computational semantics.
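
To give a first taste (with illustrative names; the course develops its own machinery), λ-abstraction and application can be encoded as Prolog terms, and β-reduction performed by unifying the bound variable with the argument:

```prolog
% lam(X, Body) encodes λX.Body; app(F, A) encodes applying F to A.
% beta_reduce/2 performs one beta-reduction step: unifying the bound
% variable with the argument instantiates it throughout the body.
beta_reduce(app(lam(X, Body), Arg), Body) :- X = Arg.
```

With the lexical entry `lam(X, walk(X))` for *walks*, the query `?- beta_reduce(app(lam(X, walk(X)), john), F).` yields `F = walk(john)`.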

Towards a Modular Architecture

In this chapter, we will re-package our grammar and lexicon such that we arrive at a modular framework for semantics construction. Later on, we will be experimenting with various semantic construction techniques differing from λ-calculus. Incorporating these changes, and keeping track of what is going on, requires a disciplined approach towards grammar design. So we will take the opportunity to get to know some basic principles of software engineering along the way. That is, we will re-structure our program such that it is:

- Modular: Each component should have a clear role to play and a clean interface with the other components.

- Extensible: The grammar should be easy to upgrade, should the need arise.

- Reusable: We should be able to reuse a significant portion of the grammar, even when we change the underlying representation language.

Scope and Underspecification

In the previous lecture, we saw how to compute semantic representations for simple sentences. The formal tool we used for this purpose was λ-calculus, and the linguistic theory we followed was Montague Semantics. Now of course Montague Semantics does not cover all semantic phenomena there are; otherwise semanticists would be out of jobs by now. The good news, however, is that the insights into the structure of semantic representations that Montague gained are so fundamental that many modern semantic theories still uphold Montague Semantics when dealing with simple sentences. Such theories typically come with extensions to the formal framework that allow them the necessary flexibility. In this lecture, we will investigate the classical case in which Montague Semantics fails to compute the correct meaning(s): *scope ambiguities*, a certain kind of semantic ambiguity. In the first part, we talk about what scope ambiguities are and why the mechanisms we know so far aren't powerful enough to compute them. In the second part of the lecture, we learn about a rather modern approach to dealing with scope, based on *underspecification*. Finally, we look at the computational problems connected to this approach and show how to solve them - first in an abstract way and then (in the next chapter) by means of a Prolog implementation.

Constraint Solving

In this section we show how to extract the semantic representations of the various readings of a sentence from its constraint graph, how to actually implement constraints and an enumeration procedure, and how to use our semantics construction framework to derive constraints from the syntactic analysis of an NL sentence.

From Constraints to Formulae: Now we know how to describe the possible readings of a scope ambiguity by means of a constraint graph. The next thing we need to find out is how we can extract the semantic representations of the readings from the graph. This is going to be the topic of the first part of this section. Later we will show you how to actually implement constraints and an enumeration procedure. Last but not least, you will see how we can use our semantics construction framework to derive constraints from the syntactic analysis of an NL sentence.
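
As a toy illustration of enumerating readings (a drastically simplified stand-in for the constraint solver; all names here are made up for the example), consider "Every man loves a woman": each quantifier is a fragment that wraps a scope, and permuting the order in which the fragments are plugged in around the core `love(X, Y)` enumerates both readings.

```prolog
:- use_module(library(lists)).  % permutation/2

% Quantifier "fragments": each wraps a given scope S in its quantifier.
% The pair X-Y carries the variables shared with the core.
wrap(every_man, X-_, S, forall(X, imp(man(X), S))).
wrap(a_woman,   _-Y, S, some(Y, and(woman(Y), S))).

% A reading plugs the core love(X, Y) into the fragments in some order;
% permuting the order enumerates both scopings.
reading(Formula) :-
    permutation([every_man, a_woman], [Outer, Inner]),
    Vars = X-Y,
    wrap(Inner, Vars, love(X, Y), Inner1),
    wrap(Outer, Vars, Inner1, Formula).
```

Backtracking into `reading/1` yields first the ∀∃ reading `forall(X, imp(man(X), some(Y, and(woman(Y), love(X, Y)))))` and then the ∃∀ reading.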

Inference in Computational Semantics

Up to now we have seen various ways to construct logical formulae as meaning representations for sentences. But we don't yet know what to do further with such formulae. We will now learn how to do useful work with meaning representations like the ones we saw being constructed in the last lectures. If we utter a sentence, we convey information. One way of exploiting this information is to find out what follows from the sentence. The parallel task on the level of meaning representations is that of inference from the formula for that sentence. So it would be a great step forward if we could automate the process of drawing inferences from such formulae. For this purpose, one uses calculi known from theoretical logic. The method that we will discuss at length in this chapter is that of semantic tableaux. We start by discussing inference for a propositional logic (» The Logic PLNQ), leaving the more complex issue of first-order inference for later. In the next chapter, we will see that the Prolog implementation of propositional tableaux is straightforward.
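
To preview the idea (a bare-bones sketch with invented predicate names, not the implementation of the next chapter): a tableau branch can be represented as the list of literals collected so far, and a formula is satisfiable iff some branch can be fully expanded without a clash.

```prolog
% sat(+F): F is satisfiable. Connectives: and/2, or/2, not/1;
% propositional variables are Prolog atoms.
sat(F) :- tab([F], []).

% tab(+ToDo, +Literals): expand the remaining formulae on this branch.
tab([], _).
tab([and(F, G)|Rest], Lits)      :- tab([F, G|Rest], Lits).
tab([or(F, G)|Rest], Lits)       :- ( tab([F|Rest], Lits)     % branch
                                    ; tab([G|Rest], Lits) ).
tab([not(and(F, G))|Rest], Lits) :- tab([or(not(F), not(G))|Rest], Lits).
tab([not(or(F, G))|Rest], Lits)  :- tab([and(not(F), not(G))|Rest], Lits).
tab([not(not(F))|Rest], Lits)    :- tab([F|Rest], Lits).
tab([Lit|Rest], Lits) :-          % a literal: keep it unless it clashes
    literal(Lit),
    \+ clashes(Lit, Lits),
    tab(Rest, [Lit|Lits]).

literal(not(A)) :- atom(A).
literal(A)      :- atom(A).

clashes(not(A), Lits) :- member(A, Lits).
clashes(A, Lits)      :- member(not(A), Lits).

% Theorem proving: F is valid iff its negation is unsatisfiable.
valid(F) :- \+ sat(not(F)).
```

For example, `sat(and(p, not(p)))` fails (every branch closes), while `valid(or(p, not(p)))` succeeds.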

Tableaux Implemented

In this chapter, we present an implementation of the tableaux algorithm introduced in the last chapter.

[Sidetrack] Model Generation

In this chapter, we discuss a special inference technique called *model generation*. First we discuss what model generation is, why it is important for semantic processing and why tableaux are well suited for model generation. Then we turn to our implementation of propositional tableaux and see how we can turn it into a model generation system.

First-Order Inference

In this chapter we will extend our tableaux calculus to first-order logic. As we shall see, there are good reasons to do so: the boost in expressivity that comes with the move from propositional to first-order logic allows us to cover a lot of interesting semantic phenomena that we couldn't handle before. But at the same time, this boost has certain less pleasant consequences - theoretically as well as practically. We lose the finite model property (» Finite Model Property) and decidability (» Decidability) of satisfiability and theorem proving. This means that we have to take heavy measures to ensure termination in any implementation of a first-order inference system. In the first part of this chapter, we will further motivate the extension of our tableaux calculus and give a theoretical overview of how the calculus has to be changed. In the second part, we will show how we handle these modifications (and the related problems) in practice in our implementation. Throughout this chapter, we will focus on model generation. Although the extended tableaux calculus can readily be used for theorem proving, we will not discuss this issue in depth here. The reason is that talking about first-order theorem proving immediately gives rise to a discussion of efficiency. Indeed, there is a whole literature on tuning tableaux calculi for efficient theorem proving. But this goes beyond what we want to do in this course.

Discourse Representation Theory

So far in this course, we have only been concerned with the semantic interpretation of single sentences. When we look at discourse, i.e. sequences of sentences, interesting challenges arise that go beyond the tools and techniques for computational semantics we have developed up till now. One of these challenges is interpreting pronouns - words like he, she and it which indirectly refer to objects. In this chapter we will introduce Discourse Representation Theory (DRT for short) and show how one can build semantic representations of texts and develop algorithms for resolving pronouns to their textual antecedents.
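
To give a flavour of the representations involved (an illustrative encoding, not the course's actual DRT code): a Discourse Representation Structure can be a term `drs(Referents, Conditions)`; merging two DRSs, as happens when a text is processed sentence by sentence, appends both lists, and resolving a pronoun means picking an accessible discourse referent.

```prolog
% merge_drs(+D1, +D2, -D): merge two DRSs into one.
merge_drs(drs(R1, C1), drs(R2, C2), drs(R, C)) :-
    append(R1, R2, R),
    append(C1, C2, C).

% resolve_pronoun(+DRS, ?X): resolve a pronoun to an accessible
% referent of the DRS (here simply: any referent in it).
resolve_pronoun(drs(Refs, _), X) :- member(X, Refs).
```

For "A man walks. He smokes.", merging `drs([X], [man(X), walk(X)])` with `drs([], [smoke(Y)])` and then resolving `Y` against the merged DRS identifies `Y` with `X`, yielding `drs([X], [man(X), walk(X), smoke(X)])`.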

The Proof of the Pudding is in the Eating

At the end of the course, you can review what you have learned by plugging together some of the modules developed earlier. What you will have to do is program some driver and auxiliary predicates. The code will not be more than one or two pages. Since this is the assignment formally documenting your work in this course, please add documentation of about two pages. Besides the description of your code, it should contain the input formulae, natural language sentences, etc. that you used to test your programs.

Common Predicates

Description of important helper predicates often used in the course.

Code Index
