Scope and Underspecification
Abstract:
Montague Semantics does not cover all semantic phenomena there are. In this lecture, we will investigate the classical case in which Montague Semantics fails to compute the correct meaning(s): scope ambiguities, a certain kind of semantic ambiguity. In the first part, we talk about what scope ambiguities are and why the mechanisms we know so far aren't powerful enough to compute them. In the second part of the lecture, we learn about a rather modern approach to dealing with scope, based on underspecification. Finally, we look at the computational problems connected to this approach and show how to solve them.

In the previous lecture, we saw how to compute semantic representations for simple sentences. The formal tool we used for this purpose was the λ-calculus, and the linguistic theory we followed was Montague Semantics. Now of course Montague Semantics does not cover all semantic phenomena there are; otherwise semanticists would be out of jobs by now. The good news, however, is that Montague's insights into the structure of semantic representations are so fundamental that many modern semantic theories still follow Montague Semantics when dealing with simple sentences. Such theories typically extend the formal framework to gain the necessary flexibility.

In this lecture, we will investigate the classical case in which Montague Semantics fails to compute the correct meaning(s): scope ambiguities, a certain kind of semantic ambiguity. In the first part, we talk about what scope ambiguities are and why the mechanisms we know so far aren't powerful enough to compute them. In the second part of the lecture, we learn about a rather modern approach to dealing with scope, based on underspecification. Finally, we look at the computational problems connected to this approach and show how to solve them, first in an abstract way and then (in the next chapter) by means of a Prolog implementation.
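To recall the Montague-style construction from the previous lecture, here is a minimal Python sketch (not the course's Prolog implementation; the encoding and the names WALK, JOHN are our own illustrative choices). λ-abstractions become Python lambdas, application (@) becomes a function call, and β-reduction happens automatically when the call is evaluated:

```python
def const(name):
    """A non-logical constant; applying it just builds a formula string."""
    return lambda *args: f"{name}({','.join(args)})"

WALK = const("WALK")
JOHN = "JOHN"

# [[walks]] = λx.WALK(x)
walks = lambda x: WALK(x)

# Semantic construction for "John walks": [[walks]] @ [[John]]
sem = walks(JOHN)
print(sem)  # -> WALK(JOHN)
```

The point of the sketch is only that, for simple sentences, meaning assembly is nothing but function application plus β-reduction; this is exactly the machinery that will turn out to be too weak for scope ambiguities.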
Table of Contents
Scope Ambiguities
In this part of the lecture we will learn what scope ambiguities are and why such ambiguities constitute a problem for Montague-style semantic construction. We will then see some ways of dealing with these problems (more or less satisfactorily) by extending Montague's framework a little.

Underspecification
In the rest of this lecture, we will explore algorithms that do not enumerate all readings from a syntactic analysis, but instead derive just one underspecified description of them all. It will still be possible to efficiently extract all readings from the description, but we want to delay this enumeration step for as long as possible. At the same time, the description itself will be very compact (not much bigger than a single reading), and we will be able to compute a description from a syntactic analysis efficiently.
Exercises
Quantifiers and negation aren't the only scope-taking elements. Other candidates (and thus other sources of genuinely semantic ambiguity) include certain adverbs. For example, modal adverbs like possibly may interfere with the scope of determiners. Consider the sentence Possibly a dog is barking. What are the different readings of this sentence?
Convince yourself, by expanding the abbreviations and β-reducing, that the above expression really is the second reading.
Notice that we not only moved A@λw.WOMAN(w) in front of the Every, but tacitly also moved the adjacent @λy bit with it. Explain why this makes sense. It may help you to look at these colored representations of the two readings:
i. (Every@λv.MAN(v))@(λx.(A@λw.WOMAN(w))@(λy.LOVE(x,y)))
ii. (A@λw.WOMAN(w))@(λy.(Every@λv.MAN(v))@(λx.LOVE(x,y)))
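If you want to check the expansion mechanically, the following Python sketch encodes the standard abbreviations Every = λP.λQ.∀v(P(v) → Q(v)) and A = λP.λQ.∃w(P(w) ∧ Q(w)) as formula-building functions, so that β-reduction is again just function application (the string encoding and the choice of variable names are our own):

```python
Every = lambda P: lambda Q: f"∀v({P('v')} → {Q('v')})"
A     = lambda P: lambda Q: f"∃w({P('w')} ∧ {Q('w')})"
MAN   = lambda v: f"MAN({v})"
WOMAN = lambda w: f"WOMAN({w})"
LOVE  = lambda x, y: f"LOVE({x},{y})"

# Reading (i):  (Every@λv.MAN(v)) @ (λx.(A@λw.WOMAN(w)) @ (λy.LOVE(x,y)))
reading1 = Every(MAN)(lambda x: A(WOMAN)(lambda y: LOVE(x, y)))

# Reading (ii): (A@λw.WOMAN(w)) @ (λy.(Every@λv.MAN(v)) @ (λx.LOVE(x,y)))
reading2 = A(WOMAN)(lambda y: Every(MAN)(lambda x: LOVE(x, y)))

print(reading1)  # ∀v(MAN(v) → ∃w(WOMAN(w) ∧ LOVE(v,w)))
print(reading2)  # ∃w(WOMAN(w) ∧ ∀v(MAN(v) → LOVE(v,w)))
```

The two results make the scope difference concrete: in the first formula the existential is inside the scope of the universal, in the second it is the other way around.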
The sentence is syntactically unambiguous, that is, there is only one syntax tree for it. Draw this syntax tree.
Give natural language paraphrases for the first four readings (try to make them as unambiguous as possible).
Give a natural language paraphrase for the fifth reading and compare it to the paraphrase for the fourth one. Can you explain why both readings are equivalent by examining the corresponding formulas?
Do you see the difference (in meaning) between the two readings? Give a natural language paraphrase for each of the readings and try to think of contexts that would favour one or the other. Can you explain why the ordering of the representations of the two determiners A and Most makes a difference in this case (in contrast to the corresponding determiners in our Siamese cat example)?
Do you see what's problematic about this formula? Try to give a natural language paraphrase for it. What could have gone wrong in a semantic construction process that has led to this formula for our example sentence?
Write down (at least) one of the two readings as a "normal" λ-expression of the form (λx.WALK@x)@JOHN. Remember that you have to "invent" a variable for each lam! If you do not feel familiar with this notation yet, replace the @s by bracketing, e.g. (λx.WALK(x))(JOHN). Compare the way the transitive verb is combined with its arguments here with the way this was done in λ-calculus. Give a short comment on what you think is the main difference between the two approaches.
Write down a constraint graph that describes the five readings of the sentence Every owner of a Siamese cat loves a therapist.
Convince yourself that the constraint graph you've seen in the example really describes both of our λ-structures. Check each of the points in the above definition.