First-Order Inference

Abstract:
In this chapter we extend our tableau calculus to first-order logic. As we shall see, there are good reasons to do so: the boost in expressivity that comes with the move from propositional to first-order logic allows us to cover many interesting semantic phenomena that we couldn't handle before. But at the same time, this boost has certain less pleasant consequences, theoretically as well as practically. We lose the finite model property (???» Finite Model Property) and the decidability (???» Decidability) of satisfiability and theorem proving. This means that we have to take heavy measures to ensure termination in any implementation of a first-order inference system.

In the first part of this chapter, we further motivate the extension of our tableau calculus and give a theoretical overview of how the calculus has to be changed. In the second part, we show how we handle these modifications (and the related problems) in our implementation.

Throughout this chapter, we focus on model generation. Although the extended tableau calculus can readily be used for theorem proving, we will not discuss this topic in depth here. The reason is that talking about first-order theorem proving immediately gives rise to a discussion of efficiency. Indeed, there is a whole literature on tuning tableau calculi for efficient theorem proving, but this goes beyond what we want to do in this course.

Table of Contents

The Step to First Order
In this section, we will show that first order inference allows us to do a lot of interesting things in semantic processing, and we will see how we have to extend our tableau calculus for this new task.

Implementing First-Order Tableaux
As you will probably expect from reading the theoretical section on first-order tableaux, the implementation of the new quantifier rules demands special care. Due to the undecidability of first-order logic, we cannot expect to be done simply by translating the rules to Prolog and letting the program run. In fact, we not only have to add the two new quantifier clauses to our tableau predicate tab/6 (formerly tabl/3). As you will see in a minute, we also have to replace our former base clause by four clauses controlling the instantiations of the universal quantifier. Last but not least, we have to include a mechanism that ensures that our program really halts on every input. But let's approach the matter systematically.
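To fix intuitions before we dive into the real code, here is a minimal sketch of what the two new quantifier clauses might look like. This is not the actual course implementation: the argument order of tab/6, the signed-formula terms t/1 and f/1, and the helper substitute/4 are assumptions made purely for illustration.

```prolog
% Sketch only: hypothetical clause shapes for the quantifier rules.
% Assumed arguments: tab(Branch, Tableau, Qmax, QDepth, Limit, Model).

% Existential case: instantiate the bound variable with a fresh
% constant (gensym/2 generates a new atom c1, c2, ...).
tab([t(some(X,F))|Branch], Tableau, Qmax, QDepth, Limit, Model) :-
    gensym(c, NewConst),
    substitute(NewConst, X, F, Instance),   % assumed helper: F[NewConst/X]
    tab([t(Instance)|Branch], Tableau, Qmax, QDepth, Limit, Model).

% Universal case: instantiate with a constant already known on the
% branch, decrement the QDepth bound so the program terminates, and
% keep the universal formula around for later re-instantiation.
tab([t(all(X,F))|Branch], Tableau, Qmax, QDepth, Limit, Model) :-
    QDepth > 0,
    NewQDepth is QDepth - 1,
    member(Const, Model),                   % pick a known individual
    substitute(Const, X, F, Instance),
    append(Branch, [t(all(X,F))], NewBranch),
    tab([t(Instance)|NewBranch], Tableau, Qmax, NewQDepth, Limit, Model).
```

The key design point survives any change of detail: the existential rule always terminates (it fires once, with a fresh constant), whereas the universal rule can fire again and again, which is why it alone carries the QDepth counter.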

Running First-Order Tableaux
We now provide driver predicates and give you a listing of all files you need in order to experiment with first-order theorem proving or model generation.
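By way of illustration, a session with the driver might look like the following. The exact operator syntax for formulas and the argument order of modGenWrite/3 are assumptions here; consult the listed files for the real notation.

```prolog
% Hypothetical call: generate and print a model for a simple
% first-order formula, with a QDepth bound of 10 on universal
% instantiations (formula syntax assumed for illustration).
?- modGenWrite(some(x, man(x)) & all(y, man(y) > mortal(y)), 10, Model).
```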

Model Generation with Quantifiers

Sidetrack: Derived Rules for the Existential Quantifier
As in the propositional case (» Sidetrack: Practical Enhancements for Tableaux), it is possible to design derived versions of the first-order rules. You may often want to construct tableaux directly for formulae that contain existential quantifiers, without pre-processing them into formulae containing negated universal quantifiers instead. You can use the derived rules given below for this purpose (see » Sidetrack: Practical Enhancements for Tableaux for the concept of a derived rule).
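For orientation, such derived rules take the standard signed-tableau form sketched below: the true existential introduces a fresh constant, while the false existential may be instantiated with any term already on the branch (mirroring, via the equivalence ∃x F ≡ ¬∀x ¬F, the behaviour of the universal rules).

```latex
\[
\frac{\mathcal{T}(\exists x\, F)}{\mathcal{T}(F[c/x])}\ (c\ \text{new to the branch})
\qquad
\frac{\mathcal{F}(\exists x\, F)}{\mathcal{F}(F[t/x])}\ (t\ \text{any term on the branch})
\]
```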

Project: Adding Equality to our Calculus
We will explain how to extend our tableaux calculus by equality, which is a very important relation in natural language. Your project will be to extend our implementation accordingly.


Exercise

  1. Run the implementation and perform some traces.
  2. Implement the existential rule for model generation (» A special Rule for Model Generation). [Hint: Mind where you put cuts. Look at the disjunctive rule.]
  3. [Theoretical]
    Given the way we use hbaseList/2 in the universal rule, strictly speaking we do not get the full Herbrand base of the branch at the time of the call: we only consider the literals on the branch, and hence can only see constants that occur in literals. Why is this enough?
  4. The way we control the universal rule isn't fully fair (which means that even if we use iterative deepening (» Restricting the Application of the Universal Rule), we won't truly approximate completeness with our implementation). To understand the problem, let's look at the input formula ∀x MAN(x) ∧ ((∀y∃z FATHER(z,y)) ∧ (∀u∃v MOTHER(v,u))) with sign 𝒯. In this example our implementation will use up any given QDepth for ∀x MAN(x) on the fathers generated from ∀y∃z FATHER(z,y). So it will never have any QDepth left to instantiate with mothers when it reaches ∀u∃v MOTHER(v,u). Try out the following call. It consists of two calls to modGenWrite/3, one with QDepth 10 and one with QDepth 20 (plus some additional output):

    Try it!

    Thus with such input formulae, the QDepth isn't distributed in a fair manner over new individuals coming from all conjuncts. Do you have an idea how to fix this? Can you implement your idea?
  5. [Theoretical]
    We say that an additional rule of a calculus is admissible if it doesn't make the calculus incorrect or incomplete. This exercise deals with the admissibility of the 𝒯(∃)mg rule. It is obvious that our tableau calculus doesn't become incorrect if we exchange 𝒯(∃) for 𝒯(∃)mg: we cannot close any tableau with 𝒯(∃)mg that we couldn't close with 𝒯(∃). The reason is that whenever a tableau remained open under the old 𝒯(∃) rule, the branch with the new constant remains open if we use 𝒯(∃)mg instead. But why does the calculus remain complete? In other words: why don't we have more tableaux that remain open if we use 𝒯(∃)mg instead of 𝒯(∃)?
  6. Give the (sub)tableaux that the derived rules for the existential quantifier given in » Sidetrack: Derived Rules for the Existential Quantifier abbreviate.