Use our tableaux-based model generation procedure to extend models like the one presented in the example putting1. An example run could look like this:
?- extendModel(5,10).
before: [bird(tweety), siamesecat(mutzi), woman(mary), man(john), man(miles), love(miles, mary), owner(mary), of(mary, mutzi), love(miles, tweety), love(john, mary), walk(mary), therapist(mary), eat(mutzi, tweety)]
> john loves every siamese cat.
after: [eat(mutzi, tweety), therapist(mary), walk(mary), love(john, mary), love(miles, tweety), of(mary, mutzi), owner(mary), love(miles, mary), man(miles), ~siamesecat(miles), man(john), woman(mary), ~siamesecat(mary), siamesecat(mutzi), love(john, mutzi), ~siamesecat(john), bird(tweety), ~siamesecat(tweety), ~siamesecat(*)]
This example also gives you an impression of what direct, uncontrolled model generation does. First of all, if a speaker utters "John loves every siamese cat." in the situation described by our input model, one thing she probably wants to communicate is that John loves Mutzi. But our system has also generated lots of other facts, such as ~siamesecat(john), that are not contradictory (so far), although they probably weren't intended by the speaker. Moreover, the system has blindly chosen one of many possible extensions of the input model: another possible extension would, for instance, contain siamesecat(john) and love(john,john) instead of ~siamesecat(john). Now what exactly makes the extension of mental models in human language understanding so much more focussed than what our implementation does? Do you have any ideas (speculate, you don't have to implement...)?
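To make the branching concrete, here is a minimal Prolog sketch of how the universal formula opens two tableaux branches per individual. It is not the course's extendModel/2; the domain/1 fact, the extend/2 interface, and the helper predicates are assumptions made up for this illustration:

:- op(900, fy, ~).    % prefix negation, as written in the model listings above

% Hypothetical five-individual domain matching the example run.
domain([tweety, mutzi, mary, john, miles]).

% extend(+ModelIn, -ModelOut): satisfy "john loves every siamese cat"
% by making one branch choice per individual. Backtracking enumerates
% the alternative extensions among which a blind procedure just takes
% the first.
extend(Model, Extended) :-
    domain(Individuals),
    extend_all(Individuals, Model, Extended).

extend_all([], Model, Model).
extend_all([D|Ds], Model0, Model) :-
    branch(D, Model0, Model1),
    extend_all(Ds, Model1, Model).

% For each individual D, all x (siamesecat(x) -> love(john,x)) opens
% two branches. Branch 1: D is not a siamese cat (consistent only if
% the model does not already say it is one).
branch(D, Model, [~siamesecat(D)|Model]) :-
    \+ member(siamesecat(D), Model).
% Branch 2: D is a siamese cat and john loves D (consistent only if
% the model does not already deny that D is one).
branch(D, Model0, Model) :-
    \+ member(~siamesecat(D), Model0),
    add_new(siamesecat(D), Model0, Model1),
    add_new(love(john, D), Model1, Model).

% add_new(+Fact, +ModelIn, -ModelOut): add Fact unless already present.
add_new(Fact, Model, Model) :- member(Fact, Model), !.
add_new(Fact, Model, [Fact|Model]).

Querying ?- extend([siamesecat(mutzi)], M). first yields an extension with love(john, mutzi) and ~siamesecat for all other individuals; pressing ; then enumerates the alternatives, including one in which siamesecat(john) and love(john, john) replace ~siamesecat(john), exactly the alternative extension mentioned above.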