Dialogue Management: Questions for Session 4
-
I understood that the idea of the agent's obligations comes from
observing human behavior, which is supposed to be influenced by the
social context. Do beliefs, goals and intentions share this context? Do
the agent and the user have the same beliefs, goals, intentions and the
same obligations? Is it possible that they have the same goals but
different obligations?
- Discourse Obligations in Dialogue Processing:
To manage a dialogue efficiently, TRAINS uses "discourse obligations".
When such obligations are recognized, they are gathered on a stack and are
later turned into "intentions" to communicate something back to the user.
If an unsatisfied intention is dropped, the obligation that led to the
intention is put back onto the stack and the procedure continues.
My question: Isn't it possible that the program gets stuck, i.e. runs into an
endless loop, by resolving obligations into unsatisfied intentions
and vice versa? And how does the program, if at all, recover from such a situation?
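To make the worry concrete, here is a minimal sketch of such an obligation/intention cycle (my own illustration in Python, not the TRAINS implementation; Obligation, try_to_satisfy and the attempts bound are all hypothetical). Without some bound on retries, the loop below would never terminate:

    class Obligation:
        def __init__(self, content):
            self.content = content
            self.attempts = 0

    def try_to_satisfy(intention):
        # Placeholder: a real system would plan an utterance here and report
        # whether it discharges the obligation.  Always failing exposes the
        # worst case the question is about.
        return False

    def dialogue_step(obligation_stack, max_attempts=3):
        while obligation_stack:
            ob = obligation_stack.pop()
            intention = ("communicate", ob.content)    # obligation -> intention
            if try_to_satisfy(intention):
                continue                               # obligation discharged
            ob.attempts += 1
            if ob.attempts >= max_attempts:
                print("giving up on:", ob.content)     # one way to break the cycle
                continue
            obligation_stack.append(ob)                # unsatisfied -> back on the stack
            # Without the attempts bound this loop never terminates when
            # try_to_satisfy keeps failing -- exactly the "stuck" situation asked about.

    dialogue_step([Obligation("answer the user's question")])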
- TRAINS-95: Towards a Mixed-Initiative Planning Assistant
The authors believe that humans are guided in dialogues by special principles which
are difficult or impossible to quantify precisely. With that assumption they
"justify" their thesis about mixed-initiative planning. I agree with them only
insofar as the machine's abilities are not completely restricted to a human
kind of interaction at the cost of the speed of the problem-solving process.
For example, the system's ability to find routes between start and destination was
restricted artificially to routes with at most 4 (?) intermediate stations (see the
sketch after this question). This adds interaction to the discourse that is not really
needed and even leads to suboptimal proposals by the system (see figures 3 and 4),
which force it to criticize itself.
I know that the authors, when implementing this, want to simulate restrictions
imposed by their so-called "impossible to quantify" principles. But shouldn't the
system rather ask for the critical factors than assume them?
I mean, in terms of artificial intelligence, shouldn't we rather think about how
the principles mentioned above can be simulated, instead of adapting our machines
to human behaviour?
So, isn't the avoidance of teaching the system human principles, simulating
human behaviour instead in the hope that with more interaction we bypass these
principles, a kind of indolence? And if not, what are the proofs or examples that
these principles can never be taught to machines?
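To illustrate what the artificial restriction mentioned above amounts to, here is a small sketch (my own, with a hypothetical toy map and function name, not code from TRAINS-95): a depth-limited route search that simply refuses any route with more than a fixed number of intermediate stations, so a longer but perfectly valid route can never be proposed and extra clarification dialogue becomes necessary.

    def find_routes(graph, start, goal, max_intermediate=4):
        # Return all cycle-free routes from start to goal that pass through
        # at most max_intermediate stations in between.
        routes = []

        def extend(path):
            city = path[-1]
            if city == goal:
                routes.append(path)
                return
            if len(path) - 1 > max_intermediate:   # too many intermediate stops
                return
            for nxt in graph.get(city, []):
                if nxt not in path:                # avoid cycles
                    extend(path + [nxt])

        extend([start])
        return routes

    # Toy map: the only route from A to G has five intermediate stations.
    toy_map = {'A': ['B'], 'B': ['C'], 'C': ['D'], 'D': ['E'], 'E': ['F'], 'F': ['G']}
    print(find_routes(toy_map, 'A', 'G'))                       # [] -- forces extra dialogue
    print(find_routes(toy_map, 'A', 'G', max_intermediate=5))   # finds the route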
-
What does TRAINS do when I tell it to go from A to B via C, where the stations
lie on a line in the order A-B-C (so the normal route from A to B would not pass through C)?
Ivana Kruijff-Korbayova
Last modified: Mon Jun 3 13:58:28 CEST 2002