 
 
 
 
 
   
One of the most important tasks in dialogue management is deliberation: deciding what to do/say next, i.e., selecting the next dialogue move of the system. In an FST model, the choices are coded in the state transitions. In an ISU-based model, the decision is based on inspecting (particular parts of) the IS. Two main factors play a role in deciding what to do/say next:
"It turns out that, if you want to coordinate and communicate with other agents, it is extremely useful - and possibly even essential - for you and those other agents to be planners. There are two reasons for this. First, coordination between agents seems possible only because they can count on one another behaving in more or less stable ways, such as would result from an agent's commitment to its plans. ... Second, ..., communication is greatly facilitated by the agents reasoning about one another's plans." [Pollack1992]
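To make the earlier contrast between FST-based and ISU-based move selection concrete, the following is a minimal sketch (in Python, with an invented travel-booking domain and a made-up information-state layout; it does not reproduce any of the systems cited here): an FST codes the choices directly in a transition table, whereas an ISU-based manager selects the next move with rules that inspect the information state.

    # FST-style: choices are hard-coded in the state transitions.
    FST = {
        ("ask_destination", "answer"): ("ask_date", "ask(date)"),
        ("ask_date", "answer"):        ("confirm",  "confirm(booking)"),
    }

    def fst_next_move(state, user_move):
        # Look up the next state and the next system move in the transition table.
        return FST[(state, user_move)]

    # ISU-style: the next move is chosen by rules inspecting (parts of) the IS.
    def isu_next_move(IS):
        if IS["shared"]["qud"]:
            # A question is still under discussion: answer it first.
            return ("answer", IS["shared"]["qud"][0])
        for field in IS["private"]["plan"]:
            if field not in IS["shared"]["commitments"]:
                # A task parameter is still unresolved: ask for it.
                return ("ask", field)
        # Everything resolved: confirm the task.
        return ("confirm", IS["private"]["task"])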
For simple tasks, planning can be handled by statically encoded plans: for each task the system is prepared to handle, there is a corresponding plan (or several), typically somehow relating domain actions and communicative actions. The system loads a plan corresponding to a task and tries to carry out the actions. Such an approach is used for instance in GoDiS [Traum and Larsson2003] or in CLT's ISU-based travel information system.
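A statically encoded plan library can be pictured roughly as follows. The sketch is hypothetical (the task, the action names and the executor are invented, and it is not the actual GoDiS representation): each task maps to a fixed sequence of communicative and domain actions, and the system loads the plan for the current task and tries to carry out its actions.

    PLAN_LIBRARY = {
        "book_trip": [
            ("findout", "destination"),   # communicative action: ask the user
            ("findout", "date"),
            ("query_db", "timetable"),    # domain action: consult the back-end
            ("inform", "itinerary"),      # communicative action: report the result
        ],
    }

    def run_task(task, execute):
        # Load the plan corresponding to the task and carry out its actions in order.
        for action in PLAN_LIBRARY[task]:
            execute(action)

    # e.g. run_task("book_trip", print) simply prints each action in sequence.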
Complex tasks, however, may require more sophistication, and ultimately, task planning requires problem-solving capability. Approaches to dialogue modelling employing AI planning techniques are often referred to as belief-desire-intention (BDI) models [Allen and Perrault1980].
An interesting issue concerning problem-solving is the separation of generic from task-specific problem-solving. More sophisticated planning, pertaining to both communicative actions and domain actions, is employed in the TRAINS/TRIPS decision support systems (http://www.cs.rochester.edu/research/cisd/projects/) and in BEETLE, an electronic circuitry tutorial system (http://www.cogsci.ed.ac.uk/~jmoore/tutoring/index.html).
Some degree of planning, and of adapting the plans carried out by the system to the user, is also a precondition for a system allowing mixed initiative and/or being collaborative. For a system to take initiative, or even to collaborate, it needs to come up with possible solutions and be able to evaluate which ones are better (according to some criteria, which again can either be fixed or adapted to a user and/or situation).
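The evaluation step can be sketched under the simplifying assumption that the criteria are numeric and weighted (all names, fields and weights below are invented for illustration): the system scores each candidate solution and picks the best one; adapting the weights is one simple way of adapting to a user or situation.

    def best_solution(candidates, criteria):
        # Score each candidate against weighted criteria; return the highest-scoring one.
        def score(solution):
            return sum(weight * criterion(solution) for criterion, weight in criteria)
        return max(candidates, key=score)

    criteria = [
        (lambda s: -s["price"],    1.0),   # cheaper is better
        (lambda s: -s["duration"], 0.5),   # shorter is better, weighted less
    ]
    candidates = [{"price": 120, "duration": 3.5}, {"price": 90, "duration": 6.0}]
    print(best_solution(candidates, criteria))  # -> {'price': 90, 'duration': 6.0}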
[Blaylock et al. 2003] discuss the modelling of communicative intentions with collaborative problem solving in the TRAINS/TRIPS projects (http://www.cs.rochester.edu/research/cisd/projects/). They characterize previous work as falling roughly into two areas: models of collaborative planning, which look at how agents build joint plans, and models of dialogue.
Reading: [Blaylock et al. 2003]
 
 
 
 
