Several previous studies of student learning have demonstrated a strong advantage for human tutoring over a classroom control condition (Bloom, 1984; Cohen et al., 1982). One prominent component of effective human tutoring is undoubtedly the collaborative dialogue between student and tutor (Fox, 1993; Merrill et al., 1992; Graesser et al., 1995). These results have spawned an optimistic view toward building effective tutorial dialogue systems, and many current systems have been evaluated successfully with students (Ashley, Desai & Levine, 2002; Evens & Michael, in press; Graesser et al., 1999; Heffernan & Koedinger, 2002; Rosé et al., 2001). Nevertheless, none has yet demonstrated conclusively that tutorial dialogue systems provide a more effective or efficient means of instruction than an otherwise equivalent, purely text-based approach. Furthermore, it has recently been shown that even human tutoring does not always lead to significantly more student learning than a reading control in which the same information is presented to students in a non-interactive form (Rosé et al., 2003). These results underscore the importance of identifying exactly what separates effective from ineffective tutorial dialogue, so that we can build tutorial dialogue systems that consistently outperform non-interactive alternatives. In this talk I will present an overview of results from an evaluation of a state-of-the-art tutorial dialogue system, together with an analysis of a corpus of human tutoring dialogues in the same domain. This analysis suggests that effective tutoring places greater emphasis on tutor questions that elicit extended explanations from students, as well as on focused, explicit negative feedback for inaccurate or incomplete student explanations.