Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Diane J. Litman is active.

Publication


Featured research published by Diane J. Litman.


Meeting of the Association for Computational Linguistics | 1997

PARADISE: A Framework for Evaluating Spoken Dialogue Agents

Marilyn A. Walker; Diane J. Litman; Candace A. Kamm; Alicia Abella

This paper presents PARADISE (PARAdigm for DIalogue System Evaluation), a general framework for evaluating spoken dialogue agents. The framework decouples task requirements from an agent's dialogue behaviors, supports comparisons among dialogue strategies, enables the calculation of performance over subdialogues and whole dialogues, specifies the relative contribution of various factors to performance, and makes it possible to compare agents performing different tasks by normalizing for task complexity.
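The PARADISE framework combines normalized task success with weighted dialogue costs into a single performance score. A minimal sketch of that combination is below; the distributions and weights are invented for illustration (in PARADISE, the weights are fit by regression against user satisfaction ratings):

```python
import statistics

def z_norm(x, values):
    """Z-score normalize x against the distribution of observed values."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return (x - mean) / stdev

def performance(kappa, costs, kappa_dist, cost_dists, alpha, weights):
    """PARADISE-style performance: weighted, normalized task success
    (kappa) minus a weighted sum of normalized cost measures."""
    score = alpha * z_norm(kappa, kappa_dist)
    for cost, dist, w in zip(costs, cost_dists, weights):
        score -= w * z_norm(cost, dist)
    return score
```

Normalizing each factor by its z-score is what lets performance scores be compared across agents performing tasks of different complexity.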


Cognitive Science | 1987

A Plan Recognition Model for Subdialogues in Conversations

Diane J. Litman; James F. Allen

Previous plan-based models of dialogue understanding have been unable to account for many types of subdialogues present in naturally occurring conversations. One reason for this is that the models have not clearly differentiated between the various ways that an utterance can relate to a plan structure representing a topic. In this paper we present a plan-based theory that allows a wide variety of utterance-plan relationships. We introduce a set of discourse plans, each one corresponding to a particular way that an utterance can relate to a discourse topic, and distinguish such plans from the set of plans that are actually used to model the topics. By incorporating knowledge about discourse into a plan-based framework, we can account for a wide variety of subdialogues while maintaining the computational advantages of the plan-based approach.


Journal of Artificial Intelligence Research | 2002

Optimizing dialogue management with reinforcement learning: experiments with the NJFun system

Satinder P. Singh; Diane J. Litman; Michael J. Kearns; Marilyn A. Walker

Designing the dialogue policy of a spoken dialogue system involves many nontrivial choices. This paper presents a reinforcement learning approach for automatically optimizing a dialogue policy, which addresses the technical challenges in applying reinforcement learning to a working dialogue system with human users. We report on the design, construction and empirical evaluation of NJFun, an experimental spoken dialogue system that provides users with access to information about fun things to do in New Jersey. Our results show that by optimizing its performance via reinforcement learning, NJFun measurably improves system performance.
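The policy optimization described here can be illustrated with tabular Q-learning over a toy dialogue MDP. This is a hedged sketch, not NJFun's implementation: the state and action names are invented, and `simulate_turn` stands in for interaction with real users:

```python
import random

# Hypothetical action set: at each dialogue state the system picks an
# initiative strategy; rewards arrive from a simulated dialogue outcome.
ACTIONS = ["system_initiative", "user_initiative"]

def q_learning(simulate_turn, states, episodes=500, alpha=0.1, gamma=0.9, eps=0.2):
    """Tabular Q-learning over a small dialogue MDP.
    simulate_turn(state, action) -> (next_state_or_None, reward)."""
    q = {(s, a): 0.0 for s in states for a in ACTIONS}
    for _ in range(episodes):
        s = states[0]
        while s is not None:
            # Epsilon-greedy action selection.
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda x: q[(s, x)]))
            s2, r = simulate_turn(s, a)
            future = 0.0 if s2 is None else max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * future - q[(s, a)])
            s = s2
    return q
```

The learned Q-table then defines the optimized policy: in each state, take the action with the highest estimated value.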


Computational Linguistics | 1993

Empirical studies on the disambiguation of cue phrases

Julia Hirschberg; Diane J. Litman

Cue phrases are linguistic expressions such as now and well that function as explicit indicators of the structure of a discourse. For example, now may signal the beginning of a subtopic or a return to a previous topic, while well may mark subsequent material as a response to prior material, or as an explanatory comment. However, while cue phrases may convey discourse structure, each also has one or more alternate uses. While incidentally may be used sententially as an adverbial, for example, the discourse use initiates a digression. Although distinguishing discourse and sentential uses of cue phrases is critical to the interpretation and generation of discourse, the question of how speakers and hearers accomplish this disambiguation is rarely addressed. This paper reports results of empirical studies on discourse and sentential uses of cue phrases, in which both text-based and prosodic features were examined for disambiguating power. Based on these studies, it is proposed that discourse versus sentential usage may be distinguished by intonational features, specifically, pitch accent and prosodic phrasing. A prosodic model that characterizes these distinctions is identified. This model is associated with features identifiable from text analysis, including orthography and part of speech, to permit the application of the results of the prosodic analysis to the generation of appropriate intonational features for discourse and sentential uses of cue phrases in synthetic speech.
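A rule in the spirit of the prosodic model can be sketched as follows; the feature names and the exact conditions are illustrative simplifications, not the paper's full model:

```python
def cue_use(alone_in_phrase, phrase_initial, accent):
    """Illustrative disambiguation rule: a cue phrase alone in its
    intermediate phrase, or phrase-initial and deaccented or bearing an
    L* pitch accent, is classified as a discourse use; otherwise it is
    treated as sentential."""
    if alone_in_phrase:
        return "discourse"
    if phrase_initial and accent in ("deaccented", "L*"):
        return "discourse"
    return "sentential"
```

Such a rule could be driven by text-based proxies (orthography, position) when prosodic annotation is unavailable, which is the link the paper draws for speech synthesis.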


Natural Language Engineering | 2000

Towards developing general models of usability with PARADISE

Marilyn A. Walker; Candace A. Kamm; Diane J. Litman

The design of methods for performance evaluation is a major open research issue in the area of spoken language dialogue systems. This paper presents the PARADISE methodology for developing predictive models of spoken dialogue performance, and shows how to evaluate the predictive power and generalizability of such models. To illustrate the methodology, we develop a number of models for predicting system usability (as measured by user satisfaction), based on the application of PARADISE to experimental data from three different spoken dialogue systems. We then measure the extent to which the models generalize across different systems, different experimental conditions, and different user populations, by testing models trained on a subset of the corpus against a test set of dialogues. The results show that the models generalize well across the three systems, and are thus a first approximation towards a general performance model of system usability.


Meeting of the Association for Computational Linguistics | 2004

Predicting Student Emotions in Computer-Human Tutoring Dialogues

Diane J. Litman; Katherine Forbes-Riley

We examine the utility of speech and lexical features for predicting student emotions in computer-human spoken tutoring dialogues. We first annotate student turns for negative, neutral, positive and mixed emotions. We then extract acoustic-prosodic features from the speech signal, and lexical items from the transcribed or recognized speech. We compare the results of machine learning experiments using these features alone or in combination to predict various categorizations of the annotated student emotions. Our best results yield a 19-36% relative improvement in error reduction over a baseline. Finally, we compare our results with emotion prediction in human-human tutoring dialogues.
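The feature-combination step can be sketched with a nearest-centroid classifier over joint acoustic-prosodic and lexical feature vectors. This is a stand-in for the machine learners used in the paper, and the feature layout (e.g. mean pitch plus a lexical score) is invented:

```python
from collections import defaultdict
import math

def train_centroids(examples):
    """examples: list of (feature_vector, label) pairs, e.g.
    ([mean_pitch, lexical_negativity], "negative"). Computes one
    centroid (mean vector) per emotion label."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for vec, label in examples:
        if sums[label] is None:
            sums[label] = [0.0] * len(vec)
        sums[label] = [s + v for s, v in zip(sums[label], vec)]
        counts[label] += 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def predict(centroids, vec):
    """Assign the label whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda lab: math.dist(centroids[lab], vec))
```

Concatenating the speech-derived and lexical features into one vector, as here, is the simplest way to use the feature sets "in combination" rather than alone.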


Computer Speech & Language | 1998

Evaluating spoken dialogue agents with PARADISE: Two case studies

Marilyn A. Walker; Diane J. Litman; Candace A. Kamm; Alicia Abella

This paper presents PARADISE (PARAdigm for DIalogue System Evaluation), a general framework for evaluating and comparing the performance of spoken dialogue agents. The framework decouples task requirements from an agent's dialogue behaviours, supports comparisons among dialogue strategies, enables the calculation of performance over subdialogues and whole dialogues, specifies the relative contribution of various factors to performance, and makes it possible to compare agents performing different tasks by normalizing for task complexity. After presenting PARADISE, we illustrate its application to two different spoken dialogue agents. We show how to derive a performance function for each agent and how to generalize results across agents. We then show that once such a performance function has been derived, it can be used both for making predictions about future versions of an agent, and as feedback to the agent so that the agent can learn to optimize its behaviour based on its experiences with users over time.


User Modeling and User-adapted Interaction | 2002

Designing and Evaluating an Adaptive Spoken Dialogue System

Diane J. Litman; Shimei Pan

Spoken dialogue system performance can vary widely for different users, as well as for the same user during different dialogues. This paper presents the design and evaluation of an adaptive version of TOOT, a spoken dialogue system for retrieving online train schedules. Based on rules learned from a set of training dialogues, adaptive TOOT constructs a user model representing whether the user is having speech recognition problems as a particular dialogue progresses. Adaptive TOOT then automatically adapts its dialogue strategies based on this dynamically changing user model. An empirical evaluation of the system demonstrates the utility of the approach.
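The adaptation step can be sketched as a rule that maps the user model's estimate of recognition trouble to a more conservative dialogue strategy. The threshold and strategy names here are invented for illustration, not taken from TOOT:

```python
def adapt_strategy(misrecognized_turns, total_turns, threshold=0.3):
    """Rule-based adaptation sketch: when the estimated misrecognition
    rate is high, fall back to system initiative with explicit
    confirmation; otherwise allow user initiative with implicit
    confirmation."""
    rate = misrecognized_turns / max(total_turns, 1)
    if rate > threshold:
        return {"initiative": "system", "confirmation": "explicit"}
    return {"initiative": "user", "confirmation": "implicit"}
```

Because the user model is updated as the dialogue progresses, a rule like this can tighten or relax the strategy mid-dialogue rather than fixing it per user.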


Artificial Intelligence in Education | 2006

Spoken Versus Typed Human and Computer Dialogue Tutoring

Diane J. Litman; Carolyn Penstein Rosé; Katherine Forbes-Riley; Kurt VanLehn; Dumisizwe Bhembe; Scott Silliman

While human tutors typically interact with students using spoken dialogue, most computer dialogue tutors are text-based. We have conducted two experiments comparing typed and spoken tutoring dialogues, one in a human-human scenario, and another in a human-computer scenario. In both experiments, we compared spoken versus typed tutoring for learning gains and time on task, and also measured the correlations of learning gains with dialogue features. Our main results are that changing the modality from text to speech caused large differences in the learning gains, time and superficial dialogue characteristics of human tutoring, but for computer tutoring it made less difference.


Meeting of the Association for Computational Linguistics | 1995

Combining Multiple Knowledge Sources for Discourse Segmentation

Diane J. Litman; Rebecca J. Passonneau

We predict discourse segment boundaries from linguistic features of utterances, using a corpus of spoken narratives as data. We present two methods for developing segmentation algorithms from training data: hand tuning and machine learning. When multiple types of features are used, results approach human performance on an independent test set (both methods), and using cross-validation (machine learning).
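The hand-tuned combination of knowledge sources can be sketched as a weighted vote over binary linguistic features of an utterance. The feature names, weights, and threshold below are illustrative assumptions, not the paper's tuned values:

```python
def boundary_score(features, weights=None):
    """Combine binary knowledge-source features (e.g. a preceding pause,
    a cue phrase, absence of anaphoric reference) into a single score."""
    weights = weights or {"pause": 1.0, "cue_phrase": 1.0, "no_anaphora": 0.5}
    return sum(w for name, w in weights.items() if features.get(name))

def predict_boundary(features, threshold=1.5):
    """Predict a segment boundary when enough knowledge sources agree."""
    return boundary_score(features) >= threshold
```

Requiring multiple sources to agree is what distinguishes the combined approach from using any single feature type alone; the machine-learning variant in the paper induces such combinations from training data instead of hand-tuning them.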

Collaboration


Dive into Diane J. Litman's collaborations.

Top Co-Authors

Mihai Rotaru, University of Pittsburgh
Wenting Xiong, University of Pittsburgh
Kurt VanLehn, Arizona State University
Arthur Ward, University of Pittsburgh
Huy V. Nguyen, University of Pittsburgh
Fan Zhang, University of Pittsburgh