Publication


Featured research published by Marilyn A. Walker.


Journal of Artificial Intelligence Research | 2007

Using linguistic cues for the automatic recognition of personality in conversation and text

François Mairesse; Marilyn A. Walker; Matthias R. Mehl; Roger K. Moore

It is well known that utterances convey a great deal of information about the speaker in addition to their semantic content. One such type of information consists of cues to the speaker's personality traits, the most fundamental dimension of variation between humans. Recent work explores the automatic detection of other types of pragmatic variation in text and conversation, such as emotion, deception, speaker charisma, dominance, point of view, subjectivity, opinion and sentiment. Personality affects these other aspects of linguistic production, and thus personality recognition may be useful for these tasks, in addition to many other potential applications. However, to date, there is little work on the automatic recognition of personality traits. This article reports experimental results for recognition of all Big Five personality traits, in both conversation and text, utilising both self and observer ratings of personality. While other work reports classification results, we experiment with classification, regression and ranking models. For each model, we analyse the effect of different feature sets on accuracy. Results show that for some traits, any type of statistical model performs significantly better than the baseline, but ranking models perform best overall. We also present an experiment suggesting that ranking models are more accurate than multi-class classifiers for modelling personality. In addition, recognition models trained on observed personality perform better than models trained using self-reports, and the optimal feature set depends on the personality trait. A qualitative analysis of the learned models confirms previous findings linking language and personality, while revealing many new linguistic markers.
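The ranking approach favoured in the article can be sketched with a toy pairwise ranker: instead of assigning a discrete class, a linear scorer is trained so that texts with higher trait ratings receive higher scores. Features, scores, and the update rule below are invented for illustration and are not the paper's actual feature sets or learner.

```python
# Minimal sketch of pairwise ranking for a personality trait.
# Toy data and a perceptron-style update; not the paper's method.

def train_ranker(samples, epochs=50, lr=0.1):
    """Learn weights w so that texts with higher trait scores get
    higher w.x, by correcting every misranked pair."""
    dim = len(samples[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        for xa, sa in samples:
            for xb, sb in samples:
                if sa <= sb:
                    continue  # only use pairs where a truly outranks b
                score_a = sum(wi * xi for wi, xi in zip(w, xa))
                score_b = sum(wi * xi for wi, xi in zip(w, xb))
                if score_a <= score_b:  # misranked pair: nudge weights
                    for i in range(dim):
                        w[i] += lr * (xa[i] - xb[i])
    return w

# Invented features per text (e.g. positive-emotion rate, filler rate)
# paired with an observer-rated extraversion score (also invented).
data = [([0.9, 0.4], 5.0), ([0.2, 0.8], 2.0), ([0.6, 0.5], 4.0)]
w = train_ranker(data)
scores = [sum(wi * xi for wi, xi in zip(w, x)) for x, _ in data]
# The model's ordering of texts should match the trait-score ordering.
```

A multi-class classifier would instead bin the continuous trait scores, discarding the ordering information that the ranking objective exploits directly.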


meeting of the association for computational linguistics | 1997

PARADISE: A Framework for Evaluating Spoken Dialogue Agents

Marilyn A. Walker; Diane J. Litman; Candace A. Kamm; Alicia Abella

This paper presents PARADISE (PARAdigm for DIalogue System Evaluation), a general framework for evaluating spoken dialogue agents. The framework decouples task requirements from an agent's dialogue behaviors, supports comparisons among dialogue strategies, enables the calculation of performance over subdialogues and whole dialogues, specifies the relative contribution of various factors to performance, and makes it possible to compare agents performing different tasks by normalizing for task complexity.
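The core of the framework is a performance function that combines normalized task success with normalized costs. A minimal sketch, with invented per-dialogue measures and illustrative weights (PARADISE derives the actual weights by regressing user satisfaction on these factors):

```python
# Sketch of a PARADISE-style performance function: a weighted sum of
# normalized task success minus weighted normalized costs.
import statistics

def z_normalize(values):
    """Normalize to zero mean and unit standard deviation."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [(v - mean) / stdev for v in values]

# Invented per-dialogue measures across three dialogues.
kappa = [0.9, 0.5, 0.7]    # task success per dialogue
turns = [12, 30, 20]       # efficiency cost: number of turns
errors = [1, 6, 3]         # quality cost: e.g. recognizer rejections
alpha, w_turns, w_errors = 0.5, 0.3, 0.2   # illustrative weights

nk, nt, ne = z_normalize(kappa), z_normalize(turns), z_normalize(errors)
performance = [alpha * k - w_turns * t - w_errors * e
               for k, t, e in zip(nk, nt, ne)]
# Normalization puts success and costs on a common scale, so agents
# performing different tasks become comparable.
```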


Journal of Artificial Intelligence Research | 2002

Optimizing dialogue management with reinforcement learning: experiments with the NJFun system

Satinder P. Singh; Diane J. Litman; Michael J. Kearns; Marilyn A. Walker

Designing the dialogue policy of a spoken dialogue system involves many nontrivial choices. This paper presents a reinforcement learning approach for automatically optimizing a dialogue policy, which addresses the technical challenges in applying reinforcement learning to a working dialogue system with human users. We report on the design, construction and empirical evaluation of NJFun, an experimental spoken dialogue system that provides users with access to information about fun things to do in New Jersey. Our results show that by optimizing its performance via reinforcement learning, NJFun measurably improves system performance.
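The optimization idea can be sketched with tabular Q-learning over dialogue-state/action pairs. The states, actions, and reward below are invented stand-ins; NJFun's actual state space and its task-success-based reward differ.

```python
# Toy tabular Q-learning sketch for a dialogue-policy choice.
# States, actions, and rewards are invented for illustration.
import random

random.seed(0)
states = ["greet", "ask", "confirm"]
actions = ["system_initiative", "user_initiative"]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha = 0.5  # learning rate

def reward(state, action):
    # Stand-in for a reward measured from real dialogues; here user
    # initiative at the "ask" state is simply assumed to work better.
    return 1.0 if (state == "ask" and action == "user_initiative") else 0.2

for _ in range(200):
    s = random.choice(states)
    a = random.choice(actions)  # explore uniformly during training
    # One-step episodic update (next state treated as terminal).
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])

# The learned policy picks the highest-valued action in each state.
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
```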


Natural Language Engineering | 2000

Towards developing general models of usability with PARADISE

Marilyn A. Walker; Candace A. Kamm; Diane J. Litman

The design of methods for performance evaluation is a major open research issue in the area of spoken language dialogue systems. This paper presents the PARADISE methodology for developing predictive models of spoken dialogue performance, and shows how to evaluate the predictive power and generalizability of such models. To illustrate the methodology, we develop a number of models for predicting system usability (as measured by user satisfaction), based on the application of PARADISE to experimental data from three different spoken dialogue systems. We then measure the extent to which the models generalize across different systems, different experimental conditions, and different user populations, by testing models trained on a subset of the corpus against a test set of dialogues. The results show that the models generalize well across the three systems, and are thus a first approximation towards a general performance model of system usability.
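The train-then-test methodology can be sketched as fitting a linear predictor of user satisfaction on one subset of dialogues and scoring its fit on held-out dialogues. All numbers below are invented; the actual models use multiple predictors.

```python
# Sketch of testing generalization: fit on training dialogues,
# measure explained variance (R^2) on held-out test dialogues.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def r_squared(xs, ys, a, b):
    """Fraction of variance in y explained by the fitted model."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Invented data: x = normalized task success, y = user satisfaction.
train_x, train_y = [0.2, 0.5, 0.8, 0.9], [2.1, 3.0, 4.2, 4.4]
test_x, test_y = [0.3, 0.7], [2.4, 3.9]
a, b = fit_line(train_x, train_y)
r2 = r_squared(test_x, test_y, a, b)  # high r2 = model generalizes
```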


meeting of the association for computational linguistics | 2002

MATCH: An Architecture for Multimodal Dialogue Systems

Michael Johnston; Srinivas Bangalore; Gunaranjan Vasireddy; Amanda Stent; Patrick Ehlen; Marilyn A. Walker; Steve Whittaker; Preetam Maloor

Mobile interfaces need to allow the user and system to adapt their choice of communication modes according to user preferences, the task at hand, and the physical and social environment. We describe a multimodal application architecture which combines finite-state multimodal language processing, a speech-act based multimodal dialogue manager, dynamic multimodal output generation, and user-tailored text planning to enable rapid prototyping of multimodal interfaces with flexible input and adaptive output. Our testbed application MATCH (Multimodal Access To City Help) provides a mobile multimodal speech-pen interface to restaurant and subway information for New York City.


meeting of the association for computational linguistics | 1990

Mixed Initiative in Dialogue: An Investigation into Discourse Segmentation

Marilyn A. Walker; Steve Whittaker

Conversation between two people is usually of MIXED-INITIATIVE, with CONTROL over the conversation being transferred from one person to another. We apply a set of rules for the transfer of control to 4 sets of dialogues consisting of a total of 1862 turns. The application of the control rules lets us derive domain-independent discourse structures. The derived structures indicate that initiative plays a role in the structuring of discourse. In order to explore the relationship of control and initiative to discourse processes like centering, we analyze the distribution of four different classes of anaphora for two data sets. This distribution indicates that some control segments are hierarchically related to others. The analysis suggests that discourse participants often mutually agree to a change of topic. We also compared initiative in Task Oriented and Advice Giving dialogues and found that both allocation of control and the manner in which control is transferred are radically different for the two dialogue types. These differences can be explained in terms of collaborative planning principles.
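Control-transfer rules of the kind applied above can be given as a simplified rendering: each turn's controller is assigned from the utterance type and whether the turn responds to the other speaker. The specific rule formulations below are a hedged approximation, not the paper's exact rule set.

```python
# Simplified rendering of per-turn control assignment (assumed rules,
# not the paper's exact formulation).

def control_holder(speaker, utt_type, is_response):
    """Return which participant holds control after this turn."""
    hearer = "B" if speaker == "A" else "A"
    if utt_type == "prompt":
        return hearer            # prompts (e.g. "uh-huh") abdicate control
    if utt_type in ("assertion", "question") and is_response:
        return hearer            # responses leave control with the other
    return speaker               # otherwise the speaker takes control

turns = [
    ("A", "question", False),    # A asks: A takes control
    ("B", "assertion", True),    # B answers A's question: A keeps control
    ("B", "command", False),     # B issues a command: control shifts to B
    ("A", "prompt", False),      # A backchannels: B keeps control
]
holders = [control_holder(*t) for t in turns]
```

Runs of turns with the same controller form the control segments whose boundaries and hierarchy the paper analyzes.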


Journal of Artificial Intelligence Research | 2000

An application of reinforcement learning to dialogue strategy selection in a spoken dialogue system for email

Marilyn A. Walker

This paper describes a novel method by which a spoken dialogue system can learn to choose an optimal dialogue strategy from its experience interacting with human users. The method is based on a combination of reinforcement learning and performance modeling of spoken dialogue systems. The reinforcement learning component applies Q-learning (Watkins, 1989), while the performance modeling component applies the PARADISE evaluation framework (Walker et al., 1997) to learn the performance function (reward) used in reinforcement learning. We illustrate the method with a spoken dialogue system named elvis (EmaiL Voice Interactive System), that supports access to email over the phone. We conduct a set of experiments for training an optimal dialogue strategy on a corpus of 219 dialogues in which human users interact with elvis over the phone. We then test that strategy on a corpus of 18 dialogues. We show that elvis can learn to optimize its strategy selection for agent initiative, for reading messages, and for summarizing email folders.


meeting of the association for computational linguistics | 2001

Quantitative and Qualitative Evaluation of Darpa Communicator Spoken Dialogue Systems

Marilyn A. Walker; Rebecca J. Passonneau; Julie E. Boland

This paper describes the application of the PARADISE evaluation framework to the corpus of 662 human-computer dialogues collected in the June 2000 Darpa Communicator data collection. We describe results based on the standard logfile metrics as well as results based on additional qualitative metrics derived using the DATE dialogue act tagging scheme. We show that performance models derived using the standard metrics can account for 37% of the variance in user satisfaction, and that the addition of DATE metrics improved the models by an absolute 5%.


Computer Speech & Language | 1998

Evaluating spoken dialogue agents with PARADISE: Two case studies

Marilyn A. Walker; Diane J. Litman; Candace A. Kamm; Alicia Abella

This paper presents PARADISE (PARAdigm for DIalogue System Evaluation), a general framework for evaluating and comparing the performance of spoken dialogue agents. The framework decouples task requirements from an agent's dialogue behaviours, supports comparisons among dialogue strategies, enables the calculation of performance over subdialogues and whole dialogues, specifies the relative contribution of various factors to performance, and makes it possible to compare agents performing different tasks by normalizing for task complexity. After presenting PARADISE, we illustrate its application to two different spoken dialogue agents. We show how to derive a performance function for each agent and how to generalize results across agents. We then show that once such a performance function has been derived, it can be used both for making predictions about future versions of an agent, and as feedback to the agent so that the agent can learn to optimize its behaviour based on its experiences with users over time.


adaptive agents and multi-agents systems | 1997

Improvising linguistic style: social and affective bases for agent personality

Marilyn A. Walker; Janet E. Cahn; Stephen Whittaker

This paper introduces Linguistic Style Improvisation, a theory and set of algorithms for improvisation of spoken utterances by artificial agents, with applications to interactive story and dialogue systems. We argue that linguistic style is a key aspect of character, and show how speech act representations common in AI can provide abstract representations from which computer characters can improvise. We show that the mechanisms proposed introduce the possibility of socially oriented agents, meet the requirements that lifelike characters be believable, and satisfy particular criteria for improvisation proposed by Hayes-Roth.

Collaboration


Dive into Marilyn A. Walker's collaborations.

Top Co-Authors

Shereen Oraby, University of California
Amita Misra, University of California
Michael Neff, University of California
Pranav Anand, University of California