Publications


Featured research published by Sudeep Gandhe.


Programming Multi-Agent Systems | 2003

Team Oriented Programming and Proxy Agents: The Next Generation

Paul Scerri; David V. Pynadath; Nathan Schurr; Alessandro Farinelli; Sudeep Gandhe; Milind Tambe

Coordination between large teams of highly heterogeneous entities will change the way complex goals are pursued in real world environments. One approach to achieving the required coordination in such teams is to give each team member a proxy that assumes routine coordination activities on behalf of its team member. Despite that approach’s success, as we attempt to apply this first generation of proxy architecture to larger teams in more challenging environments, some limitations become clear. In this paper, we present initial efforts on the next generation of proxy architecture and Team Oriented Programming (TOP), called Machinetta. Machinetta aims to overcome the limitations of the previous generation of proxies and allow effective coordination between very large teams of highly heterogeneous agents. We describe the principles underlying the design of the Machinetta proxies and present initial results from two domains.


Lecture Notes in Computer Science | 2009

Semi-formal Evaluation of Conversational Characters

Ron Artstein; Sudeep Gandhe; Jillian Gerten; Anton Leuski; David R. Traum

Conversational dialogue systems cannot be evaluated in a fully formal manner, because dialogue is heavily dependent on context and current dialogue theory is not precise enough to specify a target output ahead of time. Instead, we evaluate dialogue systems in a semi-formal manner, using human judges to rate the coherence of a conversational character and correlating these judgments with measures extracted from within the system. We present a series of three evaluations of a single conversational character over the course of a year, demonstrating how this kind of evaluation helps bring about an improvement in overall dialogue coherence.
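
As a rough illustration of the correlation step described above (not the paper's actual code or data), the sketch below computes Spearman's rank correlation between hypothetical judge ratings and a hypothetical system-internal measure; all numbers and helper names are invented for the example, and the paper's exact statistics may differ.

```python
# Illustrative only: pairing human coherence ratings with a system-internal
# measure and checking how well their rankings agree.
def ranks(values):
    """Assign 1-based ranks, averaging over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

human_ratings = [4, 2, 5, 3, 1]              # judges' coherence scores (made up)
internal_scores = [0.8, 0.3, 0.9, 0.5, 0.2]  # system-internal measure (made up)
print(spearman(human_ratings, internal_scores))  # 1.0: perfect rank agreement
```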


Human Factors in Computing Systems | 2011

Does it matter if a computer jokes?

Peter Khooshabeh; Cade McCall; Sudeep Gandhe; Jonathan Gratch; Jim Blascovich

The goal here was to determine whether computer interfaces are capable of social influence via humor. Users interacted with a natural-language-capable virtual agent that presented persuasive information, and they were given the option to use information from the dialogue to complete a problem-solving task. Individuals who judged an ostensibly humorous virtual agent to be unfunny were less likely to be persuaded by it and departed from the agent's suggestions. We discuss the implications of these results for HCI involving natural language systems and virtual agents.


Meeting of the Association for Computational Linguistics | 2005

Transonics: A Practical Speech-to-Speech Translator for English-Farsi Medical Dialogs

Robert Belvin; Emil Ettelaie; Sudeep Gandhe; Panayiotis G. Georgiou; Kevin Knight; Daniel Marcu; Scott Millward; Shrikanth Narayanan; Howard Neely; David R. Traum

We briefly describe a two-way speech-to-speech English-Farsi translation system prototype developed for use in doctor-patient interactions. The overarching philosophy of the developers has been to create a system that enables effective communication, rather than focusing on maximizing component-level performance. The discussion focuses on the general approach and evaluation of the system by an independent government evaluation team.


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2008

Evaluation Understudy for Dialogue Coherence Models

Sudeep Gandhe; David R. Traum

Evaluating a dialogue system is seen as a major challenge within the dialogue research community. Due to the very nature of the task, most evaluation methods need a substantial amount of human involvement. Following the tradition in machine translation, summarization, and discourse coherence modeling, we introduce the idea of an evaluation understudy for dialogue coherence models. Following (Lapata, 2006), we use the information ordering task as a testbed for evaluating dialogue coherence models. This paper reports findings about the reliability of the information ordering task as applied to dialogues. We find that simple n-gram co-occurrence statistics, similar in spirit to BLEU (Papineni et al., 2001), correlate very well with human judgments of dialogue coherence.
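
To make the core idea concrete, here is a minimal sketch, not the authors' implementation, of BLEU-style clipped n-gram precision (Papineni et al., 2001) applied to a candidate ordering of dialogue units scored against reference orderings; the token sequences and helper names are invented for the example, and the brevity penalty is omitted.

```python
# A simplified BLEU-like score: clipped n-gram precision between a candidate
# ordering of dialogue-act labels and reference orderings.
from collections import Counter

def ngrams(tokens, n):
    """Counter of all n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def clipped_precision(candidate, references, n):
    """Fraction of candidate n-grams found in some reference, with each
    n-gram's count clipped to its maximum count in any single reference."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    max_ref = Counter()
    for ref in references:
        for gram, count in ngrams(ref, n).items():
            max_ref[gram] = max(max_ref[gram], count)
    clipped = sum(min(count, max_ref[gram]) for gram, count in cand.items())
    return clipped / sum(cand.values())

def coherence_score(candidate, references, max_n=2):
    """Geometric mean of clipped 1..max_n-gram precisions."""
    score = 1.0
    for n in range(1, max_n + 1):
        p = clipped_precision(candidate, references, n)
        if p == 0.0:
            return 0.0
        score *= p
    return score ** (1.0 / max_n)

# The "tokens" here are dialogue-act labels in their proposed order.
reference = [["greet", "ask", "answer", "thank", "bye"]]
print(coherence_score(["greet", "ask", "answer", "bye"], reference))  # ~0.82
```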


Intelligent User Interfaces | 2006

Improving question-answering with linking dialogues

Sudeep Gandhe; Andrew S. Gordon; David R. Traum

Question-answering dialogue systems have found many applications in interactive learning environments. This paper is concerned with one such application for Army leadership training, where trainees input free-text questions that elicit pre-recorded video responses. Since these responses are already crafted before the question is asked, a certain degree of incoherence exists between the question that is asked and the answer that is given. This paper explores the use of short linking dialogues that stand between the question and its video response to alleviate the problem of incoherence. We describe a set of experiments with human-generated linking dialogues that demonstrate their added value. We then describe our implementation of an automated method for utilizing linking dialogues and show that these have better coherence properties than the original system without linking dialogues.


Intelligent Virtual Agents | 2009

Varying Personality in Spoken Dialogue with a Virtual Human

Michael Rushforth; Sudeep Gandhe; Ron Artstein; Antonio Roque; Sarrah Ali; Nicolle Whitman; David R. Traum

This poster reports the results of two experiments to test a personality framework for virtual characters. We use the Tactical Questioning dialogue system architecture (TACQ) [1] as a testbed for this effort. Characters built using the TACQ architecture can be used by trainees to practice their questioning skills by engaging in a role-play with a virtual human. The architecture supports advanced behavior in a questioning setting, including deceptive behavior, simple negotiations about whether to answer, tracking subdialogues for offers/threats, grounding behavior, and maintenance of the affective state of the virtual human. Trainees can use different questioning tactics in their sessions. In order for the questioning training to be effective, trainees should have experience interacting with virtual humans with different personalities, who react in different ways to the same questioning tactics.


Workshop on Grammar-Based Approaches to Spoken Language Processing | 2007

Handling Out-of-Grammar Commands in Mobile Speech Interaction Using Backoff Filler Models

Tim Paek; Sudeep Gandhe; Max Chickering; Yun-Cheng Ju

In command and control (C&C) speech interaction, users interact by speaking commands or asking questions typically specified in a context-free grammar (CFG). Unfortunately, users often produce out-of-grammar (OOG) commands, which can result in misunderstanding or non-understanding. We explore a simple approach to handling OOG commands that involves generating a backoff grammar from any CFG using filler models, and utilizing that grammar for recognition whenever the CFG fails. Working within the memory footprint requirements of a mobile C&C product, applying the approach yielded a 35% relative reduction in semantic error rate for OOG commands. It also improved partial recognitions, enabling clarification dialogue.
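
To make the two-pass control flow concrete, here is a toy sketch (an assumed structure, not the paper's grammars or recognizer): a strict command grammar is tried first, and only on failure is a derived backoff pattern tried, with filler wildcards absorbing out-of-grammar words around the command's content words. The command templates and regular expressions are invented for the example.

```python
# Toy two-pass recognition: strict grammar first, backoff-with-fillers second.
import re

COMMANDS = {
    "call <name>": r"^call (\w+)$",
    "open calendar": r"^open calendar$",
}

def make_backoff(pattern):
    """Derive a backoff pattern: keep the content words, let filler
    wildcards (.*) absorb anything around and between them."""
    body = pattern.strip("^$")
    return r".*" + body.replace(" ", r".*\s*") + r".*"

def recognize(utterance):
    # First pass: strict CFG-style match.
    for command, pattern in COMMANDS.items():
        if re.fullmatch(pattern, utterance):
            return command, "in-grammar"
    # Second pass: backoff grammar with fillers; a match here is a
    # candidate for clarification dialogue rather than direct execution.
    for command, pattern in COMMANDS.items():
        if re.fullmatch(make_backoff(pattern), utterance):
            return command, "backoff (candidate for clarification)"
    return None, "non-understanding"

print(recognize("call alice"))                      # in-grammar
print(recognize("um could you call alice please"))  # caught by backoff
```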


Fifth International Workshop on Spoken Dialogue Systems | 2016

A Semi-automated Evaluation Metric for Dialogue Model Coherence

Sudeep Gandhe; David R. Traum

We propose a new metric, Voted Appropriateness, which can be used to automatically evaluate dialogue policy decisions once some wizard data has been collected. We show that this metric outperforms a previously proposed metric, Weak Agreement. We also present a taxonomy of dialogue model evaluation schemas and orient our new metric within this taxonomy.
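
The abstract does not spell out the computation, so the following is only an assumed reading of a vote-based metric, not the paper's formulation: the policy's decision for each context is credited in proportion to the share of wizard votes its chosen utterance received. All names and data are hypothetical.

```python
# Hypothetical sketch of a vote-weighted appropriateness score.
from collections import Counter

def voted_appropriateness(system_choices, wizard_votes):
    """Average, over contexts, of the vote share won by the system's choice.

    system_choices: {context_id: utterance chosen by the dialogue policy}
    wizard_votes:   {context_id: list of utterances chosen by the wizards}
    """
    scores = []
    for ctx, choice in system_choices.items():
        votes = Counter(wizard_votes.get(ctx, []))
        total = sum(votes.values())
        scores.append(votes[choice] / total if total else 0.0)
    return sum(scores) / len(scores) if scores else 0.0

wizard_votes = {"c1": ["u1", "u1", "u2"], "c2": ["u3", "u4", "u4"]}
print(voted_appropriateness({"c1": "u1", "c2": "u4"}, wizard_votes))  # (2/3 + 2/3)/2
```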


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2014

SAWDUST: a Semi-Automated Wizard Dialogue Utterance Selection Tool for domain-independent large-domain dialogue

Sudeep Gandhe; David R. Traum

We present a tool that allows human wizards to select appropriate response utterances for a given dialogue context from a set of utterances observed in a dialogue corpus. Such a tool can be used in Wizard-of-Oz studies and for collecting data for training and/or evaluating automatic dialogue models. We also propose to incorporate such automatic dialogue models back into the tool as an aid in selecting utterances from a large dialogue corpus: the tool allows a user to rank candidate utterances for selection according to these automatic models.
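
As a concrete, deliberately simplistic sketch of ranking candidate utterances against a dialogue context, the snippet below scores corpus utterances by token overlap with the context; SAWDUST's actual ranking models are not specified here, and all function names and data are invented for the example.

```python
# Rank corpus utterances for a dialogue context by Jaccard token overlap,
# so a wizard can pick from a short list instead of scanning the corpus.
import re

def tokenize(text):
    """Lowercase bag of word tokens."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def rank_candidates(context, corpus_utterances, top_k=5):
    ctx = tokenize(" ".join(context))
    scored = []
    for utt in corpus_utterances:
        toks = tokenize(utt)
        union = ctx | toks
        score = len(ctx & toks) / len(union) if union else 0.0
        scored.append((score, utt))
    scored.sort(reverse=True)
    return [utt for _, utt in scored[:top_k]]

corpus = [
    "The clinic is open on weekdays.",
    "I cannot answer that question.",
    "The medicine should be taken twice a day.",
]
context = ["When is the clinic open?"]
print(rank_candidates(context, corpus, top_k=2))
```

A real wizard tool would replace the Jaccard scorer with trained dialogue models, but the interface, ranking a fixed candidate pool for the current context, stays the same.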

Collaboration


Dive into Sudeep Gandhe's collaborations.

Top Co-Authors

David R. Traum, University of Southern California
Anton Leuski, University of Southern California
Jillian Gerten, University of Southern California
Antonio Roque, University of Southern California
Bilyana Martinovski, University of Southern California
Michael Rushforth, University of Southern California
Andrew S. Gordon, University of Southern California
David DeVault, University of Southern California
Jonathan Gratch, University of Southern California