Publications

Featured research published by Svetlana Stoyanchev.


North American Chapter of the Association for Computational Linguistics | 2009

Lexical and Syntactic Adaptation and Their Impact in Deployed Spoken Dialog Systems

Svetlana Stoyanchev; Amanda Stent

In this paper, we examine user adaptation to the system's lexical and syntactic choices in the context of the deployed Let's Go! dialog system. We show that in deployed dialog systems with real users, as in laboratory experiments, users adapt to the system's lexical and syntactic choices. We also show that the system's lexical and syntactic choices, and consequent user adaptation, can have an impact on recognition of task-related concepts. This means that system prompt formulation, even in flexible-input dialog systems, can be used to guide users into producing utterances conducive to task success.


International Conference on Acoustics, Speech, and Signal Processing | 2013

“Can you give me another word for hyperbaric?”: Improving speech translation using targeted clarification questions

Necip Fazil Ayan; Arindam Mandal; Michael W. Frandsen; Jing Zheng; Peter Blasco; Andreas Kathol; Frédéric Béchet; Benoit Favre; Alex Marin; Tom Kwiatkowski; Mari Ostendorf; Luke Zettlemoyer; Philipp Salletmayr; Julia Hirschberg; Svetlana Stoyanchev

We present a novel approach for improving communication success between users of speech-to-speech translation systems by automatically detecting errors in the output of automatic speech recognition (ASR) and statistical machine translation (SMT) systems. Our approach initiates system-driven targeted clarification about errorful regions in user input and repairs them given user responses. Our system has been evaluated by unbiased subjects in live mode, and results show improved success of communication between users of the system.
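
The targeted-clarification loop the abstract describes can be pictured with a short sketch: detect a low-confidence region, ask about that region only, and splice the user's answer back in. The confidence threshold, the span detector, and the question template below are illustrative stand-ins, not the paper's actual error-detection models:

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    asr_confidence: float  # per-word confidence from the recognizer

def find_errorful_spans(words, threshold=0.5):
    """Flag contiguous runs of low-confidence words as error candidates."""
    spans, start = [], None
    for i, w in enumerate(words):
        if w.asr_confidence < threshold:
            start = i if start is None else start
        elif start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(words)))
    return spans

def clarification_question(words, span):
    """Ask about the suspect region instead of requesting a full repeat."""
    left = words[span[0] - 1].text if span[0] > 0 else "the start"
    return f"I didn't catch the word after '{left}'. Could you say it differently?"

def apply_repair(words, span, user_response):
    """Splice the user's answer over the errorful region."""
    texts = [w.text for w in words]
    return " ".join(texts[:span[0]] + [user_response] + texts[span[1]:])

hyp = [Word("take", 0.95), Word("me", 0.92), Word("to", 0.90),
       Word("hyperbaric", 0.31), Word("chamber", 0.88)]
span = find_errorful_spans(hyp)[0]
print(clarification_question(hyp, span))
print(apply_repair(hyp, span, "high-pressure oxygen"))
```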


International Conference on Computational Linguistics | 2008

Exact Phrases in Information Retrieval for Question Answering

Svetlana Stoyanchev; Young Chol Song; William Lahti

Question answering (QA) is the task of finding a concise answer to a natural language question. The first stage of QA involves information retrieval; therefore, the performance of the information retrieval subsystem serves as an upper bound on the performance of the QA system. In this work, we use phrases automatically identified from questions as exact-match constituents in search queries. Our results show an improvement over the baseline on several document and sentence retrieval measures on the WEB dataset. We get a 20% relative improvement in MRR for sentence extraction on the WEB dataset when using automatically generated phrases, and a further 9.5% relative improvement when using manually annotated phrases. Surprisingly, a separate experiment on the indexed AQUAINT dataset showed no effect of exact phrases on IR performance.
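
As a rough illustration of the query-construction idea, the sketch below quotes identified phrases as exact-match constituents and keeps the rest of the question as plain bag-of-words terms. The phrase list is supplied by hand here, whereas the paper identifies phrases automatically:

```python
def exact_phrase_query(question: str, phrases: list[str]) -> str:
    """Quote each identified phrase for exact matching; keep the
    remaining terms as an ordinary bag of words."""
    remainder = question.lower()
    quoted = []
    for phrase in phrases:
        if phrase in remainder:
            remainder = remainder.replace(phrase, " ")
            quoted.append(f'"{phrase}"')
    leftover = [t.strip("?.,!") for t in remainder.split()]
    return " ".join(quoted + [t for t in leftover if t])

print(exact_phrase_query(
    "When was the Hubble Space Telescope launched?",
    ["hubble space telescope"]))
# -> "hubble space telescope" when was the launched
```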


Meeting of the Association for Computational Linguistics | 2009

Automating Model Building in c-rater

Jana Z. Sukkarieh; Svetlana Stoyanchev

c-rater is Educational Testing Service's technology for the content scoring of short student responses. A major step in the scoring process is Model Building, where variants of model answers are generated that correspond to the rubric for each item or test question. Until recently, Model Building was knowledge-engineered (KE) and hence labor- and time-intensive. In this paper, we describe our approach to automating Model Building in c-rater. We show that c-rater achieves comparable accuracy on automatically built and KE models.
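
Purely as a hypothetical illustration of generating answer variants automatically, the sketch below expands a model answer through a hand-written synonym table. c-rater's actual model-building pipeline is far richer than this; the point is only that variants can be enumerated rather than hand-authored:

```python
from itertools import product

# Tiny hand-written synonym table standing in for learned paraphrase rules.
SYNONYMS = {"increases": ["rises", "goes up"], "temperature": ["heat"]}

def answer_variants(model_answer: str) -> list[str]:
    """Expand a model answer into surface variants by substituting
    synonyms at each word position."""
    slots = [[word] + SYNONYMS.get(word, []) for word in model_answer.split()]
    return [" ".join(choice) for choice in product(*slots)]

for variant in answer_variants("temperature increases"):
    print(variant)
```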


Spoken Language Technology Workshop | 2012

Localized detection of speech recognition errors

Svetlana Stoyanchev; Philipp Salletmayr; Jingbo Yang; Julia Hirschberg

We address the problem of localized error detection in Automatic Speech Recognition (ASR) output. Localized error detection seeks to identify which particular words in a user's utterance have been misrecognized. Identifying misrecognized words permits one to create targeted clarification strategies for spoken dialogue systems, allowing the system to ask clarification questions targeting the particular type of misrecognition, in contrast to the "please repeat/rephrase" strategies used in most current dialogue systems. We present results of machine learning experiments using ASR confidence scores together with prosodic and syntactic features to predict 1) whether an utterance contains an error, and 2) whether a word in a misrecognized utterance is misrecognized. We show that adding syntactic features to the ASR features when predicting misrecognized utterances improves the F-measure by 13.3% compared to using ASR features alone. Adding syntactic and prosodic features when predicting misrecognized words improves the F-measure by 40%.
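
A minimal sketch of the word-level classification setup, assuming one feature vector per word that combines an ASR confidence score with prosodic and syntactic cues. The feature values, labels, and the choice of logistic regression are invented for illustration, not necessarily the paper's:

```python
from sklearn.linear_model import LogisticRegression

# Each row: [asr_confidence, pitch_mean_z, duration_z, is_content_word]
X = [
    [0.95,  0.1, -0.2, 1],
    [0.40,  1.3,  0.9, 1],   # low confidence, hyperarticulated
    [0.88,  0.0,  0.1, 0],
    [0.35,  0.8,  1.1, 1],
    [0.91, -0.3,  0.0, 0],
    [0.52,  1.0,  0.7, 1],
]
y = [0, 1, 0, 1, 0, 1]       # 1 = this word was misrecognized

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.45, 1.2, 0.8, 1]]))  # likely flagged as an error
```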


Proceedings of SRSL 2009, the 2nd Workshop on Semantic Representation of Spoken Language | 2009

Predicting Concept Types in User Corrections in Dialog

Svetlana Stoyanchev; Amanda Stent

Most dialog systems explicitly confirm user-provided task-relevant concepts. User responses to these system confirmations (e.g. corrections, topic changes) may be misrecognized because they contain unrequested task-related concepts. In this paper, we propose a concept-specific language model adaptation strategy where the language model (LM) is adapted to the concept type(s) actually present in the user's post-confirmation utterance. We evaluate concept type classification and LM adaptation for post-confirmation utterances in the Let's Go! dialog system. We achieve 93% accuracy on concept type classification using acoustic, lexical, and dialog history features. We also show that the use of concept type classification for LM adaptation can lead to improvements in speech recognition performance.
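
The adaptation step can be sketched as linear interpolation between a general model and a concept-specific one, selected by the predicted concept type. The toy unigram models, the PLACE concept, and the mixing weights below are invented for illustration; a real system would interpolate full n-gram models:

```python
def interpolate(lms, weights):
    """Linearly interpolate unigram distributions from several LMs."""
    vocab = set().union(*lms)
    return {w: sum(weight * lm.get(w, 0.0) for lm, weight in zip(lms, weights))
            for w in vocab}

# Toy unigram LMs over a bus-information domain.
general_lm = {"leaving": 0.25, "from": 0.20, "at": 0.20, "five": 0.25, "forbes": 0.10}
place_lm   = {"forbes": 0.50, "murray": 0.30, "from": 0.20}

# Suppose the classifier predicts a PLACE concept in the post-confirmation
# utterance: shift probability mass toward place names before re-decoding.
adapted = interpolate([general_lm, place_lm], [0.4, 0.6])
print(sorted(adapted.items(), key=lambda kv: -kv[1])[:3])
```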


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2014

MVA: The Multimodal Virtual Assistant

Michael Johnston; John Chen; Patrick Ehlen; Hyuckchul Jung; Jay Lieske; Aarthi M. Reddy; Ethan O. Selfridge; Svetlana Stoyanchev; Brant J. Vasilieff; Jay Gordon Wilpon

The Multimodal Virtual Assistant (MVA) is an application that enables users to plan an outing through an interactive multimodal dialog with a mobile device. MVA demonstrates how a cloud-based multimodal language processing infrastructure can support mobile multimodal interaction. This demonstration highlights incremental recognition, multimodal speech and gesture input, contextually aware language understanding, and the targeted clarification of potentially incorrect segments within user input.


International Conference on Acoustics, Speech, and Signal Processing | 2008

Name-aware speech recognition for interactive question answering

Svetlana Stoyanchev; Gökhan Tür; Dilek Hakkani-Tür

In this work we show how interactivity in a voice-enabled question answering application may improve speech recognition. We allow the user to provide a target named entity before asking the question. We then build a named-entity-specific language model from the documents containing that named entity. The question-specific model is obtained by merging the named-entity-specific model with a model built on a set of questions. We present a set of experiments using the TREC question set on the AQUAINT corpus. The question-specific language model is compared with a baseline model built by merging a model of the AQUAINT corpus with past TREC questions. The question-specific model achieves a 32.2% reduction in word error rate over the baseline on questions where pronominal references are resolved.
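
A minimal sketch of the merging idea, using unigram models for readability (a real system would interpolate n-gram models): estimate one LM from documents mentioning the entity, another from past questions, and mix them. The corpus snippets and the mixing weight here are invented:

```python
from collections import Counter

def unigram_lm(texts):
    """Maximum-likelihood unigram model estimated from raw text."""
    counts = Counter(word for text in texts for word in text.lower().split())
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

# Documents mentioning the target named entity vs. past questions.
ne_docs = ["the hubble space telescope was launched in 1990",
           "the hubble telescope orbits above the atmosphere"]
questions = ["when was it launched", "who built the telescope"]

ne_lm, question_lm = unigram_lm(ne_docs), unigram_lm(questions)
lam = 0.5  # mixing weight; tuned on held-out data in practice
merged = {w: lam * ne_lm.get(w, 0.0) + (1 - lam) * question_lm.get(w, 0.0)
          for w in set(ne_lm) | set(question_lm)}
print(round(merged["telescope"], 3), round(merged["launched"], 3))
```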


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2014

Dialogue Act Modeling for Non-Visual Web Access

Vikas Ashok; Yevgen Borodin; Svetlana Stoyanchev; I. V. Ramakrishnan

Speech-enabled dialogue systems have the potential to enhance the ease with which blind individuals can interact with the Web beyond what is possible with screen readers, the currently available assistive technology that narrates the textual content on the screen and provides shortcuts to navigate it. In this paper, we present a dialogue act model as a step toward a speech-enabled browsing system. The model is based on corpus data collected in a wizard-of-oz study with 24 blind individuals who were assigned a gamut of browsing tasks. The development of the model included extensive experiments with assorted feature sets and classifiers; we present the outcomes of these experiments and an analysis of the results.
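
A bare-bones sketch of dialogue act classification in this setting, assuming a labeled corpus like the wizard-of-oz data described above. The utterances, act labels, and bag-of-words naive Bayes model are invented for illustration; the paper experiments with assorted feature sets and classifiers:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented examples; the real model is trained on the wizard-of-oz corpus.
utterances = ["go to the next link", "read this paragraph aloud",
              "search for cheap flights", "stop reading now"]
acts = ["navigate", "read", "search", "command"]

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(utterances, acts)
print(clf.predict(["read the next section"]))  # predicted dialogue act
```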


Intelligent Virtual Agents | 2012

Fully automated generation of question-answer pairs for scripted virtual instruction

Pascal Kuyten; Timothy W. Bickmore; Svetlana Stoyanchev; Paul Piwek; Helmut Prendinger; Mitsuru Ishizuka

We introduce a novel approach for automatically generating a virtual instructor from textual input only. Our fully implemented system first analyzes the rhetorical structure of the input text and then creates various question-answer pairs using patterns. These patterns were derived from correlations found between the rhetorical structure of monologue texts and question-answer pairs in the corresponding dialogues. A selection of the candidate pairs is verbalized into a diverse collection of question-answer pairs. Finally, the system compiles the collection of question-answer pairs into scripts for a virtual instructor. Our end-to-end system presents questions in a fixed order, and the agent answers them. The system was evaluated with a group of twenty-four subjects, using three informed consent documents for clinical trials in the domain of colon cancer. Each document was explained by a virtual instructor using 1) text, 2) text and agent monologue, and 3) text and agent question-answering. Results show that an agent explaining an informed consent document did not yield significantly better comprehension scores, but did score higher on satisfaction, compared to the two control conditions.
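
The pattern-based generation step might look like the sketch below: map discourse relations to question templates, with the satellite span serving as the answer. The relation triples are hand-supplied here, whereas the described system derives them by analyzing the text's rhetorical structure:

```python
PATTERNS = {
    # rhetorical relation -> question template over the nucleus
    "cause":     "Why {nucleus}?",
    "condition": "When does {nucleus}?",
}

def qa_pairs(relations):
    """Turn (relation, nucleus, satellite) triples into QA pairs,
    using the satellite span as the answer."""
    pairs = []
    for relation, nucleus, satellite in relations:
        template = PATTERNS.get(relation)
        if template:
            pairs.append((template.format(nucleus=nucleus), satellite))
    return pairs

relations = [("cause", "do participants sign a consent form",
              "Because the trial involves medical risk, consent must be documented.")]
for question, answer in qa_pairs(relations):
    print("Q:", question)
    print("A:", answer)
```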
