Publication


Featured research published by Audrey N. Le.


International Conference on Machine Learning | 2005

The rich transcription 2005 spring meeting recognition evaluation

Jonathan G. Fiscus; Nicolas Radde; John S. Garofolo; Audrey N. Le; Jerome Ajot; Christophe Laprun

This paper presents the design and results of the Rich Transcription Spring 2005 (RT-05S) Meeting Recognition Evaluation. This evaluation is the third in a series of community-wide evaluations of language technologies in the meeting domain. For 2005, four evaluation tasks were supported. These included a speech-to-text (STT) transcription task and three diarization tasks: “Who Spoke When”, “Speech Activity Detection”, and “Source Localization.” The latter two were first-time experimental proof-of-concept tasks and were treated as “dry runs”. For the STT task, the lowest word error rate for the multiple distant microphone condition was 30.0%, which represented an impressive 33% relative reduction from the best result obtained in the last such evaluation – the Rich Transcription Spring 2004 Meeting Recognition Evaluation. For the diarization “Who Spoke When” task, the lowest diarization error rate was 18.56%, which represented a 19% relative reduction from that of RT-04S.
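As a quick illustration of the relative-reduction arithmetic quoted above (a sketch; the RT-04S baseline figures below are back-derived from the quoted percentages, not taken from the paper):

```python
def relative_reduction(old: float, new: float) -> float:
    """Relative error reduction: the fraction of the old error removed."""
    return (old - new) / old

# A 30.0% WER after a 33% relative reduction implies a prior best near
# 30.0 / (1 - 0.33) ~= 44.8% WER; similarly ~22.9% DER before 18.56%.
print(f"{relative_reduction(44.8, 30.0):.0%}")   # ~33% (STT, WER)
print(f"{relative_reduction(22.9, 18.56):.0%}")  # ~19% (diarization, DER)
```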


IEEE Transactions on Audio, Speech, and Language Processing | 2007

NIST Speaker Recognition Evaluations Utilizing the Mixer Corpora—2004, 2005, 2006

Mark A. Przybocki; Alvin F. Martin; Audrey N. Le

NIST has coordinated annual evaluations of text-independent speaker recognition from 1996 to 2006. This paper discusses the last three of these, which utilized conversational speech data from the Mixer Corpora recently collected by the Linguistic Data Consortium. We review the evaluation procedures, the matrix of test conditions included, and the performance trends observed. While most of the data was collected over telephone channels, one multichannel test condition utilizes a subset of Mixer conversations recorded simultaneously over multiple microphone channels and a telephone line. The corpus also includes some non-English conversations involving bilingual speakers, allowing an examination of the effect of language on performance results. On the various test conditions involving English-language conversational telephone data, considerable performance gains have been observed over the past three years.
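NIST scored these speaker-detection evaluations primarily with a detection cost function weighting misses and false alarms. A minimal sketch, using the cost parameters NIST published in the evaluation plans for this period (taken from those plans, not from the abstract above):

```python
def detection_cost(p_miss: float, p_fa: float,
                   c_miss: float = 10.0, c_fa: float = 1.0,
                   p_target: float = 0.01) -> float:
    """NIST speaker-detection cost: a weighted sum of the miss rate
    (on target trials) and the false-alarm rate (on non-target trials).
    Defaults are the parameter values used in the 2004-2006 evaluations."""
    return c_miss * p_miss * p_target + c_fa * p_fa * (1.0 - p_target)

# Example operating point: 10% misses, 1% false alarms.
print(detection_cost(0.10, 0.01))  # 0.0199
```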


2006 IEEE Odyssey - The Speaker and Language Recognition Workshop | 2006

NIST Speaker Recognition Evaluation Chronicles - Part 2

Mark A. Przybocki; Alvin F. Martin; Audrey N. Le

NIST has coordinated annual evaluations of text-independent speaker recognition since 1996. This update to an Odyssey 2004 paper concentrates on the past two years of the NIST evaluations. We discuss in particular the results of the 2004 and 2005 evaluations, and how they compare to earlier evaluation results. We also discuss the preparation and planning for the 2006 evaluation, which concludes with the evaluation workshop in San Juan, Puerto Rico, in June 2006.


2006 IEEE Odyssey - The Speaker and Language Recognition Workshop | 2006

The Current State of Language Recognition: NIST 2005 Evaluation Results

Alvin F. Martin; Audrey N. Le

In 2005, the National Institute of Standards and Technology (NIST) coordinated an evaluation of the language recognition capabilities of research systems developed by twelve participating sites. This evaluation followed fairly similar evaluations in 1996 and 2003. We describe here the protocols of the 2005 evaluation, including the data used, the evaluation rules, and the scoring metric. We present the overall performance results and compare these to results for the previous evaluations. We also discuss how the results varied across languages and the results of limited dialect recognition tests involving English and Mandarin speech data.
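The abstract references the scoring metric without restating it. As an illustration only, here is a minimal sketch of equal error rate (EER), a summary statistic commonly reported alongside the official cost metric in these detection evaluations; this is not necessarily the 2005 metric itself:

```python
import numpy as np

def equal_error_rate(target_scores, nontarget_scores):
    """EER: the operating point where the miss rate (targets rejected)
    equals the false-alarm rate (non-targets accepted), found here by
    sweeping every observed score as a candidate threshold."""
    thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))
    p_miss = np.array([(target_scores < t).mean() for t in thresholds])
    p_fa = np.array([(nontarget_scores >= t).mean() for t in thresholds])
    idx = np.argmin(np.abs(p_miss - p_fa))
    return (p_miss[idx] + p_fa[idx]) / 2.0

# Synthetic scores: well-separated targets give an EER near 16%.
rng = np.random.default_rng(0)
print(equal_error_rate(rng.normal(2, 1, 1000), rng.normal(0, 1, 1000)))
```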


International Journal of Speech Technology (ISSN 1381-2416) | 2004

Effects of Speech Recognition Accuracy on the Performance of DARPA Communicator Spoken Dialogue Systems

Gregory A. Sanders; Audrey N. Le

The DARPA Communicator program explored ways to construct better spoken-dialogue systems, with which users interact via speech alone to perform relatively complex tasks such as travel planning. During 2000 and 2001 two large data sets were collected from sessions in which paid users did travel planning using the Communicator systems that had been built by eight research groups. The research groups improved their systems intensively during the ten months between the two data collections. In this paper, we analyze these data sets to estimate the effects of speech recognition accuracy, as measured by Word Error Rate (WER), on other metrics. The effects that we found were linear. We found a correlation between WER and Task Completion, and that correlation, unexpectedly, remained more or less linear even for high values of WER. The picture for User Satisfaction metrics is more complex: we found little effect of WER on User Satisfaction for WER less than about 35 to 40% in the 2001 data. The size of the effect of WER on Task Completion was less in 2001 than in 2000, and we believe this difference is due to improved strategies for accomplishing tasks despite speech recognition errors, which is an important accomplishment of the research groups who built the Communicator implementations. We show that additional factors must account for much of the variability in task success, and we present multivariate linear regression models for task success on the 2001 data. We also discuss the apparent gaps in the coverage of our metrics for spoken dialogue systems.
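The multivariate regression analysis described above can be reproduced in miniature. A sketch on synthetic data; the variable names, coefficients, and data below are illustrative stand-ins, not the Communicator data:

```python
import numpy as np

# Synthetic stand-in for the 2001-style analysis: regress task success
# on WER plus one additional factor, as the paper's multivariate models do.
rng = np.random.default_rng(1)
n = 200
wer = rng.uniform(0, 60, n)               # word error rate, in percent
dialogue_len = rng.uniform(2, 20, n)      # hypothetical extra factor
success = 90 - 0.8 * wer - 1.5 * dialogue_len + rng.normal(0, 5, n)

# Ordinary least squares via numpy's least-squares solver.
X = np.column_stack([np.ones(n), wer, dialogue_len])
coef, *_ = np.linalg.lstsq(X, success, rcond=None)
print(coef)  # ~[90, -0.8, -1.5]: intercept, WER slope, length slope
```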


Conference of the International Speech Communication Association | 2001

DARPA Communicator dialog travel planning systems: the June 2000 data collection.

Marilyn A. Walker; John S. Aberdeen; Julie E. Boland; Elizabeth Owen Bratt; John S. Garofolo; Lynette Hirschman; Audrey N. Le; Sungbok Lee; Shrikanth Narayanan; Kishore Papineni; Bryan L. Pellom; Joseph Polifroni; Alexandros Potamianos; P. Prabhu; Alexander I. Rudnicky; Gregory A. Sanders; Stephanie Seneff; David Stallard; Steve Whittaker


Conference of the International Speech Communication Association | 2002

DARPA Communicator: cross-system results for the 2001 evaluation.

Marilyn A. Walker; Alexander I. Rudnicky; Rashmi Prasad; John S. Aberdeen; Elizabeth Owen Bratt; John S. Garofolo; Helen Wright Hastie; Audrey N. Le; Bryan L. Pellom; Alexandros Potamianos; Rebecca J. Passonneau; Salim Roukos; Gregory A. Sanders; Stephanie Seneff; David Stallard


Conference of the International Speech Communication Association | 2002

DARPA Communicator evaluation: progress from 2000 to 2001

Marilyn A. Walker; Alexander I. Rudnicky; John S. Aberdeen; Elizabeth Owen Bratt; John S. Garofolo; Helen Wright Hastie; Audrey N. Le; Bryan L. Pellom; Alexandros Potamianos; Rebecca J. Passonneau; Rashmi Prasad; Salim Roukos; Gregory A. Sanders; Stephanie Seneff; David Stallard


Odyssey | 2008

NIST 2007 language recognition evaluation.

Alvin F. Martin; Audrey N. Le


Language Resources and Evaluation | 2006

Edit Distance: A Metric for Machine Translation Evaluation.

Mark A. Przybocki; Gregory A. Sanders; Audrey N. Le
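The paper's title names its metric. A minimal sketch of word-level edit (Levenshtein) distance in its standard dynamic-programming formulation; whether the paper uses exactly this variant is not stated in the listing above:

```python
def edit_distance(hyp: list[str], ref: list[str]) -> int:
    """Word-level Levenshtein distance: the minimum number of
    substitutions, insertions, and deletions turning `hyp` into `ref`."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n]

print(edit_distance("the cat sat".split(), "a cat sat down".split()))  # 2
```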

Collaboration


Dive into Audrey N. Le's collaborations.

Top Co-Authors

Gregory A. Sanders
National Institute of Standards and Technology

John S. Garofolo
National Institute of Standards and Technology

Alvin F. Martin
National Institute of Standards and Technology

Mark A. Przybocki
National Institute of Standards and Technology

Bryan L. Pellom
University of Colorado Boulder

Stephanie Seneff
Massachusetts Institute of Technology

Alexandros Potamianos
National Technical University of Athens