Publication


Featured research published by Douglas A. Jones.


International Conference on Acoustics, Speech, and Signal Processing | 2003

The SuperSID project: exploiting high-level information for high-accuracy speaker recognition

Douglas A. Reynolds; Walter D. Andrews; Joseph P. Campbell; Jiri Navratil; Barbara Peskin; André Gustavo Adami; Qin Jin; David Klusacek; Joy S. Abramson; Radu Mihaescu; John J. Godfrey; Douglas A. Jones; Bing Xiang

The area of automatic speaker recognition has been dominated by systems using only short-term, low-level acoustic information, such as cepstral features. While these systems have indeed produced very low error rates, they ignore other levels of information beyond low-level acoustics that convey speaker information. Recently published work has shown examples that such high-level information can be used successfully in automatic speaker recognition systems and has the potential to improve accuracy and add robustness. For the 2002 JHU CLSP summer workshop, the SuperSID project (http://www.clsp.jhu.edu/ws2002/groups/supersid/) was undertaken to exploit these high-level information sources and dramatically increase speaker recognition accuracy on a defined NIST evaluation corpus and task. The paper provides an overview of the structure, data, task, tools, and accomplishments of this project. Wide-ranging approaches using pronunciation models, prosodic dynamics, pitch and duration features, phone streams, and conversational interactions were explored and developed. We show how these novel features and classifiers indeed provide complementary information and can be fused together to drive down the equal error rate on the 2001 NIST extended data task to 0.2%, a 71% relative reduction in error over the previous state of the art.
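The fusion-and-evaluation idea described above can be illustrated with a minimal sketch: per-system scores are combined by a weighted sum, and the equal error rate (EER) is estimated by sweeping a decision threshold. All scores and weights below are toy placeholders, not values from the SuperSID systems.

```python
# Sketch: score-level fusion of speaker-recognition subsystems,
# followed by an equal-error-rate (EER) estimate.

def fuse(score_lists, weights):
    """Weighted-sum fusion of per-system score lists, one score per trial."""
    return [sum(w * s for w, s in zip(weights, trial))
            for trial in zip(*score_lists)]

def eer(target_scores, impostor_scores):
    """Estimate the EER by sweeping a threshold over all observed scores."""
    best = 1.0
    for t in sorted(target_scores + impostor_scores):
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        frr = sum(s < t for s in target_scores) / len(target_scores)
        best = min(best, max(far, frr))
    return best

# Two toy subsystems (e.g. acoustic and prosodic), four trials each;
# the first two trials are target trials, the last two are impostors.
acoustic = [0.9, 0.8, 0.3, 0.2]
prosodic = [0.7, 0.9, 0.4, 0.1]
fused = fuse([acoustic, prosodic], weights=[0.6, 0.4])
print(eer(fused[:2], fused[2:]))  # 0.0 for these well-separated toy scores
```

In practice the fusion weights themselves would be trained on held-out data rather than fixed by hand.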


International Conference on Computational Linguistics | 1996

Role of word sense disambiguation in lexical acquisition: predicting semantics from syntactic cues

Bonnie J. Dorr; Douglas A. Jones

This paper addresses the issue of word-sense ambiguity in extraction from machine-readable resources for the construction of large-scale knowledge sources. We describe two experiments: one which ignored word-sense distinctions, resulting in 6.3% accuracy for semantic classification of verbs based on (Levin, 1993); and one which exploited word-sense distinctions, resulting in 97.9% accuracy. These experiments were dual purpose: (1) to validate the central thesis of the work of (Levin, 1993), i.e., that verb semantics and syntactic behavior are predictably related; (2) to demonstrate that a 15-fold improvement can be achieved in deriving semantic information from syntactic cues if we first divide the syntactic cues into distinct groupings that correlate with different word senses. Finally, we show that we can provide effective acquisition techniques for novel word senses using a combination of online sources.
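The 15-fold figure follows directly from the two accuracies reported above:

```python
# Relative improvement in semantic-classification accuracy when
# word-sense distinctions are exploited (figures from the abstract).
without_wsd = 6.3   # percent accuracy, word senses ignored
with_wsd = 97.9     # percent accuracy, word senses exploited
print(round(with_wsd / without_wsd, 1))  # prints 15.5, roughly 15-fold
```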


International Conference on Acoustics, Speech, and Signal Processing | 2005

Measuring human readability of machine generated text: three case studies in speech recognition and machine translation

Douglas A. Jones; Edward Gibson; Wade Shen; Neil Granoien; Martha Herzog; Douglas A. Reynolds; Clifford J. Weinstein

We present highlights from three experiments that test the readability of current state-of-the-art system output from: (1) an automated English speech-to-text (STT) system; (2) a text-based Arabic-to-English machine translation (MT) system; and (3) an audio-based Arabic-to-English MT process. We measure readability in terms of reaction time and passage comprehension in each case, applying standard psycholinguistic testing procedures and a modified version of the standard defense language proficiency test for Arabic called the DLPT*. We learned that: (1) subjects are slowed down by about 25% when reading STT system output; (2) text-based MT systems enable an English speaker to pass Arabic Level 2 on the DLPT*; and (3) audio-based MT systems do not enable English speakers to pass Arabic Level 2. We intend for these generic measures of readability to predict performance of more application-specific tasks.


International Conference on Computational Linguistics | 2000

Toward a scoring function for quality-driven machine translation

Douglas A. Jones; Gregory M. Rusk

We describe how we constructed an automatic scoring function for machine translation quality; this function makes use of arbitrarily many pieces of natural language processing software designed to process English-language text. By machine-learning values of functions available inside the software and by constructing functions that yield values based upon the software output, we are able to achieve preliminary, positive results in machine-learning the difference between human-produced English and machine-translated English. We suggest how the scoring function may be used for MT system development.
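The scoring-function idea above can be sketched as follows. Numeric features feed a simple linear classifier that separates human English from MT English; the feature here is a toy word-count proxy, whereas the paper derived its feature values from English NLP software, and the training data below is an illustrative stand-in.

```python
# Sketch: a learned quality score that separates human-produced
# English from MT output using a simple perceptron.

def features(sentence):
    # Stand-in features; real ones would come from parsers/taggers.
    return [len(sentence.split()), 1.0]  # word count + bias term

def train_perceptron(examples, epochs=20):
    """examples: list of (sentence, label), label +1 human, -1 MT."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for sent, label in examples:
            x = features(sent)
            if label * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + label * xi for wi, xi in zip(w, x)]
    return w

def score(w, sentence):
    """Higher scores suggest more human-like English."""
    return sum(wi * xi for wi, xi in zip(w, features(sentence)))

# Toy training set: a fluent (longer) vs. a garbled (shorter) sentence.
data = [("the translation reads naturally and fluently here", 1),
        ("translation bad here", -1)]
w = train_perceptron(data)
```

After training, `score(w, sentence)` ranks candidate outputs, which is the sense in which such a function could guide MT system development.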


North American Chapter of the Association for Computational Linguistics | 2007

ILR-Based MT Comprehension Test with Multi-Level Questions

Douglas A. Jones; Martha Herzog; Hussny Ibrahim; Arvind Jairam; Wade Shen; Edward Gibson; Michael Emonts

We present results from a new Interagency Language Roundtable (ILR) based comprehension test. This new test design presents questions at multiple ILR difficulty levels within each document. We incorporated Arabic machine translation (MT) output from three independent research sites, arbitrarily merging these materials into one MT condition. We contrast the MT condition, for both text and audio data types, with high quality human reference Gold Standard (GS) translations. Overall, subjects achieved 95% comprehension for GS and 74% for MT, across 4 genres and 3 difficulty levels. Surprisingly, comprehension rates do not correlate highly with translation error rates, suggesting that we are measuring an additional dimension of MT quality. We observed that it takes 15% more time overall to read MT than GS.
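The weak-correlation observation above is the kind of claim one checks with a Pearson correlation over per-document pairs of comprehension rate and translation error rate. The values below are illustrative placeholders, not the study's data.

```python
# Pearson correlation between per-document comprehension rates and
# translation error rates (illustrative numbers only).
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

comprehension = [0.78, 0.71, 0.74, 0.73]  # hypothetical MT comprehension
error_rate    = [0.30, 0.42, 0.31, 0.45]  # hypothetical TER-style scores
print(pearson(comprehension, error_rate))
```

A coefficient near zero for such pairs would support the conclusion that comprehension tests measure a dimension of MT quality beyond what error rates capture.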


Spoken Language Technology Workshop | 2006

Experimental Facility for Measuring the Impact of Environmental Noise and Speaker Variation on Speech-to-Speech Translation Devices

Douglas A. Jones; Arvind Jairam; Wade Shen; Paul D. Gatewood; John D. Tardelli; Michael Emonts

We describe the construction and use of a laboratory facility for testing the performance of speech-to-speech translation devices. Approximately 1500 English phrases from various military domains were recorded as spoken by each of 30 male and 12 female English speakers with variation in speaker accent, for a total of approximately 60,000 phrases available for experimentation. We describe an initial experiment using the facility which shows the impact of environmental noise and speaker variability on phrase recognition accuracy for two commercially available one-way speech-to-speech translation devices configured for English-to-Arabic.


Neural Information Processing Systems | 2003

Phonetic Speaker Recognition with Support Vector Machines

William M. Campbell; Joseph P. Campbell; Douglas A. Reynolds; Douglas A. Jones; Timothy R. Leek


International Conference on Acoustics, Speech, and Signal Processing | 2004

High-level speaker verification with support vector machines

William M. Campbell; Joseph P. Campbell; Douglas A. Reynolds; Douglas A. Jones; Timothy R. Leek


International Conference on Acoustics, Speech, and Signal Processing | 2003

Using prosodic and conversational features for high-performance speaker recognition: report from JHU WS'02

Barbara Peskin; Jiri Navratil; Joy S. Abramson; Douglas A. Jones; David Klusacek; Douglas A. Reynolds; Bing Xiang


Conference of the International Speech Communication Association | 2003

Measuring the readability of automatic speech-to-text transcripts

Douglas A. Jones; Florian Wolf; Edward Gibson; Elliott Williams; Evelina Fedorenko; Douglas A. Reynolds; Marc A. Zissman

Collaboration


Dive into Douglas A. Jones's collaborations.

Top Co-Authors

Douglas A. Reynolds
Massachusetts Institute of Technology

Wade Shen
Massachusetts Institute of Technology

Clifford J. Weinstein
Massachusetts Institute of Technology

Joseph P. Campbell
Massachusetts Institute of Technology