Publication


Featured research published by David S. Pallett.


human language technology | 1993

Multi-site data collection and evaluation in spoken language understanding

Lynette Hirschman; Madeleine Bates; Deborah Dahl; William M. Fisher; John S. Garofolo; David S. Pallett; Kate Hunicke-Smith; Patti Price; Alexander I. Rudnicky; Evelyne Tzoukermann

The Air Travel Information System (ATIS) domain serves as the common task for DARPA spoken language system research and development. The approaches and results possible in this rapidly growing area are structured by available corpora, annotations of that data, and evaluation methods. Coordination of this crucial infrastructure is the charter of the Multi-Site ATIS Data COllection Working group (MADCOW). We focus here on selection of training and test data, evaluation of language understanding, and the continuing search for evaluation methods that will correlate well with expected performance of the technology in applications.


Journal of the Acoustical Society of America | 1987

An acoustic‐phonetic data base

William M. Fisher; Victor W. Zue; Jared Bernstein; David S. Pallett

DARPA has sponsored the design and collection of a large speech data base. Six hundred and thirty speakers read ten sentences each. Two sentences were constant for all speakers; the remaining eight sentences were selected from a set of 450 designed at MIT and 1890 selected at TI from text sources. The set of sentences is phonetically rich, balanced, and deep. Although all recordings were made in Dallas, we sampled as many varieties of American English as possible. Selection of volunteer speakers was based on their childhood locality to give a balanced representation of geographical origins. The subject population is adult; 70% male; young (63% in their twenties); well educated (78% with bachelors degree); and predominantly white (96%). Recordings were made in a noise‐reducing sound booth using a Sennheiser headset microphone and digitized at 20 kHz. A natural reading style was encouraged. The recordings are complete, and time‐registered phonetic transcriptions are being added to the 6300 speech files at ...
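
The time-registered phonetic transcriptions mentioned above pair each phone label with its sample range in the waveform. A minimal sketch of reading such a transcription, assuming the one-phone-per-line "begin_sample end_sample label" convention used in the later TIMIT .phn files (the file name and 16 kHz rate below are illustrative assumptions, not details from this abstract):

```python
# Minimal sketch: parse a TIMIT-style time-aligned phonetic transcription.
# Assumes one "begin_sample end_sample label" triple per line (.phn convention);
# the sample rate and file name are illustrative assumptions.
from typing import List, Tuple

SAMPLE_RATE = 16000  # assumed rate of the distributed waveform files


def read_phn(path: str) -> List[Tuple[float, float, str]]:
    """Return (start_sec, end_sec, phone_label) for each labeled segment."""
    segments = []
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            begin, end, label = line.split()
            segments.append((int(begin) / SAMPLE_RATE,
                             int(end) / SAMPLE_RATE,
                             label))
    return segments


# Usage (hypothetical file name):
# for start, end, phone in read_phn("SA1.PHN"):
#     print(f"{phone:5s} {start:.3f}-{end:.3f} s")
```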


international conference on acoustics, speech, and signal processing | 1989

Benchmark tests for DARPA resource management database performance evaluations

David S. Pallett

A nominally 1000-word resource management database for continuous speech recognition was developed for use in the DARPA Speech Research Program. This database has now been used at several sites for benchmark tests, and the database is expected to be made available to a wider community in the near future. The author documents the structure of the benchmark tests, including the selection of test material and details of studies of scoring algorithms.


human language technology | 1990

DARPA ATIS test results June 1990

David S. Pallett; William M. Fisher; Jonathan G. Fiscus; John S. Garofolo

The first Spoken Language System tests to be conducted in the DARPA Air Travel Information System (ATIS) domain took place during the period June 15 - 20, 1990. This paper presents a brief description of the test protocol, comparator software used for scoring results at NIST, test material selection process, and preliminary tabulation of the scored results for seven SLS systems from five sites: BBN, CMU, MIT/LCS, SRI and Unisys. One system, designated cmu-spi(r) in this paper, made use of digitized speech as input (.wav files), and generated CAS-format answers. Other systems made use of SNOR transcriptions (.snr files) as input.
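
The comparator scoring referred to above checks each system's answer against a reference answer for the same query. A rough sketch, under the simplifying assumption that an answer can be modeled as a set of result tuples matched exactly (the actual CAS comparator rules and file formats are more elaborate and are not described in this abstract; the query IDs below are made up):

```python
# Hedged sketch of answer-comparator scoring: each query's answer is modeled
# as a frozen set of result tuples, and a hypothesis is scored correct only
# when it matches the reference exactly. This simplifies the real CAS
# comparator, whose matching rules are not given here.
from typing import Dict, FrozenSet, Tuple

Answer = FrozenSet[Tuple[str, ...]]


def score_answers(reference: Dict[str, Answer],
                  hypothesis: Dict[str, Answer]) -> float:
    """Return the fraction of queries whose answers match the reference."""
    if not reference:
        return 0.0
    correct = sum(1 for qid, ref in reference.items()
                  if hypothesis.get(qid) == ref)
    return correct / len(reference)


# Illustrative (made-up) query ID and flight tuples:
ref = {"q1": frozenset({("AA", "BOS", "DEN")})}
hyp = {"q1": frozenset({("AA", "BOS", "DEN")})}
print(score_answers(ref, hyp))  # 1.0
```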


Journal of the Acoustical Society of America | 1988

Benchmark test procedures for automatic speech recognition systems

David S. Pallett

Benchmark test procedures have been developed and implemented in the DARPA‐sponsored program in automatic recognition of continuous speech. These tests make use of a database of recorded speech that includes material for both speaker‐dependent and speaker‐independent technologies. In implementing these uniform tests, it is necessary to define a standard lexicon of units to be scored and a standard or reference orthographic representation for each sentence, and to select a scoring procedure and error taxonomy. Considerations taken in identifying the reference lexicon and representations will be outlined. For the DARPA program, a dynamic‐programming string alignment procedure was implemented, and errors were defined in terms of substitutions, insertions, and deletions. This scoring software is publicly available. Known limitations of this procedure will be discussed. [Work supported by DARPA.]
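
The dynamic-programming string alignment described above aligns each recognized word string against its reference orthography and counts substitutions, insertions, and deletions. A minimal sketch of such an alignment, assuming unit costs for all three error types (the NIST scoring software's exact weights and tie-breaking rules may differ):

```python
# Minimal sketch of dynamic-programming string alignment for scoring a
# recognizer hypothesis against a reference word string. Unit costs for
# substitution, insertion, and deletion are assumed.
from typing import List, Tuple


def align(ref: List[str], hyp: List[str]) -> Tuple[int, int, int]:
    """Return (substitutions, insertions, deletions) from a best alignment."""
    n, m = len(ref), len(hyp)
    # cost[i][j] = minimal edit cost aligning ref[:i] with hyp[:j]
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = i          # all deletions
    for j in range(1, m + 1):
        cost[0][j] = j          # all insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = cost[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            ins = cost[i][j - 1] + 1
            dele = cost[i - 1][j] + 1
            cost[i][j] = min(sub, ins, dele)
    # Backtrace through the table to count error types.
    n_sub = n_ins = n_del = 0
    i, j = n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0
                and cost[i][j] == cost[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])):
            n_sub += ref[i - 1] != hyp[j - 1]
            i, j = i - 1, j - 1
        elif j > 0 and cost[i][j] == cost[i][j - 1] + 1:
            n_ins += 1
            j -= 1
        else:
            n_del += 1
            i -= 1
    return n_sub, n_ins, n_del


ref = "show me the flights to denver".split()
hyp = "show me flights to the denver".split()
s, ins, d = align(ref, hyp)
print(f"S={s} I={ins} D={d} WER={(s + ins + d) / len(ref):.2%}")
```

Errors are reported relative to the reference word count, so the example above yields one insertion and one deletion against a six-word reference.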


human language technology | 1989

National Institute of Standards and Technology (NIST): formerly National Bureau of Standards

David S. Pallett

Our objectives in the DARPA Spoken Language Program are: (1) to provide a central role in speech database design, development and distribution; and (2) to design, coordinate implementation of, and analyze results of performance tests.


Archive | 1993

TIMIT acoustic-phonetic continuous speech corpus

John S. Garofolo; Lori Faith Lamel; William M. Fisher; Jonathan G. Fiscus; David S. Pallett; Nancy L. Dahlgren; Victor W. Zue


Archive | 1986

The DARPA TIMIT Acoustic-Phonetic Continuous Speech Corpus CDROM

John S. Garofolo; Lori Lamel; William M. Fisher; Jonathan G. Fiscus; David S. Pallett; Nancy L. Dahlgren


Archive | 1993

DARPA TIMIT acoustic-phonetic continuous speech corpus

John S. Garofolo; Lori Lamel; William M. Fisher; Jonathan G. Fiscus; David S. Pallett; Nancy L. Dahlgren


Archive | 1993

DARPA TIMIT: acoustic-phonetic continuous speech corpus CD-ROM, NIST speech disc 1-1.1

John S. Garofolo; William M. Fisher; Jonathan G. Fiscus; David S. Pallett; Nancy L. Dahlgren

Collaboration


Dive into David S. Pallett's collaboration.

Top Co-Authors

John S. Garofolo (National Institute of Standards and Technology)
Jonathan G. Fiscus (National Institute of Standards and Technology)
William M. Fisher (Massachusetts Institute of Technology)
Mark A. Przybocki (National Institute of Standards and Technology)
Nancy L. Dahlgren (National Institute of Standards and Technology)
Lori Lamel (Centre national de la recherche scientifique)
Alvin F. Martin (National Institute of Standards and Technology)
Victor W. Zue (Massachusetts Institute of Technology)