Jonathan G. Fiscus
National Institute of Standards and Technology
Publication
Featured research published by Jonathan G. Fiscus.
international conference on machine learning | 2005
Jonathan G. Fiscus; Nicolas Radde; John S. Garofolo; Audrey N. Le; Jerome Ajot; Christophe Laprun
This paper presents the design and results of the Rich Transcription Spring 2005 (RT-05S) Meeting Recognition Evaluation. This evaluation is the third in a series of community-wide evaluations of language technologies in the meeting domain. For 2005, four evaluation tasks were supported: a speech-to-text (STT) transcription task and three diarization tasks: “Who Spoke When”, “Speech Activity Detection”, and “Source Localization.” The latter two were first-time experimental proof-of-concept tasks and were treated as “dry runs”. For the STT task, the lowest word error rate for the multiple distant microphone condition was 30.0%, which represented an impressive 33% relative reduction from the best result obtained in the last such evaluation, the Rich Transcription Spring 2004 Meeting Recognition Evaluation. For the diarization “Who Spoke When” task, the lowest diarization error rate was 18.56%, which represented a 19% relative reduction from that of RT-04S.
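The word error rate and the relative reductions quoted above follow standard definitions: WER is the Levenshtein edit distance over word sequences normalized by reference length, and a relative reduction compares two evaluations' error rates. A minimal sketch, assuming nothing about the actual NIST scoring toolkit (function names are illustrative):

```python
def wer(ref, hyp):
    """Word error rate: (substitutions + insertions + deletions) / len(ref),
    computed by Levenshtein distance over word sequences."""
    r, h = ref.split(), hyp.split()
    # d[i][j] = minimum edits turning r[:i] into h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

def relative_reduction(prev_rate, curr_rate):
    """Relative error-rate reduction between two evaluation cycles."""
    return (prev_rate - curr_rate) / prev_rate
```

For example, `wer("a b c", "a x c")` gives one substitution over three reference words, i.e. 1/3; a drop from a 20% to a 16% error rate is a 20% relative reduction.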
Topic detection and tracking | 2002
Jonathan G. Fiscus; George R. Doddington
The objective of the Topic Detection and Tracking (TDT) program is to develop technologies that search, organize, and structure multilingual, news-oriented textual materials from a variety of broadcast news media. This research program uses controlled laboratory simulations of hypothetical systems to test the efficacy of potential technologies, to gauge research progress, and to provide a forum for the exchange of research information. This chapter introduces TDT's evaluation methodology, including the Linguistic Data Consortium's TDT corpora, the evaluation metrics used in TDT, and the five TDT research tasks: Topic Tracking, Link Detection, Topic Detection, First Story Detection, and Story Segmentation.
international conference on acoustics, speech, and signal processing | 1993
William M. Fisher; Jonathan G. Fiscus
In evaluating speech recognition, an alignment of reference symbols with hypothesized symbols is the basis of other measures. The authors report on advances made at the National Institute of Standards and Technology on algorithms for alignment. They empirically justify phonological alignment, which minimizes differences in phonological features. A novel technique for identifying splits and merges is briefly described.
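The alignment described here is dynamic programming with a backtrace that labels each reference/hypothesis pair as correct, substituted, inserted, or deleted. A minimal sketch with illustrative uniform costs, not the paper's algorithm; the phonological alignment it justifies would replace the fixed substitution cost with a distance over phonological features:

```python
def align(ref, hyp, sub_cost=lambda a, b: 0 if a == b else 4,
          ins_cost=3, del_cost=3):
    """Align two symbol sequences; return one op per aligned pair:
    'C' correct, 'S' substitution, 'D' deletion, 'I' insertion."""
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]   # cumulative costs
    bp = [[None] * (m + 1) for _ in range(n + 1)]  # backpointers
    for i in range(1, n + 1):
        d[i][0] = d[i - 1][0] + del_cost
        bp[i][0] = 'D'
    for j in range(1, m + 1):
        d[0][j] = d[0][j - 1] + ins_cost
        bp[0][j] = 'I'
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            candidates = [
                (d[i - 1][j - 1] + sub_cost(ref[i - 1], hyp[j - 1]),
                 'C' if ref[i - 1] == hyp[j - 1] else 'S'),
                (d[i - 1][j] + del_cost, 'D'),
                (d[i][j - 1] + ins_cost, 'I'),
            ]
            d[i][j], bp[i][j] = min(candidates)
    # Backtrace from the final cell to recover the operation sequence.
    ops, i, j = [], n, m
    while i > 0 or j > 0:
        op = bp[i][j]
        ops.append(op)
        if op in ('C', 'S'):
            i, j = i - 1, j - 1
        elif op == 'D':
            i -= 1
        else:
            j -= 1
    return list(reversed(ops))
```

With word tokens, `align(["a", "b", "c"], ["a", "x", "c"])` yields `['C', 'S', 'C']`; a split or merge would surface as mismatched S/I/D runs, which the paper's technique identifies.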
human language technology | 1992
David S. Pallett; Nancy L. Dahlgren; Jonathan G. Fiscus; William M. Fisher; John S. Garofolo; Brett C. Tjaden
This paper documents the third in a series of Benchmark Tests for the DARPA Air Travel Information System (ATIS) common task domain. The first results in this series were reported at the June 1990 Speech and Natural Language Workshop [1], and the second at the February 1991 Speech and Natural Language Workshop [2]. The February 1992 Benchmark Tests include: (1) ATIS domain spontaneous speech recognition system tests, (2) ATIS natural language understanding tests, and (3) ATIS spoken language understanding tests.
human language technology | 1990
David S. Pallett; William M. Fisher; Jonathan G. Fiscus; John S. Garofolo
The first Spoken Language System tests to be conducted in the DARPA Air Travel Information System (ATIS) domain took place during the period June 15 - 20, 1989. This paper presents a brief description of the test protocol, the comparator software used for scoring results at NIST, the test material selection process, and a preliminary tabulation of the scored results for seven SLS systems from five sites: BBN, CMU, MIT/LCS, SRI and Unisys. One system, designated cmu-spi(r) in this paper, made use of digitized speech as input (.wav files) and generated CAS-format answers. Other systems made use of SNOR transcriptions (.snr files) as input.
workshop on applications of computer vision | 2009
R. Travis Rose; Jonathan G. Fiscus; Paul Over; John S. Garofolo; Martial Michel
This paper is a summary of the 2008 TRECVid Event Detection evaluation track. TRECVid is a laboratory-style evaluation that aims to model real-world situations or significant component tasks. The event detection evaluation was organized to address detection of a set of specific events that would be of potential interest to an operator in the surveillance domain. This paper describes the video data, evaluation tasks, evaluation metrics, and results of the event detection evaluation.
IEEE Pervasive Computing | 2009
Antoine Fillinger; Imad Hamchi; Stéphane Degré; Lukas Diduch; R. Travis Rose; Jonathan G. Fiscus; Vincent M. Stanford
This article looks at the data and metrology tools developed by the National Institute of Standards and Technology for the research community, including common middleware for distributed sensor data acquisition and processing.
advanced video and signal based surveillance | 2009
Jonathan G. Fiscus; John S. Garofolo; R. Travis Rose; Martial Michel
Technologies that track a specific person as they traverse a network of surveillance cameras can be used as the basis for a multitude of video surveillance applications, including mass transit monitoring, large venue security, building security, and the like. In order to continue supporting the development of robust people-tracking technologies, the first AVSS Multiple Camera Person Tracking (MCPT) Challenge Evaluation was established to provide data and evaluation resources for researchers to build Single Person Tracking (SPT) technologies. This special session will focus on the AVSS MCPT Challenge Evaluation and will include a description of the evaluation task, the i-LIDS multiple-camera tracking scenario data set used for the evaluation, and presentations by the challenge evaluation participants describing their systems.
Machine Translation | 2018
Audrey N. Tong; Lukasz L. Diduch; Jonathan G. Fiscus; Yasaman Haghpanah; Shudong Huang; David M. Joy; Kay Peterson; Ian Soboroff
Initiated in conjunction with DARPA’s low resource languages for emergent incidents (LORELEI) Program, the NIST LoReHLT (Low Resource Human Language Technology) evaluation series seeks to incubate research on fundamental natural language processing tasks in under-resourced languages. While part of the LORELEI program, LoReHLT is an open evaluation workshop that anyone may participate in, with its first evaluation taking place in July 2016. Eight teams, out of the 21 teams that registered, participated in the evaluation over three tasks—machine translation, named entity recognition, and situation frame—in the surprise language Uyghur.
IEEE Transactions on Audio, Speech, and Language Processing | 2012
Sadaoki Furui; Jonathan G. Fiscus; Gerald Friedland; Thomas Hain
The 13 papers in this special section focus on new frontiers in rich transcription. The papers concentrate mainly on three areas: speaker diarization approaches, error analysis techniques, and features; capitalization and punctuation; and descriptions of complete rich transcription systems.