Publication


Featured research published by Jonathan G. Fiscus.


International Conference on Machine Learning | 2005

The rich transcription 2005 spring meeting recognition evaluation

Jonathan G. Fiscus; Nicolas Radde; John S. Garofolo; Audrey N. Le; Jerome Ajot; Christophe Laprun

This paper presents the design and results of the Rich Transcription Spring 2005 (RT-05S) Meeting Recognition Evaluation. This evaluation is the third in a series of community-wide evaluations of language technologies in the meeting domain. For 2005, four evaluation tasks were supported. These included a speech-to-text (STT) transcription task and three diarization tasks: “Who Spoke When”, “Speech Activity Detection”, and “Source Localization.” The latter two were first-time experimental proof-of-concept tasks and were treated as “dry runs”. For the STT task, the lowest word error rate for the multiple distant microphone condition was 30.0%, which represented an impressive 33% relative reduction from the best result obtained in the last such evaluation, the Rich Transcription Spring 2004 Meeting Recognition Evaluation. For the diarization “Who Spoke When” task, the lowest diarization error rate was 18.56%, which represented a 19% relative reduction from that of RT-04S.
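Word error rate, the STT metric cited above, is the ratio of substitutions, insertions, and deletions in a minimum-cost word alignment to the number of reference words. A minimal sketch of that computation (an illustration, not NIST's sclite scorer):

```python
def word_error_rate(ref: str, hyp: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    found via Levenshtein dynamic programming over words."""
    r, h = ref.split(), hyp.split()
    # d[i][j] = min edit distance between first i ref words and first j hyp words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(h) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j - 1] + sub,  # match / substitution
                          d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1)        # insertion
    return d[len(r)][len(h)] / len(r)
```

For example, scoring the hypothesis "the cat sat on mat" against the reference "the cat sat on the mat" yields one deletion over six reference words, a WER of about 16.7%.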


Topic Detection and Tracking | 2002

Topic detection and tracking evaluation overview

Jonathan G. Fiscus; George R. Doddington

The objective of the Topic Detection and Tracking (TDT) program is to develop technologies that search, organize, and structure multilingual, news-oriented textual materials from a variety of broadcast news media. This research program uses controlled laboratory simulations of hypothetical systems to test the efficacy of potential technologies, to gauge research progress, and to provide a forum for the exchange of research information. This chapter introduces TDT's evaluation methodology, including the Linguistic Data Consortium's TDT corpora, the evaluation metrics used in TDT, and the five TDT research tasks: Topic Tracking, Link Detection, Topic Detection, First Story Detection, and Story Segmentation.


International Conference on Acoustics, Speech, and Signal Processing | 1993

Better alignment procedures for speech recognition evaluation

William M. Fisher; Jonathan G. Fiscus

In evaluating speech recognition, an alignment of reference symbols with hypothesized symbols is the basis of other measures. The authors report on advances made at the National Institute of Standards and Technology on algorithms for alignment. They empirically justify phonological alignment, which minimizes differences in phonological features. A novel technique for identifying splits and merges is briefly described.
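The phonological alignment described above can be sketched as a standard dynamic-programming alignment whose substitution cost counts the phonological features on which two phones disagree, so that "t" vs. "d" (voicing only) aligns more cheaply than "t" vs. "z". The three-feature table below is a toy stand-in, not the actual NIST feature inventory:

```python
# Toy phonological feature vectors (voicing, alveolar place, fricative manner).
# Illustrative only; a real system would use a full phonological feature set.
FEATURES = {
    "p": (0, 0, 0), "b": (1, 0, 0),
    "t": (0, 1, 0), "d": (1, 1, 0),
    "s": (0, 1, 1), "z": (1, 1, 1),
}

def sub_cost(a: str, b: str) -> int:
    """Substitution cost = number of phonological features where a and b differ."""
    return sum(x != y for x, y in zip(FEATURES[a], FEATURES[b]))

def align_cost(ref: list, hyp: list, indel: int = 2) -> int:
    """Minimum alignment cost with feature-weighted substitutions (DP)."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i * indel              # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j * indel              # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(d[i - 1][j - 1] + sub_cost(ref[i - 1], hyp[j - 1]),
                          d[i - 1][j] + indel,   # deletion
                          d[i][j - 1] + indel)   # insertion
    return d[len(ref)][len(hyp)]
```

Under this weighting, aligning ["t"] against ["d"] costs 1 (one feature apart), while aligning ["t"] against ["z"] costs 2, so the alignment preferred is the phonologically closer one.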


Human Language Technology | 1992

DARPA February 1992 ATIS benchmark test results

David S. Pallett; Nancy L. Dahlgren; Jonathan G. Fiscus; William M. Fisher; John S. Garofolo; Brett C. Tjaden

This paper documents the third in a series of Benchmark Tests for the DARPA Air Travel Information System (ATIS) common task domain. The first results in this series were reported at the June 1990 Speech and Natural Language Workshop [1], and the second at the February 1991 Speech and Natural Language Workshop [2]. The February 1992 Benchmark Tests include: (1) ATIS domain spontaneous speech recognition system tests, (2) ATIS natural language understanding tests, and (3) ATIS spoken language understanding tests.


Human Language Technology | 1990

DARPA ATIS test results June 1990

David S. Pallett; William M. Fisher; Jonathan G. Fiscus; John S. Garofolo

The first Spoken Language System tests to be conducted in the DARPA Air Travel Information System (ATIS) domain took place during the period June 15-20, 1990. This paper presents a brief description of the test protocol, the comparator software used for scoring results at NIST, the test material selection process, and a preliminary tabulation of the scored results for seven SLS systems from five sites: BBN, CMU, MIT/LCS, SRI, and Unisys. One system, designated cmu-spi(r) in this paper, made use of digitized speech as input (.wav files) and generated CAS-format answers. Other systems made use of SNOR transcriptions (.snr files) as input.


Workshop on Applications of Computer Vision | 2009

The TRECVid 2008 Event Detection evaluation

R. Travis Rose; Jonathan G. Fiscus; Paul Over; John S. Garofolo; Martial Michel

This paper is a summary of the 2008 TRECVid Event Detection evaluation track. TRECVid is a laboratory-style evaluation that aims to model real-world situations or significant component tasks. The event detection evaluation was organized to address detection of a set of specific events that would be of potential interest to an operator in the surveillance domain. This paper describes the video data, evaluation tasks, evaluation metrics, and results of the event detection evaluation.


IEEE Pervasive Computing | 2009

Middleware and Metrology for the Pervasive Future

Antoine Fillinger; Imad Hamchi; Stéphane Degré; Lukas Diduch; R. Travis Rose; Jonathan G. Fiscus; Vincent M. Stanford

This article looks at the data and metrology tools developed by the National Institute of Standards and Technology for the research community, including common middleware for distributed sensor data acquisition and processing.


Advanced Video and Signal Based Surveillance | 2009

AVSS Multiple Camera Person Tracking Challenge Evaluation Overview

Jonathan G. Fiscus; John S. Garofolo; R. Travis Rose; Martial Michel

Technologies that track a specific person as they traverse a network of surveillance cameras can be used as the basis for a multitude of video surveillance applications, including mass transit monitoring, large venue security, building security, and the like. In order to continue supporting the development of robust people-tracking technologies, the first AVSS Multiple Camera Person Tracking (MCPT) Challenge Evaluation was established to provide data and evaluation resources for researchers building Single Person Tracking (SPT) technologies. This special session focuses on the AVSS MCPT Challenge Evaluation and includes a description of the evaluation task, the i-LIDS multiple-camera tracking scenario data set used for the evaluation, and presentations by the challenge evaluation participants describing their systems.


Machine Translation | 2018

Overview of the NIST 2016 LoReHLT Evaluation

Audrey N. Tong; Lukasz L. Diduch; Jonathan G. Fiscus; Yasaman Haghpanah; Shudong Huang; David M. Joy; Kay Peterson; Ian Soboroff

Initiated in conjunction with DARPA's Low Resource Languages for Emergent Incidents (LORELEI) program, the NIST LoReHLT (Low Resource Human Language Technology) evaluation series seeks to incubate research on fundamental natural language processing tasks in under-resourced languages. While part of the LORELEI program, LoReHLT is an open evaluation workshop that anyone may participate in, with its first evaluation taking place in July 2016. Eight teams, out of the 21 teams that registered, participated in the evaluation over three tasks (machine translation, named entity recognition, and situation frame) in the surprise language Uyghur.


IEEE Transactions on Audio, Speech, and Language Processing | 2012

Introduction to the Special Section on New Frontiers in Rich Transcription

Sadaoki Furui; Jonathan G. Fiscus; Gerald Friedland; Thomas Hain

The 13 papers in this special section focus on new frontiers in rich transcription. The papers concentrate mainly on three areas: speaker diarization approaches, error analysis, techniques, and features; capitalization and punctuation; and descriptions of complete rich transcription systems.

Collaboration


Dive into Jonathan G. Fiscus's collaborations.

Top Co-Authors

John S. Garofolo, National Institute of Standards and Technology
David S. Pallett, National Institute of Standards and Technology
William M. Fisher, National Institute of Standards and Technology
Martial Michel, National Institute of Standards and Technology
Mark A. Przybocki, National Institute of Standards and Technology
David M. Joy, National Institute of Standards and Technology
George Awad, National Institute of Standards and Technology
Jerome Ajot, National Institute of Standards and Technology
Nancy L. Dahlgren, National Institute of Standards and Technology
Paul Over, National Institute of Standards and Technology