Publications
Featured research published by Edward A. Epstein.
International Conference on Acoustics, Speech, and Signal Processing | 1987
Amir Averbuch; Lalit R. Bahl; Raimo Bakis; Peter F. Brown; G. Daggett; Subhro Das; K. Davies; S. De Gennaro; P. V. de Souza; Edward A. Epstein; D. Fraleigh; Frederick Jelinek; Burn L. Lewis; Robert Leroy Mercer; J. Moorhead; Arthur Nádas; David Nahamoo; Michael Picheny; G. Shichman; P. Spinelli; D. Van Compernolle; H. Wilkens
The Speech Recognition Group at IBM Research in Yorktown Heights has developed a real-time, isolated-utterance speech recognizer for natural language based on the IBM Personal Computer AT and IBM Signal Processors. The system has recently been enhanced by expanding the vocabulary from 5,000 words to 20,000 words and by the addition of a speech workstation to support usability studies on document creation by voice. The system supports spelling and interactive personalization to augment the vocabularies. This paper describes the implementation, user interface, and comparative performance of the recognizer.
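A recognizer of this kind typically scores each spoken utterance against every word in a fixed vocabulary, combining an acoustic match with a language-model prior, and reports the best-scoring word. The Java sketch below illustrates only that general per-word scoring loop, not the IBM system itself; the AcousticModel and LanguageModel interfaces, the log-score combination, and all names are invented placeholders.

```java
import java.util.List;

/**
 * Minimal sketch of isolated-word decoding: every vocabulary word is scored
 * independently against one utterance, and the highest-scoring word wins.
 * All models here are hypothetical stand-ins, not the IBM recognizer's.
 */
public class IsolatedWordDecoder {

    /** Hypothetical acoustic model: log-likelihood of the feature frames given a word. */
    interface AcousticModel {
        double logLikelihood(double[][] frames, String word);
    }

    /** Hypothetical language model: log-probability of a word given the preceding words. */
    interface LanguageModel {
        double logProbability(String word, List<String> history);
    }

    private final AcousticModel acoustic;
    private final LanguageModel language;
    private final List<String> vocabulary;   // e.g. a 20,000-entry word list

    IsolatedWordDecoder(AcousticModel a, LanguageModel l, List<String> vocab) {
        this.acoustic = a;
        this.language = l;
        this.vocabulary = vocab;
    }

    /** Score every vocabulary word against one utterance and return the best match. */
    String decode(double[][] frames, List<String> history) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (String word : vocabulary) {
            double score = acoustic.logLikelihood(frames, word)
                         + language.logProbability(word, history);
            if (score > bestScore) {
                bestScore = score;
                best = word;
            }
        }
        return best;
    }
}
```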
IBM Journal of Research and Development | 2012
Edward A. Epstein; Marshall I. Schor; Bhavani S. Iyer; Adam Lally; Eric W. Brown; Jaroslaw Cwiklik
IBM Watson™ is a system created to demonstrate DeepQA technology by competing against human champions in a question-answering game designed for people. The DeepQA architecture was designed to be massively parallel, with an expectation that low latency response times could be achieved by doing parallel computation on many computers. This paper describes how a large set of deep natural-language processing programs were integrated into a single application, scaled out across thousands of central processing unit cores, and optimized to run fast enough to compete in live Jeopardy!™ games.
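The parallelism the abstract describes rests on the per-question work decomposing into many independent analysis tasks, so end-to-end latency is governed by the slowest task rather than the sum of all of them. As a small local illustration only (not the DeepQA or UIMA-AS implementation; the question, candidates, and scoreCandidate stub are invented), the Java sketch below fans independent scoring tasks out to a thread pool and gathers the results.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/**
 * Sketch of latency-oriented fan-out/gather: independent analysis tasks for a
 * single question run concurrently on a pool of workers. In the paper this
 * pattern spans thousands of CPU cores; a local thread pool stands in here.
 */
public class ParallelScoringSketch {

    /** Hypothetical per-candidate analysis returning a confidence score. */
    static double scoreCandidate(String question, String candidate) {
        return (candidate.length() % 10) / 10.0;   // placeholder scoring logic
    }

    public static void main(String[] args) throws Exception {
        String question = "This New York lab built a real-time speech recognizer in the 1980s.";
        List<String> candidates = List.of("IBM Research", "Bell Labs", "Xerox PARC");

        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        // Submit one scoring task per candidate so they run concurrently.
        List<Future<Double>> futures = new ArrayList<>();
        for (String c : candidates) {
            futures.add(pool.submit(() -> scoreCandidate(question, c)));
        }

        // Gather scores; latency is bounded by the slowest task, not the sum of all tasks.
        for (int i = 0; i < candidates.size(); i++) {
            System.out.printf("%-14s %.2f%n", candidates.get(i), futures.get(i).get());
        }
        pool.shutdown();
    }
}
```

The same fan-out/gather shape applies whether the workers are threads in one JVM, as here, or processes spread across a cluster, as in the system the paper describes.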
Journal of the Acoustical Society of America | 1991
Dimitri Kanevsky; Ponani S. Gopalakrishnan; G. Daggett; Catalina M. Danis; Edward A. Epstein; David Nahamoo
The goal of this paper is to describe work on the use of TANGORA, an Automatic Speech Recognizer (ASR) developed by the Speech Recognition Group at the Watson Research Center, as a communication device that would allow a hearing-impaired person to communicate with hearing individuals over the telephone. In this implementation, the speech of the hearing individual is decoded by the ASR, and the output is displayed on a computer screen for the hearing-impaired person. The general usability of this system could be limited by the degradation in TANGORA's recognition accuracy due to (1) the use of public toll-quality telephone lines (instead of high-bandwidth, low-noise communication lines) and (2) its use as a speaker-independent system (instead of being trained to recognize each user). For these reasons, the present study was aimed at understanding the effect of decoder accuracy and knowledge about the topic of conversation on the comprehension ability of the hearing-impaired individual. The results of some su...
Archive | 1999
Daniel M. Coffman; Liam David Comerford; Steven V. Degennaro; Edward A. Epstein; Ponani S. Gopalakrishnan; Stephane Herman Maes; David Nahamoo
Archive | 2000
Dimitri Kanevsky; Sara H. Basson; Edward A. Epstein; Alexander Zlatsin
Journal of the Acoustical Society of America | 1996
Edward A. Epstein
Archive | 2001
Edward A. Epstein; Burn L. Lewis; Etienne Marcheret
Journal of the Acoustical Society of America | 1995
Lalit R. Bahl; Jerome R. Bellegarda; Edward A. Epstein; John M. Lucassen; David Nahamoo; Michael Picheny
Journal of the Acoustical Society of America | 2007
Raimo Bakis; Hari Chittaluru; Edward A. Epstein; Steven J. Friedland; Abraham Ittycheriah; Stephen Graham Copinger Lawrence; Michael Picheny; Charles T. Rutherfoord; Maria E. Smith
Archive | 1999
Lalit R. Bahl; Steven V. De Gennaro; Peter Vincent Desouza; Edward A. Epstein; Jean-Michel Le Roux; Burn L. Lewis; Claire Waast-Richard