Burn L. Lewis
IBM
Publications
Featured research published by Burn L. Lewis.
international conference on acoustics, speech, and signal processing | 1987
Amir Averbuch; Lalit R. Bahl; Raimo Bakis; Peter F. Brown; G. Daggett; Subhro Das; K. Davies; S. De Gennaro; P. V. de Souza; Edward A. Epstein; D. Fraleigh; Frederick Jelinek; Burn L. Lewis; Robert Leroy Mercer; J. Moorhead; Arthur Nádas; David Nahamoo; Michael Picheny; G. Shichman; P. Spinelli; D. Van Compernolle; H. Wilkens
The Speech Recognition Group at IBM Research in Yorktown Heights has developed a real-time, isolated-utterance speech recognizer for natural language based on the IBM Personal Computer AT and IBM Signal Processors. The system has recently been enhanced by expanding the vocabulary from 5,000 words to 20,000 words and by the addition of a speech workstation to support usability studies on document creation by voice. The system supports spelling and interactive personalization to augment the vocabularies. This paper describes the implementation, user interface, and comparative performance of the recognizer.
Journal of the Acoustical Society of America | 2005
Thomas A. Kist; Burn L. Lewis; Bruce David Lucas
In a computer speech recognition system, the present invention provides a method and system for recognizing and executing a voice command that has a dictation portion. Upon receiving a user input, the spoken utterance is processed to identify a pattern of words that matches a pre-determined command pattern. A computer system command is then identified that corresponds to the pre-determined command pattern and has at least one parameter. The parameter is extracted from a dictation portion of the spoken utterance, which is separate from the pattern of words matching the command pattern. The computer system command is then processed to perform an event in accordance with the parameter. If the spoken utterance does not contain a pattern of words matching a pre-determined command pattern, the spoken utterance is recognized as dictation and inserted at a specified location into an electronic document or other system or application software.
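The command-versus-dictation flow described in this abstract can be sketched in a few lines. The patterns, command names, and callback signatures below are illustrative assumptions, not taken from the patent:

```python
import re

# Hypothetical command patterns: each maps a spoken phrase to a system
# command whose single parameter is filled from the dictation portion.
COMMAND_PATTERNS = [
    (re.compile(r"^search for (?P<dictation>.+)$"), "SEARCH"),
    (re.compile(r"^send a message saying (?P<dictation>.+)$"), "SEND_MESSAGE"),
]

def process_utterance(utterance, insert_dictation, execute_command):
    """If the utterance matches a command pattern, execute the command
    with the dictation portion as its parameter; otherwise treat the
    whole utterance as dictation."""
    for pattern, command in COMMAND_PATTERNS:
        match = pattern.match(utterance)
        if match:
            execute_command(command, match.group("dictation"))
            return
    insert_dictation(utterance)
```

The key idea the abstract highlights is that the parameter is free-form dictation, recognized separately from the fixed command phrase that triggers it.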
Ibm Journal of Research and Development | 2012
Burn L. Lewis
To play as a contestant in Jeopardy!™, IBM Watson™ needed an interface program to handle the communications between the Jeopardy! computers that operate the game and its own components: question answering, game strategy, speech, buzzer, etc. Because Watson cannot hear or see, when the categories and clues were displayed on the game board, they were also sent electronically to Watson. The program also monitored signals generated when the buzzer system was activated and when a contestant successfully rang in. If Watson was confident of its answer, it triggered a solenoid to depress its buzzer button and used a text-to-speech system to speak its response. Since it did not hear the host's judgment, it relied on changes to the scores and the game flow to infer whether its answer was correct. The interface program had to use what were sometimes conflicting events to determine the state of the game without any human intervention.
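Two behaviors described in this abstract, buzzing only above a confidence threshold and inferring correctness from score changes, can be sketched as follows. This is a minimal illustration under assumed names and an assumed threshold value, not IBM's actual interface code:

```python
class GameInterface:
    """Sketch of a contestant interface that buzzes when confident and
    infers the judgment of its last answer from the score delta."""

    CONFIDENCE_THRESHOLD = 0.5  # illustrative value

    def __init__(self, score=0):
        self.score = score
        self.answered = False

    def receive_clue(self, answer, confidence):
        # Clues arrive electronically; attempt to buzz only when the
        # answer confidence clears the threshold.
        self.answered = confidence >= self.CONFIDENCE_THRESHOLD
        return answer if self.answered else None

    def update_score(self, new_score):
        # The host's judgment is never heard directly: a score increase
        # after answering implies correct, a decrease implies wrong.
        correct = None
        if self.answered:
            correct = new_score > self.score
        self.score = new_score
        self.answered = False
        return correct
```

A real interface would also have to reconcile conflicting events (missed signals, out-of-order updates), which is the harder problem the abstract points to.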
Journal of the Acoustical Society of America | 2000
Hubert Crepy; Jeffrey A. Kusnitz; Burn L. Lewis
Journal of the Acoustical Society of America | 2000
Upali Bandara; Siegfried Kunzmann; Karlheinz Mohr; Burn L. Lewis
Archive | 2001
Edward A. Epstein; Burn L. Lewis; Etienne Marcheret
Archive | 2004
Patrick M. Commarford; Mario E. De Armas; Burn L. Lewis; James R. Lewis
Journal of the Acoustical Society of America | 2011
Liam David Comerford; David C. Frank; Burn L. Lewis; Leonid Rachevsky; Mahesh Viswanathan
Archive | 1999
Kerry A. Ortega; Kris Coe; Steven J. Friedland; Burn L. Lewis; Maria E. Smith
Archive | 1999
Lalit R. Bahl; Steven V. De Gennaro; Peter Vincent Desouza; Edward A. Epstein; Jean-Michel Le Roux; Burn L. Lewis; Claire Waast-Richard