
Publications


Featured research published by Mike Hochberg.


Archive | 1996

The use of recurrent neural networks in continuous speech recognition

Tony Robinson; Mike Hochberg; Steve Renals

This chapter describes the use of recurrent neural networks (i.e., networks incorporating feedback in the computation) as an acoustic model for continuous speech recognition. The form of the recurrent neural network is described along with an appropriate parameter estimation procedure. For each frame of acoustic data, the recurrent network generates an estimate of the posterior probability of each possible phone given the observed acoustic signal. The posteriors are then converted into scaled likelihoods and used as the observation probabilities within a conventional decoding paradigm (e.g., Viterbi decoding). The advantages of using recurrent networks are that they require a small number of parameters and provide a fast decoding capability (relative to conventional, large-vocabulary HMM systems).
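The posterior-to-scaled-likelihood conversion described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, the toy numbers, and the flooring constant `eps` are assumptions.

```python
import numpy as np

def posteriors_to_scaled_likelihoods(posteriors, priors, eps=1e-10):
    """Convert network posteriors P(phone | acoustics) into log scaled
    likelihoods by dividing out the phone priors P(phone); the result is
    proportional to log p(acoustics | phone) and can be used as the
    observation log-probabilities in a conventional Viterbi decoder.

    posteriors: (frames, phones) array of per-frame network outputs.
    priors: (phones,) array of phone prior probabilities.
    """
    return np.log(posteriors + eps) - np.log(priors + eps)

# Toy example: 2 frames, 3 phones.
post = np.array([[0.7, 0.2, 0.1],
                 [0.1, 0.8, 0.1]])
priors = np.array([0.5, 0.3, 0.2])
scaled = posteriors_to_scaled_likelihoods(post, priors)
```

A phone that is more probable than its prior suggests receives a positive log scaled likelihood; a less probable one receives a negative score.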


International Conference on Acoustics, Speech, and Signal Processing | 1995

Efficient search using posterior phone probability estimates

Steve Renals; Mike Hochberg

We present a novel, efficient search strategy for large vocabulary continuous speech recognition (LVCSR). The search algorithm, based on stack decoding, uses posterior phone probability estimates to substantially increase its efficiency with minimal effect on accuracy. In particular, the search space is dramatically reduced by phone deactivation pruning where phones with a small local posterior probability are deactivated. This approach is particularly well-suited to hybrid connectionist/hidden Markov model systems because posterior phone probabilities are directly computed by the acoustic model. On large vocabulary tasks, using a trigram language model, this increased the search speed by an order of magnitude, with 2% or less relative search error. Results from a hybrid system are presented using the Wall Street Journal LVCSR database for a 20,000 word task using a backed-off trigram language model. For this task, our single-pass decoder took around 15x realtime on an HP735 workstation. At a cost of 7% relative search error, the decoding time can be speeded up to approximately realtime.
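Phone deactivation pruning, as described above, can be sketched in a few lines. This is a hypothetical illustration of the idea, not the NOWAY decoder's code; the function name and threshold value are assumptions.

```python
import numpy as np

def active_phones(frame_posteriors, threshold=1e-4):
    """Phone deactivation pruning: keep only the phones whose local
    posterior probability (one frame of acoustic-model output) reaches
    the threshold. All other phones are deactivated, so the stack
    decoder never extends hypotheses through them at this frame."""
    return np.flatnonzero(frame_posteriors >= threshold)

# Toy frame over 4 phones: phone 2 falls below the threshold.
frame = np.array([0.6, 0.3, 5e-5, 0.0999])
kept = active_phones(frame)
```

Because a hybrid connectionist/HMM acoustic model emits posteriors directly, this test costs essentially nothing per frame, which is what makes the pruning attractive.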


International Conference on Acoustics, Speech, and Signal Processing | 1994

IPA: improved phone modelling with recurrent neural networks

Tony Robinson; Mike Hochberg; Steve Renals

This paper describes phone modelling improvements to the hybrid connectionist-hidden Markov model speech recognition system developed at Cambridge University. These improvements are applied to phone recognition from the TIMIT task and word recognition from the Wall Street Journal (WSJ) task. A recurrent net is used to map acoustic vectors to posterior probabilities of phone classes. The maximum likelihood phone or word string is then extracted using Markov models. The paper describes three improvements: connectionist model merging; explicit presentation of acoustic context; and improved duration modelling. The first is shown to provide a significant improvement in the TIMIT phone recognition rate and all three provide an improvement in the WSJ word recognition rate.


IEEE Workshop on Neural Networks for Signal Processing | 1994

Connectionist model combination for large vocabulary speech recognition

Mike Hochberg; Gary D. Cook; Steve Renals; Anthony J. Robinson

Reports in the statistics and neural networks literature have expounded the benefits of merging multiple models to improve classification and prediction performance. The Cambridge University connectionist speech group has developed a hybrid connectionist-hidden Markov model system for large vocabulary talker independent speech recognition. The performance of this system has been greatly enhanced through the merging of connectionist acoustic models. This paper presents and compares a number of different approaches to connectionist model merging and evaluates them on the TIMIT phone recognition and ARPA Wall Street Journal word recognition tasks.
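Two simple merging schemes of the general kind compared in such work can be sketched as follows: averaging the per-frame phone posteriors of several models in the linear domain, or in the log domain with renormalisation. This is an illustrative sketch, not the paper's evaluated methods; the function names and toy data are assumptions.

```python
import numpy as np

def merge_linear(model_outputs):
    """Average per-frame phone posteriors from several acoustic models
    in the linear domain (arithmetic mean)."""
    return np.mean(np.asarray(model_outputs), axis=0)

def merge_log(model_outputs, eps=1e-10):
    """Average in the log domain (geometric mean), then renormalise
    each frame so the merged posteriors sum to one."""
    log_avg = np.mean(np.log(np.asarray(model_outputs) + eps), axis=0)
    merged = np.exp(log_avg)
    return merged / merged.sum(axis=-1, keepdims=True)

# Toy outputs from two models: 1 frame, 3 phones.
outputs = [np.array([[0.7, 0.2, 0.1]]),
           np.array([[0.5, 0.4, 0.1]])]
lin = merge_linear(outputs)
geo = merge_log(outputs)
```

The log-domain average downweights frames where any single model assigns a phone very low probability, which is one reason the two schemes can behave differently in practice.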


Conference of the International Speech Communication Association | 1995

Speaker-adaptation for hybrid HMM-ANN continuous speech recognition system

João Paulo Neto; Luís B. Almeida; Mike Hochberg; Ciro Martins; Luís Nunes; Steve Renals; Tony Robinson


Neural Information Processing Systems | 1995

Context-Dependent Classes in a Hybrid Recurrent Network-HMM Speech Recognition System

Dan J. Kershaw; Anthony J. Robinson; Mike Hochberg


Conference of the International Speech Communication Association | 1993

A neural network based, speaker independent, large vocabulary, continuous speech recognition system: the WERNICKE project.

Tony Robinson; Luis B. Almeida; Jean-Marc Boite; Frank Fallside; Mike Hochberg; Dan J. Kershaw; Phil Kohn; Yochai Konig; Nelson Morgan; João Paulo Neto; Steve Renals; Marco Saerens; Chuck Wooters


Archive | 1995

The 1994 Abbot hybrid connectionist-HMM large vocabulary recognition system.

Mike Hochberg; Gary Cook; Steve Renals; Tony Robinson; R Schechtman


International Conference on Acoustics, Speech, and Signal Processing | 1996

Efficient evaluation of the LVCSR search space using the NOWAY decoder

Steve Renals; Mike Hochberg


Conference of the International Speech Communication Association | 1994

Large vocabulary continuous speech recognition using a hybrid connectionist-HMM system.

Mike Hochberg; Steve Renals; Anthony J. Robinson; Dan J. Kershaw

Collaboration


Mike Hochberg's top co-authors and their listed affiliations:

Steve Renals (University of Edinburgh)
Gary Cook (King's College London)
Gary D. Cook (University of Cambridge)
R Schechtman (University of Cambridge)
Chuck Wooters (International Computer Science Institute)
Frank Fallside (International Computer Science Institute)