Paul Vozila
Nuance Communications
Publications
Featured research published by Paul Vozila.
IEEE Automatic Speech Recognition and Understanding Workshop | 2013
Abhinav Sethy; Stanley F. Chen; Ebru Arisoy; Bhuvana Ramabhadran; Kartik Audkhasi; Shrikanth Narayanan; Paul Vozila
For many speech recognition tasks, the best language model performance is achieved by collecting text from multiple sources or domains, and interpolating language models built separately on each individual corpus. When multiple corpora are available, it has also been shown that a domain adaptation technique such as feature augmentation [1] can improve performance on each individual domain by training a joint model across all of the corpora. In this paper, we explore whether improving each domain model via joint training also improves performance when interpolating the models together. We show that the diversity of the individual models is an important consideration, and propose a method for adjusting diversity to optimize overall performance. We present results using word n-gram models and Model M, a class-based n-gram model, and demonstrate improvements in both perplexity and word-error rate relative to state-of-the-art results on a Broadcast News transcription task.
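The interpolation step the abstract builds on can be sketched in a few lines. The following Python toy is a minimal illustration, not the paper's setup: the two unigram "models", their probabilities, and the held-out text are all invented, and a simple grid search stands in for the EM-style optimization typically used to fit interpolation weights on held-out data.

```python
import math

# Toy unigram "language models" standing in for per-corpus models.
# In the paper these would be word n-gram models or Model M instances;
# the words and probabilities here are made up for illustration.
lm_news = {"the": 0.5, "market": 0.3, "goal": 0.2}
lm_sports = {"the": 0.5, "market": 0.1, "goal": 0.4}

def interp_prob(word, lams, lms):
    """Linearly interpolated probability: sum_i lambda_i * p_i(word)."""
    return sum(lam * lm.get(word, 1e-9) for lam, lm in zip(lams, lms))

def perplexity(heldout, lams, lms):
    """Perplexity of the interpolated model on a held-out word list."""
    log_prob = sum(math.log(interp_prob(w, lams, lms)) for w in heldout)
    return math.exp(-log_prob / len(heldout))

heldout = ["the", "goal", "the", "market", "goal"]
lms = [lm_news, lm_sports]

# Pick the single fixed weight that minimizes held-out perplexity
# (a grid search stand-in for the usual EM fit).
best = min(
    ((lam, perplexity(heldout, [lam, 1.0 - lam], lms))
     for lam in [i / 20 for i in range(1, 20)]),
    key=lambda t: t[1],
)
print(f"best lambda_news={best[0]:.2f}, held-out perplexity={best[1]:.2f}")
```

The paper's contribution concerns what happens before this step: whether jointly training the per-corpus models (and controlling their diversity) improves the quality of the final interpolated mixture.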
International Conference on Acoustics, Speech, and Signal Processing | 2014
Abhinav Sethy; Stanley F. Chen; Bhuvana Ramabhadran; Paul Vozila
The best language model performance for a task is often achieved by interpolating language models built separately on corpora from multiple sources. While common practice is to combine models with a single set of fixed interpolation weights, past work has shown that gains can be achieved by allowing the weights to vary by n-gram when linearly interpolating word n-gram models. In this work, we investigate whether similar ideas can improve log-linear interpolation for Model M, an exponential class-based n-gram model with state-of-the-art performance. We focus on log-linear interpolation because Model M components combined via (regular) linear interpolation cannot be statically compiled into a single model, as many resource-constrained applications require. We present a general parameter interpolation framework in which a weight prediction model computes the interpolation weights for each n-gram. The weight prediction model takes a rich representation of n-gram features as input, and is trained to optimize the perplexity of a held-out set. In experiments on Broadcast News, we show that a mixture-of-experts weight prediction model yields significant perplexity and word-error rate improvements compared to static linear interpolation.
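To make per-n-gram log-linear interpolation concrete, here is a minimal Python sketch under stated assumptions: the two component models, the feature vector, and the gating parameters `theta` are hypothetical stand-ins for the paper's Model M components and its mixture-of-experts weight predictor, and the training of `theta` on held-out perplexity is omitted.

```python
import math

# Two toy conditional models p_i(w | h), keyed by history tuple.
# In the paper these are Model M components; probabilities are invented.
models = [
    {("the",): {"market": 0.6, "goal": 0.4}},
    {("the",): {"market": 0.2, "goal": 0.8}},
]
vocab = ["market", "goal"]

def gate_weights(features, theta):
    """Tiny 'weight prediction model': a softmax over per-model scores
    computed from n-gram features (a stand-in for the mixture-of-experts
    predictor; `theta` is assumed already trained on held-out data)."""
    scores = [sum(t * f for t, f in zip(row, features)) for row in theta]
    z = sum(math.exp(s) for s in scores)
    return [math.exp(s) / z for s in scores]

def loglinear_prob(word, hist, features, theta):
    """Log-linear interpolation with per-n-gram weights:
    p(w|h) is proportional to prod_i p_i(w|h)^lambda_i, renormalized
    over the vocabulary so the result is a proper distribution."""
    lams = gate_weights(features, theta)
    def score(w):
        return math.exp(sum(lam * math.log(m[hist][w])
                            for lam, m in zip(lams, models)))
    return score(word) / sum(score(w) for w in vocab)

# Hypothetical features for the n-gram ("the", w), e.g. log corpus counts.
features = [1.0, 0.3]
theta = [[0.5, -0.2], [0.1, 0.4]]  # one parameter row per component model
print(loglinear_prob("goal", ("the",), features, theta))
```

Because the combined score is a weighted product of component probabilities rather than a weighted sum, the interpolated model stays in the exponential family, which is what allows it to be compiled into a single static model.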
Archive | 2000
Brian Ulicny; Alex Vasserman; Paul Vozila; Jeffrey Penrod Adams
Conference of the International Speech Communication Association | 2003
Paul Vozila; Jeffrey Penrod Adams; Yuliya Lobacheva; Ryan Paul Thomas
Archive | 2014
Jonathan Mamou; Abhinav Sethy; Bhuvana Ramabhadran; Ron Hoory; Paul Vozila; Nathan M. Bodenstab
Conference of the International Speech Communication Association | 2012
Hong-Kwang Kuo; Ebru Arisoy; Ahmad Emami; Paul Vozila
Archive | 2012
Vladimir Sejnoha; William F. Ganong; Paul Vozila; Nathan M. Bodenstab; Yik-Cheung Tam
Conference of the International Speech Communication Association | 2013
Nicola Ueffing; Maximilian Bisani; Paul Vozila
Conference of the International Speech Communication Association | 2012
Stefan Hahn; Paul Vozila; Maximilian Bisani
Archive | 2010
Vladimir Sejnoha; William F. Ganong; Paul Vozila; Nathan M. Bodenstab; Yik-Cheung Tam