Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Benoit Dumoulin is active.

Publication


Featured research published by Benoit Dumoulin.


International Conference on Acoustics, Speech, and Signal Processing | 2014

Cache based recurrent neural network language model inference for first pass speech recognition

Zhiheng Huang; Geoffrey Zweig; Benoit Dumoulin

Recurrent neural network language models (RNNLMs) have recently produced improvements on language processing tasks ranging from machine translation to word tagging and speech recognition. To date, however, the computational expense of RNNLMs has hampered their application to first-pass decoding. In this paper, we show that by restricting the RNNLM calls to those words that receive a reasonable score according to an n-gram model, and by deploying a set of caches, we can reduce the cost of using an RNNLM in the first pass to that of using an additional n-gram model. We compare this scheme to lattice rescoring and find that the two produce comparable results for a Bing Voice Search task. The best performance comes from rescoring a lattice that is itself created with an RNNLM in the first pass.
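
The caching scheme lends itself to a compact illustration. Below is a minimal Python sketch of the idea, assuming toy stand-in models: `VOCAB`, `NGRAM_THRESHOLD`, `ngram_logprob`, and `rnnlm_logprob` are hypothetical names introduced for illustration, and a real decoder would query trained models rather than the uniform placeholders used here.

```python
import math

# Minimal sketch of cache-based RNNLM scoring for first-pass decoding.
# `VOCAB`, `NGRAM_THRESHOLD`, `ngram_logprob`, and `rnnlm_logprob` are
# hypothetical stand-ins; a real decoder would query trained models.

VOCAB = ["the", "cat", "sat", "on", "mat", "</s>"]
NGRAM_THRESHOLD = math.log(1e-4)  # skip words the n-gram model rules out

def ngram_logprob(word, history):
    # Toy n-gram model: uniform over the vocabulary (placeholder only).
    return math.log(1.0 / len(VOCAB))

def rnnlm_logprob(word, history):
    # Stand-in for an expensive recurrent-network forward pass.
    return math.log(1.0 / len(VOCAB))

# Cache keyed on (history, word): identical RNNLM queries issued by
# different partial hypotheses during decoding are answered only once.
_cache = {}

def cached_score(word, history):
    ng = ngram_logprob(word, history)
    if ng < NGRAM_THRESHOLD:
        # Implausible under the n-gram model: fall back to the cheap
        # n-gram score instead of invoking the RNNLM at all.
        return ng
    key = (history, word)
    if key not in _cache:
        _cache[key] = rnnlm_logprob(word, history)
    return _cache[key]

history = ("the", "cat")
print(cached_score("sat", history))  # triggers an RNNLM call, then caches it
print(cached_score("sat", history))  # served from the cache
```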


IEEE Automatic Speech Recognition and Understanding Workshop | 2013

Accelerating recurrent neural network training via two stage classes and parallelization

Zhiheng Huang; Geoffrey Zweig; Michael Levit; Benoit Dumoulin; Barlas Oguz; Shawn Chang

Recurrent neural network (RNN) language models have proven successful at lowering perplexity and word error rate in automatic speech recognition (ASR). However, one challenge in adopting RNN language models is their heavy computational cost in training. In this paper, we propose two techniques to accelerate RNN training: 1) two-stage class RNNs and 2) parallel RNN training. In experiments on a Microsoft internal short message dictation (SMD) data set, two-stage class RNNs and parallel RNNs not only achieve equal or lower WERs compared to original RNNs but also accelerate training by factors of 2 and 10, respectively. It is worth noting that the two-stage class speedup also applies at test time, which is essential for reducing latency in real-time ASR applications.
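
A two-stage class factorization of the output layer can be sketched as follows, assuming the standard class-based decomposition P(w|h) = P(super|h) · P(class|super, h) · P(w|class, h); the random class assignments, sizes, and weight matrices below are illustrative placeholders, not the paper's trained parameters.

```python
import numpy as np

np.random.seed(0)

V = 10000                      # vocabulary size
C = 100                        # number of word classes (first stage)
S = 10                         # number of super-classes (second stage)

# Hypothetical random assignments of words to classes and classes to
# super-classes; a real system would derive these, e.g. from frequencies.
word2class = np.random.randint(0, C, size=V)
class2super = np.random.randint(0, S, size=C)

hidden = np.random.randn(64)   # RNN hidden state (stand-in)

# Output weights for each stage of the factored softmax.
W_super = np.random.randn(S, 64)
W_class = np.random.randn(C, 64)
W_word = np.random.randn(V, 64)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def factored_logprob(word_id, h):
    """P(w|h) = P(super|h) * P(class|super,h) * P(word|class,h).
    Each softmax runs over a small subset, so the cost is roughly
    O(S + C/S + V/C) instead of O(V) for a flat output layer."""
    c = word2class[word_id]
    s = class2super[c]

    p_super = softmax(W_super @ h)[s]

    classes_in_s = np.where(class2super == s)[0]
    p_class = softmax(W_class[classes_in_s] @ h)[
        np.where(classes_in_s == c)[0][0]]

    words_in_c = np.where(word2class == c)[0]
    p_word = softmax(W_word[words_in_c] @ h)[
        np.where(words_in_c == word_id)[0][0]]

    return np.log(p_super) + np.log(p_class) + np.log(p_word)

print(factored_logprob(42, hidden))
```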


International World Wide Web Conferences | 2011

Evaluating new search engine configurations with pre-existing judgments and clicks

Umut Ozertem; Rosie Jones; Benoit Dumoulin

We provide a novel method of evaluating search results that combines existing editorial judgments with the relevance estimates generated by click-based user browsing models. Evaluation methods in the literature already use clicks and editorial judgments together, but our approach is novel in that it predicts the impact of unseen search models without online tests to collect clicks and without requesting new editorial data, since we re-use only existing editorial data and clicks observed for previous result-set configurations. Because the user browsing model and the pre-existing editorial data cannot provide relevance estimates for every document for the selected set of queries, one important challenge is estimating performance when many ranked documents have missing relevance values. We introduce query- and rank-based smoothing to overcome this problem. We show that a hybrid of these smoothing techniques performs better than either query-based or position-based smoothing alone, and despite the high percentage of missing judgments, the resulting method is significantly correlated (0.74) with DCG values evaluated on fully judged datasets, approaching inter-annotator agreement. We also show that previously published techniques, applicable to frequent queries, degrade when applied to a random sample of queries, with a correlation of only 0.29. While our experiments focus on evaluation using DCG, our method is also applicable to other commonly used metrics.
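
The smoothing idea can be illustrated with a minimal sketch: missing grades are filled with a hybrid of a per-query mean and a per-rank mean before computing DCG. The toy data, the mixing weight `alpha`, and the exact hybrid form are assumptions for illustration, not the paper's formulation.

```python
import math

# Hypothetical toy data: per query, a ranked list of relevance grades,
# with None marking documents that lack judgments or click estimates.
queries = {
    "q1": [3, None, 2, 0, None],
    "q2": [2, 1, None, None, 0],
    "q3": [None, 3, 1, 0, 0],
}

def rank_means(qs):
    """Mean judged relevance at each rank position, across queries."""
    depth = max(len(r) for r in qs.values())
    means = []
    for i in range(depth):
        vals = [r[i] for r in qs.values() if i < len(r) and r[i] is not None]
        means.append(sum(vals) / len(vals) if vals else 0.0)
    return means

def query_mean(ranking):
    """Mean judged relevance within a single query's ranking."""
    vals = [g for g in ranking if g is not None]
    return sum(vals) / len(vals) if vals else 0.0

def smoothed_dcg(ranking, pos_means, alpha=0.5):
    """DCG with missing grades filled by a hybrid of the query mean and
    the rank-position mean; `alpha` mixes the two (assumed form)."""
    qm = query_mean(ranking)
    dcg = 0.0
    for i, g in enumerate(ranking):
        grade = g if g is not None else alpha * qm + (1 - alpha) * pos_means[i]
        dcg += (2 ** grade - 1) / math.log2(i + 2)
    return dcg

pos_means = rank_means(queries)
for q, ranking in queries.items():
    print(q, round(smoothed_dcg(ranking, pos_means), 3))
```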


IEEE Automatic Speech Recognition and Understanding Workshop | 2015

Discriminative training of context-dependent language model scaling factors and interpolation weights

Shuangyu Chang; Abhik Lahiri; Issac Alphonso; Barlas Oguz; Michael Levit; Benoit Dumoulin

We demonstrate how context-dependent language model scaling factors and interpolation weights can be unified in a single formulation in which free parameters are discriminatively trained using linear and non-linear optimization. The objective functions of the optimization are defined over pairs of superior and inferior recognition hypotheses and correlate well with recognition error metrics. Experiments on a large, real-world application demonstrated the effectiveness of the solution in significantly reducing recognition errors by leveraging the benefits of both context-dependent weighting and discriminative training.
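
A pairwise objective of this kind can be sketched as a hinge loss over superior/inferior hypothesis pairs, with one LM scaling factor per context bucket trained by gradient descent. The scores, field names, learning rate, and margin of 1.0 below are illustrative assumptions, not the paper's setup.

```python
# Hypothetical training pairs: for each context bucket, acoustic and LM
# scores for a superior (lower-error) and an inferior hypothesis.
pairs = [
    # (context_id, am_superior, lm_superior, am_inferior, lm_inferior)
    (0, -10.0, -4.0, -9.5, -5.0),
    (0, -12.0, -3.5, -11.6, -4.3),
    (1, -8.5, -4.0, -8.0, -5.0),
    (1, -9.0, -3.8, -8.8, -4.6),
]

NUM_CONTEXTS = 2
lm_scale = [1.0] * NUM_CONTEXTS   # one LM scaling factor per context

def total_score(am, lm, ctx):
    # Combined hypothesis score with a context-dependent LM scale.
    return am + lm_scale[ctx] * lm

# Plain gradient descent on a hinge-style pairwise loss: penalize pairs
# where the superior hypothesis fails to beat the inferior one by a
# margin of 1.0 (assumed value).
LR = 0.05
for step in range(200):
    grads = [0.0] * NUM_CONTEXTS
    for ctx, am_s, lm_s, am_i, lm_i in pairs:
        margin = total_score(am_s, lm_s, ctx) - total_score(am_i, lm_i, ctx)
        if margin < 1.0:                 # pair is inside the hinge
            grads[ctx] -= (lm_s - lm_i)  # d(loss)/d(lm_scale[ctx])
    lm_scale = [w - LR * g for w, g in zip(lm_scale, grads)]

print("learned per-context LM scales:", [round(w, 3) for w in lm_scale])
```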


Archive | 2012

Personalized real-time recommendation system

Rajen Subba; Dragomir Yankov; Pavel Berkhin; Steven W. Macbeth; Zhaowei Charlie Jiang; Benoit Dumoulin


Archive | 2013

Personalized content tagging

Murat Akbacak; Benoit Dumoulin


Archive | 2014

Flexible schema for language model customization

Michael Levit; Hernan Guelman; Shuangyu Chang; Sarangarajan Parthasarathy; Benoit Dumoulin


Archive | 2014

Language modeling for conversational understanding domains using Semantic Web resources

Murat Akbacak; Dilek Hakkani-Tür; Gokhan Tur; Larry P. Heck; Benoit Dumoulin


Archive | 2015

Knowledge source personalization to improve language models

Murat Akbacak; Dilek Hakkani-Tür; Gokhan Tur; Larry P. Heck; Benoit Dumoulin


Archive | 2012

Self-tuning alterations framework

William D. Ramsey; Benoit Dumoulin; Nick Craswell

Collaboration


Dive into Benoit Dumoulin's collaborations.

Top Co-Authors

Barlas Oguz

University of California

Murat Akbacak

University of Texas at Dallas
