Publication


Featured research published by Brian Strope.


Archive | 2010

“Your Word is my Command”: Google Search by Voice: A Case Study

Johan Schalkwyk; Doug Beeferman; Françoise Beaufays; Bill Byrne; Ciprian Chelba; Mike Cohen; Maryam Kamvar; Brian Strope

An important goal at Google is to make spoken access ubiquitously available. Achieving ubiquity requires two things: availability (i.e., built into every possible interaction where speech input or output can make sense) and performance (i.e., works so well that the modality adds no friction to the interaction).


International Conference on Acoustics, Speech, and Signal Processing | 2008

Deploying GOOG-411: Early lessons in data, measurement, and testing

Michiel Bacchiani; Francoise Beaufays; Johan Schalkwyk; Mike Schuster; Brian Strope

We describe our early experience building and optimizing GOOG-411, a fully automated, voice-enabled, business finder. We show how taking an iterative approach to system development allows us to optimize the various components of the system, thereby progressively improving user-facing metrics. We show the contributions of different data sources to recognition accuracy. For business listing language models, we see a nearly linear performance increase with the logarithm of the amount of training data. To date, we have improved our correct accept rate by 25% absolute, and increased our transfer rate by 35% absolute.
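The reported relationship between accuracy and training-data size is log-linear: each order-of-magnitude increase in data buys a roughly constant gain. The sketch below fits that trend with ordinary least squares over log-transformed data sizes; the data points are invented for illustration and are not figures from the paper.

```python
import math

def fit_log_linear(data_sizes, accuracies):
    """Least-squares fit of accuracy = a + b * ln(data_size).

    Illustrates the near-linear increase of accuracy with the
    logarithm of the training-data size described for the business
    listing language models.
    """
    xs = [math.log(n) for n in data_sizes]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(accuracies) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, accuracies)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Invented points: each tenfold increase in data adds ~5 points.
sizes = [1e4, 1e5, 1e6, 1e7]
accs = [0.60, 0.65, 0.70, 0.75]
a, b = fit_log_linear(sizes, accs)
```

Because the illustrative points lie exactly on a log-linear curve, the fit extrapolates cleanly; with real measurements the residuals would indicate how far the trend can be trusted.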


International Conference on Acoustics, Speech, and Signal Processing | 2012

Distributed discriminative language models for Google voice-search

Preethi Jyothi; Leif Johnson; Ciprian Chelba; Brian Strope

This paper considers large-scale linear discriminative language models trained using a distributed perceptron algorithm. The algorithm is implemented efficiently using a MapReduce/SSTable framework. This work also introduces the use of large amounts of unsupervised data (confidence filtered Google voice-search logs) in conjunction with a novel training procedure that regenerates word lattices for the given data with a weaker acoustic model than the one used to generate the unsupervised transcriptions for the logged data. We observe small but statistically significant improvements in recognition performance after reranking N-best lists of a standard Google voice-search data set.
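The core training loop is a structured perceptron over N-best lists: score each hypothesis with the current weights, and when the top-scoring hypothesis differs from the oracle (lowest-error) one, move the weights toward the oracle's features and away from the prediction. The sketch below shows that update plus the weight averaging used to combine per-shard models in distributed training; the data layout is a hypothetical simplification, not the paper's MapReduce/SSTable implementation.

```python
from collections import defaultdict

def score(weights, feats):
    """Linear model score: dot product of weights and feature counts."""
    return sum(weights[f] * v for f, v in feats.items())

def perceptron_epoch(weights, nbest_lists):
    """One perceptron pass over N-best lists.

    Each list is [(features, is_oracle), ...], where features maps
    (hypothetical) n-gram ids to counts and is_oracle marks the
    lowest-error hypothesis.
    """
    w = defaultdict(float, weights)
    for hyps in nbest_lists:
        predicted = max(hyps, key=lambda h: score(w, h[0]))
        oracle = next(h for h in hyps if h[1])
        if predicted is not oracle:
            # Reward oracle features, penalize predicted features.
            for f, v in oracle[0].items():
                w[f] += v
            for f, v in predicted[0].items():
                w[f] -= v
    return w

def average_shards(shard_weights):
    """Distributed perceptron: average the weight vectors trained
    independently on each data shard (iterative parameter mixing)."""
    avg = defaultdict(float)
    for w in shard_weights:
        for f, v in w.items():
            avg[f] += v / len(shard_weights)
    return avg
```

At inference time the learned weights rerank a recognizer's N-best list by rescoring each hypothesis with `score` and returning the argmax.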


International Conference on Acoustics, Speech, and Signal Processing | 2011

Recognizing English queries in Mandarin Voice Search

Hung-An Chang; Yun-hsuan Sung; Brian Strope; Francoise Beaufays

Recent improvements in speech recognition technology, along with increased computing power and bigger datasets, have considerably improved the state of the art in the field, making it possible for commercial apps such as Google Voice Search to serve users in their everyday mobile search needs. Deploying such systems in various countries has shown us the extent to which multilingualism is present in some cultures, and the need for better solutions to handle it in our speech recognition systems. In this paper, we describe a few early data sharing and model combination experiments we did to improve the recognition of English queries made to Mandarin Voice Search, in Taiwan. We obtained a 12% relative sentence accuracy improvement over a baseline system already including some support for English queries.


International Conference on Acoustics, Speech, and Signal Processing | 2013

Language model capitalization

Francoise Beaufays; Brian Strope

In many speech recognition systems, capitalization is not an inherent component of the language model: training corpora are down-cased, and counts are accumulated for sequences of lower-cased words. This level of modeling is sufficient for automating voice commands or otherwise enabling users to communicate with a machine, but when the recognized speech is intended to be read by a person, such as in email dictation or even some web search applications, the lack of capitalization of the user's input can add an extra cognitive load on the reader. For these cases, speech recognition systems often post-process the recognized text to restore capitalization. We propose folding capitalization directly into the recognition language model. Instead of post-processing, we take the approach that language should be represented in all its richness, with capitalization, diacritics, and other special symbols. With that perspective, we describe a strategy to handle poorly capitalized or uncapitalized training corpora for language modeling. The resulting recognition system retains the accuracy/latency/memory tradeoff of our uncapitalized production recognizer, while providing properly cased outputs.
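Training a capitalized language model requires re-casing the poorly capitalized parts of the corpus before counting n-grams. A minimal sketch of that idea: learn each word's most frequent surface form from a trusted, well-cased corpus, then apply it to down-cased text. This is a simplified stand-in for the paper's strategy, with invented data; a production system would also need to handle sentence-initial words and context-dependent casing.

```python
from collections import Counter, defaultdict

def build_case_table(well_cased_sentences):
    """Map each lowercased word to its most frequent surface form
    observed in a trusted, well-cased corpus."""
    forms = defaultdict(Counter)
    for sent in well_cased_sentences:
        for word in sent.split():
            forms[word.lower()][word] += 1
    return {w: c.most_common(1)[0][0] for w, c in forms.items()}

def recase(sentence, table):
    """Restore capitalization in a down-cased sentence so its n-grams
    can be counted for a capitalized language model."""
    return " ".join(table.get(w, w) for w in sentence.split())

table = build_case_table(["call Brian Strope", "ask Brian"])
```

With the corpus re-cased this way, standard n-gram counting proceeds over the capitalized forms, so the recognizer emits properly cased text directly rather than relying on a post-processing step.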


International Conference on Acoustics, Speech, and Signal Processing | 2012

Recognition of multilingual speech in mobile applications

Hui Lin; Jui-ting Huang; Francoise Beaufays; Brian Strope; Yun-hsuan Sung

We evaluate different architectures to recognize multilingual speech for real-time mobile applications. In particular, we show that combining the results of several recognizers greatly outperforms other solutions such as training a single large multilingual system or using an explicit language identification system to select the appropriate recognizer. Experiments are conducted on a trilingual English-French-Mandarin mobile speech task. The data set includes Google searches, Maps queries, as well as more general inputs such as email and short message dictation. Without pre-specifying the input language, the combined system achieves comparable accuracy to that of the monolingual systems when the input language is known. The combined system is also roughly 5% absolute better than an explicit language identification approach, and 10% better than a single large multilingual system.
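The winning architecture runs several monolingual recognizers on the same utterance and combines their results, rather than training one multilingual model or picking a recognizer via explicit language identification. One simple combination rule is to return the hypothesis of the most confident recognizer; the sketch below uses that rule with invented confidences and field names, as a stand-in for the paper's combination scheme.

```python
def combine_recognizers(results):
    """Hypothesis-level combination: run all monolingual recognizers
    and keep the transcript of the most confident one."""
    return max(results, key=lambda r: r["confidence"])["transcript"]

# Hypothetical outputs from three parallel monolingual recognizers.
results = [
    {"lang": "en-US", "transcript": "weather in taipei", "confidence": 0.91},
    {"lang": "zh-TW", "transcript": "天氣", "confidence": 0.84},
    {"lang": "fr-FR", "transcript": "vétérinaire", "confidence": 0.12},
]
combined = combine_recognizers(results)
```

Because the recognizers run in parallel, this adds latency only up to the slowest system, while avoiding the hard errors of an up-front language-identification decision.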


Archive | 2006

Business listing search

Brian Strope; William J. Byrne; Francoise Beaufays


Archive | 2007

Integrating Voice-Enabled Local Search and Contact Lists

Francoise Beaufays; Brian Strope; William J. Byrne


Archive | 2013

Speech Recognition with Parallel Recognition Tasks

Brian Strope; Francoise Beaufays; Olivier Siohan


Archive | 2011

Cross-lingual initialization of language models

Kaisuke Nakajima; Brian Strope
