Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Stefan Kombrink is active.

Publications


Featured research published by Stefan Kombrink.


International Conference on Acoustics, Speech, and Signal Processing | 2011

Extensions of recurrent neural network language model

Tomas Mikolov; Stefan Kombrink; Lukas Burget; Jan Cernocky; Sanjeev Khudanpur

We present several modifications of the original recurrent neural network language model (RNN LM). While this model has been shown to significantly outperform many competitive language modeling techniques in terms of accuracy, the remaining problem is its computational complexity. In this work, we show approaches that lead to a more than 15-fold speedup in both the training and testing phases. Next, we show the importance of using the backpropagation through time algorithm. An empirical comparison with feedforward networks is also provided. Finally, we discuss possibilities for reducing the number of parameters in the model. The resulting RNN model can thus be smaller, faster in both training and testing, and more accurate than the basic one.
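
Speedups of this kind are commonly obtained by factorizing the output layer over word classes. A minimal Python sketch of that idea follows; the class assignment, sizes, and parameter names are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Hedged sketch of a class-factorized softmax, one common route to large
# RNN LM speedups: P(w | h) = P(c(w) | h) * P(w | c(w), h).
# Normalizing over classes plus the words of one class costs roughly
# O(|C| + |V|/|C|) per step instead of O(|V|).

rng = np.random.default_rng(0)
V, C, H = 10_000, 100, 64            # vocabulary, classes, hidden units
word2class = rng.integers(0, C, V)   # e.g. frequency-binned assignment
class_words = [np.flatnonzero(word2class == c) for c in range(C)]

W_class = 0.01 * rng.standard_normal((C, H))   # hidden -> class logits
W_word = 0.01 * rng.standard_normal((V, H))    # hidden -> word logits

def log_prob(word: int, h: np.ndarray) -> float:
    """log P(word | h) = log P(class | h) + log P(word | class, h)."""
    c = word2class[word]
    class_logits = W_class @ h                   # O(|C|) instead of O(|V|)
    class_logp = class_logits - np.logaddexp.reduce(class_logits)
    members = class_words[c]                     # sorted word ids of class c
    word_logits = W_word[members] @ h            # O(|V|/|C|) on average
    word_logp = word_logits - np.logaddexp.reduce(word_logits)
    return float(class_logp[c] + word_logp[np.searchsorted(members, word)])

print(log_prob(42, rng.standard_normal(H)))
```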


International Conference on Acoustics, Speech, and Signal Processing | 2012

Generating exact lattices in the WFST framework

Daniel Povey; Mirko Hannemann; Gilles Boulianne; Lukas Burget; Arnab Ghoshal; Milos Janda; Martin Karafiát; Stefan Kombrink; Petr Motlicek; Yanmin Qian; Korbinian Riedhammer; Karel Vesely; Ngoc Thang Vu

We describe a lattice generation method that is exact, i.e. it satisfies all the natural properties we would want from a lattice of alternative transcriptions of an utterance. This method does not introduce substantial overhead above one-best decoding. Our method is most directly applicable when using WFST decoders where the WFST is “fully expanded”, i.e. where the arcs correspond to HMM transitions. It outputs lattices that include HMM-state-level alignments as well as word labels. The general idea is to create a state-level lattice during decoding, and to do a special form of determinization that retains only the best-scoring path for each word sequence. This special determinization algorithm is a solution to the following problem: Given a WFST A, compute a WFST B that, for each input-symbol-sequence of A, contains just the lowest-cost path through A.
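
As a toy illustration of that stated property (not the lattice-time algorithm itself, which operates on the WFST during decoding), the sketch below enumerates weighted paths and keeps only the cheapest one per input-symbol sequence; the tuple layout is assumed for illustration.

```python
# Toy illustration of the determinization criterion: given weighted paths
# as (input symbols, cost, alignment) tuples, keep only the single
# lowest-cost path per input-symbol sequence, as B must contain.

def determinize_paths(paths):
    best = {}
    for symbols, cost, alignment in paths:
        key = tuple(symbols)
        if key not in best or cost < best[key][0]:
            best[key] = (cost, alignment)
    return [(list(k), c, a) for k, (c, a) in best.items()]

paths = [
    (["the", "cat"], 3.2, "align-1"),
    (["the", "cat"], 2.7, "align-2"),  # cheaper path, same word sequence
    (["a", "cat"], 4.0, "align-3"),
]
print(determinize_paths(paths))        # keeps align-2 and align-3 only
```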


International Conference on Acoustics, Speech, and Signal Processing | 2011

Variational approximation of long-span language models for LVCSR

Anoop Deoras; Tomas Mikolov; Stefan Kombrink; Martin Karafiát; Sanjeev Khudanpur

Long-span language models that capture syntax and semantics are seldom used in the first pass of large vocabulary continuous speech recognition systems due to the prohibitive search space over sentence hypotheses. Instead, an N-best list of hypotheses is created using tractable n-gram models and rescored using the long-span models. It is shown in this paper that computationally tractable variational approximations of the long-span models are a better choice than standard n-gram models for first-pass decoding. They not only result in a better first-pass output, but also produce a lattice with a lower oracle word error rate, and rescoring the N-best list from such lattices with the long-span models requires a smaller N to attain the same accuracy. Empirical results on the WSJ, MIT Lectures, NIST 2007 Meeting Recognition and NIST 2001 Conversational Telephone Recognition data sets are presented to support these claims.
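
A minimal sketch of the variational idea, under the assumption that the approximation is obtained by sampling text from the long-span model and fitting a tractable n-gram to the samples; the sampler below is a toy stand-in for a trained long-span model, and smoothing is omitted.

```python
import random
from collections import Counter, defaultdict

# Hedged sketch: approximate a long-span LM by an n-gram estimated from
# text sampled out of the long-span model. Within the n-gram family, this
# maximum-likelihood estimate approaches the model closest in KL
# divergence to the sampler.

def fit_ngram_from_samples(sample_sentence, n=3, num_sentences=10_000):
    """sample_sentence() -> list of words drawn from the long-span model."""
    counts = defaultdict(Counter)
    for _ in range(num_sentences):
        words = ["<s>"] * (n - 1) + sample_sentence() + ["</s>"]
        for i in range(n - 1, len(words)):
            counts[tuple(words[i - n + 1 : i])][words[i]] += 1
    model = {}
    for history, ctr in counts.items():
        total = sum(ctr.values())
        model[history] = {w: c / total for w, c in ctr.items()}
    return model

def toy_sampler():  # placeholder; a real system samples from the RNN LM
    return random.choices(["the", "cat", "sat", "on", "mat"],
                          k=random.randint(2, 6))

bigram = fit_ngram_from_samples(toy_sampler, n=2, num_sentences=1_000)
print(next(iter(bigram.items())))
```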


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Beyond Novelty Detection: Incongruent Events, When General and Specific Classifiers Disagree

Daphna Weinshall; Alon Zweig; Hynek Hermansky; Stefan Kombrink; Frank W. Ohl; Jörg-Hendrik Bach; Luc Van Gool; Fabian Nater; Tomas Pajdla; Michal Havlena; Misha Pavel

Unexpected stimuli are a challenge to any machine learning algorithm. Here, we identify distinct types of unexpected events when general-level and specific-level classifiers give conflicting predictions. We define a formal framework for the representation and processing of incongruent events: Starting from the notion of label hierarchy, we show how partial order on labels can be deduced from such hierarchies. For each event, we compute its probability in different ways, based on adjacent levels in the label hierarchy. An incongruent event is an event where the probability computed based on some more specific level is much smaller than the probability computed based on some more general level, leading to conflicting predictions. Algorithms are derived to detect incongruent events from different types of hierarchies, different applications, and a variety of data types. We present promising results for the detection of novel visual and audio objects, and new patterns of motion in video. We also discuss the detection of Out-Of-Vocabulary words in speech recognition, and the detection of incongruent events in a multimodal audiovisual scenario.
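
The core test reduces to comparing probabilities across adjacent levels of the hierarchy; the following minimal sketch makes that rule concrete, with the ratio threshold as an illustrative assumption.

```python
# Minimal sketch of the incongruence test: flag an event when the
# probability from a more specific hierarchy level is much smaller than
# the probability from a more general one.

def is_incongruent(p_general: float, p_specific: float,
                   ratio: float = 10.0) -> bool:
    if p_specific == 0.0:
        return p_general > 0.0
    return p_general / p_specific > ratio

# Example: a general "dog" detector fires strongly, but no known-breed
# detector does -- a candidate novel sub-class rather than plain noise.
print(is_incongruent(p_general=0.9, p_specific=0.02))  # True
```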


Speech Communication | 2013

Approximate inference: A sampling based modeling technique to capture complex dependencies in a language model

Anoop Deoras; Tomas Mikolov; Stefan Kombrink; Kenneth Church

In this paper, we present strategies to incorporate long-context information directly during first-pass decoding and also during second-pass lattice rescoring in speech recognition systems. Long-span language models that capture complex syntactic and/or semantic information are seldom used in the first pass of large vocabulary continuous speech recognition systems due to the prohibitive increase in the size of the sentence-hypothesis search space. Typically, n-gram language models are used in the first pass to produce N-best lists, which are then rescored using long-span models. Such a pipeline produces biased first-pass output, resulting in sub-optimal performance during rescoring. In this paper we show that computationally tractable variational approximations of the long-span and complex language models are a better choice than the standard n-gram model both for first-pass decoding and for lattice rescoring.
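
For context, a minimal sketch of the second-pass rescoring step this pipeline relies on; the linear score combination and weight are illustrative assumptions.

```python
# Hedged sketch of N-best rescoring: re-rank the list by combining each
# hypothesis's first-pass score with a long-span LM score. Real systems
# also balance acoustic and LM score scales.

def rescore_nbest(nbest, long_span_logprob, lm_weight=0.7):
    """nbest: list of (words, first_pass_score), higher scores better.
    long_span_logprob(words) -> log-probability under the long-span LM."""
    rescored = [
        (words, (1 - lm_weight) * score + lm_weight * long_span_logprob(words))
        for words, score in nbest
    ]
    return max(rescored, key=lambda item: item[1])
```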


Text, Speech and Dialogue | 2010

Recovery of rare words in lecture speech

Stefan Kombrink; Mirko Hannemann; Lukas Burget; Hynek Heřmanský

The vocabulary used in speech usually consists of two types of words: a limited set of common words, shared across multiple documents, and a virtually unlimited set of rare words, each of which might appear only a few times in particular documents. In most documents, however, these rare words are not seen at all. The first type of word is typically included in the language model of an automatic speech recognizer (ASR) and is thus widely referred to as in-vocabulary (IV). Words of the second type are missing from the language model and are thus called out-of-vocabulary (OOV). However, these words usually carry important information. We use a hybrid word/sub-word recognizer to detect OOV words occurring in English talks and describe them as sequences of sub-words. We detected about one third of all OOV words, and were able to recover the correct spelling for 26.2% of all detections by using a phoneme-to-grapheme (P2G) conversion trained on the recognition dictionary. By omitting detections corresponding to recovered IV words, we were able to increase the precision of the OOV detection substantially.
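
The final precision-raising step can be sketched as a simple filter; the `p2g` callable below is a hypothetical stand-in for the phoneme-to-grapheme converter trained on the recognition dictionary.

```python
# Sketch of the precision filter: drop detections whose recovered spelling
# is already in the recognizer's vocabulary, since those are likely
# misrecognized IV words rather than true OOVs.

def filter_oov_detections(detections, p2g, vocabulary):
    """detections: phoneme sequences flagged by the sub-word recognizer."""
    kept = []
    for phones in detections:
        spelling = p2g(phones)           # hypothesized written form
        if spelling not in vocabulary:   # recovered an IV word -> discard
            kept.append((phones, spelling))
    return kept
```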


Detection and Identification of Rare Audiovisual Cues | 2012

Out-of-Vocabulary Word Detection and Beyond

Stefan Kombrink; Mirko Hannemann; Lukas Burget

In this work, we summarize our experiences with the detection of unexpected words in automatic speech recognition (ASR). Two approaches based upon a paradigm of incongruence detection between generic and specific recognition systems are introduced. Arguing that detecting incongruence is necessary but not sufficient once possible follow-up actions are taken into account, we motivate a preference for one approach over the other. Nevertheless, we show that a fusion outperforms both single systems. Finally, we propose possible actions after the detection of unexpected words, and conclude with general remarks about what we found to be important when dealing with unexpected words.
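
A minimal sketch of what such a fusion could look like, assuming each system emits a per-word OOV score in [0, 1]; the linear rule, weight, and threshold are illustrative assumptions, not the combination actually used in the paper.

```python
# Hedged sketch: fuse per-word OOV scores from the generic-vs-specific
# incongruence systems by weighted linear combination, then threshold.

def detect_oov(scores_generic, scores_specific, alpha=0.5, threshold=0.5):
    """Returns a per-word OOV decision from the fused scores."""
    return [alpha * a + (1 - alpha) * b > threshold
            for a, b in zip(scores_generic, scores_specific)]

print(detect_oov([0.9, 0.2, 0.6], [0.7, 0.1, 0.5]))  # [True, False, True]
```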


International Conference on Acoustics, Speech, and Signal Processing | 2012

Improving language models for ASR using translated in-domain data

Stefan Kombrink; Tomas Mikolov; Martin Karafiát; Lukas Burget

Acquisition of in-domain training data to build speech recognition systems for under-resourced languages can be a costly, time-demanding and tedious process. In this work, we propose the use of machine translation to translate English transcripts of telephone speech into Czech in order to improve a Czech CTS speech recognition system. The translated transcripts are used as additional language model training data in a scenario where the baseline language model is trained on off-domain and close-domain data only. We report perplexities, OOV rates and word error rates, and examine different data sets and translators for their suitability for the described task.
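
One standard way to fold such translated data into the language model, sketched below as an assumption rather than the paper's exact method, is to train a separate n-gram on the translated transcripts and interpolate it with the baseline model, tuning the weight on held-out in-domain text.

```python
import math

# Hedged sketch: linear interpolation of the baseline LM with an LM trained
# on machine-translated in-domain transcripts, weight chosen by held-out
# perplexity. Probabilities are assumed precomputed per token; a real
# setup would query two n-gram models.

def interpolate(p_base: float, p_trans: float, lam: float) -> float:
    """P(w|h) = (1 - lam) * P_base(w|h) + lam * P_trans(w|h)."""
    return (1.0 - lam) * p_base + lam * p_trans

def heldout_perplexity(pairs, lam):
    """pairs: (P_base, P_trans) probabilities for each held-out token."""
    logp = sum(math.log(interpolate(pb, pt, lam)) for pb, pt in pairs)
    return math.exp(-logp / len(pairs))

def tune_lambda(pairs, grid=tuple(i / 10 for i in range(1, 10))):
    return min(grid, key=lambda lam: heldout_perplexity(pairs, lam))
```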


Conference of the International Speech Communication Association | 2011

Empirical Evaluation and Combination of Advanced Language Modeling Techniques

Tomas Mikolov; Anoop Deoras; Stefan Kombrink; Lukas Burget; Jan Cernocký


Archive | 2011

RNNLM - Recurrent Neural Network Language Modeling Toolkit

Tomas Mikolov; Stefan Kombrink; Anoop Deoras; Lukas Burget; Jan Cernocky

Collaboration


Dive into Stefan Kombrink's collaborations.

Top Co-Authors

Lukas Burget
Brno University of Technology

Tomas Mikolov
Brno University of Technology

Martin Karafiát
Brno University of Technology

Mirko Hannemann
Brno University of Technology

Jan Cernocky
Brno University of Technology

Hynek Heřmanský
Brno University of Technology