Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Karel Vesely is active.

Publication


Featured research published by Karel Vesely.


Spoken Language Technology Workshop | 2012

The language-independent bottleneck features

Karel Vesely; Martin Karafiát; Frantisek Grezl; Milos Janda; Ekaterina Egorova

In this paper we present a novel language-independent bottleneck (BN) feature extraction framework. In our experiments we use a multilingual Artificial Neural Network (ANN), where each language is modelled by a separate output layer while all the hidden layers jointly model the variability of all the source languages. The key idea is that the entire ANN is trained on all the languages simultaneously, so the BN features are not biased towards any of the languages. For exactly this reason, the final BN features can be considered language-independent. In experiments with the GlobalPhone database, we show that multilingual BN features consistently outperform monolingual BN features. Cross-lingual generalization is also evaluated, where we train on 5 source languages and test on 3 other languages. The results show that the ANN can produce very good BN features even for unseen languages, in some cases better than if the ANN were trained on the target language only.
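
As a rough illustration of the shared-hidden-layer, per-language-output idea described above, the sketch below uses PyTorch rather than the toolkit used in the paper; all layer sizes, language codes, and names are assumptions, not values from the experiments.

```python
import torch
import torch.nn as nn

class MultilingualBottleneckNet(nn.Module):
    """Shared hidden layers with a bottleneck; one output layer per language."""
    def __init__(self, feat_dim, hidden_dim, bn_dim, lang_output_dims):
        super().__init__()
        # The hidden stack is shared by all source languages and is updated by
        # mini-batches from every language, so the bottleneck is tied to none of them.
        self.shared = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, bn_dim),           # bottleneck layer -> BN features
        )
        # Each language is modelled by its own output layer over its own targets.
        self.heads = nn.ModuleDict({
            lang: nn.Linear(bn_dim, n_out) for lang, n_out in lang_output_dims.items()
        })

    def forward(self, x, lang):
        bn = self.shared(x)                          # language-independent BN features
        return self.heads[lang](bn), bn

# Hypothetical use with five source languages (codes and sizes are invented):
net = MultilingualBottleneckNet(feat_dim=440, hidden_dim=1500, bn_dim=80,
                                lang_output_dims={"cs": 3000, "de": 3000, "es": 3000,
                                                  "pt": 3000, "ru": 3000})
logits, bn_feats = net(torch.randn(8, 440), lang="de")
```

Because only the shared stack receives gradients from every language, the bottleneck activations it produces are not attached to any single output layer, which is the property the paper exploits.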


IEEE Automatic Speech Recognition and Understanding Workshop | 2013

Score normalization and system combination for improved keyword spotting

Damianos Karakos; Richard M. Schwartz; Stavros Tsakalidis; Le Zhang; Shivesh Ranjan; Tim Ng; Roger Hsiao; Guruprasad Saikumar; Ivan Bulyko; Long Nguyen; John Makhoul; Frantisek Grezl; Mirko Hannemann; Martin Karafiát; Igor Szöke; Karel Vesely; Lori Lamel; Viet-Bac Le

We present two techniques that yield improved Keyword Spotting (KWS) performance under the ATWV/MTWV performance measures: (i) score normalization, where the scores of different keywords become commensurate with each other and correspond more closely to the probability of being correct than raw posteriors; and (ii) system combination, where the detections of multiple systems are merged and their scores are interpolated with weights optimized using MTWV as the maximization criterion. Both score normalization and system combination yield significant gains in ATWV/MTWV, sometimes on the order of 8-10 points (absolute), across five different languages. A variant of these methods achieved the highest performance in the official surprise language evaluation of the IARPA-funded Babel project in April 2013.
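
As an illustrative and deliberately simplified sketch of the two ideas, keyword-wise sum-to-one score normalization and weighted merging of per-system detections could look like the Python below; the paper's actual normalization and MTWV-based weight optimization are more involved, and the detection format used here is an assumption.

```python
from collections import defaultdict

def sum_to_one_normalize(detections):
    """Normalize raw posteriors so that each keyword's scores sum to one.
    detections: list of (keyword, time, raw_score) tuples (format assumed)."""
    totals = defaultdict(float)
    for kw, _, score in detections:
        totals[kw] += score
    return [(kw, t, score / max(totals[kw], 1e-12)) for kw, t, score in detections]

def combine_systems(per_system_detections, weights):
    """Merge detections from several systems: hits of the same keyword at the
    same time are summed with per-system weights (a system that misses a hit
    simply contributes zero). The weights would be tuned to maximize MTWV."""
    merged = defaultdict(float)
    for detections, w in zip(per_system_detections, weights):
        for kw, t, score in detections:
            merged[(kw, t)] += w * score
    return [(kw, t, score) for (kw, t), score in merged.items()]
```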


IEEE Automatic Speech Recognition and Understanding Workshop | 2011

Convolutive Bottleneck Network features for LVCSR

Karel Vesely; Martin Karafiát; Frantisek Grezl

In this paper, we focus on improvements to the bottleneck ANN in a Tandem LVCSR system. First, the influence of training set size and ANN size is evaluated. Second, a very positive effect of the linear bottleneck is shown. Finally, a Convolutive Bottleneck Network is proposed as an extension of the current state-of-the-art Universal Context Network. The proposed training method leads to a 5.5% relative reduction in WER compared to the Universal Context ANN baseline; the relative improvement compared to the 5-layer single-bottleneck network is 17.7%. The ctstrain07 dataset, composed of more than 2000 hours of English Conversational Telephone Speech, was used for the experiments. The TNet toolkit with a CUDA GPGPU implementation was used for fast training.
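
The "linear bottleneck" mentioned above simply means the bottleneck layer carries no non-linearity; a minimal PyTorch-style sketch is given below, with the convolutive part omitted and all layer sizes assumed rather than taken from the paper.

```python
import torch.nn as nn

def bottleneck_stack(feat_dim, hidden_dim, bn_dim, n_targets):
    """5-layer net with a *linear* bottleneck: no activation after the BN layer."""
    return nn.Sequential(
        nn.Linear(feat_dim, hidden_dim), nn.Sigmoid(),
        nn.Linear(hidden_dim, bn_dim),               # linear bottleneck (no Sigmoid here)
        nn.Linear(bn_dim, hidden_dim), nn.Sigmoid(),
        nn.Linear(hidden_dim, n_targets),
    )
```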


International Conference on Acoustics, Speech, and Signal Processing | 2014

Adaptation of multilingual stacked bottle-neck neural network structure for new language

Frantisek Grezl; Martin Karafiát; Karel Vesely

Neural-network-based features have become an inseparable part of state-of-the-art LVCSR systems. To perform well, the network has to be trained on a large amount of in-domain data. With the increasing emphasis on fast development of ASR systems on limited resources, there is an effort to alleviate the need for in-domain data. To evaluate the effectiveness of other resources, we trained the Stacked Bottle-Neck neural network structure on multilingual data, investigating several training strategies while treating the target language as unseen. The systems were then adapted to the target language by re-training. Finally, we evaluated the effect of adapting the individual NNs in the Stacked Bottle-Neck structure to find the optimal adaptation strategy. We show that the adaptation can significantly improve system performance over both the multilingual network and a network trained only on the target data. The experiments were performed on Babel Year 1 data.
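
Re-using the multilingual-network sketch shown earlier, adaptation to a new target language can be pictured as attaching a fresh output layer and re-training, optionally freezing the shared layers; this is a hypothetical illustration of the general strategy, not the paper's exact recipe.

```python
import torch.nn as nn

def adapt_to_target(net, bn_dim, n_target_outputs, freeze_shared=False):
    """Attach a fresh output layer for the previously unseen target language and
    optionally freeze the shared layers so only the new head is re-trained."""
    net.heads["target"] = nn.Linear(bn_dim, n_target_outputs)
    if freeze_shared:
        for p in net.shared.parameters():
            p.requires_grad = False
    return net
```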


International Conference on Acoustics, Speech, and Signal Processing | 2012

Generating exact lattices in the WFST framework

Daniel Povey; Mirko Hannemann; Gilles Boulianne; Lukas Burget; Arnab Ghoshal; Milos Janda; Martin Karafiát; Stefan Kombrink; Petr Motlicek; Yanmin Qian; Korbinian Riedhammer; Karel Vesely; Ngoc Thang Vu

We describe a lattice generation method that is exact, i.e. it satisfies all the natural properties we would want from a lattice of alternative transcriptions of an utterance. This method does not introduce substantial overhead above one-best decoding. Our method is most directly applicable when using WFST decoders where the WFST is “fully expanded”, i.e. where the arcs correspond to HMM transitions. It outputs lattices that include HMM-state-level alignments as well as word labels. The general idea is to create a state-level lattice during decoding, and to do a special form of determinization that retains only the best-scoring path for each word sequence. This special determinization algorithm is a solution to the following problem: Given a WFST A, compute a WFST B that, for each input-symbol-sequence of A, contains just the lowest-cost path through A.
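
On a toy representation where the lattice has already been expanded into (word sequence, cost) paths, the "keep only the best path per word sequence" requirement reduces to the sketch below; the actual algorithm operates on WFSTs during decoding and also preserves HMM-state-level alignments, which this illustration omits.

```python
def best_path_per_word_sequence(paths):
    """paths: iterable of (word_sequence, cost) pairs, one per lattice path.
    Keep, for each distinct word sequence, only the lowest-cost path; a toy
    stand-in for the special determinization described above."""
    best = {}
    for words, cost in paths:
        if words not in best or cost < best[words]:
            best[words] = cost
    return best

# Two alternative alignments of "the cat" keep only the cheaper one.
lattice_paths = [(("the", "cat"), 12.3), (("the", "cat"), 11.8), (("a", "cat"), 13.0)]
print(best_path_per_word_sequence(lattice_paths))
```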


IEEE Automatic Speech Recognition and Understanding Workshop | 2013

Semi-supervised training of Deep Neural Networks

Karel Vesely; Mirko Hannemann; Lukas Burget

In this paper we search for an optimal strategy for semi-supervised Deep Neural Network (DNN) training. We assume that a small part of the data is transcribed, while the majority is untranscribed. We explore self-training strategies with data selection based on both utterance-level and frame-level confidences. We further study the interactions between semi-supervised frame-discriminative training and sequence-discriminative sMBR training. We found it beneficial to reduce the disproportion between the amounts of transcribed and untranscribed data by including the transcribed data several times, as well as to perform frame selection based on per-frame confidences derived from confusion in a lattice. For the experiments, we used the Limited Language Pack condition of the Surprise language task (Vietnamese) from the IARPA Babel program. The absolute Word Error Rate (WER) improvement for frame cross-entropy training is 2.2%, which corresponds to a WER recovery of 36% compared to an identical system where the DNN is built on fully transcribed data.
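
A hedged sketch of the data-selection side, combining frame selection by per-frame confidence with repeating the transcribed data to reduce the disproportion; thresholds, array shapes, and helper names are assumptions rather than the paper's settings.

```python
import numpy as np

def select_frames(features, frame_confidences, threshold=0.7):
    """Keep only the untranscribed frames whose per-frame confidence
    (e.g. derived from confusion in a lattice) exceeds the threshold."""
    mask = np.asarray(frame_confidences) > threshold
    return features[mask]

def build_semi_supervised_set(transcribed, untranscribed, confidences, repeats=3):
    """Reduce the transcribed/untranscribed disproportion by including the
    transcribed frames several times, then add the confidence-selected frames."""
    selected = select_frames(untranscribed, confidences)
    return np.concatenate([np.concatenate([transcribed] * repeats), selected])
```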


International Conference on Acoustics, Speech, and Signal Processing | 2015

Towards machines that know when they do not know: Summary of work done at 2014 Frederick Jelinek Memorial Workshop

Hynek Hermansky; Lukas Burget; Jordan Cohen; Emmanuel Dupoux; Naomi H. Feldman; John J. Godfrey; Sanjeev Khudanpur; Matthew Maciejewski; Sri Harish Mallidi; Anjali Menon; Tetsuji Ogawa; Vijayaditya Peddinti; Richard C. Rose; Richard M. Stern; Matthew Wiesner; Karel Vesely

A group of junior and senior researchers gathered as part of the 2014 Frederick Jelinek Memorial Workshop in Prague to address the problem of predicting the accuracy of a nonlinear Deep Neural Network probability estimator on unknown data from an application domain different from the one in which the estimator was trained. The paper describes the problem and summarizes the approaches taken by the group.


Spoken Language Technology Workshop | 2014

BUT ASR system for BABEL Surprise evaluation 2014

Martin Karafiát; Karel Vesely; Igor Szöke; Lukas Burget; Frantisek Grezl; Mirko Hannemann; Jan Cernocky

The paper describes the Brno University of Technology (BUT) ASR system for the 2014 BABEL Surprise language evaluation (Tamil). While largely based on our previous work, it brings two original contributions: (1) speaker-adapted bottle-neck neural network (BN) features were investigated as an input to the DNN recognizer, and semi-supervised training was found effective; (2) adding noise to the training data was found beneficial when dealing with noisy test data and outperformed a classical de-noising technique, and the performance of this approach was verified on a relatively clean training/test setup from a different language. All results are reported on BABEL 2014 Tamil data.
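
The noise-addition idea, mixing a noise signal into the training waveforms at a chosen signal-to-noise ratio, can be sketched as below; the noises, SNR levels, and per-utterance details used in the evaluation are not reproduced here, so treat this as a generic illustration.

```python
import numpy as np

def add_noise(speech, noise, snr_db):
    """Mix a noise waveform into a speech waveform at the requested SNR (in dB).
    The noise is tiled or cropped to the speech length before scaling."""
    noise = np.resize(noise, speech.shape)
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(speech_power / (noise_power * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise
```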


International Conference on Acoustics, Speech, and Signal Processing | 2017

Residual memory networks: Feed-forward approach to learn long-term temporal dependencies

Murali Karthick Baskar; Martin Karafiát; Lukas Burget; Karel Vesely; Frantisek Grezl; Jan Cernocky

Training deep recurrent neural network (RNN) architectures is complicated by the increased network complexity, which disrupts the learning of higher-order abstractions in deep RNNs. With feed-forward networks, training deep structures is simpler and faster, but learning long-term temporal information is not possible. In this paper we propose a residual memory neural network (RMN) architecture that models short-time dependencies using deep feed-forward layers with residual and time-delayed connections. The residual connections pave the way to constructing deeper networks by enabling unhindered gradient flow, and the time-delay units capture temporal information with shared weights. The number of layers in an RMN signifies both the hierarchical processing depth and the temporal depth. The computational complexity of training an RMN is significantly lower than that of deep recurrent networks. The RMN is further extended to a bi-directional RMN (BRMN) to capture both past and future information. Experimental analysis on the AMI corpus substantiates the capability of the RMN to learn long-term and hierarchical information. The recognition performance of an RMN trained on 300 hours of the Switchboard corpus is compared with various state-of-the-art LVCSR systems. The results indicate that the RMN and BRMN gain 6% and 3.8% relative improvement over LSTM and BLSTM networks, respectively.
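
The core RMN building block, a feed-forward layer with a residual skip and a time-delayed input whose weights are shared across time, might be sketched as follows; the dimensionality and the exact placement of the delay are assumptions, not the paper's definitive design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualMemoryLayer(nn.Module):
    """Feed-forward layer with a residual skip and a time-delayed input:
    y[t] = relu(W1 x[t] + W2 x[t - delay]) + x[t], weights shared across time."""
    def __init__(self, dim, delay):
        super().__init__()
        self.current = nn.Linear(dim, dim)
        self.delayed = nn.Linear(dim, dim)
        self.delay = delay

    def forward(self, x):                            # x: (batch, time, dim)
        # Shift the sequence right by `delay` frames, zero-filling the start.
        past = F.pad(x, (0, 0, self.delay, 0))[:, :x.size(1), :]
        return torch.relu(self.current(x) + self.delayed(past)) + x
```

Stacking several such layers increases both the hierarchical depth and the temporal span covered, which is the property the paper uses in place of recurrence.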


International Conference on Acoustics, Speech, and Signal Processing | 2013

Manual and semi-automatic approaches to building a multilingual phoneme set

Ekaterina Egorova; Karel Vesely; Martin Karafiát; Milos Janda; Jan Cernocky

The paper addresses manual and semi-automatic approaches to building a multilingual phoneme set for automatic speech recognition. The first approach involves mapping and reduction of the phoneme set based on IPA and expert knowledge; the latter involves a phoneme confusion matrix generated by a neural network. The comparison is done for 8 languages selected from GlobalPhone under three scenarios: 1) a multilingual system with abundant data for all languages, 2) multilingual systems excluding the target language, and 3) multilingual systems with a small amount of data for the target languages. In scenario 3), the multilingual system brought improvement for languages close enough to the others in the set.
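
The semi-automatic route can be illustrated by folding together phonemes whose mutual confusion, taken from a neural-network confusion matrix, exceeds a threshold; the threshold, the symmetry requirement, and the greedy merging order below are assumptions made for the sake of a runnable toy example.

```python
import numpy as np

def merge_confusable_phonemes(phonemes, confusion, threshold=0.5):
    """confusion[i, j]: how often phoneme i is recognized as phoneme j
    (row-normalized numpy array). Phoneme pairs confused in both directions
    above the threshold are folded into a single multilingual class."""
    mapping = {p: p for p in phonemes}
    for i, pi in enumerate(phonemes):
        for j, pj in enumerate(phonemes):
            if i < j and confusion[i, j] > threshold and confusion[j, i] > threshold:
                mapping[pj] = mapping[pi]   # fold pj into pi's class
    return mapping
```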

Collaboration


Dive into Karel Vesely's collaborations.

Top Co-Authors

Martin Karafiát (Brno University of Technology)
Frantisek Grezl (Brno University of Technology)
Lukas Burget (Brno University of Technology)
Jan Cernocky (Brno University of Technology)
Mirko Hannemann (Brno University of Technology)
Milos Janda (Brno University of Technology)
Ekaterina Egorova (Brno University of Technology)
Igor Szöke (Brno University of Technology)
Daniel Povey (Johns Hopkins University)