Kris Demuynck
Katholieke Universiteit Leuven
Publications
Featured research published by Kris Demuynck.
Essential speech and language technology for Dutch | 2013
Yujun Wang; Jort F. Gemmeke; Kris Demuynck; Hugo Van hamme
Current automatic speech recognisers rely to a great extent on statistical models learned from training data. When they are deployed in conditions that differ from those observed in the training data, the generative models are unable to explain the incoming data and recognition accuracy degrades. A particularly noticeable effect is the deterioration caused by background noise. In the MIDAS project, the state of the art in noise robustness was advanced on two fronts, both making use of the missing data approach. First, novel sparse exemplar-based representations of speech were proposed. Compressed sensing techniques were used to impute noise-corrupted data from exemplars. Second, a missing data approach was adopted in the context of a large vocabulary speech recogniser, resulting in increased robustness at high noise levels without compromising accuracy at low noise levels. The performance of the missing data recogniser was compared with that of the Nuance VOCON-3200 recogniser in a variety of noise conditions observed in field data.
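The abstract does not spell out the exact formulation used in MIDAS, but the core idea of exemplar-based missing-data imputation can be sketched as follows: unreliable (noise-dominated) spectral components are reconstructed from a sparse, non-negative combination of clean-speech exemplars, where the combination weights are estimated from the reliable components only. The dictionary, mask convention and multiplicative-update solver below are illustrative assumptions, not the project's actual code.

```python
import numpy as np

def impute_missing_features(y, mask, exemplars, n_iter=200, eps=1e-9):
    """Reconstruct unreliable spectral features from clean-speech exemplars.

    y         : (d,) observed (noisy) feature vector, e.g. one mel-spectrogram frame
    mask      : (d,) boolean array, True where the observation is deemed reliable
    exemplars : (d, k) dictionary whose columns are clean-speech exemplars
    Returns y with unreliable entries replaced by the exemplar reconstruction.
    """
    A = exemplars
    x = np.full(A.shape[1], 1.0 / A.shape[1])   # non-negative exemplar activations
    w = mask.astype(float)                      # weight reliable entries only
    for _ in range(n_iter):
        # multiplicative update for weighted non-negative least squares;
        # only reliable components influence the activations
        num = A.T @ (w * y)
        den = A.T @ (w * (A @ x)) + eps
        x *= num / den
    recon = A @ x
    out = y.copy()
    out[~mask] = recon[~mask]                   # impute the missing entries
    return out

# Toy usage: 20-dimensional features, 50 random non-negative exemplars
rng = np.random.default_rng(0)
D = np.abs(rng.normal(size=(20, 50)))
clean = D @ np.abs(rng.normal(size=50)) * 0.1
mask = rng.random(20) > 0.3                     # roughly 30% of entries "missing"
noisy = clean.copy()
noisy[~mask] += 5.0                             # corrupt the unreliable bins
print(impute_missing_features(noisy, mask, D)[:5])
```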
Lecture notes in artificial intelligence | 2016
Kseniya Proença; Kris Demuynck; Dirk Van Compernolle
In automatic speech recognition, as in many areas of machine learning, stochastic modeling increasingly relies on neural networks. In both acoustic and language modeling, neural networks today mark the state of the art for large vocabulary continuous speech recognition, providing huge improvements over former approaches that were based solely on Gaussian mixture hidden Markov models and count-based language models. We give an overview of current activities in neural network based modeling for automatic speech recognition. This includes discussions of network topologies and cell types, training and optimization, choice of input features, adaptation and normalization, multitask training, as well as neural network based language modeling. Despite the clear progress obtained with neural network modeling in speech recognition, much remains to be done to obtain a consistent and self-contained neural network based modeling approach that ties in with the former state of the art. We conclude with a discussion of open problems as well as potential future directions for neural network integration into automatic speech recognition systems.
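The chapter surveys many model variants; as a minimal illustration of the hybrid setup it contrasts with GMM-HMM systems, the sketch below trains a small feed-forward acoustic model that maps spliced filterbank frames to posteriors over tied HMM states, which are then converted to scaled log-likelihoods for an HMM decoder. The layer sizes, context window and state inventory are illustrative assumptions, not figures from the chapter.

```python
import math
import torch
import torch.nn as nn

# Hypothetical dimensions: 40 filterbank coefficients spliced over +/-5 frames,
# and 2000 tied HMM states (senones); both numbers are illustrative.
n_feats, context, n_states = 40, 11, 2000

acoustic_model = nn.Sequential(
    nn.Linear(n_feats * context, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, n_states), nn.LogSoftmax(dim=-1),
)

# One training step on frame-level state targets obtained from a forced alignment
features = torch.randn(32, n_feats * context)     # a mini-batch of spliced frames
targets = torch.randint(0, n_states, (32,))       # aligned senone labels
loss = nn.NLLLoss()(acoustic_model(features), targets)
loss.backward()

# In hybrid decoding, log-posteriors are turned into scaled log-likelihoods by
# subtracting the log state priors (uniform here for simplicity) before they
# replace the GMM likelihoods inside the HMM decoder.
log_priors = torch.full((n_states,), -math.log(n_states))
scaled_loglik = acoustic_model(features) - log_priors
```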
Archive | 2006
Jacques Duchateau; Mari Wigham; Kris Demuynck; Hugo Van hamme
Archive | 1998
W Xu; Jacques Duchateau; Kris Demuynck; Ioannis Dologlou
Archive | 1997
Kris Demuynck; Dirk Van Compernolle; Conan Van Hove; Jean-Pierre Martens
Archive | 2010
Geoffrey Zweig; Patrick Nguyen; Dirk Van Compernolle; Kris Demuynck; Les Atlas; Pascal Clark; Greg Sell; Fei Sha; Meihong Wang; Aren Jansen; Hynek Hermansky; Damianos Karakos; Keith Kintzley; Samuel Thomas; Sivaram Gsvs; Sam Bowman; Justine Kao
Spoken languages technologies for under-resourced languages (SLTU - 2012) | 2012
Xueru Zhang; Kris Demuynck; Dirk Van Compernolle; Hugo Van hamme
Archive | 2014
Kris Demuynck; Jan Roelens; Patrick Wambacq
Proceedings of the Annual IEEE EMBS Benelux Symposium | 2011
Bert Van Den Broeck; Peter Karsmakers; Kris Demuynck; Hugo Van hamme; Bart Vanrumste
Archive | 2008
Veronique Stouten; Kris Demuynck; Hugo Van hamme