Philemon Brakel
Ghent University
Publications
Featured research published by Philemon Brakel.
conference of the international speech communication association | 2016
Ying Zhang; Mohammad Pezeshki; Philemon Brakel; Saizheng Zhang; César Laurent; Yoshua Bengio; Aaron C. Courville
Convolutional Neural Networks (CNNs) are effective models for reducing spectral variations and modeling spectral correlations in acoustic features for automatic speech recognition (ASR). Hybrid speech recognition systems that combine CNNs with Hidden Markov Models/Gaussian Mixture Models (HMMs/GMMs) have achieved state-of-the-art results on various benchmarks. Meanwhile, Connectionist Temporal Classification (CTC) with Recurrent Neural Networks (RNNs), which was proposed for labeling unsegmented sequences, makes it feasible to train an end-to-end speech recognition system instead of a hybrid setup. However, RNNs are computationally expensive and sometimes difficult to train. In this paper, inspired by the advantages of both CNNs and the CTC approach, we propose an end-to-end speech recognition framework for sequence labeling that combines hierarchical CNNs with CTC directly, without recurrent connections. Evaluating the approach on the TIMIT phoneme recognition task, we show that the proposed model is not only computationally efficient but also competitive with existing baseline systems. Moreover, we argue that CNNs have the capability to model temporal correlations with appropriate context information.
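The architecture the abstract describes, a stack of convolutions feeding a CTC objective with no recurrent layers, can be sketched roughly as follows. This is a minimal PyTorch illustration, not the authors' implementation: the layer sizes, the number of output classes, and the plain ReLU activations are assumptions, and the paper's pooling and regularization details are omitted.

```python
import torch
import torch.nn as nn

# Minimal sketch of a purely convolutional acoustic model trained with CTC.
# Layer sizes and the number of output classes are illustrative assumptions,
# not the configuration used in the paper.
class ConvCTCModel(nn.Module):
    def __init__(self, n_features=40, n_classes=62):  # e.g. 61 phone labels + blank (assumed)
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv1d(n_features, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.output = nn.Conv1d(128, n_classes, kernel_size=1)

    def forward(self, x):                # x: (batch, n_features, time)
        logits = self.output(self.convs(x))   # (batch, n_classes, time)
        # CTC expects (time, batch, classes) log-probabilities.
        return logits.permute(2, 0, 1).log_softmax(dim=-1)

model = ConvCTCModel()
ctc_loss = nn.CTCLoss(blank=0)

# Dummy batch: 4 utterances of 200 frames with 40 filterbank features each.
x = torch.randn(4, 40, 200)
targets = torch.randint(1, 62, (4, 30))              # label sequences
input_lengths = torch.full((4,), 200, dtype=torch.long)
target_lengths = torch.full((4,), 30, dtype=torch.long)

log_probs = model(x)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()   # gradients flow through the convolutional stack only
```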
international conference on artificial neural networks | 2012
Philemon Brakel; Sander Dieleman; Benjamin Schrauwen
Restricted Boltzmann Machines (RBMs) are unsupervised probabilistic neural networks that can be stacked to form Deep Belief Networks. Given the recent popularity of RBMs and the increasing availability of parallel computing architectures, it becomes interesting to investigate learning algorithms for RBMs that benefit from parallel computation. In this paper, we look at two extensions of the parallel tempering algorithm, a Markov Chain Monte Carlo method for approximating the likelihood gradient. The first extension is directed at a more effective exchange of information among the parallel sampling chains. The second extension estimates gradients by averaging over chains from different temperatures. We investigate the efficiency of the proposed methods and demonstrate their usefulness on the MNIST dataset. The weighted averaging in particular appears to benefit maximum likelihood learning.
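The baseline the two extensions build on, parallel tempering for the RBM negative phase, runs Gibbs chains at several temperatures and occasionally proposes state swaps between neighbouring chains. The NumPy sketch below shows this standard scheme under assumed model sizes and temperature schedule; the paper's specific extensions (improved information exchange and weighted averaging of statistics over temperatures) are only noted in a comment, not implemented.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Small RBM with arbitrary parameters (illustrative only).
n_vis, n_hid, n_chains = 784, 100, 10
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b = np.zeros(n_vis)                       # visible biases
c = np.zeros(n_hid)                       # hidden biases

# Inverse temperatures from 0 (flat distribution) to 1 (target distribution).
betas = np.linspace(0.0, 1.0, n_chains)
V = rng.integers(0, 2, size=(n_chains, n_vis)).astype(float)
H = rng.integers(0, 2, size=(n_chains, n_hid)).astype(float)

def joint_energy(V, H):
    """RBM energy E(v, h) = -v.W.h - b.v - c.h, one value per chain."""
    return -np.einsum('ci,ij,cj->c', V, W, H) - V @ b - H @ c

def gibbs_step(V, H):
    """One tempered Gibbs sweep per chain: sample h given v, then v given h."""
    H = (rng.random((n_chains, n_hid)) <
         sigmoid(betas[:, None] * (V @ W + c))).astype(float)
    V = (rng.random((n_chains, n_vis)) <
         sigmoid(betas[:, None] * (H @ W.T + b))).astype(float)
    return V, H

def swap_step(V, H):
    """Metropolis swaps of whole states between neighbouring temperatures."""
    E = joint_energy(V, H)
    for i in range(n_chains - 1):
        log_accept = (betas[i + 1] - betas[i]) * (E[i + 1] - E[i])
        if rng.random() < np.exp(min(0.0, log_accept)):
            V[[i, i + 1]] = V[[i + 1, i]]
            H[[i, i + 1]] = H[[i + 1, i]]
            E[[i, i + 1]] = E[[i + 1, i]]
    return V, H

# Negative-phase sampling loop.
for _ in range(100):
    V, H = gibbs_step(V, H)
    V, H = swap_step(V, H)

# Plain parallel tempering uses only the beta = 1 chain for the gradient
# estimate; the paper's second extension instead averages statistics over
# chains at all temperatures.
negative_sample = V[-1]
```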
international conference on neural information processing | 2012
Philemon Brakel; Benjamin Schrauwen
Imputing missing values in high-dimensional time series is a difficult problem. There have been some approaches to this problem [11,8] in which neural architectures were trained as probabilistic models of the data. However, we argue that this approach is not optimal. We propose to view temporal neural networks with latent variables as energy-based models and to train them for missing value recovery directly. In this paper we introduce two energy-based models. The first model is based on a one-dimensional convolution and the second model utilizes a recurrent neural network. We demonstrate how ideas from the energy-based learning framework can be used to train these models to recover missing values. The models are evaluated on a motion capture dataset.
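The recovery idea underlying this work, treating the network as an energy function and searching for the missing values that minimize it while the observed values stay fixed, can be illustrated with a small sketch. The energy function below is an arbitrary one-dimensional convolutional stand-in, and the mask, optimizer, and step counts are assumptions; the paper's actual convolutional and recurrent architectures and their training procedure are not reproduced here.

```python
import torch
import torch.nn as nn

# Toy convolutional energy function over a (channels, time) sequence.
# Its architecture is a placeholder, not the model from the paper.
class ConvEnergy(nn.Module):
    def __init__(self, n_channels=3, n_hidden=16):
        super().__init__()
        self.conv = nn.Conv1d(n_channels, n_hidden, kernel_size=5, padding=2)

    def forward(self, x):                        # x: (batch, channels, time)
        # Softplus feature detectors summed into a single scalar energy.
        return nn.functional.softplus(self.conv(x)).sum()

energy = ConvEnergy()

# A sequence with some entries missing (mask == 0 marks missing values).
x_obs = torch.randn(1, 3, 100)
mask = (torch.rand(1, 3, 100) > 0.3).float()

# Free variables for the missing entries; observed entries stay clamped.
x_missing = torch.zeros_like(x_obs, requires_grad=True)
optimizer = torch.optim.Adam([x_missing], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    x = mask * x_obs + (1.0 - mask) * x_missing   # clamp observed values
    loss = energy(x)                              # minimize energy w.r.t. missing values
    loss.backward()
    optimizer.step()

x_imputed = (mask * x_obs + (1.0 - mask) * x_missing).detach()
```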
international conference on learning representations | 2017
Dzmitry Bahdanau; Philemon Brakel; Kelvin Xu; Anirudh Goyal; Ryan Lowe; Joelle Pineau; Aaron C. Courville; Yoshua Bengio
international society for music information retrieval conference | 2011
Sander Dieleman; Philemon Brakel; Benjamin Schrauwen
Journal of Machine Learning Research | 2012
David Verstraeten; Benjamin Schrauwen; Sander Dieleman; Philemon Brakel; Pieter Buteneers; Dejan Pecevski
Journal of Machine Learning Research | 2013
Philemon Brakel; Dirk Stroobandt; Benjamin Schrauwen
arXiv: Learning | 2015
Dzmitry Bahdanau; Dmitriy Serdyuk; Philemon Brakel; Nan Rosemary Ke; Jan Chorowski; Aaron C. Courville; Yoshua Bengio
annual conference of the cognitive science society | 2009
Philemon Brakel; Stefan L. Frank
arXiv: Computation and Language | 2016
Dmitriy Serdyuk; Kartik Audhkhasi; Philemon Brakel; Bhuvana Ramabhadran; Samuel Thomas; Yoshua Bengio