
Publication


Featured research published by Yann N. Dauphin.


international conference on multimodal interfaces | 2013

Combining modality specific deep neural networks for emotion recognition in video

Samira Ebrahimi Kahou; Chris Pal; Xavier Bouthillier; Pierre Froumenty; Caglar Gulcehre; Roland Memisevic; Pascal Vincent; Aaron C. Courville; Yoshua Bengio; Raul Chandias Ferrari; Mehdi Mirza; Sébastien Jean; Pierre-Luc Carrier; Yann N. Dauphin; Nicolas Boulanger-Lewandowski; Abhishek Aggarwal; Jeremie Zumer; Pascal Lamblin; Jean-Philippe Raymond; Guillaume Desjardins; Razvan Pascanu; David Warde-Farley; Atousa Torabi; Arjun Sharma; Emmanuel Bengio; Myriam Côté; Kishore Reddy Konda; Zhenzhou Wu

In this paper we present the techniques used for the University of Montréal's team submissions to the 2013 Emotion Recognition in the Wild Challenge. The challenge is to classify the emotions expressed by the primary human subject in short video clips extracted from feature-length movies. This involves the analysis of video clips of acted scenes lasting approximately one to two seconds, including the audio track, which may contain human voices as well as background music. Our approach combines multiple deep neural networks for different data modalities, including: (1) a deep convolutional neural network for the analysis of facial expressions within video frames; (2) a deep belief net to capture audio information; (3) a deep autoencoder to model the spatio-temporal information produced by the human actions depicted within the entire scene; and (4) a shallow network architecture focused on extracted features of the mouth of the primary human subject in the scene. We discuss each of these techniques, their performance characteristics, and different strategies to aggregate their predictions. Our best single model was a convolutional neural network trained to predict emotions from static frames using two large data sets, the Toronto Face Database and our own set of face images harvested from Google image search, followed by a per-frame aggregation strategy that used the challenge training data. This yielded a test set accuracy of 35.58%. Using our best strategy for aggregating our top-performing models into a single predictor, we were able to produce an accuracy of 41.03% on the challenge test set. These compare favorably to the challenge baseline test set accuracy of 27.56%.
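The aggregation step described above can be sketched as a weighted average of per-modality class distributions. The model names, probabilities, and weights below are illustrative placeholders, not the paper's actual values; in practice the weights would be tuned on the challenge validation data.

```python
import numpy as np

# Seven emotion classes, as in the Emotion Recognition in the Wild Challenge.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Hypothetical per-modality class probabilities for one video clip.
preds = {
    "cnn_faces":   np.array([0.05, 0.05, 0.10, 0.55, 0.10, 0.05, 0.10]),
    "dbn_audio":   np.array([0.10, 0.05, 0.05, 0.40, 0.20, 0.10, 0.10]),
    "ae_activity": np.array([0.10, 0.10, 0.10, 0.30, 0.15, 0.10, 0.15]),
    "mouth_net":   np.array([0.05, 0.05, 0.05, 0.50, 0.15, 0.05, 0.15]),
}

# Weighted averaging of the per-model distributions.
weights = {"cnn_faces": 0.4, "dbn_audio": 0.2, "ae_activity": 0.2, "mouth_net": 0.2}

combined = sum(w * preds[name] for name, w in weights.items())
predicted = EMOTIONS[int(np.argmax(combined))]
```

Because the weights sum to one and each input is a probability distribution, the combined vector is itself a valid distribution over the seven classes.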


european conference on machine learning | 2011

Higher order contractive auto-encoder

Salah Rifai; Grégoire Mesnil; Pascal Vincent; Xavier Muller; Yoshua Bengio; Yann N. Dauphin; Xavier Glorot

We propose a novel regularizer when training an auto-encoder for unsupervised feature extraction. We explicitly encourage the latent representation to contract the input space by regularizing the norm of the Jacobian (analytically) and the Hessian (stochastically) of the encoder's output with respect to its input, at the training points. While the penalty on the Jacobian's norm ensures robustness to tiny corruptions of samples in the input space, constraining the norm of the Hessian extends this robustness when moving further away from the sample. From a manifold learning perspective, balancing this regularization with the auto-encoder's reconstruction objective yields a representation that varies most when moving along the data manifold in input space, and is most insensitive in directions orthogonal to the manifold. The second-order regularization, using the Hessian, penalizes curvature, and thus favors a smooth manifold. We show that our proposed technique, while remaining computationally efficient, yields representations that are significantly better suited for initializing deep architectures than previously proposed approaches, beating state-of-the-art performance on a number of datasets.
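The two penalties can be sketched for a single sigmoid encoder layer: the Jacobian norm has a closed form because each row of the Jacobian factorizes per hidden unit, while the higher-order term is estimated stochastically by comparing Jacobians at corrupted inputs. The layer sizes, corruption scale, and sample count below are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def jacobian_penalty(W, b, x):
    """Analytic squared Frobenius norm of the encoder Jacobian dh/dx
    for a sigmoid encoder h = sigmoid(W @ x + b)."""
    h = sigmoid(W @ x + b)
    # dh_j/dx_i = h_j * (1 - h_j) * W_ji, so the norm factorizes per hidden unit.
    return float(np.sum((h * (1.0 - h)) ** 2 * np.sum(W ** 2, axis=1)))

def hessian_penalty(W, b, x, sigma=0.1, n_samples=4):
    """Stochastic estimate of the higher-order term: penalize how much the
    Jacobian itself changes under small Gaussian corruptions of x."""
    def jac(x_):
        h = sigmoid(W @ x_ + b)
        return (h * (1.0 - h))[:, None] * W
    J = jac(x)
    diffs = [np.sum((J - jac(x + sigma * rng.standard_normal(x.shape))) ** 2)
             for _ in range(n_samples)]
    return float(np.mean(diffs)) / sigma ** 2

W = rng.standard_normal((8, 5)) * 0.5  # 5 inputs, 8 hidden units
b = np.zeros(8)
x = rng.standard_normal(5)
```

In training, both terms would be added, suitably weighted, to the reconstruction loss; the sketch only shows how the penalties themselves are computed.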


IEEE Transactions on Audio, Speech, and Language Processing | 2015

Using recurrent neural networks for slot filling in spoken language understanding

Grégoire Mesnil; Yann N. Dauphin; Kaisheng Yao; Yoshua Bengio; Li Deng; Dilek Hakkani-Tür; Xiaodong He; Larry P. Heck; Gokhan Tur; Dong Yu; Geoffrey Zweig

Semantic slot filling is one of the most challenging problems in spoken language understanding (SLU). In this paper, we propose to use recurrent neural networks (RNNs) for this task, and present several novel architectures designed to efficiently model past and future temporal dependencies. Specifically, we implemented and compared several important RNN architectures, including Elman, Jordan, and hybrid variants. To facilitate reproducibility, we implemented these networks with the publicly available Theano neural network toolkit and completed experiments on the well-known airline travel information system (ATIS) benchmark. In addition, we compared the approaches on two custom SLU data sets from the entertainment and movies domains. Our results show that the RNN-based models outperform the conditional random field (CRF) baseline by 2% in absolute error reduction on the ATIS benchmark. We improve the state of the art by 0.5% in the entertainment domain, and by 6.7% in the movies domain.
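The Elman variant mentioned above can be sketched as a forward pass that emits one slot tag per input word, with the hidden state carrying past context through a hidden-to-hidden recurrence. The toy vocabulary, IOB-style tag set, and random untrained weights below are illustrative, not the actual ATIS inventory or the paper's models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ATIS-style query vocabulary and IOB slot labels (illustrative only).
vocab = {"show": 0, "flights": 1, "from": 2, "boston": 3, "to": 4, "denver": 5}
tags = ["O", "B-fromloc", "B-toloc"]

V, H, T = len(vocab), 16, len(tags)
E  = rng.standard_normal((V, H)) * 0.1   # word embeddings
Wx = rng.standard_normal((H, H)) * 0.1   # input-to-hidden weights
Wh = rng.standard_normal((H, H)) * 0.1   # hidden-to-hidden (Elman recurrence)
Wy = rng.standard_normal((H, T)) * 0.1   # hidden-to-tag weights

def elman_tag(words):
    """Forward pass of an Elman RNN: one slot tag per input word."""
    h = np.zeros(H)
    out = []
    for w in words:
        # New hidden state depends on the current word and the previous state.
        h = np.tanh(E[vocab[w]] @ Wx + h @ Wh)
        logits = h @ Wy
        out.append(tags[int(np.argmax(logits))])
    return out

pred = elman_tag(["show", "flights", "from", "boston", "to", "denver"])
```

A Jordan variant would instead feed the previous output (rather than the hidden state) back into the recurrence; training either one would fit the weights with a per-token cross-entropy loss.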


neural information processing systems | 2014

Identifying and attacking the saddle point problem in high-dimensional non-convex optimization

Yann N. Dauphin; Razvan Pascanu; Caglar Gulcehre; Kyunghyun Cho; Surya Ganguli; Yoshua Bengio


international conference on machine learning | 2017

Convolutional Sequence to Sequence Learning

Jonas Gehring; Michael Auli; Denis Yarats; Yann N. Dauphin


international conference on machine learning | 2013

Better Mixing via Deep Representations

Yoshua Bengio; Grégoire Mesnil; Yann N. Dauphin; Salah Rifai


international conference on machine learning | 2016

Language Modeling with Gated Convolutional Networks

Yann N. Dauphin; Angela Fan; Michael Auli


international conference on machine learning | 2012

Unsupervised and Transfer Learning Challenge: a Deep Learning Approach

Grégoire Mesnil; Yann N. Dauphin; Xavier Glorot; Salah Rifai; Yoshua Bengio; Ian J. Goodfellow; Erick Lavoie; Xavier Muller; Guillaume Desjardins; David Warde-Farley; Pascal Vincent; Aaron C. Courville; James Bergstra


Journal on Multimodal User Interfaces | 2016

EmoNets: Multimodal deep learning approaches for emotion recognition in video

Samira Ebrahimi Kahou; Xavier Bouthillier; Pascal Lamblin; Caglar Gulcehre; Vincent Michalski; Kishore Reddy Konda; Sébastien Jean; Pierre Froumenty; Yann N. Dauphin; Nicolas Boulanger-Lewandowski; Raul Chandias Ferrari; Mehdi Mirza; David Warde-Farley; Aaron C. Courville; Pascal Vincent; Roland Memisevic; Chris Pal; Yoshua Bengio


neural information processing systems | 2011

The Manifold Tangent Classifier

Salah Rifai; Yann N. Dauphin; Pascal Vincent; Yoshua Bengio; Xavier Muller

Collaboration


Dive into Yann N. Dauphin's collaborations.

Top Co-Authors

Yoshua Bengio, Université de Montréal
Pascal Vincent, Université de Montréal
Salah Rifai, Université de Montréal
Xavier Glorot, Université de Montréal
Xavier Muller, Université de Montréal