Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Salah Rifai is active.

Publication


Featured research published by Salah Rifai.


European Conference on Computer Vision | 2012

Disentangling factors of variation for facial expression recognition

Salah Rifai; Yoshua Bengio; Aaron C. Courville; Pascal Vincent; Mehdi Mirza

We propose a semi-supervised approach to the task of emotion recognition in 2D face images, using recent ideas in deep learning for handling the factors of variation present in the data. An emotion classification algorithm should be robust to both (1) the variations that remain due to the pose of the face in the image after centering and alignment, and (2) the identity or morphology of the face. In order to achieve this invariance, we propose to learn a hierarchy of features in which we gradually filter out the factors of variation arising from both (1) and (2). We address (1) by using a multi-scale contractive convolutional network (CCNET) to obtain invariance to translations of the facial traits in the image. Using the feature representation produced by the CCNET, we train a Contractive Discriminative Analysis (CDA) feature extractor, a novel variant of the Contractive Auto-Encoder (CAE) designed to learn a representation that separates the emotion-related factors from the others (which mostly capture the subject identity, and what is left of pose after the CCNET). This system beats the state of the art on a recently proposed dataset for facial expression recognition, the Toronto Face Database, moving the state-of-the-art accuracy from 82.4% to 85.0%, while the CCNET and CDA improve the accuracy of a standard CAE by 8%.
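The CDA extractor and the CCNET architecture are specific to this paper, but the contractive penalty they build on has a simple closed form for a sigmoid encoder: the squared Frobenius norm of the Jacobian of h(x) = sigmoid(Wx + b) reduces to a sum over hidden units of (h_j(1 - h_j))^2 ||W_j||^2. A minimal numpy sketch of that penalty, with illustrative layer sizes (not the paper's):

```python
import numpy as np

# Sketch of the contractive (Jacobian) penalty that the CAE/CDA features build on.
# Illustrative only: the CCNET architecture and the CDA variant are not reproduced here.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_penalty(W, b, x):
    """Squared Frobenius norm of the encoder Jacobian dh/dx at a single input x."""
    h = sigmoid(W @ x + b)              # hidden representation, shape (n_hidden,)
    dh_sq = (h * (1.0 - h)) ** 2        # squared derivative of each sigmoid unit
    row_norms = np.sum(W ** 2, axis=1)  # ||W_j||^2 for each hidden unit j
    return np.sum(dh_sq * row_norms)

# Hypothetical sizes: 100 hidden units over a 784-dimensional input.
rng = np.random.default_rng(0)
W, b = rng.normal(scale=0.1, size=(100, 784)), np.zeros(100)
x = rng.random(784)
print(contractive_penalty(W, b, x))
```

In training, this term would be added, with a weight chosen by validation, to the reconstruction loss of the auto-encoder.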


European Conference on Machine Learning | 2011

Higher order contractive auto-encoder

Salah Rifai; Grégoire Mesnil; Pascal Vincent; Xavier Muller; Yoshua Bengio; Yann N. Dauphin; Xavier Glorot

We propose a novel regularizer for training an auto-encoder for unsupervised feature extraction. We explicitly encourage the latent representation to contract the input space by regularizing the norm of the Jacobian (analytically) and the Hessian (stochastically) of the encoder's output with respect to its input, at the training points. While the penalty on the Jacobian's norm ensures robustness to tiny corruptions of samples in the input space, constraining the norm of the Hessian extends this robustness when moving further away from the sample. From a manifold learning perspective, balancing this regularization with the auto-encoder's reconstruction objective yields a representation that varies most when moving along the data manifold in input space, and is most insensitive in directions orthogonal to the manifold. The second-order regularization, using the Hessian, penalizes curvature and thus favors a smooth manifold. We show that our proposed technique, while remaining computationally efficient, yields representations that are significantly better suited for initializing deep architectures than previously proposed approaches, beating state-of-the-art performance on a number of datasets.
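The abstract describes the two penalties precisely enough to illustrate them: the Jacobian of a sigmoid encoder is available analytically, and the Hessian term can be approximated stochastically by comparing the Jacobian at a point with Jacobians at nearby corrupted points. A hedged numpy sketch, assuming a sigmoid encoder and illustrative values for the corruption scale and sample count:

```python
import numpy as np

# Sketch of the higher-order contractive penalty: analytic Jacobian norm plus a
# stochastic estimate of the Hessian norm via E||J(x) - J(x + eps)||^2.
# sigma, n_samples and the loss weights mentioned below are illustrative choices.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encoder_jacobian(W, b, x):
    """Analytic Jacobian of h(x) = sigmoid(W x + b): J[i, j] = h_i (1 - h_i) W[i, j]."""
    h = sigmoid(W @ x + b)
    return (h * (1.0 - h))[:, None] * W

def higher_order_penalty(W, b, x, sigma=0.1, n_samples=4, rng=None):
    rng = rng or np.random.default_rng()
    J = encoder_jacobian(W, b, x)
    jacobian_term = np.sum(J ** 2)                    # ||J(x)||_F^2, computed exactly
    hessian_term = 0.0
    for _ in range(n_samples):                        # stochastic second-order estimate
        eps = rng.normal(scale=sigma, size=x.shape)
        hessian_term += np.sum((J - encoder_jacobian(W, b, x + eps)) ** 2)
    return jacobian_term, hessian_term / n_samples

# A full training objective would combine the reconstruction error with
# lambda * jacobian_term + gamma * hessian_term, with lambda and gamma tuned.
```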


ICPRAM (Selected Papers) | 2015

Unsupervised Learning of Semantics of Object Detections for Scene Categorization

Grégoire Mesnil; Salah Rifai; Antoine Bordes; Xavier Glorot; Yoshua Bengio; Pascal Vincent

Classifying scenes (e.g. into “street”, “home” or “leisure”) is an important but complicated task, because images come with variability, ambiguity, and a wide range of illumination and scale conditions. Standard approaches build an intermediate representation of the global image and learn classifiers on it. Recently, it has been proposed to depict an image as an aggregation of the objects it contains: the representation on which classifiers are trained is composed of many heterogeneous feature vectors derived from various object detectors. In this paper, we study different approaches to efficiently learn contextual semantics out of these object detections. We use the features provided by Object-Bank [24] (177 different object detectors producing 252 attributes each), and show on several benchmarks for scene categorization that careful combinations, taking the structure of the data into account, greatly improve over the original results (by +5 to +11%) while drastically reducing the dimensionality of the representation by 97% (from 44,604 to 1,000). We also show that the uncertainty associated with the object detectors hampers the use of external semantic knowledge to improve the combination of detectors, unlike our unsupervised learning approach.
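The abstract gives the block structure of the input (177 detectors × 252 attributes = 44,604 dimensions) and the target size (about 1,000), but not the exact unsupervised method. The sketch below therefore uses per-detector PCA purely as a stand-in, to show how structure-aware, block-wise compression of Object-Bank features might look; the output dimensionality per detector is hypothetical.

```python
import numpy as np

# Stand-in sketch: compress Object-Bank features block by block (one block per
# detector), using PCA as a placeholder for the paper's unsupervised learning step.

N_DETECTORS, N_ATTRIBUTES = 177, 252   # 177 * 252 = 44,604 input dimensions
DIMS_PER_DETECTOR = 5                  # hypothetical: 177 * 5 = 885 output dimensions

def pca_fit_transform(X, k):
    """Project the rows of X onto their top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def compress_object_bank(features, k=DIMS_PER_DETECTOR):
    """features: (n_images, 44604) -> (n_images, 177 * k), reduced one detector block at a time."""
    blocks = features.reshape(features.shape[0], N_DETECTORS, N_ATTRIBUTES)
    reduced = [pca_fit_transform(blocks[:, d, :], k) for d in range(N_DETECTORS)]
    return np.concatenate(reduced, axis=1)

# Random data standing in for Object-Bank outputs on a small batch of images.
rng = np.random.default_rng(0)
fake_features = rng.random((32, N_DETECTORS * N_ATTRIBUTES))
print(compress_object_bank(fake_features).shape)   # -> (32, 885)
```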


International Conference on Machine Learning | 2011

Contractive Auto-Encoders: Explicit Invariance During Feature Extraction

Salah Rifai; Pascal Vincent; Xavier Muller; Xavier Glorot; Yoshua Bengio


International Conference on Machine Learning | 2013

Better Mixing via Deep Representations

Yoshua Bengio; Grégoire Mesnil; Yann N. Dauphin; Salah Rifai


International Conference on Machine Learning | 2012

Unsupervised and Transfer Learning Challenge: a Deep Learning Approach

Grégoire Mesnil; Yann N. Dauphin; Xavier Glorot; Salah Rifai; Yoshua Bengio; Ian J. Goodfellow; Erick Lavoie; Xavier Muller; Guillaume Desjardins; David Warde-Farley; Pascal Vincent; Aaron C. Courville; James Bergstra


Neural Information Processing Systems | 2011

The Manifold Tangent Classifier

Salah Rifai; Yann N. Dauphin; Pascal Vincent; Yoshua Bengio; Xavier Muller


International Conference on Machine Learning | 2012

A Generative Process for sampling Contractive Auto-Encoders

Salah Rifai; Yoshua Bengio; Pascal Vincent; Yann N. Dauphin


arXiv: Artificial Intelligence | 2011

Adding noise to the input of a model trained with a regularized objective

Salah Rifai; Xavier Glorot; Yoshua Bengio; Pascal Vincent


International Conference on Artificial Intelligence and Statistics | 2011

Deep Learners Benefit More from Out-of-Distribution Examples

Yoshua Bengio; Frédéric Bastien; Arnaud Bergeron; Nicolas Boulanger-Lewandowski; Thomas M. Breuel; Youssouf Chherawala; Moustapha Cissé; Myriam Côté; Dumitru Erhan; Jeremy Eustache; Xavier Glorot; Xavier Muller; Sylvain Pannetier Lebeuf; Razvan Pascanu; Salah Rifai; François Savard; Guillaume Sicard

Collaboration


Dive into Salah Rifai's collaborations.

Top Co-Authors

Yoshua Bengio
Université de Montréal

Pascal Vincent
Université de Montréal

Xavier Glorot
Université de Montréal

Xavier Muller
Université de Montréal