Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Séverine Dubuisson is active.

Publication


Featured research published by Séverine Dubuisson.


Signal Processing: Image Communication | 2002

A solution for facial expression representation and recognition

Séverine Dubuisson; Franck Davoine; Mylène Masson

The design of a recognition system requires careful attention to pattern representation and classifier design. Some statistical approaches choose those features, in a d-dimensional initial space, which allow sample vectors belonging to different categories to occupy compact and disjoint regions in a low-dimensional subspace. The effectiveness of the representation subspace is then determined by how well samples from different classes can be separated. In this paper, we propose a feature selection process that sorts the principal components, generated by principal component analysis, in the order of their importance for solving a specific recognition task. This method provides a low-dimensional representation subspace which has been optimized to improve the classification accuracy. We focus on the problem of facial expression recognition to demonstrate this technique. We also propose a decision tree-based classifier that provides a "coarse-to-fine" classification of new samples by successive projections onto more and more precise representation subspaces. Results confirm, first, that the choice of the representation strongly influences the classification results and, second, that a classifier has to be designed for a specific representation.
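A minimal NumPy sketch of the idea of re-ranking principal components by how well they separate classes. The scoring criterion below (a between/within-class variance ratio of the projected samples) is an illustrative stand-in, not necessarily the paper's exact selection score:

```python
import numpy as np

def rank_components_by_separability(X, y):
    """Rank principal components by a between/within-class variance
    ratio of the projected samples (illustrative criterion)."""
    Xc = X - X.mean(axis=0)
    # Principal axes via SVD of the centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt.T                      # samples projected on each component
    scores = []
    for j in range(proj.shape[1]):
        col = proj[:, j]
        groups = [col[y == c] for c in np.unique(y)]
        within = sum(g.var() * len(g) for g in groups) / len(col)
        between = col.var() - within      # total variance = between + within
        scores.append(between / (within + 1e-12))
    order = np.argsort(scores)[::-1]      # most discriminative first
    return order, Vt[order]

# Toy example: the class separation lies along the *lower*-variance axis,
# so variance-ordered PCA would rank the useful component second.
rng = np.random.default_rng(0)
A = rng.normal([0, -2], [5.0, 0.5], size=(100, 2))
B = rng.normal([0,  2], [5.0, 0.5], size=(100, 2))
X = np.vstack([A, B])
y = np.array([0] * 100 + [1] * 100)
order, axes = rank_components_by_separability(X, y)
```

Here the separability ranking promotes the second principal component (the one that actually separates the classes) ahead of the first, which is the point of sorting by task relevance rather than by explained variance.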


Computer Vision and Image Understanding | 2012

Fragments based tracking with adaptive cue integration

Erkut Erdem; Séverine Dubuisson; Isabelle Bloch

In this paper, we address the issue of part-based tracking by proposing a new fragments-based tracker. The proposed tracker enhances the recently suggested FragTrack algorithm to employ an adaptive cue integration scheme. This is done by embedding the original tracker into a particle filter framework, associating a reliability value with each fragment that describes a different part of the target object, and dynamically adjusting these reliabilities at each frame with respect to the current context. In particular, the vote of each fragment contributes to the joint tracking result according to its reliability, and this allows us to achieve better accuracy in handling partial occlusions and pose changes while preserving and even improving the efficiency of the original tracker. In order to demonstrate the performance and the effectiveness of the proposed algorithm, we present qualitative and quantitative results on a number of challenging video sequences.


International Conference on Computer Vision | 2015

Pairwise Conditional Random Forests for Facial Expression Recognition

Arnaud Dapogny; Kevin Bailly; Séverine Dubuisson

Facial expression can be seen as the dynamic variation of one's appearance over time. Successful recognition thus involves finding representations of high-dimensional spatio-temporal patterns that can be generalized to unseen facial morphologies and variations of the expression dynamics. In this paper, we propose to learn Random Forests from heterogeneous derivative features (e.g. facial fiducial point movements or texture variations) upon pairs of images. Those forests are conditioned on the expression label of the first frame to reduce the variability of the ongoing expression transitions. When testing on a specific frame of a video, pairs are created between this frame and the previous ones. Predictions for each previous frame are used to draw trees from Pairwise Conditional Random Forests (PCRF) whose pairwise outputs are averaged over time to produce robust estimates. As such, PCRF appears as a natural extension of Random Forests for learning spatio-temporal patterns, which leads to significant improvements over standard Random Forests as well as state-of-the-art approaches on several facial expression benchmarks.


Pattern Recognition | 2012

Visual tracking by fusing multiple cues with context-sensitive reliabilities

Erkut Erdem; Séverine Dubuisson; Isabelle Bloch

Many researchers argue that fusing multiple cues increases the reliability and robustness of visual tracking. However, how the multi-cue integration is realized during tracking is still an open issue. In this work, we present a novel data fusion approach for multi-cue tracking using a particle filter. Our method differs from previous approaches in a number of ways. First, we carry out the integration of cues both in making predictions about the target object and in verifying them through observations. Our second and more significant contribution is that both stages of integration directly depend on the dynamically changing reliabilities of visual cues. These two aspects of our method allow the tracker to easily adapt itself to changes in the context, and accordingly improve the tracking accuracy by resolving ambiguities.
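The reliability-weighted correction step of such a tracker can be sketched as follows. The geometric-mean fusion and the `update_reliabilities` rule are hypothetical simplifications, not the paper's exact context-sensitive scheme:

```python
import numpy as np

def fuse_likelihoods(cue_likelihoods, reliabilities):
    """Combine per-cue particle likelihoods p_k(i) with reliability
    weights r_k via a weighted geometric mean: prod_k p_k(i)**r_k."""
    L = np.ones_like(cue_likelihoods[0])
    for p, r in zip(cue_likelihoods, reliabilities):
        L *= p ** r
    return L / L.sum()                    # normalized particle weights

def update_reliabilities(reliabilities, cue_agreement, lr=0.3):
    """Move each cue's reliability toward its current agreement with
    the joint estimate, then renormalize (hypothetical update rule)."""
    r = (1 - lr) * np.asarray(reliabilities) + lr * np.asarray(cue_agreement)
    return r / r.sum()

color = np.array([0.6, 0.3, 0.1])         # cue 1 likelihoods for 3 particles
edges = np.array([0.2, 0.3, 0.5])         # cue 2 likelihoods
w = fuse_likelihoods([color, edges], [0.7, 0.3])
r = update_reliabilities([0.7, 0.3], cue_agreement=[0.2, 0.8])
```

With the color cue weighted more heavily, the fused weights follow the color likelihoods; as the reliabilities shift over frames, the same particles would be re-weighted differently, which is the adaptive behavior the abstract describes.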


International Conference on Image Processing | 2010

The computation of the Bhattacharyya distance between histograms without histograms

Séverine Dubuisson

In this paper we present a new method for fast histogram computation and its extension to bin-to-bin histogram distance computation. The idea consists in using the information of spatial differences between images, or between regions of images (a current one and a reference one), and encoding it into a specific data structure: a tree. The Bhattacharyya distance between two histograms is then computed using an incremental approach that avoids building histograms: we only need the histograms of the reference image, and the spatial differences between the reference and the current image, to compute this distance through an updating process. We compare our approach with the well-known Integral Histogram one, and obtain better results in terms of processing time while reducing the memory footprint. We show theoretically and with experimental results the superiority of our approach in many cases. Finally, we demonstrate the advantages of our approach in a real visual tracking application using a particle filter framework, by improving the computation time of its correction step.
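The incremental idea can be illustrated on a toy histogram: keep the Bhattacharyya coefficient as a running sum and patch only the bins whose mass changed between frames, instead of recomputing the whole sum. This is a simplified sketch; the paper encodes the changed pixels in a tree structure:

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance from the coefficient sum_b sqrt(p_b q_b)."""
    return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(p * q))))

def incremental_update(coeff, p, q, bin_from, bin_to, delta):
    """Move `delta` mass of q from bin_from to bin_to and patch the
    running coefficient using only the two affected terms."""
    for b, d in ((bin_from, -delta), (bin_to, +delta)):
        coeff -= np.sqrt(p[b] * q[b])     # remove the stale term
        q[b] += d                         # apply the pixel-change mass
        coeff += np.sqrt(p[b] * q[b])     # add the updated term
    return coeff

p = np.array([0.25, 0.25, 0.25, 0.25])    # reference histogram
q = np.array([0.40, 0.30, 0.20, 0.10])    # current histogram
coeff = np.sum(np.sqrt(p * q))            # built once
# One pixel moved from bin 0 to bin 3 between frames:
coeff = incremental_update(coeff, p, q, bin_from=0, bin_to=3, delta=0.1)
full = np.sum(np.sqrt(p * q))             # from-scratch recomputation
```

The patched coefficient matches the full recomputation exactly, but costs O(changed bins) rather than O(all bins) per frame, which is where the speed-up in the correction step comes from.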


Systems, Man, and Cybernetics | 2011

Integration of Fuzzy Spatial Information in Tracking Based on Particle Filtering

Nicolas Widynski; Séverine Dubuisson; Isabelle Bloch

In this paper, we propose a novel method to introduce spatial information in particle filters. This information may be expressed as spatial relations (orientation, distance, etc.), velocity, scaling, or shape information. Spatial information is modeled in a generic fuzzy-set framework. The fuzzy models are then introduced in the particle filter and automatically define transition and prior spatial distributions. We also propose an efficient importance distribution to produce relevant particles, which is dedicated to the proposed fuzzy framework. The fuzzy modeling provides flexibility both in the semantics of information and in the transitions from one instant to another one. This allows one to take into account situations where a tracked object changes its direction in a quite abrupt way and where poor prior information on dynamics is available, as demonstrated on synthetic data. As an illustration, two tests on real video sequences are performed in this paper. The first one concerns a classical tracking problem and shows that our approach efficiently tracks objects with complex and unknown dynamics, outperforming classical filtering techniques while using only a small number of particles. In the second experiment, we show the flexibility of our approach for modeling: Fuzzy shapes are modeled in a generic way and allow the tracking of objects with changing shape.
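As a toy illustration of modeling a spatial relation as a fuzzy set, the membership function below scores how well a point satisfies "to the right of" a reference, as a linear function of the angle between them. This is a deliberately simple model; the paper's fuzzy framework is far more general:

```python
import math

def mu_right_of(ref, obj):
    """Fuzzy membership of 'obj is to the right of ref': 1 along the
    positive x direction, decreasing linearly to 0 at 90 degrees."""
    ang = abs(math.atan2(obj[1] - ref[1], obj[0] - ref[0]))
    return max(0.0, 1.0 - (2.0 / math.pi) * ang)

# In a particle filter, such memberships could weight particles so that
# states violating the spatial relation are penalized (sketch only).
exactly_right = mu_right_of((0, 0), (1, 0))    # angle 0
diagonal = mu_right_of((0, 0), (1, 1))         # angle pi/4
above = mu_right_of((0, 0), (0, 1))            # angle pi/2
```

Because the membership is graded rather than binary, a particle that only partially satisfies the relation keeps a reduced but nonzero weight, which is the flexibility the abstract attributes to the fuzzy modeling.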


Signal Processing: Image Communication | 2012

Motion compensation based on tangent distance prediction for video compression

Jonathan Fabrizio; Séverine Dubuisson; Dominique Béréziat

We present a new algorithm for motion compensation that uses a motion estimation method based on tangent distance. The method is compared with a Block-Matching-based approach in various common situations. Whereas Block-Matching algorithms usually only predict the positions of blocks over time, our method also predicts the evolution of the pixels within these blocks. The prediction error is then drastically decreased. The method is implemented in the Theora codec, showing that this algorithm improves the video codec's performance.

Highlights:
- Provides a new motion compensation algorithm for video compression.
- Compares the proposed method with a classical block-matching strategy.
- Improves compression rates in the Theora codec.
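For context, a minimal block-matching step of the kind the paper improves on can be sketched as follows; tangent-distance matching would replace the plain sum-of-absolute-differences (SAD) criterion with a distance that is invariant to small deformations of the block:

```python
import numpy as np

def best_displacement(ref, cur, y, x, bs=4, radius=2):
    """Exhaustive block matching: find the displacement (dy, dx) within
    a search window that minimizes the SAD between the block at (y, x)
    in the current frame and the displaced block in the reference."""
    block = cur[y:y + bs, x:x + bs]
    best, best_d = None, np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and 0 <= xx and yy + bs <= ref.shape[0] and xx + bs <= ref.shape[1]:
                d = np.abs(ref[yy:yy + bs, xx:xx + bs] - block).sum()
                if d < best_d:
                    best, best_d = (dy, dx), d
    return best, best_d

ref = np.zeros((8, 8)); ref[2:6, 2:6] = 1.0   # bright square in reference
cur = np.zeros((8, 8)); cur[3:7, 3:7] = 1.0   # same square shifted by (1, 1)
disp, err = best_displacement(ref, cur, y=3, x=3)
```

Plain matching like this only recovers a per-block translation; predicting how the pixels inside the block evolve (as the tangent-distance method does) is what reduces the residual the codec must encode.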


Advanced Concepts for Intelligent Vision Systems | 2006

Comparison of statistical and shape-based approaches for non-rigid motion tracking with missing data using a particle filter

Abir El Abed; Séverine Dubuisson; Dominique Béréziat

Recent developments in dynamic contour tracking in video sequences are based on prediction using dynamical models. The parameters of these models are fixed by learning the dynamics from a training set to represent plausible motions, such as constant velocity or critically damped oscillations. Thus, a problem arises in cases of non-constant velocity and unknown interframe motion, i.e. unlearned motions, and the CONDENSATION algorithm fails to track the dynamic contour. The main contribution of this work is to propose an adaptive dynamical model whose parameters are based on non-linear/non-Gaussian observation models. We study two different approaches, one statistical and one shape-based, to estimate the deformation of an object and track complex dynamics, without learning either the dynamical or the deformation models from a training set, and under the constraints of missing data, non-linear deformation and unknown interframe motion. The developed approaches have been successfully tested on several sequences.


Computer Vision and Image Understanding | 2012

Fuzzy spatial constraints and ranked partitioned sampling approach for multiple object tracking

Nicolas Widynski; Séverine Dubuisson; Isabelle Bloch

While particle filters are now widely used for object tracking in videos, the case of multiple object tracking still raises a number of issues. Among them, a first, and very important, problem concerns the exponential increase of the number of particles with the number of objects to be tracked, which can make some practical applications intractable. To achieve good tracking performance, we propose to use a Partitioned Sampling method in the estimation process, with an additional feature concerning the ordering sequence in which the objects are processed. We call it Ranked Partitioned Sampling, where the optimal order in which objects should be processed and tracked is estimated jointly with the object state. Another essential point concerns the modeling of possible interactions between objects. As another contribution, we propose to represent these interactions within a formal framework relying on fuzzy set theory. This allows us to easily model spatial constraints between objects in a general and formal way. The combination of these two contributions was tested on typical videos exhibiting difficult situations such as partial or total occlusions, and the appearance or disappearance of objects. We show the benefit of using these two contributions jointly, in comparison to classical approaches, through multiple object tracking and articulated object tracking experiments on real video sequences. The results show that our approach produces fewer tracking errors than the classical Partitioned Sampling method, without the need to increase the number of particles.


IEEE International Conference on Automatic Face and Gesture Recognition | 2015

Dynamic facial expression recognition by joint static and multi-time gap transition classification

Arnaud Dapogny; Kevin Bailly; Séverine Dubuisson

Automatic facial expression classification is a challenging problem for developing intelligent human-computer interaction systems. In order to take into account the expression dynamics, existing works usually make the assumption that a specific facial expression is displayed with a pre-segmented evolution, i.e. starting from neutral and finishing on an apex frame. In this paper, we propose a method to train a transition classifier from pairs of images. This transition classifier is applied at multiple time gaps and the output probabilities are fused along with a static estimation. We eventually show that our approach yields state-of-the-art accuracy on popular datasets without exploiting any such prior on the segmentation of the expression.
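A minimal reading of the fusion step can be sketched as follows; the plain averaging over time gaps and the blending weight `alpha` are hypothetical simplifications of the paper's actual fusion:

```python
import numpy as np

def fuse(static_probs, transition_probs_per_gap, alpha=0.5):
    """Blend a static per-frame expression estimate with transition
    classifier outputs averaged over several time gaps, then
    renormalize (alpha is a hypothetical blending weight)."""
    trans = np.mean(transition_probs_per_gap, axis=0)
    out = alpha * np.asarray(static_probs) + (1 - alpha) * trans
    return out / out.sum()

static = [0.2, 0.5, 0.3]                     # e.g. P(neutral, happy, angry)
gaps = [[0.1, 0.7, 0.2],                     # transition output, gap of 1 frame
        [0.3, 0.6, 0.1]]                     # transition output, gap of 5 frames
probs = fuse(static, gaps)
```

Using several gaps means no single pre-segmented neutral-to-apex evolution is assumed: each gap contributes its own view of the ongoing transition, and the static term keeps the estimate sensible even when no transition is visible.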

Collaboration


Dive into Séverine Dubuisson's collaboration.

Top Co-Authors

Isabelle Bloch

Université Paris-Saclay

Christophe Gonzales

Pierre-and-Marie-Curie University


Franck Davoine

University of Technology of Compiègne
