
Publication


Featured research published by Isabel Gonzalez.


Affective Computing and Intelligent Interaction | 2011

Context-independent facial action unit recognition using shape and Gabor phase information

Isabel Gonzalez; Hichem Sahli; Valentin Enescu; Werner Verhelst

In this paper we investigate the combination of shape features and phase-based Gabor features for context-independent action unit (AU) recognition. For our recognition goal, three regions of interest have been devised that efficiently capture the AU activation/deactivation areas. In each of these regions, a feature set consisting of geometrical features and histograms of Gabor phase appearance-based features has been estimated. For each action unit, we applied AdaBoost for feature selection and used a binary SVM for context-independent classification. Using the Cohn-Kanade database, we achieved an average F1 score of 93.8% and an average area under the ROC curve of 97.9% for the 11 AUs considered.
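
A minimal sketch of the classification stage described in this abstract, assuming precomputed geometric and Gabor-phase feature vectors; the feature extraction, the Cohn-Kanade loading helper, and all hyperparameters are illustrative assumptions, not the paper's implementation:

```python
# Per-AU pipeline sketch: AdaBoost ranks features, the top-ranked ones are
# kept, and a binary SVM decides presence/absence of each action unit (AU).
# X is (n_frames, n_features); y is a 0/1 vector for one AU.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def train_au_classifier(X, y, n_selected=100):
    """Select features with AdaBoost, then fit a binary SVM on them."""
    booster = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X, y)
    selected = np.argsort(booster.feature_importances_)[::-1][:n_selected]
    svm = SVC(kernel="rbf", probability=True).fit(X[:, selected], y)
    return selected, svm

# Hypothetical usage: one detector per AU, evaluated with cross-validation.
# X, y_per_au = load_cohn_kanade_features()        # assumed helper
# for au, y in y_per_au.items():
#     selected, svm = train_au_classifier(X, y)
#     print(au, cross_val_score(SVC(), X[:, selected], y, scoring="f1").mean())
```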


Computational and Mathematical Methods in Medicine | 2014

Objectifying Facial Expressivity Assessment of Parkinson’s Patients: Preliminary Study

Peng Wu; Isabel Gonzalez; Georgios Patsis; Dongmei Jiang; Hichem Sahli; Eric Kerckhofs; Marie Vandekerckhove

Patients with Parkinson's disease (PD) can exhibit a reduction of spontaneous facial expression, designated as "facial masking," a symptom in which facial muscles become rigid. To improve clinical assessment of facial expressivity in PD, this work attempts to quantify the dynamic facial expressivity (facial activity) of PD patients by automatically recognizing facial action units (AUs) and estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To elicit spontaneous facial expressions resembling those typically triggered by emotions, six emotions (amusement, sadness, anger, disgust, surprise, and fear) were induced using movie clips. During the movie clips, physiological signals (facial electromyography (EMG) and electrocardiogram (ECG)) and frontal face video of the participants were recorded. The participants were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion manipulation by evaluating the participants' self-reports. The disgust-induced emotion was reported significantly more strongly than the other emotions, so we focused our analysis on the data recorded while participants watched the disgust movie clips. The proposed facial expressivity assessment approach captured differences in facial expressivity between PD patients and controls. Differences between PD patients at different stages of disease progression were also observed.


Proceedings of the 7th International Conference on Methods and Techniques in Behavioral Research | 2010

Automatic recognition of lower facial action units

Isabel Gonzalez; Hichem Sahli; Werner Verhelst

The face is an important source of information in multimodal communication. Facial expressions are generated by contractions of facial muscles, which lead to subtle changes in the area of the eyelids, eyebrows, nose, lips, and skin texture, often revealed by wrinkles and bulges. To measure these subtle changes, Ekman et al. [5] developed the Facial Action Coding System (FACS). FACS is a human-observer-based system designed to detect subtle changes in facial features, and it describes facial expressions in terms of action units (AUs). We present a technique to automatically recognize lower facial action units, independently from one another. Even though we do not explicitly take AU combinations into account, which makes the classification task harder, an average F1 score of 94.83% is achieved.


IEEE Transactions on Affective Computing | 2017

Leveraging the Bayesian Filtering Paradigm for Vision-Based Facial Affective State Estimation

Meshia Cédric Oveneke; Isabel Gonzalez; Valentin Enescu; Dongmei Jiang; Hichem Sahli

Estimating a person's affective state from facial information is an essential capability for social interaction. Automating such a capability has therefore increasingly driven multidisciplinary research for the past decades. At the heart of this issue are very challenging signal processing and artificial intelligence problems driven by the inherent complexity of human affect. We therefore propose a principled framework for designing automated systems capable of continuously estimating the human affective state from an incoming stream of images. First, we model human affect as a dynamical system and define the affective state in terms of valence, arousal, and their higher-order derivatives. We then pose the affective state estimation problem as a Bayesian filtering problem and provide a solution based on Kalman filtering (KF) for probabilistic reasoning over time, combined with multiple-instance sparse Gaussian processes (MI-SGP) for inferring affect-related measurements from image sequences. We quantitatively and qualitatively evaluate our proposed framework on the AVEC 2012 and AVEC 2014 benchmark datasets and obtain state-of-the-art results using the baseline features as input to our MI-SGP-KF model. We therefore believe that leveraging the Bayesian filtering paradigm can pave the way for further enhancing the design of automated systems for affective state estimation.
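
A minimal sketch of the Kalman-filtering half of this framework, assuming a constant-velocity model over valence and arousal; the MI-SGP regressor is replaced by a hypothetical mi_sgp_predict placeholder, and the frame rate and noise covariances are assumptions rather than the paper's settings:

```python
# The affective state is [valence, arousal, d(valence)/dt, d(arousal)/dt];
# a Kalman filter smooths noisy per-frame valence/arousal measurements.
import numpy as np

dt = 1.0 / 25.0                          # assumed frame rate
F = np.eye(4); F[0, 2] = F[1, 3] = dt    # constant-velocity dynamics
H = np.eye(2, 4)                         # we only measure valence and arousal
Q = 1e-3 * np.eye(4)                     # process noise (assumed)
R = 1e-1 * np.eye(2)                     # measurement noise (assumed)

x, P = np.zeros(4), np.eye(4)            # initial state and covariance

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement z = [valence, arousal]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# for frame in video_stream:             # hypothetical loop
#     z = mi_sgp_predict(frame)          # placeholder for the MI-SGP regressor
#     x, P = kalman_step(x, P, z)
#     valence, arousal = x[0], x[1]
```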


Multimedia Tools and Applications | 2015

Recognition of facial actions and their temporal segments based on duration models

Isabel Gonzalez; Francesco Cartella; Valentin Enescu; Hichem Sahli

Being able to automatically analyze fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), and their temporal models (i.e., sequences of the temporal phases neutral, onset, apex, and offset), in face videos would greatly benefit facial expression recognition systems. Previous works considered combining, per AU, a discriminative frame-based Support Vector Machine (SVM) and a dynamic generative Hidden Markov Model (HMM) to detect the presence of the AU in question and its temporal segments in an input image sequence. The major drawback of HMMs is that they do not model time-dependent dynamics such as those of AUs well, especially when dealing with spontaneous expressions. To alleviate this problem, in this paper we exploit efficient duration modeling of the temporal behavior of AUs, and we propose a hidden semi-Markov model (HSMM) and a variable-duration hidden Markov model (VDHMM) to recognize the dynamics of AUs. Such models allow the parameterization and inference of the AUs' state duration distributions. Within our system, geometrical and appearance-based measurements, as well as their first derivatives, modeling both the dynamics and the appearance of AUs, are applied to pair-wise SVM classifiers for a frame-based classification, the outputs of which are then fed as evidence to the HSMM or VDHMM for inferring the AUs' temporal phases. A thorough investigation into duration modeling and its application to AU recognition, through extensive comparison to state-of-the-art SVM-HMM approaches, is presented. Average recognition rates of 64.83% and 64.66% are achieved for the HSMM and VDHMM, respectively. Our framework has several benefits: (1) it models the duration of the AUs' temporal phases; (2) it does not require any assumption about the underlying structure of the AU events; and (3) compared to HMMs, the proposed HSMM and VDHMM duration models reduce the duration error of the temporal phases of an AU, and they are especially better at recognizing the offset ending of an AU.
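
A compact sketch of explicit-duration (semi-Markov) Viterbi decoding of the kind this paper relies on, assuming the frame-based SVM outputs have already been converted to per-state log-probabilities; array names, shapes, and the decoding routine are illustrative, not the authors' code:

```python
import numpy as np

def hsmm_viterbi(log_b, log_pi, log_a, log_dur, max_dur):
    """
    Explicit-duration Viterbi decoding (sketch).
    log_b   : (T, S) per-frame log evidence for each temporal phase
              (e.g. SVM scores converted to log-probabilities)
    log_pi  : (S,)   initial state log-probabilities
    log_a   : (S, S) transition log-probabilities (self-transitions excluded,
              since durations are modeled explicitly)
    log_dur : (S, max_dur) log_dur[s, d-1] is the log-probability of a
              segment of exactly d frames in state s
    Returns the best segmentation as a list of (state, duration) pairs.
    """
    T, S = log_b.shape
    cum = np.vstack([np.zeros(S), np.cumsum(log_b, axis=0)])   # prefix sums
    delta = np.full((T, S), -np.inf)
    back = np.zeros((T, S, 2), dtype=int)                       # (prev_state, dur)
    for t in range(T):
        for s in range(S):
            for d in range(1, min(max_dur, t + 1) + 1):
                emit = cum[t + 1, s] - cum[t + 1 - d, s]
                if d == t + 1:                                   # first segment
                    score, prev = log_pi[s] + log_dur[s, d - 1] + emit, -1
                else:
                    prev_scores = delta[t - d] + log_a[:, s]
                    prev = int(np.argmax(prev_scores))
                    score = prev_scores[prev] + log_dur[s, d - 1] + emit
                if score > delta[t, s]:
                    delta[t, s], back[t, s] = score, (prev, d)
    # Backtrack the best sequence of (state, duration) segments.
    path, t, s = [], T - 1, int(np.argmax(delta[T - 1]))
    while t >= 0:
        prev, d = back[t, s]
        path.append((s, d))
        t, s = t - d, prev
    return path[::-1]
```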


Affective Computing and Intelligent Interaction | 2011

Kalman filter-based facial emotional expression recognition

Ping Fan; Isabel Gonzalez; Valentin Enescu; Hichem Sahli; Dongmei Jiang

In this work we examine the use of state-space models to model the temporal information of dynamic facial expressions, the latter being represented by 3D animation parameters recovered using the 3D Candide model. The 3D animation parameters of an image sequence can be seen as the observations of a stochastic process that can be modeled by a linear state-space model, the Kalman filter. In the proposed approach, each emotion is represented by a Kalman filter whose parameters are the state transition matrix, the observation matrix, and the state and observation noise covariance matrices. Person-independent experimental results demonstrate the validity and the good generalization ability of the proposed approach for emotional facial expression recognition. Moreover, compared to state-of-the-art techniques, the proposed system yields significant improvements in recognizing facial expressions.
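
A sketch of the classification idea, assuming each emotion's state-space parameters (A, C, Q, R) have already been learned offline; a test sequence of animation parameters is assigned to the emotion whose Kalman filter yields the highest innovation-form log-likelihood. Parameter learning and Candide tracking are not shown:

```python
import numpy as np

def kf_log_likelihood(Y, A, C, Q, R):
    """Innovation-form log-likelihood of an observation sequence Y (T, dim_obs)."""
    n = A.shape[0]
    x, P, ll = np.zeros(n), np.eye(n), 0.0
    for y in Y:
        x, P = A @ x, A @ P @ A.T + Q                 # predict
        S = C @ P @ C.T + R                           # innovation covariance
        e = y - C @ x                                 # innovation
        ll += -0.5 * (e @ np.linalg.solve(S, e)
                      + np.linalg.slogdet(S)[1]
                      + len(y) * np.log(2 * np.pi))
        K = P @ C.T @ np.linalg.inv(S)                # update
        x, P = x + K @ e, (np.eye(n) - K @ C) @ P
    return ll

# models = {"happiness": (A, C, Q, R), ...}           # hypothetical learned models
# prediction = max(models, key=lambda m: kf_log_likelihood(Y, *models[m]))
```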


Affective Computing and Intelligent Interaction | 2015

Framework for combination aware AU intensity recognition

Isabel Gonzalez; Werner Verhelst; Meshia Cédric Oveneke; Hichem Sahli; Dongmei Jiang

We present a framework for combination-aware AU intensity recognition. It includes a feature extraction approach that can handle small head movements and does not require face alignment. A three-layered structure is used for the AU classification: the first layer is dedicated to independent AU recognition, the second layer incorporates knowledge of AU combinations, and the third layer handles AU dynamics based on a variable-duration semi-Markov model. The first two layers are modeled using extreme learning machines (ELMs). ELMs perform on par with support vector machines but are computationally more efficient and can handle multi-class classification directly. Moreover, they include feature selection via manifold regularization. We show that the proposed layered classification scheme can improve results by considering AU combinations as well as intensity recognition.
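
A minimal extreme learning machine sketch, assuming a plain ridge-regression readout; the AU-combination layer, the duration model, and the manifold-regularization term from the paper are omitted, and all shapes and names are illustrative:

```python
import numpy as np

class ELM:
    """Random hidden layer + closed-form ridge readout (basic ELM)."""
    def __init__(self, n_hidden=500, reg=1e-2, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)           # random feature map

    def fit(self, X, Y):
        """X: (n, d) features, Y: (n, k) one-hot AU/intensity targets."""
        d = X.shape[1]
        self.W = self.rng.normal(size=(d, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Closed-form readout: beta = (H^T H + reg*I)^-1 H^T Y
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden),
                                    H.T @ Y)
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)
```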


international conference on image and graphics | 2009

A Visual Silence Detector Constraining Speech Source Separation

Isabel Gonzalez; Ilse Ravyse; Henk Brouckxon; Werner Verhelst; Dongmei Jiang; Hichem Sahli

We propose an audiovisual source separation algorithm for speech signals. In our algorithm we first extract, from synchronous video recordings, the time segments with low activity of the mouth region. An automatically selected optimal classifier is then used to detect silent intervals within these instants of low visual mouth activity. The source separation problem is then formulated and solved for the entire signal duration. Our approach was tested on two challenging speech corpora with two speakers and two microphones: in the first corpus, separate source signals were mixed in a simulated room, while the second corpus contains recorded conversations. The results are promising on both corpora: with the visual silence detector, the performance of the source separation algorithm, measured by the signal-to-interference ratio, increases.
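
A sketch of the visual silence detection step under simple assumptions: mouth-region activity is approximated by frame differencing, and low-activity runs become candidate silent intervals that would then constrain the separation stage (not shown). The ROI extraction, the classifier selection, and the threshold values are assumptions, not the paper's method:

```python
import numpy as np

def silent_intervals(mouth_frames, threshold=2.0, min_len=5):
    """
    mouth_frames : (T, h, w) grayscale crops of the mouth region
    Returns a list of (start_frame, end_frame) candidate silent intervals.
    """
    # Mean absolute frame difference as a crude mouth-activity measure.
    activity = np.abs(np.diff(mouth_frames.astype(float), axis=0)).mean(axis=(1, 2))
    quiet = activity < threshold
    intervals, start = [], None
    for t, q in enumerate(quiet):
        if q and start is None:
            start = t
        elif not q and start is not None:
            if t - start >= min_len:
                intervals.append((start, t))
            start = None
    if start is not None and len(quiet) - start >= min_len:
        intervals.append((start, len(quiet)))
    return intervals
```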


Affective Computing and Intelligent Interaction | 2015

Monocular 3D facial information retrieval for automated facial expression analysis

Meshia Cédric Oveneke; Isabel Gonzalez; Weiyi Wang; Dongmei Jiang; Hichem Sahli

Understanding social signals is a very important aspect of human communication and interaction and has therefore attracted increased attention from various research areas. Among the different types of social signals, particular attention has been paid to facial expressions of emotion and their automated analysis from image sequences. Automated facial expression analysis is a very challenging task due to the complex three-dimensional deformation and motion of the face associated with facial expressions and the loss of 3D information during the image formation process. As a consequence, retrieving 3D spatio-temporal facial information from image sequences is essential for automated facial expression analysis. In this paper, we propose a framework for retrieving three-dimensional facial structure, motion, and spatio-temporal features from monocular image sequences. First, we estimate monocular 3D scene flow by retrieving the facial structure using shape-from-shading (SFS) and combining it with 2D optical flow. Secondly, based on the retrieved structure and motion of the face, we extract spatio-temporal features for automated facial expression analysis. Experimental results illustrate the potential of the proposed 3D facial information retrieval framework for facial expression analysis, i.e., facial expression recognition and facial action-unit recognition, on a benchmark dataset. This paves the way for future research on monocular 3D facial expression analysis.
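
A sketch of how an SFS depth map and 2D optical flow could be combined into an approximate 3D scene flow, assuming the SFS and optical-flow routines exist elsewhere; the focal lengths and the nearest-neighbour warping below are illustrative simplifications, not the paper's formulation:

```python
import numpy as np

def scene_flow(depth_t, depth_t1, flow, fx=500.0, fy=500.0):
    """
    depth_t, depth_t1 : (H, W) depth maps from shape-from-shading at t and t+1
    flow              : (H, W, 2) optical flow (dx, dy) in pixels from t to t+1
    fx, fy            : assumed focal lengths in pixels
    Returns an (H, W, 3) array of approximate 3D displacements (dX, dY, dZ).
    """
    H, W = depth_t.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Warp the next depth map to where each pixel moved, to estimate dZ.
    x1 = np.clip((xs + flow[..., 0]).round().astype(int), 0, W - 1)
    y1 = np.clip((ys + flow[..., 1]).round().astype(int), 0, H - 1)
    dZ = depth_t1[y1, x1] - depth_t
    # Back-project the pixel motion into metric X/Y displacement at each depth.
    dX = flow[..., 0] * depth_t / fx
    dY = flow[..., 1] * depth_t / fy
    return np.stack([dX, dY, dZ], axis=-1)
```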


Computación y Sistemas | 2012

Sparse and Non-Sparse Multiple Kernel Learning for Recognition

Mitchel Alioscha-Perez; Hichem Sahli; Isabel Gonzalez; Alberto Taboada-Crispi

Collaboration


Dive into Isabel Gonzalez's collaborations.

Top Co-Authors

Hichem Sahli (Vrije Universiteit Brussel)
Dongmei Jiang (Northwestern Polytechnical University)
Valentin Enescu (Vrije Universiteit Brussel)
Georgios Patsis (Vrije Universiteit Brussel)
Henk Brouckxon (Vrije Universiteit Brussel)