Valentin Enescu
Vrije Universiteit Brussel
Publication
Featured research published by Valentin Enescu.
human robot interaction | 2016
Alexandre Coninx; Paul Baxter; Elettra Oleari; Sara Bellini; Bert P.B. Bierman; Olivier A. Blanson Henkemans; Lola Cañamero; Piero Cosi; Valentin Enescu; Raquel Ros Espinoza; Antoine Hiolle; Rémi Humbert; Bernd Kiefer; Ivana Kruijff-Korbayová; Rosemarijn Looije; Marco Mosconi; Mark A. Neerincx; Giulio Paci; Georgios Patsis; Clara Pozzi; Francesca Sacchitelli; Hichem Sahli; Alberto Sanna; Giacomo Sommavilla; Fabio Tesser; Yiannis Demiris; Tony Belpaeme
Social robots have the potential to provide support in a number of practical domains, such as learning and behaviour change. This potential is particularly relevant for children, who have proven receptive to interactions with social robots. To reach learning and therapeutic goals, a number of issues need to be investigated, notably the design of an effective child-robot interaction (cHRI) to ensure the child remains engaged in the relationship and that educational goals are met. Typically, current cHRI research experiments focus on a single type of interaction activity (e.g. a game). However, these can suffer from a lack of adaptation to the child, or from an increasingly repetitive nature of the activity and interaction. In this paper, we motivate and propose a practicable solution to this issue: an adaptive robot able to switch between multiple activities within single interactions. We describe a system that embodies this idea, and present a case study in which diabetic children collaboratively learn with the robot about various aspects of managing their condition. We demonstrate the ability of our system to induce a varied interaction and show the potential of this approach both as an educational tool and as a research method for long-term cHRI.
affective computing and intelligent interaction | 2011
Isabel Gonzalez; Hichem Sahli; Valentin Enescu; Werner Verhelst
In this paper we investigate the combination of shape features and phase-based Gabor features for context-independent Action Unit (AU) recognition. For our recognition goal, three regions of interest have been devised that efficiently capture the AU activation/deactivation areas. In each of these regions, a feature set consisting of geometrical features and histogram-of-Gabor-phase appearance-based features has been estimated. For each AU, we applied AdaBoost for feature selection and used a binary SVM for context-independent classification. Using the Cohn-Kanade database, we achieved an average F1 score of 93.8% and an average area under the ROC curve of 97.9% for the 11 AUs considered.
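The per-AU pipeline described above (AdaBoost for feature selection, then a binary SVM) can be sketched in its first stage as follows. This is a minimal pure-Python illustration, not the authors' implementation: the data, dimensions, and number of rounds are synthetic assumptions, and the SVM stage is omitted. AdaBoost over decision stumps is run, and each feature dimension's accumulated stump weight serves as its importance for selection.

```python
import math, random

random.seed(0)
# Synthetic per-frame features: 5 dims; only dim 0 carries AU information.
X = [[random.gauss(1.0 if i % 2 else -1.0, 0.5) if d == 0
      else random.gauss(0.0, 1.0) for d in range(5)] for i in range(100)]
y = [1 if i % 2 else -1 for i in range(100)]          # AU active / inactive

def stump(x, dim, thr, sign):
    return sign if x[dim] > thr else -sign

def fit_stump(X, y, w):
    """Return (error, dim, thr, sign) of the best weighted decision stump."""
    best = None
    for dim in range(len(X[0])):
        for thr in sorted(x[dim] for x in X):
            for sign in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if stump(xi, dim, thr, sign) != yi)
                if best is None or err < best[0]:
                    best = (err, dim, thr, sign)
    return best

# AdaBoost rounds: reweight samples, accumulate per-dimension importance.
w = [1.0 / len(X)] * len(X)
importance = [0.0] * 5
ensemble = []
for _ in range(10):
    err, dim, thr, sign = fit_stump(X, y, w)
    alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
    importance[dim] += alpha
    ensemble.append((alpha, dim, thr, sign))
    w = [wi * math.exp(-alpha * yi * stump(xi, dim, thr, sign))
         for xi, yi, wi in zip(X, y, w)]
    total = sum(w)
    w = [wi / total for wi in w]

# The informative dimension is expected to rank highly.
selected = max(range(5), key=lambda d: importance[d])
print(selected, round(importance[selected], 2))
```

The selected dimensions would then be passed to a binary SVM, one classifier per AU.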
Archive | 2011
Ivana Kruijff-Korbayová; Georgios Athanasopoulos; Aryel Beck; Piero Cosi; Heriberto Cuayáhuitl; Tomas Dekens; Valentin Enescu; Antoine Hiolle; Bernd Kiefer; Hichem Sahli; Marc Schröder; Giacomo Sommavilla; Fabio Tesser; Werner Verhelst
Conversational systems play an important role in scenarios without a keyboard, e.g., talking to a robot. Communication in human-robot interaction (HRI) ultimately involves a combination of verbal and non-verbal inputs and outputs. HRI systems must process verbal and non-verbal observations and execute verbal and non-verbal actions in parallel, to interpret and produce synchronized behaviours. The development of such systems involves the integration of potentially many components and ensuring a complex interaction and synchronization between them. Most work in spoken dialogue system development uses pipeline architectures. Some exceptions are [1, 17], which execute system components in parallel (weakly-coupled or tightly-coupled architectures). The latter are more promising for building adaptive systems, which is one of the goals of contemporary research systems.
Proceedings of 4th International Workshop on Human Behavior Understanding - Volume 8212 | 2013
Weiyi Wang; Valentin Enescu; Hichem Sahli
Social psychological research indicates that bodily expressions convey important affective information, although this modality is relatively neglected in the literature compared to facial expressions and speech. In this paper we propose a real-time system that continuously recognizes emotions from body-movement data streams. Low-level 3D postural features and high-level kinematic and geometrical features are, through summarization (statistical values) or aggregation (feature patches), fed to a random forests classifier. In a first stage, the MoCap UCLIC affective gesture database was used to train the classifier, which led to an overall recognition rate of 78% using 10-fold cross-validation (leave-one-out). Subsequently, the trained classifier was tested with different subjects using continuous Kinect data. A performance of 72% was reached in real-time, which demonstrates the efficiency and effectiveness of the proposed system.
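The "summarization (statistical values)" step above can be illustrated with a small sketch. This is an assumed, simplified version (not the authors' code): a window of per-frame postural features is reduced to a fixed-length vector of per-dimension statistics, the kind of input a random-forest classifier consumes.

```python
import math

def summarize(window):
    """Reduce a window of frames to mean, std, min, max per feature dim."""
    dims = len(window[0])
    out = []
    for d in range(dims):
        vals = [frame[d] for frame in window]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        out += [mean, math.sqrt(var), min(vals), max(vals)]
    return out

# Synthetic stream: 30 frames of 3 postural features (e.g. joint angles).
stream = [[math.sin(t / 5.0), t / 30.0, 1.0] for t in range(30)]
vector = summarize(stream)
print(len(vector))  # 3 dims x 4 statistics = 12 summary features
```

Whatever the exact statistics used, the point is that variable-length motion becomes a fixed-length vector suitable for frame-rate classification.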
advanced concepts for intelligent vision systems | 2005
Tom Caljon; Valentin Enescu; Peter Schelkens; Hichem Sahli
A generic bi-directional scheme is proposed that robustifies the estimation of the maximum-a-posteriori (MAP) sequence of states of a visual object. It enables creative, non-technical users to obtain the path of interesting objects in offline available video material, which can then be used to create interactive movies. To robustify against tracker failure the proposed scheme merges the filtering distributions of a forward tracking particle filter and a backward tracking particle filter at some timesteps, using a reliability-based voting scheme such as in democratic integration. The MAP state sequence is obtained using the Viterbi algorithm on reduced state sets per timestep derived from the merged distributions and is interpolated linearly where tracking failure is suspected. The presented scheme is generic, simple and efficient and shows good results for a color-based particle filter.
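The Viterbi step over reduced per-timestep state sets can be sketched as follows. This is an illustrative toy (not the paper's implementation): the candidate states, emission scores, and transition scores here are synthetic stand-ins for the merged forward/backward particle distributions.

```python
# Viterbi decoding over a reduced candidate-state set per timestep.
def viterbi(candidates, emission, transition):
    """candidates[t] = list of states at time t; returns the MAP path."""
    score = {s: emission(0, s) for s in candidates[0]}
    back = []
    for t in range(1, len(candidates)):
        new, ptr = {}, {}
        for s in candidates[t]:
            prev = max(candidates[t - 1],
                       key=lambda p: score[p] + transition(p, s))
            new[s] = score[prev] + transition(prev, s) + emission(t, s)
            ptr[s] = prev
        back.append(ptr)
        score = new
    best = max(score, key=score.get)      # best final state
    path = [best]
    for ptr in reversed(back):            # backtrack
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Toy 1-D tracking: states are positions; prefer smooth paths near the
# observations (log-domain scores, higher is better).
obs = [0.0, 1.0, 2.0, 3.0]
cands = [[o - 1, o, o + 1] for o in obs]
path = viterbi(cands,
               emission=lambda t, s: -(s - obs[t]) ** 2,
               transition=lambda p, s: -0.1 * (s - p) ** 2)
print(path)  # → [0.0, 1.0, 2.0, 3.0]
```

In the paper's scheme, the candidate sets come from the merged forward/backward filtering distributions, and linear interpolation bridges timesteps where tracking failure is suspected.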
human-robot interaction | 2014
Raquel Ros; Alexandre Coninx; Yiannis Demiris; Georgios Patsis; Valentin Enescu; Hichem Sahli
We report first results on children's adaptive behavior towards a dance tutoring robot. We observe that the children's behavior rapidly evolves over just a few sessions to accommodate to the robotic tutor's rhythm and instructions.
IEEE Transactions on Affective Computing | 2017
Meshia Cédric Oveneke; Isabel Gonzalez; Valentin Enescu; Dongmei Jiang; Hichem Sahli
Estimating a person's affective state from facial information is an essential capability for social interaction. Automatizing such a capability has therefore increasingly driven multidisciplinary research over the past decades. At the heart of this issue are very challenging signal processing and artificial intelligence problems driven by the inherent complexity of human affect. We therefore propose a principled framework for designing automated systems capable of continuously estimating the human affective state from an incoming stream of images. First, we model human affect as a dynamical system and define the affective state in terms of valence, arousal and their higher-order derivatives. We then pose the affective state estimation problem as a Bayesian filtering problem and provide a solution based on Kalman filtering (KF) for probabilistic reasoning over time, combined with multiple instance sparse Gaussian processes (MI-SGP) for inferring affect-related measurements from image sequences. We quantitatively and qualitatively evaluate our proposed framework on the AVEC 2012 and AVEC 2014 benchmark datasets and obtain state-of-the-art results using the baseline features as input to our MI-SGP-KF model. We therefore believe that leveraging the Bayesian filtering paradigm can pave the way for further enhancing the design of automated systems for affective state estimation.
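The Kalman-filtering part of the above can be sketched with a 1-D toy. This is a minimal illustration under stated assumptions, not the paper's model: the state is [valence, d(valence)/dt] with a constant-velocity motion model, and the MI-SGP measurement stage is replaced by synthetic noisy observations of valence.

```python
import random

random.seed(1)
dt, q, r = 1.0, 1e-3, 0.1          # timestep, process / measurement noise
x = [0.0, 0.0]                      # state: [valence, d(valence)/dt]
P = [[1.0, 0.0], [0.0, 1.0]]        # state covariance

def kf_step(x, P, z):
    # Predict with F = [[1, dt], [0, 1]] and Q = q*I.
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1], P[1][1] + q]]
    # Update with H = [1, 0]: only valence itself is measured.
    S = Pp[0][0] + r
    K = [Pp[0][0] / S, Pp[1][0] / S]
    y = z - xp[0]
    xn = [xp[0] + K[0] * y, xp[1] + K[1] * y]
    Pn = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
          [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return xn, Pn

true = [0.01 * t for t in range(100)]          # slowly rising valence
for t, v in enumerate(true):
    z = v + random.gauss(0, 0.1)               # noisy "measurement"
    x, P = kf_step(x, P, z)
print(round(x[0], 2))                           # filtered valence estimate
```

The filtered estimate tracks the underlying ramp while smoothing the per-frame measurement noise; the paper's contribution is to supply those measurements with a learned MI-SGP model over image sequences.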
european conference on computer vision | 2014
Weiyi Wang; Georgios Athanasopoulos; Georgios Patsis; Valentin Enescu; Hichem Sahli
Emotion perception and interpretation is one of the key desired capabilities of assistive robots, as it could largely enhance the quality and naturalness of human-robot interaction. According to psychological studies, bodily communication has an important role in human social behaviours. However, it is very challenging to model such affective bodily expressions, especially in a naturalistic setting, considering the variety of expressive patterns, as well as the difficulty of acquiring reliable data. In this paper, we investigate the spontaneous dimensional emotion prediction problem in a child-robot interaction scenario. The paper presents emotion elicitation, data acquisition, 3D skeletal representation, feature design and machine learning algorithms. Experimental results have shown good predictive performance on the variation trends of the emotional dimensions, especially the arousal dimension.
Multimedia Tools and Applications | 2015
Isabel Gonzalez; Francesco Cartella; Valentin Enescu; Hichem Sahli
Being able to automatically analyze fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), and their temporal models (i.e., sequences of the temporal phases neutral, onset, apex, and offset), in face videos would greatly benefit facial expression recognition systems. Previous works considered combining, per AU, a discriminative frame-based Support Vector Machine (SVM) and a dynamic generative Hidden Markov Model (HMM) to detect the presence of the AU in question and its temporal segments in an input image sequence. The major drawback of HMMs is that they do not model well time-dependent dynamics such as those of AUs, especially when dealing with spontaneous expressions. To alleviate this problem, in this paper we exploit efficient duration modeling of the temporal behavior of AUs, and we propose the hidden semi-Markov model (HSMM) and the variable-duration hidden Markov model (VDHMM) to recognize the dynamics of AUs. Such models allow the parameterization and inference of the AUs' state-duration distributions. Within our system, geometrical and appearance-based measurements, as well as their first derivatives, modeling both the dynamics and the appearance of AUs, are applied to pair-wise SVM classifiers for frame-based classification, whose outputs are then fed as evidence to the HSMM or VDHMM for inferring the AUs' temporal phases. A thorough investigation into the aspect of duration modeling and its application to AU recognition, through extensive comparison to state-of-the-art SVM-HMM approaches, is presented. For comparison, average recognition rates of 64.83% and 64.66% are achieved for the HSMM and VDHMM, respectively.
Our framework has several benefits: (1) it models the duration of the AU's temporal phases; (2) it does not require any assumption about the underlying structure of the AU events; and (3) compared to HMMs, the proposed HSMM and VDHMM duration models reduce the duration error of the temporal phases of an AU, and they are especially better at recognizing the offset ending of an AU.
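The core motivation for semi-Markov duration modeling can be illustrated numerically. This is a toy sketch with assumed numbers, not the paper's models: an HMM with self-transition probability a implicitly imposes a geometric duration law P(d) = (1-a)·a^(d-1), which always peaks at d = 1, whereas an HSMM can carry an arbitrary explicit duration distribution per temporal phase (e.g. an apex that typically lasts several frames).

```python
a = 0.8                                     # HMM self-transition prob.
# Duration law implied by the HMM: geometric, monotonically decreasing.
geometric = [(1 - a) * a ** (d - 1) for d in range(1, 11)]

# Explicit HSMM-style duration model (hypothetical): an apex phase that
# typically lasts 4-6 frames, which no geometric law can express.
explicit = [0.0, 0.0, 0.05, 0.30, 0.35, 0.20, 0.07, 0.03, 0.0, 0.0]

mode_geo = max(range(10), key=lambda i: geometric[i]) + 1
mode_exp = max(range(10), key=lambda i: explicit[i]) + 1
print(mode_geo, mode_exp)  # geometric peaks at d=1; explicit at d=5
```

This mismatch between the geometric law and realistic phase durations is precisely the duration error that the HSMM and VDHMM reduce.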
advanced concepts for intelligent vision systems | 2012
Meshia Cédric Oveneke; Valentin Enescu; Hichem Sahli
We present a cascaded real-time system that recognizes dance patterns from 3D motion capture data. In a first step, the body trajectory, relative to the motion capture sensor, is matched. In a second step, an angular representation of the skeleton is proposed to make the system invariant to anthropometric differences relative to the body trajectory. Coping with non-uniform speed variations and amplitude discrepancies between dance patterns is achieved via a sequence similarity measure based on Dynamic Time Warping (DTW). A similarity threshold for recognition is automatically determined. Using only one good motion exemplar (baseline) per dance pattern, the recognition system is able to find a matching candidate pattern in a continuous stream of data, without prior segmentation. Experiments show the proposed algorithm reaches a good trade-off between simplicity, speed and recognition rate. An average recognition rate of 86.8% is obtained in real-time.
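The DTW similarity measure at the heart of the matching step can be sketched as follows. This is a simplified 1-D illustration (the system above uses angular skeleton representations, and the recognition threshold is learned, not fixed): DTW aligns two sequences while tolerating the non-uniform speed variations the abstract mentions.

```python
# Dynamic Time Warping distance between two 1-D sequences.
def dtw(a, b):
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, match.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

exemplar = [0.0, 1.0, 2.0, 1.0, 0.0]                 # one motion exemplar
slow = [0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 1.0, 0.0]      # same pattern, slower
other = [2.0, 2.0, 2.0, 2.0, 2.0]                    # unrelated motion
d_match, d_other = dtw(exemplar, slow), dtw(exemplar, other)
print(d_match < d_other)  # the slowed-down pattern stays closer
```

Because the warping path absorbs the tempo difference, a single good exemplar per dance pattern suffices for matching against a continuous, unsegmented stream.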