Javier Cruz
École Polytechnique Fédérale de Lausanne
Publications
Featured research published by Javier Cruz.
Image and Vision Computing | 2010
Matteo Sorci; Gianluca Antonini; Javier Cruz; Thomas Robin; Michel Bierlaire; J.-Ph. Thiran
A recent internet-based survey of over 35,000 samples has shown that when different human observers are asked to assign labels to static human facial expressions, different individuals categorize the same image differently. This results in the lack of a unique ground truth, an assumption held by the large majority of existing classification models. This is particularly true for highly ambiguous expressions, above all in the absence of a dynamic context. In this paper we propose to address this shortcoming through discrete choice models (DCM), which describe the choice a human observer faces when assigning labels to static facial expressions. Models of increasing complexity are specified to capture the causal effect between the features of an image and its associated expression, using several combinations of different measurements. The sets of measurements we use are largely inspired by FACS but also introduce some new ideas specific to a static framework. These models are calibrated using maximum likelihood techniques and compared with each other using a likelihood ratio test, in order to test whether the improvement from adding supplemental features is significant. Through a cross-validation procedure we assess the validity of our approach against overfitting, and we provide a comparison with an alternative model based on neural networks for benchmark purposes.
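The core machinery the abstract describes — a discrete choice model whose label probabilities come from a softmax over feature-based utilities, calibrated by maximum likelihood — can be sketched as a plain multinomial logit. This is only an illustrative sketch on synthetic data: the feature dimensions, labels, and optimizer below are placeholders, not the paper's FACS-based measurements or estimation software.

```python
import numpy as np

def choice_probs(beta, X):
    """Softmax choice probabilities; beta has shape (K, d), X (n, d)."""
    V = X @ beta.T                        # systematic utilities, (n, K)
    V -= V.max(axis=1, keepdims=True)     # numerical stability
    expV = np.exp(V)
    return expV / expV.sum(axis=1, keepdims=True)

def log_likelihood(beta, X, y):
    """Sum of log-probabilities of the chosen labels y (ints in 0..K-1)."""
    P = choice_probs(beta, X)
    return np.log(P[np.arange(len(y)), y]).sum()

def fit_mnl(X, y, K, steps=500, lr=0.1):
    """Maximum-likelihood estimation by plain gradient ascent (illustrative)."""
    n, d = X.shape
    beta = np.zeros((K, d))
    for _ in range(steps):
        P = choice_probs(beta, X)
        Y = np.eye(K)[y]                  # one-hot chosen labels, (n, K)
        beta += lr * (Y - P).T @ X / n    # gradient of mean log-likelihood
    return beta

# Synthetic example: 3 candidate labels, 2 image features.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
true_beta = np.array([[0.0, 0.0], [2.0, -1.0], [-1.0, 2.0]])
y = np.array([rng.choice(3, p=p) for p in choice_probs(true_beta, X)])

beta_hat = fit_mnl(X, y, K=3)
print("fitted log-likelihood:", log_likelihood(beta_hat, X, y))
print("null log-likelihood:  ", log_likelihood(np.zeros((3, 2)), X, y))
```

The likelihood-ratio test mentioned in the abstract compares exactly these two quantities: twice the log-likelihood gain of the richer model over the restricted one, referred to a chi-squared distribution with degrees of freedom equal to the number of added parameters.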
Journal of choice modelling | 2011
Thomas Robin; Michel Bierlaire; Javier Cruz
A generation of new models has been proposed to handle complex human behaviors. These models account for data ambiguity and therefore extend the application field of discrete choice modeling. Facial expression recognition (FER) is highly relevant in this context. We develop a dynamic facial expression recognition (DFER) framework based on discrete choice models (DCM). DFER consists in modeling the choice of a person who has to label a video sequence representing a facial expression. The originality lies in the analysis of videos with discrete choice models, as well as in the explicit modeling of causal effects between the facial features and the recognition of the expression. Five models are proposed. The first assumes that only the last frame of the video triggers the choice of the expression. The second model has two components: the first captures the perception of the facial expression within each frame of the sequence, while the second determines which frame triggers the choice. The third model extends the second and assumes that the choice of the expression results from the average of perceptions within a group of frames. The fourth and fifth models integrate the panel effect inherent to the estimation data and respectively extend the first and second models. The models are estimated using videos from the Facial Expressions and Emotions Database (FEED). Labeling data on the videos has been obtained using an internet survey available at http://transp-or2.epfl.ch/videosurvey/. The prediction capability of the models is studied by cross-validation on the estimation data in order to check their validity.
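The two-component structure of the second model — a per-frame perception component plus a component that determines which frame triggers the choice — can be sketched as a mixture: each frame gets a softmax label distribution, and the sequence-level probability marginalises over which frame triggers the decision. The feature encodings and parameters below are hypothetical placeholders, not the estimated specification from the paper.

```python
import numpy as np

def softmax(v, axis=-1):
    """Numerically stable softmax along the given axis."""
    v = v - v.max(axis=axis, keepdims=True)
    e = np.exp(v)
    return e / e.sum(axis=axis, keepdims=True)

def sequence_label_probs(frame_feats, beta, gamma):
    """
    frame_feats: (T, d) features for the T frames of one video.
    beta: (K, d) perception parameters -> per-frame label probabilities.
    gamma: (d,) weighting parameters -> probability each frame triggers.
    Returns the (K,) label distribution for the whole sequence.
    """
    P_frame = softmax(frame_feats @ beta.T, axis=1)   # (T, K) per-frame perception
    w = softmax(frame_feats @ gamma)                  # (T,) triggering weights
    return w @ P_frame                                # mixture over frames

# Toy example: a 4-frame video, 2 features, 3 candidate expression labels.
T, d, K = 4, 2, 3
rng = np.random.default_rng(1)
frame_feats = rng.normal(size=(T, d))
beta = rng.normal(size=(K, d))
gamma = rng.normal(size=d)

p = sequence_label_probs(frame_feats, beta, gamma)
print("sequence label distribution:", p)
```

Setting the triggering weights to put all mass on the final frame recovers the first (last-frame-only) model, and replacing them with a uniform average over a group of frames gives the flavor of the third model.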
International Choice Modelling Conference (2009 : Harrogate, England) | 2010
Matteo Sorci; Thomas Robin; Javier Cruz; Michel Bierlaire; Jean-Philippe Thiran; Gianluca Antonini
Facial expression recognition by human observers is affected by subjective components; indeed, there is no ground truth. We have developed discrete choice models (DCM) to capture the human perception of facial expressions. In a first step, the static case is treated, that is, modelling the perception of facial images. Image information is extracted using a computer vision tool called the Active Appearance Model (AAM). DCM attributes are based on the Facial Action Coding System (FACS), Expression Descriptive Units (EDUs) and outputs of the AAM. Behavioural data have been collected using an internet survey in which respondents are asked to label facial images from the Cohn–Kanade database with expressions. Different models were estimated by likelihood maximization using the obtained data. In a second step, the proposed static discrete choice framework is extended to the dynamic case, which considers facial videos instead of images. The model theory is described, and another internet survey is currently being conducted in order to obtain expression labels on videos. In this second survey, the videos come from the Cohn–Kanade database and the Facial Expressions and Emotions Database (FEED).
Transportation Research Part B-methodological | 2009
Thomas Robin; Gianluca Antonini; Michel Bierlaire; Javier Cruz
Archive | 2008
Javier Cruz; Michel Bierlaire; Jean-Philippe Thiran
Third Workshop on Discrete Choice Models | 2007
Javier Cruz; Thomas Robin; Matteo Sorci; Michel Bierlaire; Jean-Philippe Thiran
World Conference on Transport Research | 2010
Thomas Robin; Michel Bierlaire; Javier Cruz
Workshop on discrete choice models | 2009
Thomas Robin; Michel Bierlaire; Javier Cruz
STRC, 9th Swiss Transportation Research Conference | 2009
Thomas Robin; Michel Bierlaire; Javier Cruz
Archive | 2009
Matteo Sorci; Thomas Robin; Javier Cruz; Michel Bierlaire; Jean-Philippe Thiran