Publications


Featured research published by Adria Ruiz.


International Conference on Computer Vision | 2015

From Emotions to Action Units with Hidden and Semi-Hidden-Task Learning

Adria Ruiz; Joost van de Weijer; Xavier Binefa

Limited annotated training data is a challenging problem in Action Unit (AU) recognition. In this paper, we investigate how the use of large databases labelled according to the six universal facial expressions can increase the generalization ability of Action Unit classifiers. For this purpose, we propose a novel learning framework: Hidden-Task Learning (HTL). HTL aims to learn a set of Hidden-Tasks (Action Units) for which no training samples are available, using a set of related Visible-Tasks (Facial Expressions) for which data is easier to obtain. To that end, HTL exploits prior knowledge about the relation between Hidden and Visible-Tasks. In our case, we base this prior knowledge on empirical psychological studies providing statistical correlations between Action Units and universal facial expressions. Additionally, we extend HTL to Semi-Hidden-Task Learning (SHTL), which assumes that some Action Unit training samples are also provided. Performing exhaustive experiments over four different datasets, we show that HTL and SHTL improve the generalization ability of AU classifiers by training them with additional facial expression data. We also show that SHTL achieves competitive performance compared with state-of-the-art Transductive Learning approaches, which face the problem of limited training data by using unlabelled test samples during training.
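
To make the Hidden-Task idea concrete, here is a minimal sketch of how AU posteriors could be derived from expression posteriors through a prior matrix. The matrix values, AU subset, and names below are illustrative placeholders, not the empirical correlations or the actual model used in the paper:

```python
import numpy as np

# Minimal sketch of the Hidden-Task Learning idea: infer Action Unit (AU)
# probabilities from facial-expression posteriors through a prior matrix.
# The matrix values below are illustrative placeholders, NOT the empirical
# AU/expression correlations used in the paper.

EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
ACTION_UNITS = ["AU4", "AU6", "AU12"]  # brow lowerer, cheek raiser, lip corner puller

# prior[i, j] ~ P(AU_j is active | expression_i), taken from prior knowledge.
prior = np.array([
    [0.9, 0.1, 0.0],   # anger
    [0.7, 0.2, 0.1],   # disgust
    [0.6, 0.1, 0.0],   # fear
    [0.0, 0.8, 0.9],   # happiness
    [0.8, 0.1, 0.0],   # sadness
    [0.1, 0.2, 0.1],   # surprise
])

def hidden_task_posteriors(expr_posteriors: np.ndarray) -> np.ndarray:
    """Marginalize over visible tasks: P(AU|x) = sum_e P(AU|e) P(e|x)."""
    return expr_posteriors @ prior

# Example: a face the expression classifier believes is mostly happy.
p_expr = np.array([0.05, 0.0, 0.0, 0.85, 0.05, 0.05])
print(dict(zip(ACTION_UNITS, hidden_task_posteriors(p_expr).round(2))))
```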


British Machine Vision Conference | 2014

Regularized Multi-Concept MIL for Weakly-Supervised Facial Behavior Categorization

Adria Ruiz; Joost van de Weijer; Xavier Binefa

In this work, we address the problem of estimating high-level semantic labels for videos of people by analysing their facial expressions. This problem, which we refer to as facial behavior categorization, is a weakly-supervised learning problem where frame-by-frame facial gesture annotations are not available and only weak labels at the video level are provided. The goal is therefore to learn a set of discriminative expressions appearing in the training videos and how they determine these labels. Facial behavior categorization can be posed as a Multi-Instance-Learning (MIL) problem, and we propose a novel MIL method called Regularized Multi-Concept MIL (RMC-MIL) to solve it. In contrast to previous approaches applied in facial behavior analysis, RMC-MIL follows a Multi-Concept assumption which allows different facial expressions (concepts) to contribute differently to the video label. Moreover, to handle the high-dimensional nature of facial descriptors, RMC-MIL uses a discriminative approach to model the concepts and structured sparsity regularization to discard non-informative features. RMC-MIL is posed as a convex-constrained optimization problem where all the parameters are jointly learned using the Projected-Quasi-Newton method. In our experiments, we use two public datasets to show the advantages of the Regularized Multi-Concept approach and its improvement over existing MIL methods. RMC-MIL outperforms state-of-the-art results on the UNBC dataset for pain detection.
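
As a rough illustration of the Multi-Concept assumption and the structured sparsity regularizer, the following sketch evaluates a bag-level objective of this general shape. It is not the paper's formulation, and the Projected-Quasi-Newton optimization step is omitted:

```python
import numpy as np

# Sketch of a Multi-Concept MIL scoring function in the spirit of RMC-MIL:
# each concept is a linear model over frame descriptors, bag-level concept
# evidence is max-pooled over frames, and an l2,1 ("structured sparsity")
# penalty encourages entire features to be dropped across all concepts.
# Illustrates the objective only; the optimizer is omitted.

rng = np.random.default_rng(0)
n_frames, n_feats, n_concepts = 40, 16, 3

X = rng.normal(size=(n_frames, n_feats))       # one bag: frame descriptors
W = rng.normal(size=(n_feats, n_concepts))     # concept classifiers
v = rng.normal(size=n_concepts)                # concept-to-label weights

def bag_score(X, W, v):
    concept_evidence = (X @ W).max(axis=0)     # max-pool each concept over frames
    return v @ concept_evidence                # concepts contribute differently

def l21_penalty(W):
    # Sum over features of the l2 norm across concepts: zeroes whole rows.
    return np.linalg.norm(W, axis=1).sum()

lam = 0.1
objective = -bag_score(X, W, v) + lam * l21_penalty(W)  # e.g. for a positive bag
print(f"bag score = {bag_score(X, W, v):.3f}, objective = {objective:.3f}")
```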


Asian Conference on Computer Vision | 2016

Multi-Instance Dynamic Ordinal Random Fields for Weakly-Supervised Pain Intensity Estimation

Adria Ruiz; Ognjen Rudovic; Xavier Binefa; Maja Pantic

In this paper, we address the Multi-Instance-Learning (MIL) problem when bag labels are naturally represented as ordinal variables (Multi-Instance Ordinal Regression). Moreover, we consider the case where bags are temporal sequences of ordinal instances. To model this, we propose the novel Multi-Instance Dynamic Ordinal Random Fields (MI-DORF). In this model, we treat instance labels inside a bag as latent ordinal states. The MIL assumption is modelled by incorporating into the energy function a high-order cardinality potential relating bag and instance labels. We show the benefits of the proposed approach on the task of weakly-supervised pain intensity estimation from the UNBC Shoulder-Pain Database. In our experiments, the proposed approach significantly outperforms alternative non-ordinal methods that either ignore the MIL assumption or do not model dynamic information in the target data.
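
The following toy sketch (not the paper's model or inference procedure) shows the kind of structure such an energy encodes, using brute-force enumeration over a short sequence and a hard cardinality constraint that forces the bag label to equal the maximum latent instance label:

```python
import itertools
import numpy as np

# Toy sketch of the structure MI-DORF encodes: instance labels are latent
# ordinal states, a transition term smooths them over time, and a high-order
# "cardinality" potential ties the bag label to the instances (here: the bag
# label must equal the maximum instance label, a common MIL assumption).
# Brute force over a tiny sequence; the paper uses proper inference.

LEVELS = range(4)          # ordinal pain intensities 0..3
T = 5                      # sequence length
rng = np.random.default_rng(1)
unary = rng.normal(size=(T, len(LEVELS)))   # per-frame evidence (stand-in scores)

def energy(states, bag_label, smooth=1.0, hard=1e3):
    e = -sum(unary[t, s] for t, s in enumerate(states))            # unary terms
    e += smooth * sum(abs(a - b) for a, b in zip(states, states[1:]))  # dynamics
    e += 0.0 if max(states) == bag_label else hard                 # cardinality
    return e

bag_label = 2
best = min(itertools.product(LEVELS, repeat=T),
           key=lambda s: energy(s, bag_label))
print("most likely latent intensities:", best)
```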


Proceedings of the 1st International Workshop on Multimedia Analysis and Retrieval for Multimodal Interaction | 2016

A Multimodal Annotation Schema for Non-Verbal Affective Analysis in the Health-Care Domain

Federico M. Sukno; Mónica Domínguez; Adria Ruiz; Dominik Schiller; Florian Lingenfelser; Louisa Pragst; Eleni Kamateri; Stefanos Vrochidis

The development of conversational agents with human interaction capabilities requires advanced affective state recognition that integrates non-verbal cues from the different modalities which, in human communication, together constitute what we perceive as an overall affective state. Each modality is often handled by a separate subsystem that conveys only a partial interpretation of the whole and, as such, is evaluated only in terms of its partial view. To tackle this shortcoming, we investigate the generation of a unified multimodal annotation schema of non-verbal cues from the perspective of an inter-disciplinary group of experts. We aim to obtain a common ground truth with a unique representation based on the Valence and Arousal space and a discrete non-linear scale of values. The proposed annotation schema is demonstrated on a corpus in the health-care domain but is scalable to other purposes. Preliminary results on inter-rater variability show a positive correlation of consensus level with high (absolute) values of Valence and Arousal, as well as with the number of annotators labeling a given video sequence.
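
As a purely hypothetical illustration of what one record in such a schema might look like, the sketch below encodes a V-A annotation on a discrete non-linear scale and fuses annotators by the median. The scale values and field names are assumptions, not the schema defined in the paper:

```python
from dataclasses import dataclass
from statistics import median

# Hypothetical V-A annotation record on a discrete non-linear ordinal scale.
# The scale values and field names are assumptions for illustration only.

SCALE = (-1.0, -0.5, -0.25, 0.0, 0.25, 0.5, 1.0)   # placeholder non-linear levels

@dataclass
class VAAnnotation:
    annotator: str
    valence: float
    arousal: float

    def __post_init__(self):
        # Labels must come from the discrete scale, not a continuous range.
        assert self.valence in SCALE and self.arousal in SCALE

def consensus(annotations):
    """Median V-A label across annotators as a simple ground-truth fusion."""
    return (median(a.valence for a in annotations),
            median(a.arousal for a in annotations))

labels = [VAAnnotation("r1", 0.5, 0.25),
          VAAnnotation("r2", 0.5, 0.5),
          VAAnnotation("r3", 0.25, 0.25)]
print(consensus(labels))
```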


Revised Selected Papers of the AICOL 2013 International Workshops on AI Approaches to the Complexity of Legal Systems - Volume 8929 | 2013

Consumedia. Functionalities, Emotion Detection and Automation of Services in an ODR Platform

Josep Suquet; Pompeu Casanovas; Xavier Binefa; Oriol Martinez; Adria Ruiz; Jordi Ceballos

This paper presents a legal and technological approach to online mediation. It reviews the technologies usually employed in this field and presents the prototype of Consumedia, an online mediation platform, together with its functionalities and technological architecture. It also describes the technology implemented for recognizing emotions in the mediation room. Furthermore, it argues that an online mediation platform may automatically provide the parties with all the required documentation of the process, and it details the documents that such a platform should automatically provide to the disputants.


IEEE International Conference on Automatic Face & Gesture Recognition | 2017

Head Pose Estimation Based on 3-D Facial Landmarks Localization and Regression

Dmytro Derkach; Adria Ruiz; Federico M. Sukno

In this paper we present a system that is able to estimate head pose using only depth information from consumer RGB-D cameras such as Kinect 2. In contrast to most approaches addressing this problem, we do not rely on tracking and produce pose estimation in terms of pitch, yaw and roll angles using single depth frames as input. Our system combines three different methods for pose estimation: two of them are based on state-of-the-art landmark detection and the third one is a dictionary-based approach that is able to work in especially challenging scans where landmarks or mesh correspondences are too difficult to obtain. We evaluated our system on the SASE database, which consists of 30K frames from 50 subjects. We obtained average pose estimation errors between 5 and 8 degrees per angle, achieving the best performance in the FG2017 Head Pose Estimation Challenge. Full code of the developed system is available on-line.
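
For intuition on the landmark-based component, here is a generic sketch of recovering pitch, yaw and roll by rigidly aligning detected 3-D landmarks to a frontal template with the Kabsch algorithm. This is a textbook baseline under assumed angle conventions, not the paper's actual pipeline (which also includes the dictionary-based fallback for difficult scans):

```python
import numpy as np

# Generic landmark-based pose estimation: rigidly align detected 3-D
# landmarks to a frontal reference template (Kabsch algorithm) and read
# pitch/yaw/roll off the rotation matrix.

def kabsch(P, Q):
    """Rotation R minimizing ||R P - Q|| for centered 3xN point sets."""
    H = P @ Q.T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1, 1, d]) @ U.T

def euler_angles(R):
    """Pitch/yaw/roll (degrees) from a rotation matrix, x-y-z convention."""
    pitch = np.degrees(np.arctan2(-R[1, 2], R[2, 2]))
    yaw = np.degrees(np.arcsin(np.clip(R[0, 2], -1, 1)))
    roll = np.degrees(np.arctan2(-R[0, 1], R[0, 0]))
    return pitch, yaw, roll

# Toy example: rotate a reference landmark set by a known yaw and recover it.
ref = np.array([[0, -3, 3, 0], [2, 0, 0, -3], [1, 0, 0, 0]], float)  # 4 landmarks
theta = np.radians(20)
R_true = np.array([[np.cos(theta), 0, np.sin(theta)],
                   [0, 1, 0],
                   [-np.sin(theta), 0, np.cos(theta)]])
detected = R_true @ ref
R_est = kabsch(ref - ref.mean(1, keepdims=True),
               detected - detected.mean(1, keepdims=True))
print("recovered pitch/yaw/roll:", np.round(euler_angles(R_est), 1))
```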


IEEE International Conference on Automatic Face & Gesture Recognition | 2017

Fusion of Valence and Arousal Annotations through Dynamic Subjective Ordinal Modelling

Adria Ruiz; Oriol Martinez; Xavier Binefa; Federico M. Sukno

An essential issue when training and validating computer vision systems for affect analysis is how to obtain reliable ground-truth labels from a pool of subjective annotations. In this paper, we address this problem when labels are given on an ordinal scale and annotated items are structured as temporal sequences. This problem is of special importance in affective computing, where collected data is typically formed by videos of human interactions annotated according to the Valence and Arousal (V-A) dimensions. Moreover, recent works have shown that inter-observer agreement of V-A annotations can be considerably improved if these are given on a discrete ordinal scale. In this context, we propose a novel framework which explicitly introduces ordinal constraints to model the subjective perception of annotators. We also incorporate dynamic information to take into account temporal correlations between ground-truth labels. In our experiments over synthetic and real data with V-A annotations, we show that the proposed method outperforms alternative approaches that ignore either the ordinal structure of the labels or their temporal correlation.
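
As a much-simplified stand-in for the proposed framework, the sketch below fuses discrete ordinal annotations with a per-frame median (which respects ordinality, unlike averaging) followed by a temporal median filter; the paper's probabilistic model is substantially richer:

```python
import numpy as np

# Simplified ordinal fusion with temporal structure: per-frame fused label =
# median across annotators (ordinal-friendly, unlike the mean), then a median
# filter over time as a crude model of temporal correlation. Only illustrates
# why ordinality and dynamics both matter; not the paper's framework.

def fuse(annotations: np.ndarray, window: int = 3) -> np.ndarray:
    """annotations: (n_annotators, n_frames) of discrete ordinal levels."""
    frame_median = np.median(annotations, axis=0)
    pad = window // 2
    padded = np.pad(frame_median, pad, mode="edge")
    smoothed = [np.median(padded[t:t + window]) for t in range(len(frame_median))]
    return np.array(smoothed)

# Three annotators rating valence of an 8-frame sequence on levels -2..2.
ratings = np.array([[0, 0, 1, 2, 2, 1, 0, 0],
                    [0, 1, 1, 1, 2, 2, 1, 0],
                    [-1, 0, 2, 2, 2, 1, 0, -1]])
print(fuse(ratings))
```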


CCIA | 2012

Modelling Facial Expressions Dynamics with Gaussian Process Regression

Adria Ruiz; Xavier Binefa


International Conference on 3D Vision | 2018

3D Head Pose Estimation Using Tensor Decomposition and Non-linear Manifold Modeling

Dmytro Derkach; Adria Ruiz; Federico M. Sukno


arXiv | 2018

Learning Disentangled Representations with Reference-Based Variational Autoencoders

Adria Ruiz; Oriol Martinez; Xavier Binefa; Jakob Verbeek

Collaboration


Dive into Adria Ruiz's collaboration.

Top Co-Authors

Joost van de Weijer

Autonomous University of Barcelona

Maja Pantic

Imperial College London

Josep Suquet

Autonomous University of Barcelona


Pompeu Casanovas

Autonomous University of Barcelona

Ognjen Rudovic

Massachusetts Institute of Technology
