Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Albert Clapés is active.

Publication


Featured research published by Albert Clapés.


International Conference on Pattern Recognition | 2016

ChaLearn Joint Contest on Multimedia Challenges Beyond Visual Analysis: An overview

Hugo Jair Escalante; Víctor Ponce-López; Jun Wan; Michael Riegler; Baiyu Chen; Albert Clapés; Sergio Escalera; Isabelle Guyon; Xavier Baró; Pål Halvorsen; Henning Müller; Martha Larson

This paper provides an overview of the Joint Contest on Multimedia Challenges Beyond Visual Analysis. We organized an academic competition focused on four problems that require effective processing of multimodal information to be solved. Two tracks were devoted to gesture spotting and recognition from RGB-D video, two fundamental problems for human-computer interaction. Another track was devoted to a second round of the First Impressions challenge, whose goal was to develop methods to recognize personality traits from short video clips; for this second round we adopted a novel collaborative-competitive (i.e., coopetition) setting. The fourth track was dedicated to the problem of video recommendation for improving user experience. The challenge was open for about 45 days and received outstanding participation: almost 200 participants registered for the contest, and 20 teams submitted predictions in the final stage. The main goals of the challenge were fulfilled: the state of the art was advanced considerably in the four tracks, with novel solutions to the proposed problems (mostly relying on deep learning). However, further research is still required. The data for all four tracks will be made available so that researchers can keep making progress on these problems.


Computer Vision and Pattern Recognition | 2013

Tri-modal Person Re-identification with RGB, Depth and Thermal Features

Andreas Møgelmose; Chris Bahnsen; Thomas B. Moeslund; Albert Clapés; Sergio Escalera

Person re-identification is the task of recognizing people who have previously passed by a sensor. Previous work is mainly based on RGB data; in this work we present, for the first time, a system that combines RGB, depth, and thermal data for re-identification. First, we obtain particular features from each of the three modalities: from RGB data, we model color information from different regions of the body; from depth data, we compute different soft body biometrics; and from thermal data, we extract local structural information. The three information types are then combined in a joint classifier. The tri-modal system is evaluated on a new RGB-D-T dataset, showing successful results in re-identification scenarios.
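The fusion step the abstract sketches can be illustrated with a toy example. The snippet below is a minimal sketch that assumes per-modality descriptors have already been extracted; the concatenation-based (early) fusion, the feature dimensions, and the RBF SVM are illustrative choices, not the paper's exact combination scheme.

```python
# Minimal sketch of tri-modal feature fusion for re-identification,
# assuming per-modality descriptors are already computed (color histograms
# from RGB, soft biometrics from depth, local structure from thermal).
# Dimensions and the fusion scheme are illustrative, not from the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_samples = 200
rgb_feats = rng.random((n_samples, 64))      # e.g. body-part color histograms
depth_feats = rng.random((n_samples, 8))     # e.g. soft body biometrics
thermal_feats = rng.random((n_samples, 32))  # e.g. local structural descriptors
person_ids = rng.integers(0, 10, n_samples)  # identity labels

# Early fusion: concatenate the three modalities and train one joint classifier.
X = np.hstack([rgb_feats, depth_feats, thermal_feats])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, person_ids)
print(clf.predict(X[:5]))
```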


Pattern Recognition Letters | 2013

Multi-modal user identification and object recognition surveillance system

Albert Clapés; Miguel Reyes; Sergio Escalera

We propose an automatic surveillance system for user identification and object recognition based on multi-modal RGB-D data analysis. We model an RGB-D environment by learning a pixel-based background Gaussian distribution. User and object candidate regions are then detected and recognized using robust statistical approaches. The system robustly recognizes users and updates itself in an online way, identifying and detecting new actors in the scene. Moreover, segmented objects are described, matched, recognized, and updated online using viewpoint 3D descriptions, making the system robust to partial occlusions and local 3D viewpoint rotations. Finally, the system keeps a history of user-object assignments, which is especially useful for surveillance scenarios. The system has been evaluated on a novel dataset containing different indoor/outdoor scenarios, objects, and users, showing accurate recognition and better performance than standard state-of-the-art approaches.
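The background-learning step can be illustrated with a small sketch: a per-pixel Gaussian model over depth frames, assuming a stack of empty-scene frames is available. The 2.5-sigma foreground rule and the frame sizes are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch of a per-pixel Gaussian background model over depth frames,
# in the spirit of the RGB-D background learning step described above.
import numpy as np

def learn_background(depth_frames):
    """Fit a per-pixel Gaussian (mean, std) from a stack of empty-scene depth frames."""
    stack = np.stack(depth_frames, axis=0).astype(np.float32)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0) + 1e-6  # avoid division by zero on perfectly static pixels
    return mean, std

def foreground_mask(depth_frame, mean, std, k=2.5):
    """Mark pixels whose depth deviates more than k standard deviations from the model."""
    return np.abs(depth_frame.astype(np.float32) - mean) > k * std

# Toy usage with synthetic frames.
rng = np.random.default_rng(0)
background = [1000 + rng.normal(0, 5, (240, 320)) for _ in range(30)]
mean, std = learn_background(background)
frame = background[0].copy()
frame[100:150, 100:150] -= 400  # a region closer to the sensor, e.g. a person entering
mask = foreground_mask(frame, mean, std)
print(mask.sum(), "foreground pixels")
```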


European Conference on Computer Vision | 2016

ChaLearn LAP 2016: First Round Challenge on First Impressions - Dataset and Results

Víctor Ponce-López; Baiyu Chen; Marc Oliu; Ciprian A. Corneanu; Albert Clapés; Isabelle Guyon; Xavier Baró; Hugo Jair Escalante; Sergio Escalera

This paper summarizes the ChaLearn Looking at People 2016 First Impressions challenge data and the results obtained by the teams in the first round of the competition. The goal of the competition was to automatically evaluate five “apparent” personality traits (the so-called “Big Five”) from videos of subjects speaking in front of a camera, based on human judgments. In this edition of the ChaLearn challenge, a novel dataset consisting of 10,000 short clips from YouTube videos has been made publicly available. The ground truth for personality traits was obtained from workers of Amazon Mechanical Turk (AMT). To alleviate calibration problems between workers, we used pairwise comparisons between videos, and variable levels were reconstructed by fitting a Bradley-Terry-Luce model with maximum likelihood. The CodaLab open-source platform was used for submission of predictions and scoring. Over a period of two months, the competition attracted 84 participants grouped into several teams. Nine teams entered the final phase. Despite the difficulty of the task, the teams made great advances in this round of the challenge.
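To illustrate the reconstruction step, the sketch below fits a Bradley-Terry-Luce model by maximum likelihood from pairwise comparisons. The simple gradient-ascent fit and the toy data are illustrative, not the challenge's actual estimation code.

```python
# Minimal sketch of fitting a Bradley-Terry-Luce model by maximum likelihood
# from pairwise comparisons, the kind of reconstruction used to turn AMT
# pairwise judgments into per-video trait values.
import numpy as np

def fit_btl(pairs, n_items, n_iters=500, lr=0.1):
    """pairs: list of (winner, loser) index tuples. Returns latent scores."""
    scores = np.zeros(n_items)
    for _ in range(n_iters):
        grad = np.zeros(n_items)
        for w, l in pairs:
            # P(w beats l) under the BTL model: logistic in the score difference.
            p = 1.0 / (1.0 + np.exp(scores[l] - scores[w]))
            grad[w] += 1.0 - p
            grad[l] -= 1.0 - p
        scores += lr * grad
        scores -= scores.mean()  # scores are identifiable only up to a constant
    return scores

# Toy usage: item 2 wins most comparisons, item 0 loses most.
pairs = [(2, 0), (2, 1), (1, 0), (2, 0), (1, 0)]
print(fit_btl(pairs, n_items=3))
```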


Computers in Industry | 2013

Automatic digital biometry analysis based on depth maps

Miguel Reyes; Albert Clapés; José Ramírez; Juan R. Revilla; Sergio Escalera

The World Health Organization estimates that 80% of the world population is affected by back-related disorders during their lives. Current practices to analyze musculoskeletal disorders (MSDs) are expensive, subjective, and invasive. In this work, we propose a tool for static body posture analysis and dynamic range-of-movement estimation of the skeleton joints based on 3D anthropometric information from multi-modal data. Given a set of keypoints, RGB and depth data are aligned, the depth surface is reconstructed, keypoints are matched, and accurate measurements of posture and spinal curvature are computed. Given a set of joints, range-of-movement measurements are also obtained. Moreover, gesture recognition based on joint movements is performed to check the correctness of physical exercises. The system provides precise and reliable measurements, making it useful for posture re-education to prevent MSDs, as well as for tracking the posture evolution of patients in rehabilitation treatments.
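As a rough illustration of the range-of-movement estimation described above, the sketch below computes a joint angle from three 3D points and takes the range over a trial; the joint names and coordinate values are hypothetical, not taken from the paper.

```python
# Minimal sketch of joint-angle and range-of-movement computation from 3D
# skeleton joints. Joint names and coordinates are illustrative.
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by 3D points a-b-c, e.g. shoulder-elbow-wrist."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def range_of_movement(angle_sequence):
    """Range of movement as max minus min joint angle over a trial."""
    return max(angle_sequence) - min(angle_sequence)

# Toy usage: elbow flexion measured over a short sequence of frames.
shoulder, wrist = (0.0, 1.5, 2.0), (0.3, 1.0, 1.6)
elbows = [(0.1, 1.2, 1.9), (0.15, 1.15, 1.85), (0.2, 1.1, 1.8)]
angles = [joint_angle(shoulder, e, wrist) for e in elbows]
print(round(range_of_movement(angles), 2), "degrees")
```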


IEEE International Conference on Automatic Face and Gesture Recognition | 2017

A Survey on Deep Learning Based Approaches for Action and Gesture Recognition in Image Sequences

Maryam Asadi-Aghbolaghi; Albert Clapés; Marco Bellantonio; Hugo Jair Escalante; Víctor Ponce-López; Xavier Baró; Isabelle Guyon; Shohreh Kasaei; Sergio Escalera

Interest in action and gesture recognition has grown considerably in recent years. In this paper, we present a survey of current deep learning methodologies for action and gesture recognition in image sequences. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. We review the details of the proposed architectures, fusion strategies, main datasets, and competitions. We summarize and discuss the main works proposed so far, with particular interest in how they treat the temporal dimension of the data, discussing their main features and identifying opportunities and challenges for future research.


Articulated Motion and Deformable Objects | 2012

User identification and object recognition in clutter scenes based on RGB-depth analysis

Albert Clapés; Miguel Reyes; Sergio Escalera

We propose an automatic system for user identification and object recognition based on multi-modal RGB-D data analysis. We model an RGB-D environment by learning a pixel-based background Gaussian distribution. User and object candidate regions are then detected and recognized online using robust statistical approaches over RGB-D descriptions. Finally, the system keeps a history of user-object assignments, which is especially useful for surveillance scenarios. The system has been evaluated on a novel dataset containing different indoor/outdoor scenarios, objects, and users, showing accurate recognition and better performance than standard state-of-the-art approaches.


Machine Vision and Applications | 2018

Action detection fusing multiple Kinects and a WIMU: an application to in-home assistive technology for the elderly

Albert Clapés; Àlex Pardo; Oriol Pujol Vila; Sergio Escalera

We present a vision-inertial system that combines two RGB-D devices with a wearable inertial movement unit (WIMU) in order to detect activities of daily living. From multi-view videos, we extract dense trajectories enriched with a histogram-of-normals description computed from the depth cue and bag them into multi-view codebooks. During the later classification step, a multi-class support vector machine with an RBF-
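The bag-of-features pipeline this abstract outlines can be sketched as follows, assuming local descriptors (stand-ins for the dense-trajectory and histogram-of-normals features) have already been extracted. The descriptor dimensions, codebook size, and the plain RBF kernel are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of a bag-of-features pipeline: local descriptors are
# quantized against a learned codebook and the resulting histograms
# feed a multi-class SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_videos, descr_per_video, descr_dim, codebook_size = 40, 100, 96, 64

# Stand-in local descriptors for each video clip and their action labels.
videos = [rng.random((descr_per_video, descr_dim)) for _ in range(n_videos)]
labels = rng.integers(0, 4, n_videos)

# Learn the codebook on all training descriptors.
codebook = KMeans(n_clusters=codebook_size, n_init=4, random_state=0)
codebook.fit(np.vstack(videos))

def bag_of_words(descriptors):
    """L1-normalized histogram of codeword assignments for one clip."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook_size).astype(float)
    return hist / hist.sum()

X = np.array([bag_of_words(v) for v in videos])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```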


Gesture Recognition | 2017

Deep Learning for Action and Gesture Recognition in Image Sequences: A Survey

Maryam Asadi-Aghbolaghi; Albert Clapés; Marco Bellantonio; Hugo Jair Escalante; Víctor Ponce-López; Xavier Baró; Isabelle Guyon; Shohreh Kasaei; Sergio Escalera


Revised Selected and Invited Papers of the International Workshop on Advances in Depth Image Analysis and Applications - Volume 7854 | 2012

Posture Analysis and Range of Movement Estimation Using Depth Maps

Miguel Reyes; Albert Clapés; Sergio Escalera; José Ramírez; Juan R. Revilla


Collaboration


Dive into Albert Clapés's collaborations.

Top Co-Authors

Miguel Reyes (University of Barcelona)
Víctor Ponce-López (Open University of Catalonia)
Xavier Baró (Open University of Catalonia)
Isabelle Guyon (Université Paris-Saclay)
Hugo Jair Escalante (National Institute of Astrophysics)