Ciprian A. Corneanu
University of Barcelona
Publications
Featured research published by Ciprian A. Corneanu.
computer vision and pattern recognition | 2016
Sergio Escalera; Mercedes Torres Torres; Brais Martinez; Xavier Baró; Hugo Jair Escalante; Isabelle Guyon; Georgios Tzimiropoulos; Ciprian A. Corneanu; Marc Oliu; Mohammad Ali Bagheri; Michel F. Valstar
We present the 2016 ChaLearn Looking at People and Faces of the World Challenge and Workshop, which ran three competitions on the common theme of face analysis from still images. The first one, Looking at People, addressed age estimation, while the second and third competitions, Faces of the World, addressed accessory classification and smile and gender classification, respectively. We present two crowd-sourcing methodologies used to collect manual annotations. A custom-built application was used to collect and label data about the apparent age of people (as opposed to the real age). For the Faces of the World data, the citizen-science Zooniverse platform was used. This paper summarizes the three challenges and the data used, as well as the results achieved by the participants of the competitions. Details of the ChaLearn LAP FotW competitions can be found at http://gesture.chalearn.org.
european conference on computer vision | 2016
Víctor Ponce-López; Baiyu Chen; Marc Oliu; Ciprian A. Corneanu; Albert Clapés; Isabelle Guyon; Xavier Baró; Hugo Jair Escalante; Sergio Escalera
This paper summarizes the ChaLearn Looking at People 2016 First Impressions challenge data and the results obtained by the teams in the first round of the competition. The goal of the competition was to automatically evaluate five “apparent” personality traits (the so-called “Big Five”) from videos of subjects speaking in front of a camera, using human judgment as ground truth. In this edition of the ChaLearn challenge, a novel data set consisting of 10,000 short clips from YouTube videos has been made publicly available. The ground truth for personality traits was obtained from workers of Amazon Mechanical Turk (AMT). To alleviate calibration problems between workers, we used pairwise comparisons between videos, and trait levels were reconstructed by fitting a Bradley-Terry-Luce model with maximum likelihood. The CodaLab open source platform was used for submission of predictions and scoring. Over a period of two months, the competition attracted 84 participants, grouped into several teams. Nine teams entered the final phase. Despite the difficulty of the task, the teams made great advances in this round of the challenge.
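The pairwise-comparison step can be made concrete with a small sketch. The snippet below fits a Bradley-Terry-Luce model by maximum likelihood to recover a per-video score for one trait; the toy comparison list and variable names are illustrative assumptions, not the challenge's actual code.

```python
# Minimal sketch: reconstruct per-video trait scores from pairwise
# comparisons with a Bradley-Terry-Luce model fitted by maximum likelihood.
import numpy as np
from scipy.optimize import minimize

# Each tuple (i, j) means annotators judged video i higher than video j
# on a given trait (toy data for illustration only).
comparisons = [(0, 1), (0, 2), (1, 2), (2, 3), (0, 3)]
n_videos = 4

def neg_log_likelihood(scores):
    # BTL model: P(i preferred over j) = sigmoid(s_i - s_j)
    nll = 0.0
    for i, j in comparisons:
        nll += np.log1p(np.exp(-(scores[i] - scores[j])))
    return nll

# The solution is identifiable only up to an additive constant,
# so the fitted scores are re-centred afterwards.
res = minimize(neg_log_likelihood, np.zeros(n_videos), method="L-BFGS-B")
scores = res.x - res.x.mean()
print(scores)  # higher score = stronger apparent trait
```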
computer vision and pattern recognition | 2015
Ramin Irani; Kamal Nasrollahi; Marc Simón; Ciprian A. Corneanu; Sergio Escalera; Chris Bahnsen; Dennis H. Lundtoft; Thomas B. Moeslund; Tanja L. Pedersen; Maria-Louise Klitgaard; Laura Petrini
Pain is a vital sign of human health and its automatic detection can be of crucial importance in many different contexts, including medical scenarios. While most available computer vision techniques are based on RGB, in this paper we investigate the effect of combining RGB, depth, and thermal facial images for pain intensity level recognition. For this purpose, we extract energies released by facial pixels using a spatiotemporal filter. Experiments applying the multimodal approach to a group of 12 elderly people show that the proposed method successfully detects pain and distinguishes between three intensity levels in 82% of the analyzed frames, improving the results obtained with RGB data alone by more than 6%.
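As a rough illustration of this kind of pipeline, the sketch below computes a per-pixel temporal-energy map for each modality and averages the normalised maps. The simple frame-difference filter and the averaging fusion are assumptions made for illustration, not the authors' exact spatiotemporal filter.

```python
# Sketch of per-pixel spatiotemporal energy on a facial region, computed
# independently for RGB, depth and thermal streams and then fused.
import numpy as np

def temporal_energy(frames):
    """frames: (T, H, W) grayscale face crops; returns a per-pixel energy map."""
    diffs = np.diff(frames.astype(np.float32), axis=0)  # temporal derivative
    return (diffs ** 2).mean(axis=0)                    # mean squared change

def fused_energy(rgb, depth, thermal):
    # Late fusion by averaging the normalised energy maps of the three modalities.
    maps = [temporal_energy(m) for m in (rgb, depth, thermal)]
    maps = [m / (m.max() + 1e-8) for m in maps]
    return np.mean(maps, axis=0)
```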
IET Biometrics | 2016
Marc Simón; Ciprian A. Corneanu; Kamal Nasrollahi; Olegs Nikisins; Sergio Escalera; Yunlian Sun; Haiqing Li; Zhenan Sun; Thomas B. Moeslund; Modris Greitans
Reliable facial recognition systems are of crucial importance in various applications, from entertainment to security. Thanks to the deep-learning concepts introduced in the field, a significant improvement in the performance of unimodal facial recognition systems has been observed in recent years. At the same time, multimodal facial recognition is a promising approach. This study combines the latest successes in both directions by applying deep-learning convolutional neural networks (CNN) to the multimodal RGB, depth, and thermal (RGB-D-T) facial recognition problem, outperforming previously published results. Furthermore, a late fusion of the CNN-based recognition block with various hand-crafted features (local binary patterns, histograms of oriented gradients, Haar-like rectangular features, histograms of Gabor ordinal measures) is introduced, demonstrating even better recognition performance on a benchmark RGB-D-T database. The results obtained in this study show that classical engineered features and CNN-based features can complement each other for recognition purposes.
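The late-fusion idea can be sketched as a weighted sum of match scores from the two branches. In the snippet below, LBP stands in for the full set of hand-crafted descriptors and cnn_score is a hypothetical placeholder for the CNN branch; none of the names come from the paper.

```python
# Sketch of score-level late fusion between a CNN branch and a
# hand-crafted descriptor branch (LBP used here as a stand-in).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(face, points=8, radius=1):
    # Uniform LBP codes take values in 0..points+1, hence points+2 bins.
    lbp = local_binary_pattern(face, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def lbp_score(probe, gallery):
    # Similarity as negative chi-squared distance between LBP histograms.
    p, g = lbp_histogram(probe), lbp_histogram(gallery)
    return -0.5 * np.sum((p - g) ** 2 / (p + g + 1e-8))

def fused_score(probe, gallery, cnn_score, w=0.7):
    # cnn_score: hypothetical callable returning a CNN-based match score.
    return w * cnn_score(probe, gallery) + (1.0 - w) * lbp_score(probe, gallery)
```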
asian conference on computer vision | 2016
Marc Oliu; Ciprian A. Corneanu; László A. Jeni; Jeffrey F. Cohn; Takeo Kanade; Sergio Escalera
Recent methods for facial landmark location perform well on close-to-frontal faces but have problems generalising to large head rotations. In order to address this issue, we propose a second-order linear regression method that is both compact and robust against strong rotations. We provide a closed-form solution, making the method fast to train. We test the method's performance on two challenging datasets. The first has been used intensively by the community. The second has been specially generated from a well-known 3D face dataset; it is considerably more challenging, including a higher diversity of rotations and more samples than any other existing public dataset. The proposed method is compared against state-of-the-art approaches, including RCPR, CGPRT, LBF, CFSS, and GSDM. Results on both datasets show that the proposed method offers state-of-the-art performance on near-frontal view data, improves over state-of-the-art methods on more challenging head rotation problems, and keeps a compact model size.
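To make the closed-form, second-order flavour of such a regressor concrete, the sketch below fits a ridge-regularised least-squares regressor on features augmented with pairwise-product (second-order) terms. It shows why training reduces to a single linear solve, but it is not the authors' exact formulation.

```python
# Sketch: closed-form ridge regression on second-order feature expansions.
import numpy as np

def second_order_features(X):
    # X: (n_samples, d) image features; append all pairwise products.
    n, d = X.shape
    cross = np.einsum("ni,nj->nij", X, X).reshape(n, d * d)
    return np.hstack([X, cross])

def fit_closed_form(X, Y, lam=1e-3):
    # Solve W = argmin ||Phi W - Y||^2 + lam ||W||^2 in closed form.
    Phi = second_order_features(X)
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ Y)  # maps features to landmark coordinates
```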
Archive | 2017
Ikechukwu Ofodile; Kaustubh Kulkarni; Ciprian A. Corneanu; Sergio Escalera; Xavier Baró; Sylwia Julia Hyniewska; Jüri Allik; Gholamreza Anbarjafari
IEEE Transactions on Affective Computing | 2018
Kaustubh Kulkarni; Ciprian A. Corneanu; Ikechukwu Ofodile; Sergio Escalera; Xavier Baró; Sylwia Julia Hyniewska; Jüri Allik; Gholamreza Anbarjafari
IEEE Transactions on Affective Computing | 2018
Ciprian A. Corneanu; Fatemeh Noroozi; Dorota Kamińska; Tomasz Sapiński; Sergio Escalera; Gholamreza Anbarjafari
arXiv: Computer Vision and Pattern Recognition | 2018
Ciprian A. Corneanu; Meysam Madadi; Sergio Escalera
arXiv: Computers and Society | 2017
Sergio Alloza; Flavio Escribano; Sergi Delgado; Ciprian A. Corneanu; Sergio Escalera