

Publication


Featured research published by Javier Orozco.


Image and Vision Computing | 2013

Hierarchical On-line Appearance-Based Tracking for 3D head pose, eyebrows, lips, eyelids and irises

Javier Orozco; Ognjen Rudovic; Jordi Gonzàlez; Maja Pantic

In this paper, we propose an On-line Appearance-Based Tracker (OABT) for simultaneous tracking of the 3D head pose, lips, eyebrows, eyelids and irises in monocular video sequences. In contrast to previously proposed approaches, which deal with face and gaze tracking separately, our OABT tracks eyelids and irises as well as the 3D head pose and the facial actions of the lips and eyebrows. Furthermore, our approach learns changes in the appearance of the tracked target on-line, so the prior training of appearance models, which usually requires a large amount of labelled facial images, is avoided. Moreover, the proposed method is built upon a hierarchical combination of three OABTs, optimized using the Levenberg-Marquardt Algorithm (LMA) enhanced with line-search procedures. This, in turn, makes the method robust to changes in lighting conditions, occlusions and translucent textures, as evidenced by our experiments. Finally, the proposed method achieves head and facial action tracking in real time.
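The optimizer named in the abstract, Levenberg-Marquardt with a line search, can be illustrated in isolation. The sketch below is a generic minimal implementation of that technique on a toy curve-fitting problem, not the authors' tracker: the damping schedule, step sizes and test function are all illustrative assumptions.

```python
import numpy as np

def levenberg_marquardt(residual, p0, lam=1e-3, iters=50):
    """Generic LM minimizer of 0.5*||residual(p)||^2, with a simple
    backtracking line search on the damped Gauss-Newton step."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = residual(p)
        # Numerical Jacobian of the residual vector.
        eps = 1e-6
        J = np.stack([(residual(p + eps * e) - r) / eps
                      for e in np.eye(p.size)], axis=1)
        # Damped normal equations: (J^T J + lam*I) delta = -J^T r
        A = J.T @ J + lam * np.eye(p.size)
        delta = np.linalg.solve(A, -J.T @ r)
        # Backtracking line search: shrink the step until the cost drops.
        cost = 0.5 * r @ r
        t = 1.0
        while t > 1e-8 and 0.5 * np.sum(residual(p + t * delta) ** 2) >= cost:
            t *= 0.5
        if t <= 1e-8:
            lam *= 10.0          # no descent found: increase damping
        else:
            p = p + t * delta
            lam = max(lam * 0.5, 1e-12)
    return p

# Toy use: recover (a, b) of y = a * exp(-b * x) from noiseless samples.
x = np.linspace(0.0, 2.0, 30)
y = 2.0 * np.exp(-1.5 * x)
fit = levenberg_marquardt(lambda p: p[0] * np.exp(-p[1] * x) - y, [1.0, 1.0])
```

In the paper's setting the residual would be the appearance error between the warped face patch and the online appearance template, rather than this toy exponential.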


Image and Vision Computing | 2015

Empirical analysis of cascade deformable models for multi-view face detection

Javier Orozco; Brais Martinez; Maja Pantic

We present a multi-view face detector based on Cascade Deformable Part Models (CDPM). Over the last decade, there have been several attempts to extend the well-established Viola-Jones face detector to the problem of multi-view face detection. Recently, a tree-structured model for multi-view face detection was proposed; however, that method is primarily designed for facial landmark detection and provides face detection only as a by-product, and its effort to model inner facial structures through detailed facial landmark labelling resulted in a complex and suboptimal system for face detection. Instead, we adopt CDPMs, where the models are learned from partially labelled images using Latent Support Vector Machines (LSVM). Furthermore, LSVM is enhanced with data-mining and bootstrapping procedures to enrich the models during training, and a post-optimization procedure is derived to improve performance. This semi-supervised methodology allows us to build models from weakly labelled data while incrementally learning latent positive and negative samples. Our results show that the proposed model can deal with highly expressive and partially occluded faces while outperforming state-of-the-art face detectors by a large margin on challenging benchmarks such as the Face Detection Data Set and Benchmark (FDDB) and the Annotated Facial Landmarks in the Wild (AFLW) databases. In addition, we validate the accuracy of our models under large head-pose variation and facial occlusions on the Head Pose Image Database (HPID) and the Caltech Occluded Faces in the Wild (COFW) dataset, respectively. We also outline the suitability of our models to support facial landmark detection algorithms.
Highlights: We present a state-of-the-art multi-view face detector based on Cascade Deformable Part Models (CDPM). We propose to combine data-mining and bootstrapping to learn CDPM models from weakly labelled data. We report extensive validation of our models on the FDDB, AFLW, HPID and COFW databases. We show the suitability of our models for face alignment initialization and face detection under partial occlusions.
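The bootstrapping (hard-negative mining) loop mentioned in the abstract follows a common pattern: train, score a large pool of negatives, keep the highest-scoring false positives, and retrain. The sketch below shows only that loop, using a toy 2D dataset and a plain hinge-loss linear classifier in place of the paper's latent SVM over CDPM part filters; all data and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def with_bias(X):
    # Append a constant feature so the separating hyperplane has an offset.
    return np.hstack([X, np.ones((len(X), 1))])

def train_linear(X, y, epochs=100, lr=0.05, reg=1e-3):
    """Hinge-loss linear classifier trained by subgradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) < 1.0:
                w += lr * (yi * xi - reg * w)
            else:
                w -= lr * reg * w
    return w

# Toy "windows": labelled positives and negatives, plus a large unlabelled
# pool of background windows that are mostly easy negatives.
pos = rng.normal(2.0, 0.5, size=(20, 2))
neg = rng.normal(-2.0, 0.5, size=(20, 2))
pool = rng.normal(-2.0, 1.0, size=(500, 2))

X = with_bias(np.vstack([pos, neg]))
y = np.array([1.0] * 20 + [-1.0] * 20)
w = train_linear(X, y)

# Bootstrapping rounds: score the pool, add the highest-scoring
# (hardest) negatives with label -1, and retrain.
for _ in range(3):
    scores = with_bias(pool) @ w
    hard = with_bias(pool)[np.argsort(scores)[-10:]]
    X = np.vstack([X, hard])
    y = np.concatenate([y, -np.ones(10)])
    w = train_linear(X, y)

acc = np.mean(np.sign(X[:40] @ w) == y[:40])  # accuracy on the labelled set
```

The point of the loop is that the training set grows only with informative negatives, which is what keeps detector training tractable when the negative class is effectively unbounded.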


Journal of Real-Time Image Processing | 2007

Real time 3D face and facial feature tracking

Fadi Dornaika; Javier Orozco

Detecting and tracking human faces in video sequences is useful in a number of applications, such as gesture recognition and human-machine interaction. In this paper, we show that online appearance models (holistic approaches) can be used to simultaneously track the head, the lips, the eyebrows, and the eyelids in monocular video sequences, and, unlike previous approaches, that they can be applied to eyelid tracking as well. Neither color information nor intensity edges are used by the proposed approach. More precisely, we show how classical appearance-based trackers can be upgraded to deal with fast eyelid movements; the proposed eyelid tracking is made robust by avoiding eye feature extraction. Experiments on real videos show the usefulness of the proposed tracking schemes as well as their improvement over our previous approach.
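The "online appearance model" idea shared by these papers can be reduced to a small recursive update: keep a per-pixel Gaussian template and blend in each new observation with a forgetting factor, so no offline training set is needed. The sketch below is a minimal generic version of that idea, not the authors' exact model; the class name, forgetting factor and synthetic drift are assumptions for illustration.

```python
import numpy as np

class OnlineAppearanceModel:
    """Per-pixel Gaussian appearance template updated online with an
    exponential forgetting factor alpha."""
    def __init__(self, first_patch, alpha=0.1):
        self.alpha = alpha
        self.mean = np.asarray(first_patch, dtype=float).copy()
        self.var = np.ones_like(self.mean)

    def distance(self, patch):
        # Variance-normalized distance of a candidate patch to the template;
        # the tracker would minimize this over warp parameters.
        return float(np.mean((patch - self.mean) ** 2 / self.var))

    def update(self, patch):
        a = self.alpha
        diff = patch - self.mean
        self.mean += a * diff
        self.var = (1 - a) * self.var + a * diff ** 2

# Simulated stream: the appearance drifts slowly (e.g., a lighting change).
model = OnlineAppearanceModel(np.zeros(16))
for t in range(200):
    model.update(np.full(16, t / 100.0))  # patch brightness drifts upward

d_current = model.distance(np.full(16, 2.0))  # close to the latest appearance
d_initial = model.distance(np.zeros(16))      # the long-forgotten first frame
```

Because the template follows the drift, the current appearance scores a much smaller distance than the initial one, which is exactly what lets such trackers survive gradual illumination change without prior training.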


Machine Vision and Applications | 2009

Real-time gaze tracking with appearance-based models

Javier Orozco; F. Xavier Roca; Jordi Gonzàlez

Psychological evidence has emphasized the importance of eye gaze analysis in human-computer interaction and emotion interpretation. To this end, current image analysis algorithms detect eyelid and iris motion using colour information and edge detectors. However, eye movements are fast, which makes precise and robust tracking difficult. Instead, our method describes eyelid and iris movements as continuous variables using appearance-based tracking. This approach combines the strengths of adaptive appearance models, optimization methods and backtracking techniques: textures are learned on-line from near-frontal images, and illumination changes, occlusions and fast movements are handled. The method achieves real-time performance by coupling two appearance-based trackers with a backtracking algorithm, one for eyelid estimation and another for iris estimation. These contributions represent a significant advance towards a reliable description of gaze motion for HCI and expression analysis, where the strengths of complementary methodologies are combined to avoid relying on high-quality images, colour information, texture training, camera settings and other time-consuming processes.


International Conference on Pattern Recognition | 2008

Automatic face and facial features initialization for robust and accurate tracking

M. Al Haj; Javier Orozco; Jordi Gonzàlez; Juan José Villanueva

Face detection and tracking through image sequences are primary steps in many applications, such as video surveillance, human-computer interfaces, and expression analysis. Many existing techniques do not perform well due to pose variations, appearance changes, illumination changes, complex backgrounds, and inaccurate initialization. The last shortcoming, the difficulty of initializing motion regions, is a problem facing any tracker. In this paper, we present an automatic and robust face detection and tracking system for color image sequences. Face detection is done using skin-color segmentation and connected-components analysis. Facial features are then detected by active shape models and a face mesh is initialized. Finally, the tracking is done by active appearance models. Experimental detection and tracking results on a pose-varying face video are given.
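The first stage described in the abstract, skin-color segmentation followed by connected-components analysis, can be sketched in a few lines. The RGB skin rule and the tiny synthetic frame below are illustrative stand-ins, not the paper's actual thresholds; the connected-components step (BFS over a 4-connected binary mask) is the standard technique.

```python
from collections import deque

def skin_mask(image):
    """Rough illustrative RGB skin rule: skin-ish if R > G > B and R is
    reasonably bright with some red-blue contrast."""
    return [[1 if (r > g > b and r > 95 and r - b > 15) else 0
             for (r, g, b) in row] for row in image]

def largest_component(mask):
    """4-connected component labelling by BFS; returns the biggest blob,
    the natural face candidate after skin segmentation."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                comp, q = [], deque([(i, j)])
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    return best

# Tiny synthetic frame: a 2x2 "face" of skin tones on a blue background.
bg, skin = (10, 20, 200), (180, 120, 90)
image = [[bg] * 4 for _ in range(4)]
for y in range(1, 3):
    for x in range(1, 3):
        image[y][x] = skin
face = largest_component(skin_mask(image))
```

The bounding box of the largest component would then seed the active-shape-model fitting stage.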


IEEE International Conference on Automatic Face & Gesture Recognition | 2008

Confidence assessment on eyelid and eyebrow expression recognition

Javier Orozco; Ognjen Rudovic; Francesc Xavier Roca; Jordi Gonzàlez

In this paper, we address the recognition of subtle facial expressions by reasoning about classification confidence. Psychological evidence has established that the eyelids and eyebrows are significant for the recognition of subtle facial expressions and the early perception of human emotions. This early perception poses a harder problem, one that requires a confidence assessment for any provided solution; traditional score-based classifiers (e.g. k-NN and NN) are not able to produce confident estimates. Instead, we first present five confidence estimators and a confidence classification assessment for Case-Based Reasoning (CBR). Second, we improve expression retrieval from the database by learning the neighbourhood dimensions for the expected classification confidences. Third, we reuse previously classified expressions and the confidence assessment to improve the classification achieved by k-NN. Fourth, we improve the database's generalization to new subjects by learning thresholds that minimize misclassifications with low confidence, maximize correct classifications with high confidence and re-arrange misclassifications with high confidence. The proposed system is an effective contribution to both subtle expression recognition and CBR methodology. It achieves an average recognition rate of 97% ± 1% with a confidence of 96% ± 2% for expressiveness between 20% and 100%.
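To make the contrast with plain score-based k-NN concrete, the sketch below attaches the simplest possible confidence to a k-NN vote: the fraction of neighbours agreeing with the winning label. This is a generic stand-in for the paper's five CBR confidence estimators, and the "expression" features and labels are invented for illustration.

```python
import math

def knn_with_confidence(train, query, k=5):
    """k-NN vote plus a confidence score: the fraction of the k nearest
    neighbours that agree with the winning label. A decision with low
    confidence can then be deferred or re-examined instead of trusted."""
    neigh = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = {}
    for _, label in neigh:
        votes[label] = votes.get(label, 0) + 1
    label, count = max(votes.items(), key=lambda kv: kv[1])
    return label, count / k

# Toy eyelid/eyebrow feature vectors for two well-separated expressions.
train = [((0.0, 0.1), "blink"), ((0.1, 0.0), "blink"), ((0.0, 0.0), "blink"),
         ((1.0, 1.0), "raise"), ((0.9, 1.1), "raise"), ((1.1, 0.9), "raise")]

label, conf = knn_with_confidence(train, (0.05, 0.05), k=3)
```

A query near a class boundary would get a split vote and hence low confidence, which is the signal the paper exploits to accept, defer, or re-arrange classifications.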


Computer Analysis of Images and Patterns | 2007

Deterministic and stochastic methods for gaze tracking in real-time

Javier Orozco; F. Xavier Roca; Jordi Gonzàlez

Psychological evidence shows that eye gaze analysis is required for human-computer interaction endowed with emotion recognition capabilities. Existing proposals analyse eyelid and iris motion using colour information and edge detectors, but eye movements are fast and therefore difficult to track precisely and robustly. Instead, we propose to reduce the dimensionality of the image data by using multi-Gaussian modelling, and to estimate transitions by applying partial differences. The tracking system can handle illumination changes, low image resolution and occlusions while estimating eyelid and iris movements as continuous variables. The result is an accurate and robust 3D tracking system for eyelids and irises at standard image quality.


Iberian Conference on Pattern Recognition and Image Analysis | 2007

Hierarchical Eyelid and Face Tracking

Javier Orozco; Jordi Gonzàlez; Ignasi Rius; Francesc Xavier Roca

Most applications in Human-Computer Interaction (HCI) require extracting the movements of the user's face while avoiding high memory and time expenses. Moreover, HCI systems usually use low-cost cameras, while current face tracking techniques strongly depend on image resolution. In this paper, we tackle the problem of eyelid tracking by using Appearance-Based Models, thus achieving accurate estimates of eyelid movements while avoiding cues that require high-resolution faces, such as edge detectors or colour information. Consequently, we can track the fast and spontaneous movements of the eyelids, a very hard task due to the small resolution of the eye regions. Subsequently, we combine the results of eyelid tracking with the estimates of other facial features, such as the eyebrows and the lips. The result is a hierarchical tracking framework: we demonstrate that combining two appearance-based trackers yields accurate estimates of the eyelids, eyebrows, lips and also the 3D head pose, using low-cost video cameras and in real time. Our approach is therefore suitable for further facial-expression analysis.


Image and Vision Computing | 2017

Behavioral cues help predict impact of advertising on future sales

Gábor Szirtes; Javier Orozco; István Petrás; Dániel Szolgay; Ákos Utasi; Jeffrey F. Cohn


Image and Vision Computing | 2014

Corrigendum to "Hierarchical online appearance-based tracking for 3D head pose, eyebrows, lips, eyelids, and irises"

Javier Orozco; Ognjen Rudovic; J. Gonzalez; Maja Pantic

Collaboration


Dive into Javier Orozco's collaborations.

Top Co-Authors

Jordi Gonzàlez (Autonomous University of Barcelona)

Maja Pantic (Imperial College London)

F. Xavier Roca (Autonomous University of Barcelona)

Fadi Dornaika (University of the Basque Country)

Francesc Xavier Roca (Autonomous University of Barcelona)

Pau Baiget (Autonomous University of Barcelona)

Brais Martinez (University of Nottingham)

István Petrás (Hungarian Academy of Sciences)