José Miguel Buenaposada
King Juan Carlos University
Publications
Featured research published by José Miguel Buenaposada.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011
Juan Bekios-Calfa; José Miguel Buenaposada; Luis Baumela
Emerging applications of computer vision and pattern recognition in mobile devices and networked computing require the development of resource-limited algorithms. Linear classification techniques have an important role to play in this context, given their simplicity and low computational requirements. The paper reviews the state-of-the-art in gender classification, giving special attention to linear techniques and their relations. It discusses why linear techniques are not achieving competitive results and shows how to obtain state-of-the-art performance. Our work confirms previous results reporting very close classification accuracies for Support Vector Machines (SVMs) and boosting algorithms on single-database experiments. We have proven that Linear Discriminant Analysis on a linearly selected set of features also achieves similar accuracies. We perform cross-database experiments and prove that single-database experiments were optimistically biased. If enough training data and computational resources are available, SVM gender classifiers are superior to the rest. When computational resources are scarce but there is enough data, boosting or linear approaches are adequate. Finally, if training data and computational resources are very scarce, then the linear approach is the best choice.
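The two-class Linear Discriminant Analysis at the heart of the abstract above can be sketched in a few lines. This is a minimal, self-contained illustration on synthetic Gaussian data standing in for (hypothetical) face feature vectors; it is not the paper's pipeline, which additionally includes linear feature selection and cross-database evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for feature vectors of the two gender classes.
X0 = rng.normal(loc=-1.0, scale=1.0, size=(200, 5))
X1 = rng.normal(loc=+1.0, scale=1.0, size=(200, 5))

def fisher_lda(X0, X1):
    """Two-class Fisher LDA: project onto w = Sw^{-1} (mu1 - mu0)."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter (sum of per-class covariances).
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, mu1 - mu0)
    # Decision threshold halfway between the projected class means.
    b = -0.5 * (mu0 + mu1) @ w
    return w, b

w, b = fisher_lda(X0, X1)
# Classify: negative projection -> class 0, positive -> class 1.
acc = 0.5 * ((X0 @ w + b < 0).mean() + (X1 @ w + b > 0).mean())
```

On well-separated data like this, the single linear projection already classifies almost perfectly, which is the sense in which linear techniques are "resource-limited": training is one covariance solve and testing is one dot product per sample.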
Pattern Analysis and Applications | 2008
José Miguel Buenaposada; Enrique Muñoz; Luis Baumela
We introduce a system that processes a sequence of images of a front-facing human face and recognises a set of facial expressions. We use an efficient appearance-based face tracker to locate the face in the image sequence and estimate the deformation of its non-rigid components. The tracker works in real time. It is robust to strong illumination changes and factors out changes in appearance caused by illumination from changes due to face deformation. We adopt a model-based approach for facial expression recognition. In our model, an image of a face is represented by a point in a deformation space. The variability of the classes of images associated with facial expressions is represented by a set of samples which model a low-dimensional manifold in the space of deformations. We introduce a probabilistic procedure based on a nearest-neighbour approach to combine the information provided by the incoming image sequence with the prior information stored in the expression manifold to compute a posterior probability associated with a facial expression. In the experiments conducted we show that this system is able to work in an unconstrained environment with strong changes in illumination and face location. It achieves an 89% recognition rate in a set of 333 sequences from the Cohn–Kanade database.
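The nearest-neighbour posterior described above can be illustrated with a small numerical sketch. The class names, the 2-D "deformation space", and the Gaussian kernel on the nearest-sample distance are all illustrative assumptions; the paper's actual procedure also fuses evidence over the incoming image sequence.

```python
import numpy as np

def nn_posterior(x, class_samples, sigma=1.0):
    """Posterior over expression classes from stored manifold samples.

    class_samples: dict mapping class name -> (n_i, d) array of sample
    points in deformation space. A Gaussian kernel on the squared
    nearest-neighbour distance gives an unnormalised likelihood per class;
    normalising yields a posterior under a uniform prior.
    """
    scores = {}
    for name, S in class_samples.items():
        d2 = np.min(np.sum((S - x) ** 2, axis=1))
        scores[name] = np.exp(-d2 / (2.0 * sigma ** 2))
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

# Hypothetical two-class manifold samples in a 2-D deformation space.
samples = {
    "smile":    np.array([[0.0, 0.0], [0.1, 0.1]]),
    "surprise": np.array([[3.0, 3.0], [3.2, 2.9]]),
}
post = nn_posterior(np.array([0.05, 0.0]), samples)
```

A query point near the "smile" samples receives nearly all of the posterior mass, and the probabilities sum to one by construction.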
international conference on pattern recognition | 2002
José Miguel Buenaposada; Luis Baumela
In this paper we present a method to estimate in real time the position and orientation of a previously viewed planar patch. The algorithm is based on minimising the sum of squared differences between a previously stored image of the patch and the current image of it. First, a linear model for projectively tracking a planar patch is introduced; then, a method to compute the position and orientation of the patch in 3D space is presented. In the experiments conducted we show that this method is adequate for tracking not only planar objects, but also non-planar objects with limited out-of-plane rotations, as is the case in face tracking.
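The core of such sum-of-squared-differences tracking is Gauss-Newton minimisation of the residual between template and warped image. The sketch below restricts the warp to pure 2-D translation and uses an analytic Gaussian-blob image instead of bilinear interpolation, so it only illustrates the optimisation loop, not the paper's projective model.

```python
import numpy as np

def render(cx, cy, shape=(64, 64)):
    """Analytic Gaussian-blob image; stands in for a warped planar patch."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 50.0)

template = render(32.0, 32.0)
true_shift = np.array([2.5, -1.2])        # unknown to the tracker

p = np.zeros(2)                           # current translation estimate
for _ in range(20):
    # Image of the patch under the remaining misalignment (a real tracker
    # would warp the incoming frame with interpolation instead).
    current = render(32.0 + true_shift[0] - p[0],
                     32.0 + true_shift[1] - p[1])
    r = (current - template).ravel()      # SSD residual
    gy, gx = np.gradient(current)         # image gradients (axis 0 = y)
    J = np.stack([gx.ravel(), gy.ravel()], axis=1)  # d(residual)/dp
    dp, *_ = np.linalg.lstsq(J, -r, rcond=None)     # Gauss-Newton step
    p += dp
```

After a few iterations the estimate converges to the true displacement; the efficient formulations in the paper avoid recomputing the Jacobian at every frame, which is what makes real-time operation feasible.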
international conference on pattern recognition | 2006
José Miguel Buenaposada; Enrique Muñoz; Luis Baumela
We introduce a procedure to estimate high-level human face animation parameters from a marker-less image sequence in the presence of strong illumination changes. We use an efficient appearance-based tracker to stabilise face images and estimate illumination variation. This is achieved by using an appearance model composed of two independent linear subspaces modelling face deformation and illumination changes, respectively. The system is very simple to train and is able to re-animate a 3D face model in real time.
computer vision and pattern recognition | 2008
Pablo Márquez-Neila; J. Garcia Miro; José Miguel Buenaposada; Luis Baumela
We introduce a procedure for recognizing and locating planar landmarks for mobile robot navigation, based on the detection and recognition of a set of interest points. We use RANSAC for fitting a homography and locating the landmark. Our main contribution is the introduction of a geometrical constraint that reduces the number of RANSAC iterations by discarding minimal subsets. In the experiments conducted we conclude that this constraint improves RANSAC performance, reducing the number of iterations by about 35% and 75% for affine and projective cameras, respectively.
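The idea of discarding minimal subsets before fitting can be shown with a generic RANSAC loop. For brevity this sketch fits a 2-D line (two-point minimal subset) rather than a homography, and the pre-test used (rejecting nearly coincident point pairs) is a simple stand-in, not the paper's geometrical constraint.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 80 noisy inliers on y = 2x + 1, plus 40 uniform outliers.
x_in = rng.uniform(0, 10, 80)
pts = np.vstack([
    np.stack([x_in, 2 * x_in + 1 + rng.normal(0, 0.05, 80)], axis=1),
    rng.uniform(0, 20, (40, 2)),
])

def ransac_line(pts, iters=300, thresh=0.2, min_span=1.0):
    best, best_count = None, 0
    for _ in range(iters):
        i, j = rng.choice(len(pts), 2, replace=False)
        p, q = pts[i], pts[j]
        # Pre-test: reject the minimal subset outright if it is (nearly)
        # degenerate, skipping the fit and the full consensus count.
        if np.hypot(*(q - p)) < min_span:
            continue
        d = q - p
        n = np.array([-d[1], d[0]]) / np.hypot(*d)  # unit normal of the line
        c = -n @ p                                  # n @ x + c = 0 on the line
        count = int((np.abs(pts @ n + c) < thresh).sum())
        if count > best_count:
            best, best_count = (n, c), count
    return best, best_count

(n, c), n_inliers = ransac_line(pts)
```

Each rejected subset costs only a cheap distance check instead of a model fit plus a consensus count over all points, which is where the reported iteration savings come from.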
international conference on computer vision | 2005
Enrique Muñoz; José Miguel Buenaposada; Luis Baumela
Efficient incremental image alignment is a topic of renewed interest in the computer vision community because of its applications in model fitting and model-based object tracking. Successful compositional procedures for aligning 2D and 3D models under weak-perspective imaging conditions have already been proposed. Here we present a mixed compositional and additive algorithm which is applicable to the full projective camera case.
Image and Vision Computing | 2009
José Miguel Buenaposada; Enrique Muñoz; Luis Baumela
One of the major challenges that visual tracking algorithms face nowadays is being able to cope with changes in the appearance of the target during tracking. Linear subspace models have been extensively studied and are possibly the most popular way of modelling target appearance. We introduce a linear subspace representation in which the appearance of a face is represented by the addition of two approximately independent linear subspaces modelling facial expressions and illumination, respectively. This model is more compact than previous bilinear or multilinear approaches. The independence assumption notably simplifies system training. We only require two image sequences. One facial expression is subject to all possible illuminations in one sequence and the face adopts all facial expressions under one particular illumination in the other. This simple model enables us to train the system with no manual intervention. We also revisit the problem of efficiently fitting a linear subspace-based model to a target image and introduce an additive procedure for solving this problem. We prove that Matthews and Baker's inverse compositional approach makes a smoothness assumption on the subspace basis that is equivalent to Hager and Belhumeur's, which worsens convergence. Our approach differs from Hager and Belhumeur's additive and Matthews and Baker's compositional approaches in that we make no smoothness assumptions on the subspace basis. In the experiments conducted we show that the model introduced accurately represents the appearance variations caused by illumination changes and facial expressions. We also verify experimentally that our fitting procedure is more accurate and has better convergence rate than the other related approaches, albeit at the expense of a slight increase in computational cost. Our approach can be used for tracking a human face at standard video frame rates on an average personal computer.
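The additive two-subspace appearance model can be sketched numerically. Both bases below are random orthonormal placeholders (the real ones are learned from the two training sequences), and the sketch only shows the appearance solve at a fixed alignment; the paper's contribution is the efficient fitting of this model jointly with the warp during tracking.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 100                                  # pixels in a (toy) face image

# Hypothetical orthonormal bases standing in for the learned expression
# subspace (3 modes) and illumination subspace (2 modes).
mean = rng.normal(size=d)
B_expr = np.linalg.qr(rng.normal(size=(d, 3)))[0]
B_illu = np.linalg.qr(rng.normal(size=(d, 2)))[0]

# A target image generated by the additive model plus a little noise.
c_true = rng.normal(size=5)
B = np.hstack([B_expr, B_illu])
target = mean + B @ c_true + rng.normal(0, 1e-3, d)

# Because the model is additive, recovering both sets of appearance
# coefficients is a single linear least-squares solve.
c_hat, *_ = np.linalg.lstsq(B, target - mean, rcond=None)
expr_coeffs, illu_coeffs = c_hat[:3], c_hat[3:]
```

Because the two subspaces are approximately independent (near-orthogonal columns), the joint solve cleanly separates expression coefficients from illumination coefficients, which is what lets the tracker factor out lighting.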
international conference on computer vision | 2009
Enrique Muñoz; José Miguel Buenaposada; Luis Baumela
We present an efficient algorithm for fitting a morphable model to an image sequence. It is built on a projective geometry formulation of perspective projection, which results in a linear mapping from 3D shape to the projective plane, and a factorisation of this mapping into matrices that can be partially computed off-line. This algorithm can cope with full 360-degree object rotation and linear deformations. We validate our approach using synthetically generated and real sequences. Compared to a plain Lucas-Kanade implementation, we achieve a sixfold increase in performance for a rigid object and a twofold increase for a non-rigid face.
international conference on intelligent transportation systems | 2008
Luis Miguel Bergasa; José Miguel Buenaposada; Jesús Nuevo; Pedro Jiménez; Luis Baumela
This paper presents a system for evaluating the attention level of a driver using computer vision. The system detects head movements, facial expressions and the presence of visual cues that are known to reflect the user's level of alertness. The fusion of these data allows our system to detect both aspects of inattention (drowsiness and distraction), improving the reliability of the monitoring over previous approaches mainly based on detecting only one (drowsiness). Head movements are estimated by robustly tracking a 3D face model with RANSAC and POSIT methods. The 3D model is automatically initialized. Facial expressions are recognized with a model-based method, where different expressions are represented by a set of samples in a low-dimensional manifold in the space of deformations. The system is able to work with different drivers without specific training. The approach has been tested on video sequences recorded in a driving simulator and in real driving situations. The methods are computationally efficient and the system is able to run in real-time.
british machine vision conference | 2006
José Miguel Buenaposada; Enrique Muñoz; Luis Baumela
We introduce a subspace representation of face appearance which separates facial expressions from illumination variations. The appearance of a face is represented by the addition of two approximately independent linear subspaces modelling facial expressions and illumination, respectively. The independence assumption notably simplifies the training of the system. We only require two image sequences. One in which one facial expression is subject to all possible illuminations and another in which the face, under one illumination, performs all facial expressions. This simple model enables us to train the system with no manual intervention. We also introduce an efficient procedure for fitting this model, which can be used for tracking a human face in real-time.