Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Joan Alabort-i-Medina is active.

Publication


Featured research published by Joan Alabort-i-Medina.


Asian Conference on Computer Vision | 2012

Generic active appearance models revisited

Georgios Tzimiropoulos; Joan Alabort-i-Medina; Stefanos Zafeiriou; Maja Pantic

The proposed Active Orientation Models (AOMs) are generative models of facial shape and appearance. Their main differences from the well-known paradigm of Active Appearance Models (AAMs) are that (i) they use a different statistical model of appearance, (ii) they are accompanied by a robust algorithm for model fitting and parameter estimation and (iii), most importantly, they generalize well to unseen faces and variations. Their main similarity to AAMs is computational complexity. The project-out version of AOMs is as computationally efficient as the standard project-out inverse compositional algorithm, which is admittedly the fastest algorithm for fitting AAMs. We show that not only does the AOM generalize well to unseen identities, but it also outperforms state-of-the-art algorithms for the same task by a large margin. Finally, we support our claims by providing Matlab code for reproducing our experiments (http://ibug.doc.ic.ac.uk/resources).
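The "project-out" trick mentioned in the abstract can be illustrated in a few lines of linear algebra. The sketch below uses random stand-in data, not the paper's models: the appearance subspace is removed from the residual, so appearance variation no longer influences the shape update and the Jacobian and Hessian can be precomputed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical appearance basis: orthonormal columns spanning appearance variation.
n_pixels, n_appearance = 100, 5
A, _ = np.linalg.qr(rng.standard_normal((n_pixels, n_appearance)))

def project_out(r, A):
    """Remove the component of residual r that lies in span(A)."""
    return r - A @ (A.T @ r)

r = rng.standard_normal(n_pixels)        # stand-in image residual
r_perp = project_out(r, A)

# r_perp is orthogonal to every appearance component, so the shape
# parameters can be optimized without re-estimating appearance.
```

Because `A` is fixed after training, `A.T @ r` is the only per-iteration cost added by the projection, which is why project-out algorithms stay fast.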


IEEE Transactions on Image Processing | 2015

Feature-Based Lucas–Kanade and Active Appearance Models

Epameinondas Antonakos; Joan Alabort-i-Medina; Georgios Tzimiropoulos; Stefanos Zafeiriou

Lucas-Kanade and Active Appearance Models are among the most commonly used methods for image alignment and facial fitting, respectively. Both rely on nonlinear gradient descent, which is usually applied to intensity values. In this paper, we propose employing highly descriptive, densely sampled image features for both problems. We show that warping the multichannel dense feature image at each iteration is more beneficial than extracting features after warping the intensity image at each iteration. Motivated by this observation, we demonstrate robust and accurate alignment and fitting performance using a variety of powerful feature descriptors. Especially with Histogram of Oriented Gradients and Scale-Invariant Feature Transform features, our method significantly outperforms the current state-of-the-art results on in-the-wild databases.
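The strategy the abstract favours can be sketched as follows, with made-up data and a circular shift standing in for a real piecewise-affine warp: the dense multi-channel feature image is computed once, and every channel is then warped identically at each iteration, rather than re-extracting features from a freshly warped intensity image.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))

# Stand-in "dense features": simply the two image gradients per pixel.
gy, gx = np.gradient(image)
features = np.stack([gy, gx], axis=-1)            # (H, W, 2) feature image

def warp_channels(feat, dy, dx):
    """Apply the same (toy) translation warp to every feature channel."""
    return np.stack([np.roll(feat[..., c], (dy, dx), axis=(0, 1))
                     for c in range(feat.shape[-1])], axis=-1)

# One iteration of a hypothetical alignment loop: all channels move together.
warped = warp_channels(features, 1, 0)
```

The key property is that feature extraction happens once, outside the fitting loop; only the cheap warp is repeated per iteration.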


ACM Multimedia | 2014

Menpo: A Comprehensive Platform for Parametric Image Alignment and Visual Deformable Models

Joan Alabort-i-Medina; Epameinondas Antonakos; James Booth; Patrick Snape; Stefanos Zafeiriou

The Menpo Project, hosted at http://www.menpo.io, is a BSD-licensed software platform providing a complete and comprehensive solution for annotating, building, fitting and evaluating deformable visual models from image data. Menpo is a powerful and flexible cross-platform framework written in Python that works on Linux, OS X and Windows. Menpo has been designed to allow for easy adaptation of Lucas-Kanade (LK) parametric image alignment techniques, and goes a step further in providing all the necessary tools for building and fitting state-of-the-art deformable models such as Active Appearance Models (AAMs), Constrained Local Models (CLMs) and regression-based methods (such as the Supervised Descent Method (SDM)). These methods are extensively used for facial point localisation although they can be applied to many other deformable objects. Menpo makes it easy to understand and evaluate these complex algorithms, providing tools for visualisation, analysis, and performance assessment. A key challenge in building deformable models is data annotation; Menpo expedites this process by providing a simple web-based annotation tool hosted at http://www.landmarker.io. The Menpo Project is thoroughly documented and provides extensive examples for all of its features. We believe the project is ideal for researchers, practitioners and students alike.


Computer Vision and Pattern Recognition | 2015

Active Pictorial Structures

Epameinondas Antonakos; Joan Alabort-i-Medina; Stefanos Zafeiriou

In this paper we present a novel generative deformable model, motivated by Pictorial Structures (PS) and Active Appearance Models (AAMs), for object alignment in-the-wild. Inspired by the tree structure used in PS, the proposed Active Pictorial Structures (APS) model the appearance of the object using multiple graph-based pairwise normal distributions (a Gaussian Markov Random Field) between the patches extracted from the regions around adjacent landmarks. We show that this formulation is more accurate than using a single multivariate distribution (Principal Component Analysis) as is commonly done in the literature. APS employ a weighted inverse compositional Gauss-Newton optimization with fixed Jacobian and Hessian that achieves close to real-time performance and state-of-the-art results. Finally, APS have a spring-like graph-based deformation prior term that makes them robust to bad initializations. We present extensive experiments on the task of face alignment, showing that APS outperform current state-of-the-art methods. To the best of our knowledge, the proposed method is the first weighted inverse compositional technique that is simultaneously so accurate and efficient.


International Conference on Image Processing | 2014

HOG active appearance models

Epameinondas Antonakos; Joan Alabort-i-Medina; Georgios Tzimiropoulos; Stefanos Zafeiriou

We propose the combination of dense Histogram of Oriented Gradients (HOG) features with Active Appearance Models (AAMs). We employ the efficient inverse compositional optimization technique and show results for the task of face fitting. By taking advantage of the descriptive characteristics of HOG features, we build robust and accurate AAMs that generalize well to unseen faces with illumination, identity, pose and occlusion variations. Our experiments on challenging in-the-wild databases show that HOG AAMs significantly outperform current state-of-the-art results of discriminative methods trained on larger databases.
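A minimal HOG-flavoured sketch shows the kind of multi-channel feature image such models are built on: magnitude-weighted orientation histograms pooled over cells. This is a simplification (no block normalization or bin interpolation), and the cell size and bin count are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((32, 32))                      # made-up input image

gy, gx = np.gradient(image)
magnitude = np.hypot(gx, gy)
orientation = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned gradients in [0, pi)

cell, n_bins = 8, 9
H, W = image.shape
hog = np.zeros((H // cell, W // cell, n_bins))
for i in range(H // cell):
    for j in range(W // cell):
        sl = (slice(i * cell, (i + 1) * cell), slice(j * cell, (j + 1) * cell))
        bins = (orientation[sl] / np.pi * n_bins).astype(int) % n_bins
        for b, m in zip(bins.ravel(), magnitude[sl].ravel()):
            hog[i, j, b] += m   # magnitude-weighted orientation histogram
```

Each spatial cell becomes an n_bins-channel "pixel", so the AAM machinery that operates on intensity images can operate on this feature image unchanged.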


Computer Vision and Pattern Recognition | 2015

Unifying holistic and Parts-Based Deformable Model fitting

Joan Alabort-i-Medina; Stefanos Zafeiriou

The construction and fitting of deformable models that capture the degrees of freedom of articulated objects is one of the most popular areas of research in computer vision. Two of the most popular approaches are: Holistic Deformable Models (HDMs), which try to represent the object as a whole, and Parts-Based Deformable Models (PBDMs), which model object parts independently. Both models have been shown to have their own advantages. In this paper we try to marry the previous two approaches into a unified one that potentially combines the advantages of both. We do so by merging the well-established frameworks of Active Appearance Models (holistic) and Constrained Local Models (part-based) using a novel probabilistic formulation of the fitting problem. We show that our unified holistic and part-based formulation achieves state-of-the-art results in the problem of face alignment in-the-wild. Finally, in order to encourage open research and facilitate future comparisons with the proposed method, our code will be made publicly available to the research community.


IEEE Transactions on Information Forensics and Security | 2014

Active Orientation Models for Face Alignment In-the-Wild

Georgios Tzimiropoulos; Joan Alabort-i-Medina; Stefanos Zafeiriou; Maja Pantic

We present Active Orientation Models (AOMs), generative models of facial shape and appearance, which extend the well-known paradigm of Active Appearance Models (AAMs) to the case of generic face alignment under unconstrained conditions. Robustness stems from the fact that the proposed AOMs employ a statistically robust appearance model based on the principal components of image gradient orientations. We show that when incorporated within standard optimization frameworks for AAM learning and fitting, this kernel Principal Component Analysis results in robust algorithms for model fitting. At the same time, the resulting optimization problems maintain the same computational cost. As a result, the main similarity of AOMs with AAMs is the computational complexity. In particular, the project-out version of AOMs is as computationally efficient as the standard project-out inverse compositional algorithm, which is admittedly one of the fastest algorithms for fitting AAMs. We verify experimentally that: 1) AOMs generalize well to unseen variations and 2) outperform all other state-of-the-art AAM methods considered by a large margin. This performance improvement brings AOMs at least on par with other contemporary methods for face alignment. Finally, we provide MATLAB code at http://ibug.doc.ic.ac.uk/resources.
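The gradient-orientation representation behind this robustness can be sketched directly. In the toy example below (random stand-in image, not the paper's pipeline), each image is mapped to the normalized cosines and sines of its per-pixel gradient orientations before PCA; because every pixel contributes a unit-length pair, outliers such as occlusions cannot dominate the representation.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((16, 16))                  # made-up input image

gy, gx = np.gradient(image)
phi = np.arctan2(gy, gx)                      # per-pixel gradient orientation

# Each pixel maps to (cos phi, sin phi): a point on the unit circle, so its
# contribution is bounded regardless of the underlying gradient magnitude.
features = np.concatenate([np.cos(phi).ravel(), np.sin(phi).ravel()])
features /= np.sqrt(features.size / 2)        # whole vector has unit norm
```

PCA on these vectors (over a training set) would then give the robust appearance subspace; the bounded per-pixel contribution is what makes the resulting model tolerant of occlusion and illumination outliers.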


International Journal of Computer Vision | 2017

A Unified Framework for Compositional Fitting of Active Appearance Models

Joan Alabort-i-Medina; Stefanos Zafeiriou

Active appearance models (AAMs) are one of the most popular and well-established techniques for modeling deformable objects in computer vision. In this paper, we study the problem of fitting AAMs using compositional gradient descent (CGD) algorithms. We present a unified and complete view of these algorithms and classify them with respect to three main characteristics: (i) cost function; (ii) type of composition; and (iii) optimization method. Furthermore, we extend the previous view by: (a) proposing a novel Bayesian cost function that can be interpreted as a general probabilistic formulation of the well-known project-out loss; (b) introducing two new types of composition, asymmetric and bidirectional, that combine the gradients of both image and appearance model to derive better convergent and more robust CGD algorithms; and (c) providing new valuable insights into existing CGD algorithms by reinterpreting them as direct applications of the Schur complement and the Wiberg method. Finally, in order to encourage open research and facilitate future comparisons with our work, we make the implementation of the algorithms studied in this paper publicly available as part of the Menpo Project (http://www.menpo.org).
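The Schur-complement reinterpretation mentioned in point (c) can be demonstrated on a generic Gauss-Newton step. The matrices below are random stand-ins, not AAM Jacobians: in a joint step over shape parameters p and appearance parameters c, eliminating the appearance block via the Schur complement yields the same shape update as solving the full joint system, but through a much smaller linear solve.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_p, n_c = 50, 4, 3
Jp = rng.standard_normal((n_pix, n_p))   # stand-in shape Jacobian
Jc = rng.standard_normal((n_pix, n_c))   # stand-in appearance Jacobian
r = rng.standard_normal(n_pix)           # stand-in residual

# Full joint normal equations: [Hpp Hpc; Hcp Hcc] [dp; dc] = -[Jp^T r; Jc^T r]
J = np.hstack([Jp, Jc])
full = np.linalg.solve(J.T @ J, -J.T @ r)

# Schur complement: eliminate dc and solve only for the shape update dp.
Hpp, Hpc, Hcc = Jp.T @ Jp, Jp.T @ Jc, Jc.T @ Jc
S = Hpp - Hpc @ np.linalg.solve(Hcc, Hpc.T)
rhs = -(Jp.T @ r - Hpc @ np.linalg.solve(Hcc, Jc.T @ r))
dp = np.linalg.solve(S, rhs)             # equals the shape block of `full`
```

This is exactly the kind of block elimination that lets simultaneous-fitting algorithms run at the cost of shape-only updates.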


European Conference on Computer Vision | 2014

Statistically Learned Deformable Eye Models

Joan Alabort-i-Medina; Bingqing Qu; Stefanos Zafeiriou

In this paper we study the feasibility of using standard deformable model fitting techniques to accurately track the deformation and motion of the human eye. To this end, we propose two highly detailed shape annotation schemes (open and closed eyes), each with more than 30 landmark points, defined on high-resolution eye images. We build extremely detailed Active Appearance Models (AAMs), Constrained Local Models (CLMs) and Supervised Descent Method (SDM) models of the human eye and report preliminary experiments comparing the relative performance of these techniques on the problem of eye alignment.


Computer Vision and Pattern Recognition | 2016

Estimating Correspondences of Deformable Objects “In-the-Wild”

Yuxiang Zhou; Epameinondas Antonakos; Joan Alabort-i-Medina; Anastasios Roussos; Stefanos Zafeiriou

During the past few years we have witnessed the development of many methodologies for building and fitting Statistical Deformable Models (SDMs). The construction of accurate SDMs requires careful annotation of images with respect to a consistent set of landmarks. However, manually annotating a large number of images is a tedious, laborious and expensive procedure. Furthermore, for several deformable objects, e.g. the human body, it is difficult to define a consistent set of landmarks, and thus it becomes impossible to train humans to annotate a collection of images accurately. Nevertheless, for the majority of objects, it is possible to extract the shape by object segmentation or even by shape drawing. In this paper, we show for the first time, to the best of our knowledge, that it is possible to construct SDMs by putting object shapes in dense correspondence. Such SDMs can be built with much less effort for a large battery of objects. Additionally, we show that, by sampling the dense model, a part-based SDM can be learned with its parts in correspondence. We employ our framework to develop SDMs of human arms and legs, which can be used for the segmentation of the outline of the human body, as well as to provide better and more consistent annotations for body joints.
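One basic ingredient of putting shapes in correspondence can be sketched simply: resampling each object outline to a fixed number of equally spaced points by arc length, so that point k of every shape plays a comparable role. This is a simplified illustration with a made-up circular contour, not the paper's dense-correspondence method.

```python
import numpy as np

# Made-up closed outline: 200 points on the unit circle.
t = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
contour = np.column_stack([np.cos(t), np.sin(t)])

def resample_by_arclength(points, n_out):
    """Return n_out points equally spaced along a closed polyline."""
    closed = np.vstack([points, points[:1]])
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative arc length
    targets = np.linspace(0.0, s[-1], n_out, endpoint=False)
    x = np.interp(targets, s, closed[:, 0])
    y = np.interp(targets, s, closed[:, 1])
    return np.column_stack([x, y])

shape = resample_by_arclength(contour, 32)           # (32, 2) resampled shape
```

Once every training outline is resampled this way, standard statistical shape modelling (e.g. Procrustes alignment plus PCA) can be applied without any manual landmarks.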

Collaboration


Dive into Joan Alabort-i-Medina's collaborations.

Top Co-Authors

James Booth

Imperial College London


Maja Pantic

Imperial College London


Bingqing Qu

Imperial College London


Yuxiang Zhou

Imperial College London


Anastasios Roussos

National Technical University of Athens
