
Publication


Featured research published by Michael Villamizar.


Computer Vision and Pattern Recognition | 2010

Efficient rotation invariant object detection using boosted Random Ferns

Michael Villamizar; Francesc Moreno-Noguer; Juan Andrade-Cetto; Alberto Sanfeliu

We present a new approach to building an efficient and robust classifier for the two-class problem of localizing objects that may appear in the image under different orientations. In contrast to other works that address this problem using multiple classifiers, each specialized for a specific orientation, we propose a simple two-step approach with an estimation stage and a classification stage. The estimator yields an initial set of potential object poses that are then validated by the classifier. This methodology reduces the time complexity of the algorithm while keeping classification performance high. The classifier we use in both stages is based on a boosted combination of Random Ferns over local histograms of oriented gradients (HOGs), which we compute during a preprocessing step. Both the use of supervised learning and working in the gradient space make our approach robust as well as efficient at run-time. We show these properties through thorough testing on standard databases and on a new database of motorbikes under planar rotations, with challenging conditions such as cluttered backgrounds, changing illumination and partial occlusions.
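
As a rough illustration of the kind of machinery this abstract describes (a Random Fern evaluated over local HOG responses), the sketch below is not the authors' implementation; the grid size, the number of binary tests and the compute_hog_grid helper are assumptions made only for the example.

# Illustrative sketch (not the authors' code): a single Random Fern
# evaluated over a local Histogram-of-Oriented-Gradients (HOG) grid.
import numpy as np

rng = np.random.default_rng(0)

def compute_hog_grid(gray, cells=8, bins=8):
    """Quantize gradient orientations into a (cells x cells x bins) histogram grid."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ori = (np.arctan2(gy, gx) % np.pi) / np.pi * bins  # unsigned orientation bin
    h, w = gray.shape
    hog = np.zeros((cells, cells, bins))
    ch, cw = h // cells, w // cells
    for i in range(cells):
        for j in range(cells):
            m = mag[i*ch:(i+1)*ch, j*cw:(j+1)*cw].ravel()
            o = ori[i*ch:(i+1)*ch, j*cw:(j+1)*cw].astype(int).ravel() % bins
            hog[i, j] = np.bincount(o, weights=m, minlength=bins)
    return hog

class RandomFern:
    """A fern is a fixed set of binary tests; their outcomes index a posterior table."""
    def __init__(self, n_tests=6, cells=8, bins=8):
        # Each test compares two randomly chosen HOG bins inside the patch.
        self.pairs = rng.integers(0, cells * cells * bins, size=(n_tests, 2))
        self.posterior = np.full(2 ** n_tests, 0.5)  # P(object | fern code), to be learned

    def code(self, hog):
        v = hog.ravel()
        bits = (v[self.pairs[:, 0]] > v[self.pairs[:, 1]]).astype(int)
        return int(bits @ (1 << np.arange(len(bits))))

    def score(self, hog):
        return self.posterior[self.code(hog)]

# Usage: a boosted classifier would combine the scores of many such ferns.
patch = rng.random((64, 64))
fern = RandomFern()
print(fern.score(compute_hog_grid(patch)))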


Pattern Recognition | 2012

Bootstrapping Boosted Random Ferns for discriminative and efficient object classification

Michael Villamizar; Juan Andrade-Cetto; Alberto Sanfeliu; Francesc Moreno-Noguer

In this paper we show that the performance of binary classifiers based on Boosted Random Ferns can be significantly improved by appropriately bootstrapping the training step. This results in a classifier which is both highly discriminative and computationally efficient, and which is particularly suitable when only small sets of training images are available. During the learning process, a small set of labeled images is used to train the boosted binary classifier. The classifier is then evaluated over the training set, and warped versions of the classified and misclassified patches are progressively added to the positive and negative sample sets for a new re-training step. In this paper we thoroughly study the conditions under which this bootstrapping scheme improves the detection rates. In particular, we assess the quality of detection as a function of both the number of bootstrapping iterations and the size of the training set. We compare our algorithm against state-of-the-art approaches on several databases, including faces, cars, motorbikes and horses, and show remarkable improvements in detection rates with just a few bootstrapping steps.
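
The bootstrapping loop described above might be sketched roughly as follows; train, classify and warp are hypothetical stand-ins rather than the authors' code, and the policy of warping every positive each round is just one possible choice.

# Minimal sketch of the bootstrapping idea (hypothetical helpers, illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def warp(patch):
    """Small random perturbation as a stand-in for a geometric warp of the patch."""
    flipped = patch[:, ::-1] if rng.random() < 0.5 else patch
    return np.roll(flipped, rng.integers(-2, 3), axis=1)

def bootstrap_training(train, classify, positives, negatives, iterations=3):
    """Retrain, then grow the sample sets with warped copies of evaluated patches."""
    clf = train(positives, negatives)
    for _ in range(iterations):
        # Warped copies of the positives reinforce the positive set.
        positives += [warp(p) for p in positives]
        # Negatives that the current classifier mistakes for objects become
        # hard negatives (also warped) for the next re-training round.
        negatives += [warp(n) for n in negatives if classify(clf, n)]
        clf = train(positives, negatives)
    return clf

# Toy usage with dummy data and stand-in learners:
dummy_pos = [rng.random((32, 32)) for _ in range(4)]
dummy_neg = [rng.random((32, 32)) for _ in range(4)]
bootstrap_training(train=lambda p, n: None,
                   classify=lambda clf, x: x.mean() > 0.5,
                   positives=dummy_pos, negatives=dummy_neg)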


International Conference on Pattern Recognition | 2006

Computation of Rotation Local Invariant Features using the Integral Image for Real Time Object Detection

Michael Villamizar; Alberto Sanfeliu; Juan Andrade-Cetto

We present a framework for object detection that is invariant to object translation, scale, rotation and, to some degree, occlusion, achieving high detection rates at 14 fps in color images and 30 fps in gray-scale images. Our approach is based on boosting over a set of simple local features. In contrast to previous approaches, and to cope efficiently with orientation changes, we propose the use of non-Gaussian steerable filters, together with a new orientation integral image for the speedy computation of local orientation.
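
A minimal sketch of the orientation integral image idea, assuming plain finite-difference gradients in place of the paper's non-Gaussian steerable filters; the bin count and coordinate conventions are choices made for the example.

# One integral image per orientation bin, so the orientation histogram of any
# rectangle (and hence its dominant orientation) is obtained in constant time.
import numpy as np

def orientation_integral_images(gray, bins=8):
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ori = ((np.arctan2(gy, gx) % np.pi) / np.pi * bins).astype(int) % bins
    ii = np.zeros((bins,) + gray.shape)
    for b in range(bins):
        ii[b] = np.cumsum(np.cumsum(np.where(ori == b, mag, 0.0), axis=0), axis=1)
    return ii

def box_histogram(ii, top, left, bottom, right):
    """Per-bin gradient-magnitude sum over rows (top, bottom] and columns (left, right],
    using four table lookups per bin."""
    return ii[:, bottom, right] - ii[:, top, right] - ii[:, bottom, left] + ii[:, top, left]

rng = np.random.default_rng(2)
img = rng.random((120, 160))
ii = orientation_integral_images(img)
hist = box_histogram(ii, 10, 20, 50, 60)
print("dominant orientation bin:", int(np.argmax(hist)))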


British Machine Vision Conference | 2011

Efficient 3D Object Detection using Multiple Pose-Specific Classifiers

Michael Villamizar; Helmut Grabner; Juan Andrade-Cetto; Alberto Sanfeliu; Luc Van Gool; Francesc Moreno-Noguer

We propose an efficient method for object localization and 3D pose estimation. A two-step approach is used. In the first step, a pose estimator is evaluated on the input images in order to estimate potential object locations and poses. These candidates are then validated, in the second step, by the corresponding pose-specific classifier. The result is a detection approach that avoids the inherent and expensive cost of testing the complete set of specific classifiers over the entire image. A further speedup is achieved by feature sharing: features are computed only once and are then used to evaluate the pose estimator and all the specific classifiers. The proposed method has been validated on two public datasets for the problem of detecting cars from several views. The results show that the proposed approach yields high detection rates while remaining efficient.
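
A hedged sketch of the two-step scheme: pose_estimator, pose_classifiers and shared_features are hypothetical names and the threshold is arbitrary; the only point is that each candidate is validated by its single pose-specific classifier instead of running every classifier everywhere.

def detect(shared_features, pose_estimator, pose_classifiers, threshold=0.5):
    detections = []
    # Step 1: a cheap estimator scans the shared features once and proposes candidates.
    for location, pose in pose_estimator(shared_features):
        # Step 2: validate each candidate with its pose-specific classifier only.
        score = pose_classifiers[pose](shared_features, location)
        if score > threshold:
            detections.append((location, pose, score))
    return detections

# Toy usage with stand-in callables:
features = None
estimator = lambda feats: [((10, 20), "frontal"), ((40, 8), "side")]
classifiers = {"frontal": lambda feats, loc: 0.8, "side": lambda feats, loc: 0.3}
print(detect(features, estimator, classifiers))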


International Conference on Robotics and Automation | 2009

Combining color-based invariant gradient detector with HoG descriptors for robust image detection in scenes under cast shadows

Michael Villamizar; Jorge Scandaliaris; Alberto Sanfeliu; Juan Andrade-Cetto

In this work we present a robust detection method for outdoor scenes under cast shadows that uses color-based invariant gradients in combination with HoG local features. The method achieves good detection rates in urban scene classification and person detection, outperforming traditional methods based on intensity gradient detectors, which are robust to illumination variations but not to cast shadows. The color-based invariant gradients emphasize material changes and extract relevant, invariant features for detection while neglecting shadow contours. This makes it possible to train and detect objects and scenes independently of scene illumination, cast shadows and self-shadows. Moreover, it allows training in one shot, that is, when the robot visits the scene for the first time.
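
As an illustration of one way a color-based invariant gradient can be computed, the sketch below uses normalized chromaticity as a stand-in; the paper's exact invariant is not specified in this abstract, so this is an assumption made for the example.

# Gradients on a photometric-invariant color channel de-emphasize shadow edges
# compared with plain intensity gradients (illustrative only).
import numpy as np

def invariant_gradient(rgb, eps=1e-6):
    r, g, b = rgb[..., 0].astype(float), rgb[..., 1].astype(float), rgb[..., 2].astype(float)
    s = r + g + b + eps
    # Normalized chromaticity is largely insensitive to a multiplicative change in
    # brightness, which is roughly what a cast shadow produces.
    chroma_r, chroma_g = r / s, g / s
    gyr, gxr = np.gradient(chroma_r)
    gyg, gxg = np.gradient(chroma_g)
    return np.hypot(gxr, gyr) + np.hypot(gxg, gyg)  # combined invariant gradient magnitude

rng = np.random.default_rng(3)
fake_image = rng.integers(0, 255, size=(60, 80, 3))
print(invariant_gradient(fake_image).shape)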


International Conference on Pattern Recognition | 2010

Shared Random Ferns for Efficient Detection of Multiple Categories

Michael Villamizar; Francesc Moreno-Noguer; Juan Andrade-Cetto; Alberto Sanfeliu

We propose a new algorithm for detecting multiple object categories that exploits the fact that different categories may share common features, but with different geometric distributions. This yields an efficient detector which, in contrast to existing approaches, considerably reduces the computation cost at runtime, where the feature computation step is traditionally the most expensive. More specifically, at the learning stage we compute common features by applying the same Random Ferns over the Histograms of Oriented Gradients of the training images. We then apply a boosting step to build discriminative weak classifiers and learn the specific geometric distribution of the Random Ferns for each class. At runtime, only a few Random Ferns have to be densely computed over each input image, and their geometric distribution allows the detection to be performed. The proposed method has been validated on public datasets, achieving competitive detection results comparable with those of state-of-the-art methods that use class-specific features.
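
A rough sketch of the feature-sharing idea, with made-up sizes and randomly initialized weights standing in for the boosted ones; only the structure (shared fern codes, per-class offsets and per-code weights) follows the description above.

import numpy as np

rng = np.random.default_rng(4)
n_ferns, n_codes, n_classes = 4, 16, 3

# Shared fern responses: the code of each fern at every cell of a (rows x cols) grid,
# computed once per image regardless of how many categories are detected.
rows, cols = 20, 30
fern_codes = rng.integers(0, n_codes, size=(n_ferns, rows, cols))

# Per-class geometry: for each class and fern, an offset inside the detection window
# and a per-code weight (learned by boosting in the paper; random here).
offsets = rng.integers(0, 5, size=(n_classes, n_ferns, 2))
weights = rng.normal(size=(n_classes, n_ferns, n_codes))

def window_score(c, top, left):
    """Score one detection window for class c using only the shared fern codes."""
    score = 0.0
    for f in range(n_ferns):
        dy, dx = offsets[c, f]
        code = fern_codes[f, top + dy, left + dx]
        score += weights[c, f, code]
    return score

print([round(window_score(c, 5, 7), 3) for c in range(n_classes)])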


Robot and Human Interactive Communication | 2013

Proactive behavior of an autonomous mobile robot for human-assisted learning

Anaís Garrell; Michael Villamizar; Francesc Moreno-Noguer; Alberto Sanfeliu

During the last decade, there has been growing interest in making autonomous social robots able to interact with people. However, there are still many open issues regarding the social capabilities that robots should have in order to perform these interactions more naturally. In this paper we present the results of several experiments conducted at the Barcelona Robot Lab on the campus of the Universitat Politècnica de Catalunya, in which we analyzed different important aspects of the interaction between a mobile robot and untrained human volunteers. First, we proposed different robot behaviors to approach a person and create an engagement with him or her. In order to perform this task we provided the robot with several perception and action capabilities, such as detecting people, planning an approach and verbally communicating its intention to initiate a conversation. Once the initial engagement has been created, we developed further communication skills to let people assist the robot and improve its face recognition system. After this assisted and online learning stage, the robot is able to detect people under severely changing conditions, which in turn improves both the number and the quality of subsequent human-robot interactions.


International Conference on Robotics and Automation | 2014

Fast online learning and detection of natural landmarks for autonomous aerial robots

Michael Villamizar; Alberto Sanfeliu; Francesc Moreno-Noguer

We present a method for efficiently detecting natural landmarks that can handle scenes with highly repetitive patterns and targets that progressively change their appearance. At the core of our approach lies a Random Ferns classifier that models the posterior probabilities of different views of the target using multiple independent Ferns, each containing features at particular positions of the target. A Shannon entropy measure is used to pick the most informative locations of these features. This minimizes the number of Ferns while maximizing their discriminative power, thus allowing robust detections at low computational cost. In addition, after an offline initialization, new incoming detections are used to update the posterior probabilities on the fly and to adapt to appearance changes caused, for instance, by shadows or occluding objects. All these virtues make the proposed detector appropriate for UAV navigation. Besides synthetic experiments that demonstrate the theoretical benefits of our formulation, we show applications for detecting landing areas in regions with highly repetitive patterns, and specific objects under cast shadows or sudden camera motions.
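
Two of the ingredients mentioned above, entropy-based selection of feature locations and online posterior updates, might look roughly like this; the helpers and the learning rate are assumptions for the sketch, not the authors' implementation.

import numpy as np

def shannon_entropy(counts):
    p = counts / max(counts.sum(), 1e-12)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def most_informative_locations(code_counts_per_location, k=3):
    """code_counts_per_location: (n_locations, n_codes) histogram of fern codes.
    Locations whose codes are spread out (high entropy) discriminate best."""
    entropies = [shannon_entropy(c) for c in code_counts_per_location]
    return np.argsort(entropies)[::-1][:k]

def online_update(posterior, code, is_object, lr=0.05):
    """Nudge P(object | code) toward the label of a newly accepted detection."""
    posterior[code] += lr * ((1.0 if is_object else 0.0) - posterior[code])
    return posterior

rng = np.random.default_rng(5)
counts = rng.integers(0, 50, size=(10, 16))   # 10 candidate locations, 16 fern codes
print("selected locations:", most_informative_locations(counts))
posterior = np.full(16, 0.5)
print(online_update(posterior, code=3, is_object=True)[3])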


Multimodal Interaction in Image and Video Applications | 2013

Robot interactive learning through human assistance

Gonzalo Ferrer; Anaís Garrell; Michael Villamizar; Ivan Huerta; Alberto Sanfeliu

This chapter presents some real-life examples using the interactive multimodal framework, in which the robot is capable of learning through human assistance. The basic idea is to use human feedback to improve the learning behavior of the robot when it deals with human beings. We show two different prototypes that have been developed for the following topics: interactive motion learning for a robot companion, and online face learning using robot vision. The objective of the first prototype is to learn how a robot should approach a pedestrian who is heading to a destination, minimizing the disturbance to the person's expected path. The objectives of the second prototype are twofold: first, the robot invites a person to approach it and initiate a dialogue, and second, the robot learns the face of the person invited for the dialogue. The two prototypes have been tested in real-life conditions and the results are very promising.


Iberoamerican Congress on Pattern Recognition | 2006

Orientation invariant features for multiclass object recognition

Michael Villamizar; Alberto Sanfeliu; Juan Andrade-Cetto

We present a framework for object recognition based on simple scale- and orientation-invariant local features that, when combined with a hierarchical multiclass boosting mechanism, produce robust classifiers for a limited number of object classes in cluttered backgrounds. The system extracts the most relevant features from a set of training samples and builds a hierarchical structure from them, focusing on features common to all trained objects while also searching for features particular to a reduced number of classes and, eventually, to each object class. To allow for efficient rotation invariance, we propose the use of non-Gaussian steerable filters, together with an Orientation Integral Image for the speedy computation of local orientation.
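
A minimal sketch of the hierarchical evaluation order implied above (features shared by all classes first, then group-specific, then class-specific); the stage structure, weights and rejection threshold are assumptions made for illustration.

def hierarchical_classify(x, shared, groups, per_class, reject=0.0):
    """Each stage is a list of (feature_fn, weight) pairs; a window must pass the
    shared stage before any group- or class-specific features are computed for it."""
    shared_score = sum(w * f(x) for f, w in shared)
    if shared_score < reject:
        return None                      # rejected as background early and cheaply
    best_class, best_score = None, reject
    for group_feats, classes in groups:
        if sum(w * f(x) for f, w in group_feats) < reject:
            continue                     # a whole group of classes pruned at once
        for cls in classes:
            score = shared_score + sum(w * f(x) for f, w in per_class[cls])
            if score > best_score:
                best_class, best_score = cls, score
    return best_class

# Toy usage with stand-in feature functions:
x = 1.0
shared = [(lambda v: v, 1.0)]
groups = [([(lambda v: v, 1.0)], ["car", "motorbike"])]
per_class = {"car": [(lambda v: v, 0.5)], "motorbike": [(lambda v: -v, 0.5)]}
print(hierarchical_classify(x, shared, groups, per_class))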

Collaboration


Dive into Michael Villamizar's collaborations.

Top Co-Authors

Alberto Sanfeliu (Spanish National Research Council)
Francesc Moreno-Noguer (Spanish National Research Council)
Juan Andrade-Cetto (Spanish National Research Council)
Anaís Garrell (Spanish National Research Council)
Jorge Scandaliaris (Spanish National Research Council)
Antonio Rubio (Spanish National Research Council)
Arnau Ramisa (Spanish National Research Council)
Fernando Herrero (Spanish National Research Council)
Juan Andrade Cetto (Autonomous University of Barcelona)
Luis Ferraz (Pompeu Fabra University)