
Publications

Featured research published by Xavier P. Burgos-Artizzu.


International Conference on Computer Vision | 2013

Robust Face Landmark Estimation under Occlusion

Xavier P. Burgos-Artizzu; Pietro Perona; Piotr Dollár

Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose and expression, the use of accessories such as sunglasses and hats, and interactions with objects (e.g., food). Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR), which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). We further explore RCPR's performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. RCPR reduces failure cases by half on all four datasets, while detecting face occlusions with 80/40% precision/recall.
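The cascade behind RCPR can be pictured as a sequence of additive shape updates. A minimal sketch under stated assumptions: `image_features` and the per-stage regressors below are hypothetical stand-ins (the actual method uses shape-indexed features and occlusion-aware regressors), so this only illustrates the cascade structure, not the paper's learned model:

```python
import numpy as np

def cascaded_pose_regression(image_features, init_shape, regressors):
    """Cascaded shape regression sketch: each stage samples features
    relative to the current landmark estimate and predicts an additive
    update to it."""
    shape = init_shape.copy()
    for stage in regressors:
        feats = image_features(shape)  # features indexed by current shape
        shape = shape + stage(feats)   # additive shape update
    return shape
```

With an idealized "oracle" feature (the residual to the true shape) and stages that each correct half the residual, the cascade converges geometrically to the target landmarks.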


Pattern Recognition | 2008

A vision-based method for weeds identification through the Bayesian decision theory

Alberto Tellaeche; Xavier P. Burgos-Artizzu; Gonzalo Pajares; Angela Ribeiro

One of the objectives of precision agriculture is to minimize the volume of herbicides applied to fields through the use of site-specific weed management systems. This paper outlines an automatic computer-vision-based approach for the detection and differential spraying of weeds in corn crops. The method is designed for post-emergence herbicide applications, where weeds and corn plants display similar spectral signatures and the weeds appear irregularly distributed within the crop field. The proposed strategy involves two processes: image segmentation and decision making. Image segmentation combines suitable basic image processing techniques to extract cells from the image as the low-level units. Each cell is described by two area-based attributes measuring the relationship between crop and weeds. Decision making determines which cells are to be sprayed based on the computation of a posterior probability under a Bayesian framework. The a priori probability in this framework is computed taking into account the dynamics of the physical system (the tractor) in which the method is embedded. The main contributions of this paper are: (1) the combination of the image segmentation and decision making processes and (2) the decision making itself, which exploits previous knowledge mapped as the a priori probability. The performance of the method is illustrated by comparative analysis against some existing strategies.
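The Bayesian decision step described above reduces to comparing a posterior probability against a spray threshold. A minimal sketch, assuming per-cell likelihoods and a prior are already available (the numbers and function names here are illustrative placeholders, not the paper's actual models):

```python
def spray_posterior(lik_weed, lik_clean, prior_weed):
    """Posterior probability that a cell contains weeds, via Bayes' rule:
    P(weed | obs) = P(obs | weed) P(weed) / P(obs)."""
    prior_clean = 1.0 - prior_weed
    evidence = lik_weed * prior_weed + lik_clean * prior_clean
    return lik_weed * prior_weed / evidence

def spray_decision(lik_weed, lik_clean, prior_weed, threshold=0.5):
    """Spray a cell when the weed posterior exceeds the threshold."""
    return spray_posterior(lik_weed, lik_clean, prior_weed) > threshold
```

In the paper the prior is not fixed but derived from the tractor's dynamics; here it is simply a parameter.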


Soft Computing | 2011

A computer vision approach for weeds identification through Support Vector Machines

Alberto Tellaeche; Gonzalo Pajares; Xavier P. Burgos-Artizzu; Angela Ribeiro

This paper outlines an automatic computer vision system for the identification of Avena sterilis, a weed growing in cereal crops. The final goal is to reduce the quantity of herbicide sprayed, an important and necessary step for precision agriculture: only areas where the presence of weeds is significant should be sprayed. The main problems in identifying this kind of weed are its spectral signature, which is similar to that of the crop, and its irregular distribution in the field. A new strategy has been designed involving two processes: image segmentation and decision making. The image segmentation combines suitable basic image processing techniques to extract cells from the image as the low-level units. Each cell is described by two area-based attributes measuring the relationship between crop and weeds. The decision making is based on Support Vector Machines and determines whether a cell must be sprayed. The main findings of this paper are reflected in the combination of the segmentation and Support Vector Machine decision processes. Another important contribution of this approach is the system's minimal memory and computational requirements compared with previous works. The performance of the method is illustrated by comparative analysis against some existing strategies.


Proceedings of the National Academy of Sciences of the United States of America | 2015

Automated measurement of mouse social behaviors using depth sensing, video tracking, and machine learning

Weizhe Hong; Ann Kennedy; Xavier P. Burgos-Artizzu; Moriel Zelikowsky; Santiago G. Navonne; Pietro Perona; David J. Anderson

Significance: Accurate, quantitative measurement of animal social behaviors is critical, not only for researchers in academic institutions studying social behavior and related mental disorders, but also for pharmaceutical companies developing drugs to treat disorders affecting social interactions, such as autism and schizophrenia. Here we describe an integrated hardware and software system that combines video tracking, depth-sensing technology, machine vision, and machine learning to automatically detect and score innate social behaviors, such as aggression, mating, and social investigation, between mice in a home-cage environment. This technology has the potential to have a transformative impact on the study of the neural mechanisms underlying social behavior and the development of new drug therapies for psychiatric disorders in humans.

A lack of automated, quantitative, and accurate assessment of social behaviors in mammalian animal models has limited progress toward understanding mechanisms underlying social interactions and their disorders, such as autism. Here we present a new integrated hardware and software system that combines video tracking, depth sensing, and machine learning for automatic detection and quantification of social behaviors involving close and dynamic interactions between two mice of different coat colors in their home cage. We designed a hardware setup that integrates traditional video cameras with a depth camera, developed computer vision tools to extract the body “pose” of individual animals in a social context, and used a supervised learning algorithm to classify several well-described social behaviors. We validated the robustness of the automated classifiers in various experimental settings and used them to examine how genetic background, such as that of Black and Tan Brachyury (BTBR) mice (a previously reported autism model), influences social behavior. Our integrated approach allows for rapid, automated measurement of social behaviors across diverse experimental designs and also affords the ability to develop new, objective behavioral metrics.


Image and Vision Computing | 2010

Analysis of natural images processing for the extraction of agricultural elements

Xavier P. Burgos-Artizzu; Angela Ribeiro; Alberto Tellaeche; Gonzalo Pajares; César Fernández-Quintanilla

This work presents several computer-vision-based methods for estimating the percentages of weed, crop and soil present in an image showing a region of interest of the crop field. The visual detection of weed, crop and soil is an arduous task due to physical similarities between weeds and crop and to the natural, and therefore complex, environments encountered (with non-controlled illumination). The image processing was divided into three stages, each extracting a different agricultural element: (1) segmentation of vegetation against non-vegetation (soil), (2) crop row elimination (crop) and (3) weed extraction (weed). For each stage, different and interchangeable methods are proposed, each using a series of input parameters whose values can be changed to further refine the processing. A genetic algorithm was then used to find the best combination of methods and parameter values for different sets of images. The whole system was tested on several images from different years and fields, resulting in an average correlation coefficient with real data (biomass) of 84%, with up to 96% correlation using the best methods on winter cereal images and up to 84% on maize images. Moreover, the methods' low computational complexity opens the possibility, as future work, of adapting them to real-time processing.
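The genetic-algorithm search over method parameters can be illustrated with a toy real-valued GA. This is a generic sketch, not the paper's implementation; the fitness function stands in for the correlation-with-biomass score being maximized:

```python
import random

def genetic_search(fitness, bounds, pop_size=20, generations=30, mut_rate=0.2):
    """Toy real-valued genetic algorithm: keep the fitter half of the
    population, breed children by averaging two parents (crossover),
    and occasionally resample one gene (mutation)."""
    rng = random.Random(0)  # fixed seed for reproducibility
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]  # crossover
            if rng.random() < mut_rate:                    # mutation
                i = rng.randrange(len(child))
                lo, hi = bounds[i]
                child[i] = rng.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

For example, maximizing `-(p[0] - 0.5)**2` over a single parameter in [0, 1] drives the best individual toward 0.5.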


Sensors | 2011

Mapping Wide Row Crops with Video Sequences Acquired from a Tractor Moving at Treatment Speed

Nadir Sainz-Costa; Angela Ribeiro; Xavier P. Burgos-Artizzu; María Guijarro; Gonzalo Pajares

This paper presents a mapping method for wide row crop fields. The resulting map shows the crop rows and weeds present in the inter-row spacing. Because field videos are acquired with a camera mounted on top of an agricultural vehicle, a method for image sequence stabilization was needed and consequently designed and developed. The proposed stabilization method uses the centers of some crop rows in the image sequence as features to be tracked, which compensates for the lateral movement (sway) of the camera and leaves the pitch unchanged. A region of interest is selected using the tracked features, and an inverse perspective technique transforms the selected region into a bird’s-eye view that is centered on the image and that enables map generation. The algorithm developed has been tested on several video sequences of different fields recorded at different times and under different lighting conditions, with good initial results. Indeed, lateral displacements of up to 66% of the inter-row spacing were suppressed through the stabilization process, and crop rows in the resulting maps appear straight.
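The sway compensation described above (shifting each frame so that tracked crop-row centers return to their reference positions) reduces to a per-frame horizontal offset. A minimal sketch with hypothetical row-center inputs; the actual system tracks the centers in the image sequence and also applies an inverse perspective transform, which is omitted here:

```python
def sway_offsets(tracked_centers, reference_centers):
    """Per-frame lateral shift that re-aligns the tracked crop-row
    centers with their reference positions (sway compensation)."""
    offsets = []
    for frame in tracked_centers:
        diffs = [ref - cur for ref, cur in zip(reference_centers, frame)]
        offsets.append(sum(diffs) / len(diffs))  # mean horizontal correction
    return offsets
```

Each returned offset is the amount to translate that frame horizontally before cropping the region of interest.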


British Machine Vision Conference | 2013

Merging pose estimates across space and time

Xavier P. Burgos-Artizzu; David Hall; Pietro Perona; Piotr Dollár

Numerous ‘non-maximum suppression’ (NMS) post-processing schemes have been proposed for merging multiple independent object detections. We propose a generalization of NMS beyond bounding boxes to merge multiple pose estimates in a single frame. The final estimates are centroids rather than medoids as in standard NMS, thus being more accurate than any of the individual candidates. Using the same mathematical framework, we extend our approach to the multi-frame setting, merging multiple independent pose estimates across space and time and outputting both the number and pose of the objects present in a scene. Our approach sidesteps many of the inherent challenges associated with full tracking (e.g. objects entering/leaving a scene, extended periods of occlusion, etc.). We show its versatility by applying it to two distinct state-of-the-art pose estimation algorithms in three domains: human bodies, faces and mice. Our approach improves both detection accuracy (by helping disambiguate correspondences) as well as pose estimation quality and is computationally efficient.
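The key difference from standard NMS (outputting centroids rather than medoids) can be sketched for simple 2-D point estimates. A real pose is a full set of keypoints and the paper's merging is more principled; the greedy clustering below is a simplified stand-in:

```python
import numpy as np

def merge_pose_estimates(poses, scores, radius):
    """Greedy NMS-style merge: cluster candidates around high-scoring
    seeds, then return each cluster's score-weighted centroid instead
    of only its top-scoring member."""
    order = np.argsort(scores)[::-1]           # highest score first
    used = np.zeros(len(poses), dtype=bool)
    merged = []
    for i in order:
        if used[i]:
            continue
        dist = np.linalg.norm(poses - poses[i], axis=1)
        cluster = (~used) & (dist < radius)    # unclaimed nearby candidates
        used |= cluster
        w = scores[cluster]
        merged.append((poses[cluster] * w[:, None]).sum(0) / w.sum())
    return np.array(merged)
```

Averaging within a cluster is what makes the final estimate more accurate than any individual candidate.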


European Conference on Computer Vision | 2014

Detecting Social Actions of Fruit Flies

Eyrun Eyjolfsdottir; Steve Branson; Xavier P. Burgos-Artizzu; Eric D Hoopfer; Jonathan Schor; David J. Anderson; Pietro Perona

We describe a system that tracks pairs of fruit flies and automatically detects and classifies their actions. We compare experimentally the value of a frame-level feature representation with the more elaborate notion of ‘bout features’ that capture the structure within actions. Similarly, we compare a simple sliding window classifier architecture with a more sophisticated structured output architecture, and find that window based detectors outperform the much slower structured counterparts, and approach human performance. In addition we test our top performing detector on the CRIM13 mouse dataset, finding that it matches the performance of the best published method. Our Fly-vs-Fly dataset contains 22 hours of video showing pairs of fruit flies engaging in 10 social interactions in three different contexts; it is fully annotated by experts, and published with articulated pose trajectory features.
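A sliding-window action detector of the kind compared above can be sketched in a few lines. The per-frame scores, window length and threshold here are illustrative placeholders, not the paper's learned classifier:

```python
def sliding_window_detect(frame_scores, window, threshold):
    """Sliding-window detector sketch: average per-frame action scores
    over each window; windows above threshold become detections
    (start frame, end frame, mean score)."""
    hits = []
    for t in range(len(frame_scores) - window + 1):
        mean = sum(frame_scores[t:t + window]) / window
        if mean > threshold:
            hits.append((t, t + window, mean))
    return hits
```

In practice overlapping above-threshold windows would be merged (e.g. by non-maximum suppression) into single action bouts.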


Sensors | 2011

An image segmentation based on a genetic algorithm for determining soil coverage by crop residues

Angela Ribeiro; Juan Ranz; Xavier P. Burgos-Artizzu; Gonzalo Pajares; L. Navarrete

Determination of the soil coverage by crop residues after ploughing is a fundamental element of Conservation Agriculture. This paper presents the application of genetic algorithms employed during the fine tuning of the segmentation process of a digital image with the aim of automatically quantifying the residue coverage. In other words, the objective is to achieve a segmentation that would permit the discrimination of the texture of the residue so that the output of the segmentation process is a binary image in which residue zones are isolated from the rest. The RGB images used come from a sample of images in which sections of terrain were photographed with a conventional camera positioned in zenith orientation atop a tripod. The images were taken outdoors under uncontrolled lighting conditions. Up to 92% similarity was achieved between the images obtained by the segmentation process proposed in this paper and the templates made by an elaborate manual tracing process. In addition to the proposed segmentation procedure and the fine tuning procedure that was developed, a global quantification of the soil coverage by residues for the sampled area was achieved that differed by only 0.85% from the quantification obtained using template images. Moreover, the proposed method does not depend on the type of residue present in the image. The study was conducted at the experimental farm “El Encín” in Alcalá de Henares (Madrid, Spain).


International Conference on Mechatronics and Machine Vision in Practice | 2008

Real-time Image Processing for the Guidance of a Small Agricultural Field Inspection Vehicle

Richard Gottschalk; Xavier P. Burgos-Artizzu; Angela Ribeiro; Gonzalo Pajares; Álvaro Sánchez-Miralles

This paper describes the image processing for an autonomous field inspection vehicle that uses a webcam for the navigation between two rows of agricultural crop. The relative vehicle position is calculated by segmentation and classification of the images and by then extracting geometrical lines corresponding to the crop rows. An autonomous vehicle was built and tested successfully in an agricultural environment.
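Extracting a geometric line for a crop row, as described above, can be approximated by a least-squares fit over pixels classified as crop. A minimal sketch with hypothetical pixel coordinates (fitting x as a function of y, since crop rows run roughly along the image's vertical axis):

```python
import numpy as np

def crop_row_line(xs, ys):
    """Least-squares fit of the line x = a*y + b through pixels
    classified as crop; the fitted line approximates the crop row
    used for steering."""
    a, b = np.polyfit(ys, xs, 1)
    return a, b
```

The vehicle's lateral offset and heading error can then be read off from the fitted line's intercept and slope.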

Collaboration

Dive into Xavier P. Burgos-Artizzu's collaborations.

Top Co-Authors

Angela Ribeiro
Spanish National Research Council

Gonzalo Pajares
Complutense University of Madrid

Alberto Tellaeche
National University of Distance Education

Pietro Perona
California Institute of Technology

María Guijarro
Complutense University of Madrid

David J. Anderson
California Institute of Technology

I. Riomoros
Complutense University of Madrid

Pedro Javier Herrera
Complutense University of Madrid