
Publication


Featured research published by José Jesús Guerrero.


International Conference on Robotics and Automation | 2007

SURF features for efficient robot localization with omnidirectional images

Ana C. Murillo; José Jesús Guerrero; Carlos Sagüés

Many robotic applications work with visual reference maps, which usually consist of sets of more or less organized images. In these applications, there is a compromise between the density of reference data stored and the capacity to later identify the robot's location when it is not exactly at the same position as one of the reference views. Here we propose the use of a recently developed feature, SURF, to improve the performance of appearance-based localization methods that perform image retrieval in large data sets. This feature is integrated with a vision-based algorithm that allows both topological and metric localization using omnidirectional images in a hierarchical approach. It uses pyramidal kernels for the topological localization and three-view geometric constraints for the metric one. Experiments with several omnidirectional image sets are shown, including comparisons with other typically used features (radial lines and SIFT). The advantages of this approach are demonstrated, showing SURF to be the best compromise between efficiency and accuracy.
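The retrieval step underlying this kind of appearance-based localization can be sketched as follows. This is a minimal illustration, not the authors' implementation: random vectors stand in for SURF descriptors (SURF itself requires OpenCV's contrib modules), and the query is matched to each reference image with a brute-force nearest-neighbour ratio test.

```python
import numpy as np

def match_ratio_test(desc_q, desc_r, ratio=0.8):
    """Count query descriptors whose nearest neighbour in the reference
    set passes Lowe's ratio test (brute-force, L2 distance)."""
    good = 0
    for d in desc_q:
        dists = np.linalg.norm(desc_r - d, axis=1)
        i1, i2 = np.argsort(dists)[:2]
        if dists[i1] < ratio * dists[i2]:
            good += 1
    return good

def localize(desc_q, reference_maps):
    """Return the index of the reference image with the most good matches."""
    scores = [match_ratio_test(desc_q, r) for r in reference_maps]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
ref_a = rng.normal(size=(50, 64))        # 64-D descriptors, e.g. SURF-64
ref_b = rng.normal(size=(50, 64))
query = ref_b[:30] + 0.01 * rng.normal(size=(30, 64))  # noisy view of place B
print(localize(query, [ref_a, ref_b]))   # → 1
```

The ratio test is what makes retrieval robust when many reference images look alike: a match counts only if it is clearly better than the runner-up.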


EURASIP Journal on Advances in Signal Processing | 2006

Robust estimator for non-line-of-sight error mitigation in indoor localization

Roberto Casas; Álvaro Marco; José Jesús Guerrero; Jorge L. Falcó

Indoor localization systems are undoubtedly of interest in many application fields. Like outdoor systems, they suffer from non-line-of-sight (NLOS) errors which hinder their robustness and accuracy. Though many ad hoc techniques have been developed to deal with this problem, unfortunately most of them are not applicable indoors due to the high variability of the environment (movement of furniture and of people, etc.). In this paper, we describe the use of robust regression techniques to detect and reject NLOS measurements in location estimation using multilateration. We show how the least-median-of-squares technique can be used to overcome the effects of NLOS errors, even in environments with little infrastructure, and validate its suitability by comparing it to other methods described in the literature. We obtained remarkable results when using it in a real indoor positioning system that works with Bluetooth and ultrasound (BLUPS), even when nearly half the measurements suffered from NLOS or other coarse errors.
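The least-median-of-squares idea can be sketched in a few lines: solve the position from every minimal subset of three range measurements and keep the solution whose squared residuals have the smallest median, so that up to half the ranges may be corrupted by NLOS without biasing the result. This is a generic 2-D sketch, not the BLUPS system's code.

```python
import itertools
import numpy as np

def trilaterate(anchors, dists):
    """Closed-form 2-D position from three anchor/distance pairs, by
    linearising the range equations against the first anchor."""
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - dists[1:] ** 2
         - np.sum(a0 ** 2) + d0 ** 2)
    return np.linalg.solve(A, b)

def lmeds_position(anchors, dists):
    """Least median of squares: try every minimal 3-anchor subset and keep
    the position whose squared range residuals have the smallest median."""
    best, best_med = None, np.inf
    for idx in itertools.combinations(range(len(anchors)), 3):
        try:
            p = trilaterate(anchors[list(idx)], dists[list(idx)])
        except np.linalg.LinAlgError:
            continue  # collinear subset, no unique solution
        res = np.linalg.norm(anchors - p, axis=1) - dists
        med = np.median(res ** 2)
        if med < best_med:
            best, best_med = p, med
    return best

anchors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 12.]])
true_p = np.array([3., 4.])
dists = np.linalg.norm(anchors - true_p, axis=1)
dists[3] += 6.0                          # one NLOS range: grossly too long
print(lmeds_position(anchors, dists))    # ≈ [3. 4.]
```

An ordinary least-squares fit over all five ranges would be dragged off by the corrupted measurement; the median criterion simply ignores it.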


Robotics and Autonomous Systems | 2008

Visual door detection integrating appearance and shape cues

Ana C. Murillo; Jana Kosecka; José Jesús Guerrero; Carlos Sagüés

An important component of human-robot interaction is the capability to associate semantic concepts with encountered locations and objects. This functionality is essential for visually guided navigation as well as location and object recognition. In this paper we focus on the problem of door detection using visual information only. Doors are frequently encountered in structured man-made environments and function as transitions between different places. We adopt a probabilistic approach to door detection, defining the likelihood of various features for generated door hypotheses. Differing from previous approaches, the proposed model captures both the shape and the appearance of the door. This is learned from a few training examples, exploiting additional assumptions about the structure of indoor environments. After the learning stage, we describe a hypothesis generation process and several approaches to evaluate the likelihood of the generated hypotheses. The approach is tested on numerous examples of indoor environments and shows good performance provided that the door extent in the images is sufficiently large and well supported by low-level feature measurements.
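The core idea of scoring each hypothesis by both cue families can be illustrated with a toy example. The numbers and hypothesis names below are made up, and the independence assumption (combined score as a product of a shape term and an appearance term) is a common simplification, not necessarily the paper's exact model.

```python
# Hypothetical per-hypothesis likelihoods: a shape term (how rectangular
# and door-sized the hypothesis is) and an appearance term (how door-like
# its texture/colour is). Assuming independence, combine by product.
hypotheses = {
    "full door":      {"shape": 0.9, "appearance": 0.8},
    "window":         {"shape": 0.7, "appearance": 0.2},
    "poster on wall": {"shape": 0.4, "appearance": 0.6},
}

def combined_likelihood(h):
    return h["shape"] * h["appearance"]

best = max(hypotheses, key=lambda k: combined_likelihood(hypotheses[k]))
print(best)  # → full door
```

Using either cue alone would misfire here: the window scores well on shape and the poster on appearance, but only the door scores well on both.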


Robotics and Autonomous Systems | 2007

From omnidirectional images to hierarchical localization

Ana C. Murillo; Carlos Sagüés; José Jesús Guerrero; Toon Goedemé; Tinne Tuytelaars; L. Van Gool

We propose a new vision-based method for global robot localization using an omnidirectional camera. Topological and metric localization information is combined in an efficient, hierarchical process, with each step being more complex and accurate than the previous one but evaluating fewer images. This allows us to work with large reference image sets in a reasonable amount of time. Simultaneously, thanks to the use of 1D three-view geometry, accurate metric localization can be achieved based on just a small number of nearby reference images. Owing to the wide-baseline features used, the method deals well with illumination changes and occlusions, while keeping the computational load small. The simplicity of the radial line features used speeds up the process without affecting the accuracy too much. We show experiments with two omnidirectional image data sets to evaluate the performance of the method and compare the results using the proposed radial lines with results from state-of-the-art wide-baseline matching techniques.
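The coarse-to-fine structure described above can be sketched as a two-stage pipeline. This is a simplified stand-in (random vectors instead of real descriptors, Euclidean distances instead of the paper's pyramidal kernels and 1D three-view geometry), meant only to show why the hierarchy saves time: the cheap stage runs on everything, the expensive stage on a few survivors.

```python
import numpy as np

def hierarchical_localize(query_g, query_l, db_global, db_local, k=3):
    """Coarse-to-fine localization sketch: a cheap global-descriptor
    distance prunes the database to k candidates (topological step); a
    costlier local-feature score ranks only those k (metric step)."""
    # Stage 1: global descriptors, evaluated on every database image.
    coarse = np.linalg.norm(db_global - query_g, axis=1)
    candidates = np.argsort(coarse)[:k]

    # Stage 2: local features, evaluated on the k survivors only.
    def local_score(i):
        d = np.linalg.norm(db_local[i][None, :, :] - query_l[:, None, :], axis=2)
        return d.min(axis=1).mean()      # mean nearest-descriptor distance
    return int(min(candidates, key=local_score))

rng = np.random.default_rng(1)
db_global = rng.normal(size=(10, 8))         # 10 places, 8-D global descriptor
db_local = rng.normal(size=(10, 20, 16))     # 20 local descriptors per place
query_g = db_global[7] + 0.05 * rng.normal(size=8)
query_l = db_local[7] + 0.05 * rng.normal(size=(20, 16))
print(hierarchical_localize(query_g, query_l, db_global, db_local))  # → 7
```

With N reference images the expensive matching runs k times instead of N, which is what makes large reference sets tractable.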


IEEE Transactions on Robotics | 2013

Localization in Urban Environments Using a Panoramic Gist Descriptor

Ana C. Murillo; Gautam Singh; Jana Kosecka; José Jesús Guerrero

Vision-based topological localization and mapping for autonomous robotic systems have received increased research interest in recent years. The need to map larger environments requires models at different levels of abstraction and additional abilities to deal with large amounts of data efficiently. Most successful approaches for appearance-based localization and mapping with large datasets typically represent locations using local image features. We study the feasibility of performing these tasks in urban environments using global descriptors instead, taking advantage of increasingly common panoramic datasets. This paper describes how to represent a panorama using the global gist descriptor while maintaining desirable invariance properties for location recognition and loop detection. We propose different gist similarity measures and algorithms for appearance-based localization, and an online loop-closure detection method in which the probability of loop closure is determined in a Bayesian filtering framework using the proposed image representation. The extensive experimental validation in this paper shows that the performance of these global-descriptor approaches in urban environments is comparable to that of local-feature-based approaches when using wide field-of-view images.
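One way the invariance property for panoramas can work is by storing one gist vector per angular sector and comparing descriptors under all circular shifts, so the distance does not depend on the heading at which the panorama was captured. The sketch below illustrates that idea with random stand-in vectors; the actual gist computation and the paper's exact similarity measures are not reproduced.

```python
import numpy as np

def panorama_gist_distance(g1, g2):
    """Rotation-tolerant distance between two panoramic gist descriptors,
    each stored as (n_sectors, d): one gist vector per angular sector.
    Taking the minimum over all circular shifts of the sectors makes the
    distance invariant to the capture heading."""
    n = g1.shape[0]
    return min(np.linalg.norm(g1 - np.roll(g2, s, axis=0)) for s in range(n))

rng = np.random.default_rng(2)
place = rng.normal(size=(4, 32))        # 4 sectors, 32-D gist per sector
rotated = np.roll(place, 2, axis=0)     # same place, robot turned 180°
other = rng.normal(size=(4, 32))        # a different place
print(panorama_gist_distance(place, rotated)
      < panorama_gist_distance(place, other))   # → True
```

Without the shift search, the same place seen at a different heading would look as dissimilar as a different place entirely.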


Computer Vision and Image Understanding | 2012

Calibration of omnidirectional cameras in practice: A comparison of methods

Luis Puig; Jesús Bermúdez; Peter F. Sturm; José Jesús Guerrero

Omnidirectional cameras are becoming increasingly popular in computer vision and robotics. Camera calibration is a necessary step before performing any task involving metric scene measurement, as required in nearly all robotics tasks. In recent years many different methods to calibrate central omnidirectional cameras have been developed, based on different camera models and often limited to a specific mirror shape. In this paper we review the existing methods designed to calibrate any central omnivision system and analyze their advantages and drawbacks through a thorough comparison using simulated and real data. We choose methods available as open source which do not require a complex pattern or scene. The evaluation protocol of calibration accuracy also considers 3D metric reconstruction combining omnidirectional images. Comparative results are shown and discussed in detail.
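A comparison like this ultimately rests on a common accuracy measure. The sketch below shows the most basic one, root-mean-square reprojection error, with made-up numbers; the paper's full protocol also includes 3D metric reconstruction, which is not reproduced here.

```python
import numpy as np

def reprojection_rmse(projected, observed):
    """Root-mean-square reprojection error in pixels: the standard
    per-point accuracy measure when comparing calibration methods on
    the same data."""
    err = np.linalg.norm(projected - observed, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

# Hypothetical reprojections of two calibration methods on 4 points.
observed = np.array([[100., 100.], [200., 120.], [150., 300.], [80., 250.]])
method_a = observed + np.array([[0.5, 0.], [0., 0.5], [-0.5, 0.], [0., -0.5]])
method_b = observed + 2.0
better = "A" if (reprojection_rmse(method_a, observed)
                 < reprojection_rmse(method_b, observed)) else "B"
print(better)  # → A
```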


Robotics and Autonomous Systems | 2008

Switching visual control based on epipoles for mobile robots

Gonzalo López-Nicolás; Carlos Sagüés; José Jesús Guerrero; Danica Kragic; Patric Jensfelt

In this paper, we present a visual control approach consisting of a switching control scheme based on the epipolar geometry. The method follows a classical teach-by-showing approach, where a reference image is used to control the robot to the desired pose (position and orientation). As a result of our proposal, the mobile robot carries out a smooth trajectory towards the target, and the epipolar geometry model is used throughout the whole motion. The control scheme considers the motion constraints of the mobile platform in a framework based on the epipolar geometry that does not rely on artificial markers or specific models of the environment. The proposed method is designed to cope with the degenerate estimation case of the epipolar geometry with a short baseline. Experimental evaluation has been performed in realistic indoor and outdoor settings.
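The geometric quantities such a controller works with can be sketched as follows: the epipoles are the null vectors of the fundamental matrix, and a switching rule alternates between rotating (to drive an epipole coordinate to zero, i.e. aligning with the target) and translating. The control law and gains below are a toy illustration, not the paper's actual scheme.

```python
import numpy as np

def skew(v):
    """3x3 cross-product matrix [v]_x."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def epipoles(F):
    """Epipoles of a fundamental matrix: right null vector (F e = 0) and
    left null vector (F^T e' = 0), from the SVD, normalized to w = 1."""
    e = np.linalg.svd(F)[2][-1]
    e_prime = np.linalg.svd(F.T)[2][-1]
    return e / e[2], e_prime / e_prime[2]

def switching_command(e, v=0.2, k=1.0):
    """Toy switching rule (hypothetical gains): while the epipole's
    x-coordinate is nonzero, rotate to zero it; once aligned, translate."""
    if abs(e[0]) > 1e-2:
        return 0.0, -k * e[0]    # (linear, angular): pure rotation
    return v, 0.0                # aligned: pure translation

e_target = np.array([0.3, -0.2, 1.0])
F = skew(e_target)               # a rank-2 F whose epipoles are both e_target
e1, e2 = epipoles(F)
print(np.allclose(e1, e_target), np.allclose(e2, e_target))  # → True True
```

Normalizing each null vector by its own third component also fixes the sign ambiguity inherent in SVD null vectors.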


Robotics and Autonomous Systems | 2005

Visual correction for mobile robot homing

Carlos Sagüés; José Jesús Guerrero

We present a method to send a mobile robot to locations specified by images previously taken from those positions, which has sometimes been referred to as homing. Classically this has been carried out using the fundamental matrix, but the fundamental matrix is ill-conditioned with planar scenes, which are quite common in man-made environments. Often in robot homing, small-baseline images with high disparity due to rotation are compared, where the fundamental matrix also gives poor results. We use a monocular vision system and compute motion through a homography obtained from automatically matched lines. In this work we compare the use of the homography and the fundamental matrix, and we propose correcting the motion directly from the parameters of the 2D homography, which requires only one calibration parameter. The method is shown to be robust, sufficiently accurate, and simple.
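The homography at the heart of this approach can be estimated linearly. The sketch below uses the standard Direct Linear Transform from point correspondences; note this is the textbook DLT, not the line-based matching pipeline of the paper.

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography H with
    dst ~ H src from n >= 4 point correspondences (inhomogeneous 2-D
    points), as the null vector of the stacked constraint matrix."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each point gives two constraints from u*(h3.p) = h1.p, etc.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]           # fix the projective scale

src = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
H_true = np.array([[1.2, 0.1, 0.3], [-0.2, 0.9, 0.5], [0.01, 0.02, 1.0]])
pts = np.c_[src, np.ones(4)] @ H_true.T
dst = pts[:, :2] / pts[:, 2:3]
H = dlt_homography(src, dst)
print(np.allclose(H, H_true))    # → True
```

Unlike the fundamental matrix, this estimate stays well-conditioned for planar scenes and small baselines, which is exactly the regime the paper targets.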


IEEE Transactions on Robotics | 2008

Localization and Matching Using the Planar Trifocal Tensor With Bearing-Only Data

José Jesús Guerrero; Ana C. Murillo; Carlos Sagüés

This paper addresses the robot and landmark localization problem from bearing-only data in three views, simultaneously with the robust association of these data. The localization algorithm is based on the 1-D trifocal tensor, which linearly relates the observed data and the robot localization parameters. The aim of this work is to bring this useful geometric construction from computer vision closer to robotic applications. One contribution is the evaluation of two linear approaches to estimating the 1-D tensor: the commonly used approach that needs seven bearing-only correspondences, and another that uses only five correspondences plus two calibration constraints. The results in this paper show that the inclusion of these constraints provides a simpler and faster solution and a better estimation of robot and landmark locations in the presence of noise. Moreover, a new method that makes use of scene planes and requires only four correspondences is presented. This proposal improves on the performance of the two previously mentioned methods in typical man-made scenarios with dominant planes, while giving similar results in other cases. The three methods are evaluated with simulation tests as well as with experiments that perform automatic real data matching in conventional and omnidirectional images. The results show sufficient accuracy and stability to be used in robotic tasks such as navigation, global localization, or initialization of simultaneous localization and mapping (SLAM) algorithms.
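The seven-correspondence linear estimation mentioned above can be sketched directly: the 1-D trifocal tensor has 2x2x2 = 8 elements defined up to scale, and each triplet of corresponding bearings gives one linear constraint. This is a minimal sketch with synthetic data, not the paper's calibrated five-point or planar four-point variants.

```python
import numpy as np

def estimate_1d_trifocal(u, v, w):
    """Linear estimate of the 2x2x2 trifocal tensor for 1-D (bearing-only)
    cameras: each corresponding triplet of homogeneous 2-vectors gives one
    constraint sum_ijk T_ijk u_i v_j w_k = 0, so with n >= 7 generic
    triplets T is the null vector of an n x 8 matrix."""
    A = np.array([np.kron(np.kron(ui, vi), wi) for ui, vi, wi in zip(u, v, w)])
    return np.linalg.svd(A)[2][-1].reshape(2, 2, 2)

rng = np.random.default_rng(3)
T_true = rng.normal(size=(2, 2, 2))
u = rng.normal(size=(7, 2))
v = rng.normal(size=(7, 2))
# Pick each w to satisfy the trilinearity exactly: with
# a_k = sum_ij T_ijk u_i v_j, the choice w = (a_1, -a_0) gives a . w = 0.
a = np.einsum('ijk,ni,nj->nk', T_true, u, v)
w = np.stack([a[:, 1], -a[:, 0]], axis=1)
T = estimate_1d_trifocal(u, v, w)
Tn, Tt = T / np.linalg.norm(T), T_true / np.linalg.norm(T_true)
print(np.allclose(Tn, Tt, atol=1e-6) or np.allclose(Tn, -Tt, atol=1e-6))  # → True
```

The tensor is recovered only up to a global scale (and sign), which is why the check normalizes both tensors before comparing.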


International Journal of Computer Vision | 2011

Calibration of Central Catadioptric Cameras Using a DLT-Like Approach

Luis Puig; Yalin Bastanlar; Peter F. Sturm; José Jesús Guerrero; João Pedro Barreto

In this study, we present a calibration technique that is valid for all single-viewpoint catadioptric cameras. We are able to represent the projection of 3D points on a catadioptric image linearly with a 6×10 projection matrix, which uses lifted coordinates for image and 3D points. This projection matrix can be computed from 3D–2D correspondences (minimum 20 points distributed in three different planes). We show how to decompose it to obtain intrinsic and extrinsic parameters. Moreover, we use this parameter estimation followed by a non-linear optimization to calibrate various types of cameras. Our results are based on the sphere camera model, which considers that every central catadioptric system can be modeled using two projections: one from 3D points to a unitary sphere, and then a perspective projection from the sphere to the image plane. We test our method both with simulations and real images, and we analyze the results by performing a 3D reconstruction from two omnidirectional images.
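The "lifted coordinates" that make a 6×10 projection matrix possible are the degree-2 monomials of the homogeneous point coordinates: a homogeneous 2-D point lifts to 6 monomials and a homogeneous 3-D point to 10. The sketch below shows one common monomial ordering; the paper's exact ordering and the subsequent decomposition into intrinsics and extrinsics are not reproduced.

```python
import numpy as np

def lift_2d(p):
    """Second-order Veronese lifting of a homogeneous image point
    (x, y, w) to the 6-vector of its degree-2 monomials."""
    x, y, w = p
    return np.array([x*x, x*y, y*y, x*w, y*w, w*w])

def lift_3d(P):
    """Same lifting for a homogeneous 3-D point (X, Y, Z, W): its 10
    degree-2 monomials. The DLT-like method estimates the 6x10 matrix
    mapping lifted 3-D points to lifted image points."""
    X, Y, Z, W = P
    return np.array([X*X, X*Y, X*Z, Y*Y, Y*Z, Z*Z, X*W, Y*W, Z*W, W*W])

print(lift_2d((1., 2., 1.)))            # → [1. 2. 4. 1. 2. 1.]
print(lift_3d((1., 2., 3., 1.)).shape)  # → (10,)
```

The lifting turns the quadratic sphere-model projection into a linear map, at the price of estimating 60 matrix entries instead of 12, which is why far more correspondences (the stated minimum of 20) are needed than for a pinhole DLT.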

Collaboration


Dive into José Jesús Guerrero's collaboration.

Top Co-Authors

Luis Puig

University of Zaragoza
