Josechu J. Guerrero
University of Zaragoza
Publications
Featured research published by Josechu J. Guerrero.
IEEE Transactions on Systems, Man, and Cybernetics | 2010
Gonzalo López-Nicolás; Nicholas R. Gans; Sourabh Bhattacharya; Carlos Sagüés; Josechu J. Guerrero; Seth Hutchinson
In this paper, we present a visual servo controller that effects optimal paths for a nonholonomic differential-drive robot with field-of-view constraints imposed by the vision system. The control scheme relies on the computation of homographies between current and goal images, but unlike previous homography-based methods, it does not use the homography to compute estimates of pose parameters. Instead, the control laws are expressed directly in terms of individual entries in the homography matrix. In particular, we develop individual control laws for the three path classes that define the language of optimal paths: rotations, straight-line segments, and logarithmic spirals. These control laws, as well as the switching conditions that define how to sequence path segments, are defined in terms of the entries of homography matrices. Selecting the appropriate control law requires the homography to be decomposed once, before navigation begins. We provide a controllability and stability analysis for our system and give experimental results.
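To make the idea concrete, here is a minimal sketch of the pipeline's skeleton: estimate the current-to-goal homography from feature matches, then command velocities directly from homography entries. The ORB features, the gains, and the particular entries used in the control law are illustrative assumptions, not the paper's exact laws.

```python
import cv2
import numpy as np

def homography_to_goal(img_cur, img_goal):
    """Estimate the homography mapping the current image onto the goal image."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(img_cur, None)
    k2, d2 = orb.detectAndCompute(img_goal, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H / H[2, 2]  # fix the scale ambiguity of the estimate

def control_step(H, k_v=0.5, k_w=1.0):
    """Proportional law on individual homography entries (illustrative choice):
    both terms vanish when current and goal images coincide (H = identity)."""
    v = k_v * H[0, 2]               # translational entry -> forward velocity
    w = k_w * (H[0, 0] - H[2, 2])   # rotational mismatch -> angular velocity
    return v, w
```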
IEEE Systems Journal | 2016
Aitor Aladren; Gonzalo López-Nicolás; Luis Puig; Josechu J. Guerrero
Navigation assistance for the visually impaired (NAVI) refers to systems that are able to assist or guide people with vision loss, ranging from partially sighted to totally blind, by means of sound commands. In this paper, a new system for NAVI is presented based on visual and range information. Instead of using several sensors, we choose one device, a consumer RGB-D camera, and take advantage of both its range and visual information. In particular, the main contribution is the combination of depth information with image intensities, resulting in a robust expansion of the range-based floor segmentation. On one hand, depth information, which is reliable but limited to a short range, is enhanced with long-range visual information. On the other hand, the difficult and error-prone image processing is eased and improved with depth information. The proposed system detects and classifies the main structural elements of the scene, providing the user with obstacle-free paths in order to navigate safely across unknown scenarios. The system has been tested on a wide variety of scenarios and datasets, and the results show that it is robust and works in challenging indoor environments.
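A hedged sketch of the core idea, not the paper's exact pipeline: depth yields a reliable but short-range floor seed, and the colour image extends it beyond the sensor's range. The band sizes, tolerances, and the watershed-based expansion below are illustrative assumptions.

```python
import cv2
import numpy as np

def floor_mask(depth_m, bgr, depth_tol=0.08):
    """depth_m: depth in metres (0 = no return); bgr: 8-bit colour image."""
    h, w = depth_m.shape
    # 1) Seed from depth: bottom-band pixels whose depth agrees with the band
    #    median (a stand-in for a proper RANSAC floor-plane fit).
    band = slice(int(0.75 * h), h)
    valid = depth_m[band, :] > 0
    seed = np.zeros((h, w), np.uint8)
    if valid.any():
        d0 = np.median(depth_m[band, :][valid])
        seed[band, :] = (np.abs(depth_m[band, :] - d0) < depth_tol * d0) & valid
    # 2) Expand with intensity: watershed grows the depth seed across the
    #    colour image, past the range where the depth sensor returns data.
    markers = np.zeros((h, w), np.int32)
    markers[seed > 0] = 2              # sure floor
    markers[: int(0.25 * h), :] = 1    # sure non-floor (upper image band)
    markers = cv2.watershed(bgr, markers)
    return markers == 2
```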
International Conference on Robotics and Automation (ICRA) | 2015
Daniel Gutiérrez-Gómez; Walterio W. Mayol-Cuevas; Josechu J. Guerrero
In this paper we present a dense visual odometry system for RGB-D cameras that minimises both photometric and geometric error to estimate the camera motion between frames. Contrary to most works in the literature, we parametrise the geometric error by the inverse depth instead of the depth, which translates into a better fit of the distribution of the geometric error to the robust cost functions used. We also provide a unified evaluation, under the same framework, of different estimators and ways of computing the scale of the residuals that are found spread across the related literature. To compare our approach with the state of the art we use the popular TUM RGB-D benchmark dataset. Our approach proves competitive with state-of-the-art methods in terms of drift in metres per second, even compared to methods that also perform loop closure. When compared to approaches performing pure odometry like ours, our method outperforms them on the majority of the tested datasets. Additionally, we show that our approach is able to work in real time, and we provide a qualitative evaluation on our own sequences showing low drift in the 3D reconstructions.
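A minimal sketch of the key choice: express the geometric residual in inverse depth rather than depth, and weight photometric and geometric residuals with a robust kernel. The Huber kernel and MAD scale estimate below are illustrative stand-ins for the estimators compared in the paper.

```python
import numpy as np

def residuals(z_meas, z_pred, i_cur, i_warped):
    """Inverse-depth geometric error plus photometric error (per pixel)."""
    r_geo = 1.0 / z_meas - 1.0 / z_pred   # inverse depth, not depth
    r_pho = i_cur - i_warped              # intensity difference after warping
    return r_geo, r_pho

def huber_weights(r, k=1.345):
    """IRLS weights for a Huber kernel with a robust (MAD) scale estimate."""
    s = 1.4826 * np.median(np.abs(r - np.median(r)))
    u = np.abs(r) / max(s, 1e-9)
    return np.where(u <= k, 1.0, k / u)
```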
Computer Vision and Pattern Recognition (CVPR) | 2012
Ana C. Murillo; Daniel Gutiérrez-Gómez; Alejandro Rituerto; Luis Puig; Josechu J. Guerrero
Autonomous navigation and recognition of the environment are fundamental human abilities that have been extensively studied in the computer vision and robotics fields. The expansion of low-cost wearable sensing provides interesting opportunities for assistance systems that augment people's navigation and recognition capabilities. This work presents our wearable omnidirectional vision system and a novel two-phase localization approach that runs on it. The system runs state-of-the-art real-time visual odometry, adapted to catadioptric images and augmented with topological-semantic information. The presented approach benefits from wearable sensors to improve the visual odometry results with a true-scale solution. The wide field of view of the catadioptric vision system keeps features in the field of view longer and allows a more compact location representation, which facilitates topological place recognition. The experiments in this paper show promising ego-localization results in realistic settings, providing accurate true-scale visual odometry estimation and recognition of indoor regions.
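As a rough illustration of the topological phase (the paper does not spell out this exact mechanism, so everything here is assumed): each map node stores a global image descriptor, and localisation picks the node closest to the descriptor of the current omnidirectional frame.

```python
import numpy as np

def localise(query_desc, node_descs, node_labels, max_dist=0.4):
    """Nearest topological node by descriptor distance; reject weak matches."""
    d = np.linalg.norm(node_descs - query_desc, axis=1)
    i = int(np.argmin(d))
    return (node_labels[i], d[i]) if d[i] < max_dist else (None, d[i])
```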
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014
Luis Puig; Josechu J. Guerrero; Kostas Daniilidis
In this paper we propose a new approach to compute the scale space of any central projection system, such as catadioptric, fisheye, or conventional cameras. Since these systems can be explained using a unified model, the single parameter that defines each type of system is used to automatically compute the corresponding Riemannian metric. This metric, combined with the framework of partial differential equations on manifolds, allows us to compute the Laplace-Beltrami (LB) operator, enabling the computation of the scale space of any central projection system. Scale space is essential for intrinsic scale selection and neighborhood description in features like SIFT. We perform experiments with synthetic and real images to validate the generalization of our approach to any central projection system. We compare our approach with the best existing methods, showing competitive results for all types of cameras: catadioptric, fisheye, and perspective.
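For reference, the Laplace-Beltrami operator on a manifold with metric g, and the heat equation whose solution defines the scale space, in their standard textbook form (general background, not the paper's camera-specific derivation):

```latex
% Laplace-Beltrami operator for a metric g = (g_{ij}), with |g| = \det(g):
\Delta_g f \;=\; \frac{1}{\sqrt{|g|}}\,
  \partial_i\!\left(\sqrt{|g|}\; g^{ij}\,\partial_j f\right),
% scale space = solution of the heat equation on the manifold:
\qquad \frac{\partial u}{\partial t} = \Delta_g u, \qquad u(\cdot, 0) = f.
```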
International Journal of Computer Vision | 2015
Jesus Bermudez-Cameo; Gonzalo López-Nicolás; Josechu J. Guerrero
Revolution symmetry is a realistic assumption for modelling the majority of catadioptric and dioptric cameras. In central systems it can be described by a projection model based on radially symmetric distortion. In these systems, straight lines are projected onto curves called line-images. These curves have, in general, more than two degrees of freedom, and their shape strongly depends on the particular camera configuration. Therefore, the existing line-extraction methods for this kind of omnidirectional camera require the camera calibration, in contrast with the perspective case, where the calibration does not affect the shape of the projected line-image. However, this drawback can be turned into an advantage, because the shape of the line-images can be used for self-calibration. In this paper, we present a novel method to extract line-images in uncalibrated omnidirectional images which is valid for radially symmetric central systems. In this method we propose using the plumb-line constraint to find closed-form solutions for different types of camera system, dioptric or catadioptric. The inputs of the proposed method are points belonging to the line-images and their intensity gradients. The gradient information reduces the number of points needed in the minimal solution, improving both the accuracy and the robustness of the estimation. The scheme is used in a line-image extraction algorithm to obtain lines from uncalibrated omnidirectional images without any assumption about the scene. The algorithm is evaluated with synthetic and real images, showing good performance. The results of this work have been implemented in an open-source Matlab toolbox for evaluation and research purposes.
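The way gradients reduce the point count can be stated generically (this is the standard plumb-line idea, not the paper's exact parametrisation): if the line-image is an implicit curve $C(\mathbf{x};\boldsymbol{\theta}) = 0$, each edge point $\mathbf{x}_i$ with image gradient $\mathbf{g}_i$ supplies two scalar constraints instead of one, since the curve normal must align with the gradient:

```latex
C(\mathbf{x}_i;\boldsymbol{\theta}) = 0,
\qquad
\nabla_{\mathbf{x}} C(\mathbf{x}_i;\boldsymbol{\theta}) \times \mathbf{g}_i = 0,
```

so a curve with $n$ parameters needs only about $n/2$ points in the minimal solution.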
International Conference on Pattern Recognition (ICPR) | 2014
Jesus Bermudez-Cameo; Gonzalo López-Nicolás; Josechu J. Guerrero
The projection surface of a 3D line in a non-central camera is a ruled surface containing the complete information of the 3D line. The resulting line-image is a curve which contains the 4 degrees of freedom of the 3D line. In this paper we investigate the properties of the line-image in conical catadioptric systems. This curve is a particular quartic that can be described by only six homogeneous parameters. We present the relation between the line-image description and the geometry of the mirror. This result reveals the coupling between the depth of the line and the distance from the camera to the mirror. If this distance is unknown, the 3D information of a projected line can be recovered only up to scale; knowing it allows a 3D metric reconstruction. The proposed parametrization also allows simultaneously reconstructing the 3D line and computing the aperture angle of the mirror from five projected points on the line-image. We analytically solve for the metric distance from a point to a line-image, and we evaluate the proposal with real images.
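The 4-degrees-of-freedom count mentioned above follows from standard Plücker line geometry (general background, not specific to this paper): a 3D line has six Plücker coordinates, reduced by one for overall scale and one for the Klein-quadric constraint:

```latex
\mathbf{L} = (\mathbf{d}, \mathbf{m}) \in \mathbb{P}^5,
\qquad \mathbf{d} \cdot \mathbf{m} = 0
\quad\Longrightarrow\quad 6 - 1 - 1 = 4 \ \text{DOF}.
```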
British Machine Vision Conference (BMVC) | 2013
Jesus Bermudez-Cameo; Gonzalo López-Nicolás; Josechu J. Guerrero
In omnidirectional cameras, straight lines in the scene are projected onto curves called line-images. The shape of these curves strongly depends on the particular camera configuration. The great diversity of omnidirectional camera systems makes general line-image extraction harder. It is therefore difficult to design uncalibrated general approaches, and existing methods to extract lines in omnidirectional images require the camera calibration. In this paper, we present a novel method to extract line-images in uncalibrated images which is valid for radially symmetric central systems. In our proposal, the distortion function is analytically solved for different types of camera system, dioptric or catadioptric. We present the unified line-image constraints to extract the projection plane of each line and the main calibration parameter of the camera from a single line-image. The use of gradient-based information allows computing both from a minimum of two image points. This scheme is used in a line-image extraction algorithm to obtain lines from uncalibrated omnidirectional images without any assumption about the scene. The algorithm is evaluated with synthetic and real images, showing good performance.
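As a concrete instance of such a "main calibration parameter", in the unified central catadioptric model of Geyer and Daniilidis a single parameter $\xi$ governs the radially symmetric distortion; for a ray at angle $\theta$ from the optical axis, the image radius with unit focal length is (standard background, not this paper's derivation):

```latex
r(\theta) \;=\; \frac{\sin\theta}{\cos\theta + \xi},
\qquad \xi = 0 \ \text{(perspective)}, \quad \xi = 1 \ \text{(para-catadioptric)}.
```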
Proceedings of the 4th International SenseCam & Pervasive Imaging Conference | 2013
Alejandro Rituerto; Ana C. Murillo; Josechu J. Guerrero
Wearable computer vision systems provide plenty of opportunities to develop human assistive devices. This work contributes to visual scene understanding techniques using a helmet-mounted omnidirectional vision system. The goal is to extract semantic information about the environment, such as the type of environment being traversed or the basic 3D layout of the place, to build assistive navigation systems. We propose a novel line-based global image descriptor that encodes the structure of the observed scene. This descriptor is designed with omnidirectional imagery in mind, where observed lines are longer than in conventional images. Our experiments show that the proposed descriptor can be used for indoor scene recognition, comparing its results to state-of-the-art global descriptors. In addition, we demonstrate advantages of particular interest for wearable vision systems: higher robustness to rotation, compactness, and easier integration with other scene understanding steps.
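An illustrative stand-in for a line-based global descriptor (not the paper's exact formulation): detect line segments, then build a length-weighted histogram of their orientations as a compact, structure-aware signature. Detector choice and thresholds are assumptions.

```python
import cv2
import numpy as np

def line_descriptor(gray, bins=16):
    """Length-weighted orientation histogram of detected line segments."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    hist = np.zeros(bins)
    if lines is not None:
        for x1, y1, x2, y2 in lines.reshape(-1, 4):
            angle = np.arctan2(y2 - y1, x2 - x1) % np.pi   # undirected lines
            length = np.hypot(x2 - x1, y2 - y1)
            hist[min(int(angle / np.pi * bins), bins - 1)] += length
    return hist / max(hist.sum(), 1e-9)                    # L1-normalise
```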
Pattern Recognition Letters | 2017
Jesus Bermudez-Cameo; Olivier Saurer; Gonzalo López-Nicolás; Josechu J. Guerrero; Marc Pollefeys
Highlights: a comparison among non-central systems for single-view line metric reconstruction; non-Manhattan line metric reconstruction from a single image in non-central panoramas; automatic line-image extraction in non-central panoramas.
In certain non-central imaging systems, straight lines are projected via a non-planar surface encapsulating the 4 degrees of freedom of the 3D line. Consequently, the geometry of the 3D line can be recovered from a minimum of four image points. However, with classical non-central catadioptric systems there is not enough effective baseline for a practical implementation of the method. In this paper we propose a multi-camera system configuration, resembling the circular panoramic model, which results in a particular non-central projection allowing the stitching of a non-central panorama. From a single panorama we obtain well-conditioned 3D reconstructions of lines, which are especially interesting in texture-less scenarios. No previous information about the direction or arrangement of the lines in the scene is assumed. The proposed method is evaluated on both synthetic and real images.
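The four-point minimum follows from standard generalized-camera line geometry (background, not the paper's panorama-specific derivation): writing the 3D line and each back-projected ray in Plücker coordinates, a ray $\boldsymbol{\xi}_i = (\mathbf{d}_i, \mathbf{m}_i)$ meets the line $\mathbf{L} = (\mathbf{d}, \mathbf{m})$ exactly when the side operator vanishes, which is linear in $\mathbf{L}$:

```latex
\operatorname{side}(\boldsymbol{\xi}_i, \mathbf{L})
  \;=\; \mathbf{d}_i \cdot \mathbf{m} + \mathbf{m}_i \cdot \mathbf{d} \;=\; 0,
  \qquad i = 1, \dots, 4.
```

Four generic rays leave a two-dimensional null space, and intersecting it with the Klein quadric $\mathbf{d}\cdot\mathbf{m} = 0$ yields the line.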