Luis Puig
University of Zaragoza
Publications
Featured research published by Luis Puig.
Computer Vision and Image Understanding | 2012
Luis Puig; Jesús Bermúdez; Peter F. Sturm; José Jesús Guerrero
Omnidirectional cameras are becoming increasingly popular in computer vision and robotics. Camera calibration is a prerequisite for any task involving metric scene measurement, which is required in nearly all robotics applications. In recent years many different methods to calibrate central omnidirectional cameras have been developed, based on different camera models and often limited to a specific mirror shape. In this paper we review the existing methods designed to calibrate any central omnivision system and analyze their advantages and drawbacks through an in-depth comparison using simulated and real data. We consider only methods that are available as open source and do not require a complex pattern or scene. The evaluation protocol for calibration accuracy also considers 3D metric reconstruction combining omnidirectional images. Comparative results are presented and discussed in detail.
International Journal of Computer Vision | 2011
Luis Puig; Yalin Bastanlar; Peter F. Sturm; José Jesús Guerrero; João Pedro Barreto
In this study, we present a calibration technique that is valid for all single-viewpoint catadioptric cameras. We represent the projection of 3D points onto a catadioptric image linearly with a 6×10 projection matrix, which uses lifted coordinates for both image and 3D points. This projection matrix can be computed from 3D–2D correspondences (a minimum of 20 points distributed over three different planes), and we show how to decompose it to obtain the intrinsic and extrinsic parameters. Moreover, we use this parameter estimation followed by a non-linear optimization to calibrate various types of cameras. Our results are based on the sphere camera model, which considers that every central catadioptric system can be modeled by two consecutive projections: one from 3D points onto a unit sphere, followed by a perspective projection from the sphere to the image plane. We test our method with both simulations and real images, and we analyze the results by performing a 3D reconstruction from two omnidirectional images.
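The linear step of such a calibration lends itself to a DLT-style formulation. Below is a minimal sketch, not the authors' implementation, assuming second-order lifted (Veronese) coordinates for image points (6-vectors) and 3D points (10-vectors) and an SVD solution of the resulting homogeneous system; the monomial ordering and helper names are illustrative choices.

```python
import numpy as np

def lift_2d(p):
    """Second-order lifted coordinates of an image point (x, y) -> 6-vector."""
    x, y = p
    return np.array([x*x, x*y, y*y, x, y, 1.0])

def lift_3d(P):
    """Second-order lifted coordinates of a 3D point (X, Y, Z) -> 10-vector."""
    X, Y, Z = P
    return np.array([X*X, X*Y, X*Z, X, Y*Y, Y*Z, Y, Z*Z, Z, 1.0])

def estimate_lifted_projection(points_3d, points_2d):
    """DLT-style estimate of a 6x10 matrix T with lift_2d(q) ~ T @ lift_3d(Q) up to scale."""
    rows = []
    for Q, q in zip(points_3d, points_2d):
        LQ = lift_3d(Q)          # 10-vector
        lq = lift_2d(q)          # 6-vector
        # Equality up to scale: lq[a] * (T @ LQ)[b] - lq[b] * (T @ LQ)[a] = 0
        for a in range(6):
            for b in range(a + 1, 6):
                row = np.zeros(60)
                row[b*10:(b+1)*10] = lq[a] * LQ
                row[a*10:(a+1)*10] -= lq[b] * LQ
                rows.append(row)
    A = np.vstack(rows)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(6, 10)  # smallest singular vector, reshaped to 6x10
```

In line with the abstract, at least 20 correspondences distributed over three different planes would be needed for a well-conditioned estimate.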
International Conference on Pattern Recognition | 2010
Alejandro Rituerto; Luis Puig; José Jesús Guerrero
In this work we integrate the Spherical Camera Model for catadioptric systems into a Visual-SLAM application. The Spherical Camera Model is a projection model that unifies central catadioptric and conventional cameras. To integrate this model into Extended Kalman Filter-based SLAM we need to linearize both the direct and the inverse projection. We have performed initial experiments with omnidirectional and conventional real sequences, including challenging trajectories. The results confirm that the omnidirectional camera gives much better orientation accuracy, improving the estimated camera trajectory.
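As a rough illustration of what the linearization involves, the sketch below projects a point with the unified sphere model and computes a finite-difference Jacobian of the kind an EKF measurement update consumes; the paper derives analytic linearizations, and the parameter names (xi, fx, fy, cx, cy) are generic placeholders rather than values from the paper.

```python
import numpy as np

def sphere_project(X, xi, fx, fy, cx, cy):
    """Unified sphere camera model: project a 3D point (camera frame) to pixels.
    xi encodes the mirror type (xi = 0 reduces to a conventional camera)."""
    Xs = X / np.linalg.norm(X)                           # 1) project onto the unit sphere
    x, y, z = Xs
    m = np.array([x / (z + xi), y / (z + xi)])           # 2) perspective projection from shifted center
    return np.array([fx * m[0] + cx, fy * m[1] + cy])    # 3) apply intrinsics

def numerical_jacobian(X, xi, fx, fy, cx, cy, eps=1e-6):
    """Finite-difference Jacobian of the projection w.r.t. the 3D point,
    i.e. the linearization an EKF measurement model needs."""
    J = np.zeros((2, 3))
    for i in range(3):
        dX = np.zeros(3); dX[i] = eps
        J[:, i] = (sphere_project(X + dX, xi, fx, fy, cx, cy)
                   - sphere_project(X - dX, xi, fx, fy, cx, cy)) / (2 * eps)
    return J
```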
IEEE Systems Journal | 2016
Aitor Aladren; Gonzalo López-Nicolás; Luis Puig; Josechu J. Guerrero
Navigation assistance for visually impaired (NAVI) refers to systems that are able to assist or guide people with vision loss, ranging from partially sighted to totally blind, by means of sound commands. In this paper, a new system for NAVI is presented based on visual and range information. Instead of using several sensors, we choose a single device, a consumer RGB-D camera, and take advantage of both its range and visual information. In particular, the main contribution is the combination of depth information with image intensities, resulting in a robust expansion of the range-based floor segmentation. On one hand, depth information, which is reliable but limited to a short range, is enhanced with long-range visual information. On the other hand, the difficult and error-prone image processing is eased and improved with depth information. The proposed system detects and classifies the main structural elements of the scene, providing the user with obstacle-free paths in order to navigate safely across unknown scenarios. It has been tested on a wide variety of scenarios and data sets, giving successful results and showing that it is robust and works in challenging indoor environments.
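A hedged sketch of the depth-based half of such a pipeline is shown below: a RANSAC plane fit on the RGB-D point cloud yields the short-range floor segmentation that the visual information would then extend. The thresholds, iteration count and function names are arbitrary illustrative choices, not the paper's settings.

```python
import numpy as np

def ransac_floor_plane(points, n_iters=200, inlier_thresh=0.03, rng=None):
    """Fit a dominant plane (candidate floor) to an (N, 3) point cloud.
    Returns ((normal, d), inlier_mask) with normal . p + d = 0."""
    rng = np.random.default_rng(rng)
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        idx = rng.choice(len(points), 3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                              # degenerate sample, skip
        normal /= norm
        d = -normal.dot(p0)
        dist = np.abs(points @ normal + d)        # point-to-plane distances
        inliers = dist < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers
```

The inlier mask marks the depth-reliable floor region; in the paper's approach this segmentation is then propagated into far regions using image intensities.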
International Conference on Computer Vision | 2011
Luis Puig; José Jesús Guerrero
In this paper we propose a new approach to compute the scale space of any omnidirectional image acquired with a central catadioptric system. Central catadioptric cameras can be described by the sphere camera model, which unifies conventional, paracatadioptric and hypercatadioptric systems in a single model. Scale space is essential in the detection and matching of interest points, in particular scale-invariant points based on the Laplacian of Gaussian, such as the well-known SIFT. We combine the sphere camera model with the framework of partial differential equations on manifolds to compute the Laplace-Beltrami (LB) operator, a second-order differential operator required to perform Gaussian smoothing on catadioptric images. We perform experiments with synthetic and real images to validate the generalization of our approach to any central catadioptric system.
Intelligent Robots and Systems | 2012
Daniel Gutiérrez-Gómez; Luis Puig; José Jesús Guerrero
In recent years monocular SLAM has been widely used to obtain highly accurate maps and trajectory estimates of a moving camera. However, since depth cannot be measured in a single image, the global scale is not observable, and scene structure and camera motion can only be recovered up to scale. This problem is aggravated in larger scenes, where scale drift is more likely to arise between different map portions and their corresponding motion estimates. To compute the absolute scale we need to know some dimension of the scene (e.g., the actual size of a scene element, the velocity of the camera or the baseline between two frames) and integrate it into the SLAM estimation. In this paper, we present a method to recover the scale of the scene using an omnidirectional camera mounted on a helmet. The high precision of visual SLAM allows the vertical oscillation of the head during walking to be perceived in the trajectory estimation. By performing a spectral analysis of the camera's vertical displacement, we can measure the step frequency. We relate the step frequency to the speed of the camera through an empirical formula based on biomedical experiments on human walking. This speed measurement is integrated into a particle filter to estimate the current scale factor and the 3D motion estimate with its true scale. We evaluated our approach using image sequences acquired while a person walks. Our experiments show that the proposed approach is able to cope with scale drift.
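A simplified sketch of the spectral step: estimate the dominant frequency of the camera's vertical displacement with an FFT and map it to a walking speed through an assumed linear relation (the coefficients a and b below are placeholders, not the empirical biomechanical formula used in the paper); dividing by the unscaled SLAM speed then gives a scale factor that a particle filter could track.

```python
import numpy as np

def step_frequency(vertical_disp, fps):
    """Dominant frequency (Hz) of the head's vertical oscillation."""
    x = vertical_disp - np.mean(vertical_disp)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]        # skip the DC bin

def scale_factor(vertical_disp_unscaled, slam_speed_unscaled, fps, a=0.5, b=0.0):
    """Estimate the metric scale of a monocular SLAM trajectory.
    walking_speed = a * step_frequency + b is only a placeholder for the
    empirical relation between step frequency and walking speed."""
    f_step = step_frequency(vertical_disp_unscaled, fps)   # frequency is scale-independent
    walking_speed = a * f_step + b                         # metric speed estimate (m/s)
    return walking_speed / slam_speed_unscaled             # multiply SLAM poses by this
```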
Robotics and Autonomous Systems | 2012
Jesus Bermudez-Cameo; Luis Puig; José Jesús Guerrero
In central catadioptric systems 3D lines are projected into conics. In this paper we present a new approach to extract, in the raw catadioptric image, the conics that correspond to projected straight lines in the scene. Using the internal calibration and two image points we are able to compute these conics analytically; we name them hypercatadioptric line images. We derive the error propagation from the image points to the 3D line projection as a function of the calibration parameters, and we perform an exhaustive analysis of the elements that can affect the conic extraction accuracy. We also exploit the presence of parallel lines in man-made environments to compute the dominant vanishing points (VPs) in the omnidirectional image. To obtain the intersection of two of these conics we analyze the self-polar triangle common to the pair. With the information contained in the vanishing points we are able to obtain the 3D orientation of the catadioptric system. This method can be used either in a vertical stabilization system required for autonomous navigation or to rectify images in applications where the vertical orientation of the catadioptric system is assumed. We use synthetic and real images to test the proposed method. We evaluate the 3D orientation accuracy against ground truth given by a goniometer and by an inertial measurement unit (IMU). We also test our approach by performing vertical and full rectifications on sequences of real images.
Computer Vision and Pattern Recognition | 2012
Ana C. Murillo; Daniel Gutiérrez-Gómez; Alejandro Rituerto; Luis Puig; Josechu J. Guerrero
Autonomous navigation and recognition of the environment are fundamental human abilities that have been extensively studied in the computer vision and robotics fields. The expansion of low-cost wearable sensing provides interesting opportunities for assistance systems that augment people's navigation and recognition capabilities. This work presents our wearable omnidirectional vision system and a novel two-phase localization approach running on it. It runs state-of-the-art real-time visual odometry adapted to catadioptric images and augmented with topological-semantic information. The presented approach benefits from wearable sensors to improve the visual odometry results with a true-scale solution. The wide field of view of the catadioptric vision system makes features stay longer in view and allows a more compact location representation, which facilitates topological place recognition. Experiments in this paper show promising ego-localization results in realistic settings, providing accurate true-scale visual odometry estimates and recognition of indoor regions.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014
Luis Puig; Josechu J. Guerrero; Kostas Daniilidis
In this paper we propose a new approach to compute the scale space of any central projection system, such as catadioptric, fisheye or conventional cameras. Since these systems can be explained using a unified model, the single parameter that defines each type of system is used to automatically compute the corresponding Riemannian metric. This metric, combined with the framework of partial differential equations on manifolds, allows us to compute the Laplace-Beltrami (LB) operator, enabling the computation of the scale space of any central projection system. Scale space is essential for intrinsic scale selection and neighborhood description in features such as SIFT. We perform experiments with synthetic and real images to validate the generalization of our approach to any central projection system. We compare our approach with the best existing methods, showing competitive results for all types of cameras: catadioptric, fisheye, and perspective.
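As a rough illustration of the smoothing step, the sketch below runs explicit heat diffusion with a discrete Laplace-Beltrami operator, Δ_g u = (1/√|g|) ∂_i(√|g| g^{ij} ∂_j u), on a pixel grid given some per-pixel 2×2 metric g. How the metric is derived from the camera parameter is omitted; this is only a generic finite-difference sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def laplace_beltrami(u, g11, g12, g22):
    """Discrete Laplace-Beltrami operator on a 2D grid.
    g11, g12, g22 are per-pixel entries of the Riemannian metric g."""
    det = g11 * g22 - g12 ** 2
    sqrt_det = np.sqrt(np.maximum(det, 1e-12))
    i11, i12, i22 = g22 / det, -g12 / det, g11 / det   # inverse metric g^{ij}
    du0, du1 = np.gradient(u)                          # central differences (rows, cols)
    f0 = sqrt_det * (i11 * du0 + i12 * du1)            # sqrt(|g|) * contravariant gradient
    f1 = sqrt_det * (i12 * du0 + i22 * du1)
    div = np.gradient(f0, axis=0) + np.gradient(f1, axis=1)
    return div / sqrt_det

def smooth(u, g11, g12, g22, t, dt=0.05):
    """Explicit heat diffusion du/dt = LB(u) up to time t (Gaussian-like smoothing)."""
    u = u.astype(float).copy()
    for _ in range(int(t / dt)):
        u += dt * laplace_beltrami(u, g11, g12, g22)
    return u
```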
Intelligent Robots and Systems | 2014
Thomas Koletschka; Luis Puig; Kostas Daniilidis
Ego-motion estimation from an image sequence, commonly known as visual odometry, has been thoroughly studied in recent years. Different solutions have been developed depending on the particular scenario in which the system operates. In highly textured environments point features are abundant, and visual odometry approaches focus on complementary steps, such as sparse bundle adjustment or keyframe techniques, to improve the accuracy of the motion estimation. In textureless scenarios, the absence of point features motivates the use of different image features. Lines have proven to be an interesting alternative to points in man-made environments, but very few visual odometry approaches have been developed using this type of feature. Moreover, the combination of point and line features has not been considered in the development of real-time visual odometry algorithms. In this paper, we explore the combination of point and line features to robustly compute the six-degree-of-freedom motion between consecutive stereo frames. Additionally, we deal with the problem of stereo line matching, since our approach is based on 3D-2D correspondences to estimate motion. We develop an efficient algorithm to compute the stereo line matching, even in situations where one of the endpoints describing the line segment in the left image is not visible in the right image. Several experiments with synthetic and real image sequences show that a simple but effective combination of point and line features improves the motion estimate compared to approaches using only one type of feature, with a slight increase in computational cost.
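One plausible way to combine the two feature types in a single 3D-2D cost is sketched below: point reprojection residuals plus signed distances of projected 3D line endpoints to the observed 2D lines, minimized with a generic non-linear least-squares solver. The pose parametrization, the pinhole projection and the use of SciPy are illustrative assumptions rather than the paper's stereo formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(K, R, t, X):
    """Pinhole projection of (N, 3) points X with pose (R, t)."""
    Xc = X @ R.T + t
    x = Xc[:, :2] / Xc[:, 2:3]
    return x @ K[:2, :2].T + K[:2, 2]

def residuals(params, K, pts3d, pts2d, line_ends3d, lines2d):
    """Point reprojection residuals + distances of projected 3D line endpoints
    to observed 2D lines given as (a, b, c) with a^2 + b^2 = 1."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    r_pts = (project(K, R, t, pts3d) - pts2d).ravel()
    ends = project(K, R, t, line_ends3d.reshape(-1, 3))          # (2M, 2) endpoints
    ends_h = np.hstack([ends, np.ones((len(ends), 1))])
    r_lines = np.einsum('ij,ij->i', ends_h, np.repeat(lines2d, 2, axis=0))
    return np.concatenate([r_pts, r_lines])

def estimate_pose(K, pts3d, pts2d, line_ends3d, lines2d, x0=np.zeros(6)):
    """line_ends3d: (M, 2, 3) endpoints of 3D line segments; lines2d: (M, 3) line coefficients."""
    res = least_squares(residuals, x0, args=(K, pts3d, pts2d, line_ends3d, lines2d))
    return res.x   # axis-angle rotation (3) and translation (3)
```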