Luz Abril Torres-Méndez
CINVESTAV
Publications
Featured research published by Luz Abril Torres-Méndez.
IEEE Computer | 2007
Gregory Dudek; Philippe Giguère; Chris Prahacs; Shane Saunderson; Junaed Sattar; Luz Abril Torres-Méndez; Michael Jenkin; Andrew German; Andrew Hogue; Arlene Ripsman; James E. Zacher; Evangelos E. Milios; Hui Liu; Pifu Zhang; Martin Buehler; Christina Georgiades
AQUA, an amphibious robot that swims via the motion of its legs rather than using thrusters and control surfaces for propulsion, can walk along the shore, swim along the surface in open water, or walk on the bottom of the ocean. The vehicle uses a variety of sensors to estimate its position with respect to local visual features and to provide a global frame of reference.
energy minimization methods in computer vision and pattern recognition | 2005
Luz Abril Torres-Méndez; Gregory Dudek
In this paper, we consider the problem of color restoration using statistical priors. This is applied to color recovery for underwater images, using an energy minimization formulation. Underwater images present a challenge when trying to correct the blue-green monochrome look to bring out the color we know marine life has. For aquatic robot tasks, image quality is crucial and must be recovered in real time. Our method enhances the color of the images by using a Markov Random Field (MRF) to represent the relationship between color-depleted and color images. The parameters of the MRF model are learned from training data, and then the most probable color assignment for each pixel in a given color-depleted image is inferred using belief propagation (BP). This allows the system to adapt the color restoration algorithm to the current environmental conditions and also to the task requirements. Experimental results on a variety of underwater scenes demonstrate the feasibility of our method.
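The data-driven prior behind the restoration above can be sketched in a much-simplified form. The snippet below replaces the paper's MRF/belief-propagation inference with a plain nearest-neighbor patch lookup against training pairs; the function names and the lookup shortcut are illustrative assumptions, not the published implementation.

```python
import numpy as np

def restore_color(depleted, train_depleted, train_color, patch=3):
    """Toy analogue of the MRF idea: for each patch of the color-depleted
    input, find the most similar training patch and copy the corresponding
    full-color center pixel.  (The actual method infers colors by belief
    propagation over an MRF; this lookup only illustrates the learned,
    data-driven prior.)"""
    h, w = depleted.shape[:2]
    r = patch // 2
    # Build the training set of (depleted patch, true color) pairs.
    th, tw = train_depleted.shape[:2]
    feats, colors = [], []
    for y in range(r, th - r):
        for x in range(r, tw - r):
            feats.append(train_depleted[y-r:y+r+1, x-r:x+r+1].ravel())
            colors.append(train_color[y, x])
    feats = np.array(feats)
    colors = np.array(colors)
    # Restore each pixel from its best-matching training patch.
    pad = np.pad(depleted, r, mode='edge')
    out = np.zeros((h, w, 3))
    for y in range(h):
        for x in range(w):
            f = pad[y:y+patch, x:x+patch].ravel()
            idx = np.argmin(((feats - f) ** 2).sum(axis=1))
            out[y, x] = colors[idx]
    return out
```

A real MRF formulation would additionally enforce smoothness between neighboring pixel assignments, which is what BP resolves.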
workshop on applications of computer vision | 2002
Luz Abril Torres-Méndez; Gregory Dudek
In this paper, a range synthesis algorithm is proposed as an initial solution to the problem of 3D environment modeling from sparse data. We develop a statistical learning method for inferring and extrapolating range data from as little as one intensity image and from those (sparse) regions where both range and intensity information is available. Our work is related to methods for texture synthesis using Markov Random Field methods. We demonstrate that MRF methods can also be applied to general intensity images with little associated range information, and used to estimate range values where needed without making any strong assumptions about the kind of surfaces in the world. Experimental results show the feasibility of our method.
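The core inference step, predicting range from intensity plus sparse range samples, can be illustrated with a deliberately crude stand-in: a global linear fit of range against intensity on the pixels where both are known. The paper's method is non-parametric and neighborhood-based, so this is only a sketch of the idea, not the algorithm.

```python
import numpy as np

def fill_range_linear(intensity, rng, known):
    """Crude stand-in for MRF-based range synthesis: fit
    range = a * intensity + b on pixels where both values are known,
    then predict range wherever it is missing.  (The published method
    uses non-parametric neighborhood statistics, not a global linear
    model; this only shows range inferred from intensity plus sparse
    range samples.)"""
    a, b = np.polyfit(intensity[known], rng[known], 1)
    out = rng.astype(float).copy()
    out[~known] = a * intensity[~known] + b
    return out
```

Where the true intensity-range relationship is locally linear, even this toy version recovers missing values exactly.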
International Journal of Computer Vision | 2008
Luz Abril Torres-Méndez; Gregory Dudek
Abstract In this article we present a method for automatically recovering complete and dense depth maps of an indoor environment by fusing incomplete data for the 3D environment modeling problem. The geometry of indoor environments is usually extracted by acquiring a huge amount of range data and registering it. By acquiring a small set of intensity images and a very limited amount of range data, the acquisition process is considerably simplified, saving time and energy consumption. In our method, the intensity and partial range data are first registered using an image-based registration algorithm. Then, the missing geometric structures are inferred using a statistical learning method that integrates and analyzes the statistical relationships between the visual data and the available depth in terms of small patches. Experiments on real-world data with a variety of sampling strategies demonstrate the feasibility of our method.
mexican international conference on artificial intelligence | 2007
Mario Castelán; Ana J. Almazán-Delfín; Marco I. Ramírez-Sosa-Morán; Luz Abril Torres-Méndez
We present a method for recovering facial shape using an image of a face and a reference model. The zenith angle of the surface normal is recovered directly from the intensities of the image. The azimuth angle of the reference model is then combined with the calculated zenith angle in order to obtain a new field of surface normals. After integration of the needle map, the recovered surface has the effect of mapping facial features over the reference model. Experiments demonstrate that for the Lambertian case, surface recovery is achieved with high accuracy. For non-Lambertian cases, experiments suggest potential for face recognition applications.
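The zenith/azimuth combination described above can be sketched as follows, assuming (as the Lambertian case does, with frontal lighting and unit albedo) that the zenith angle is theta = arccos(I) for normalized intensity I; the function name and these simplifying assumptions are illustrative.

```python
import numpy as np

def combine_normals(intensity, ref_normals):
    """Sketch of the normal-field combination: take the zenith angle
    from image intensity (Lambertian, frontal light: theta = arccos(I))
    and the azimuth angle from the reference model's normals, then
    rebuild unit surface normals from the mixed (theta, phi) pair."""
    theta = np.arccos(np.clip(intensity, 0.0, 1.0))            # zenith from image
    phi = np.arctan2(ref_normals[..., 1], ref_normals[..., 0])  # azimuth from model
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)
```

The resulting needle map would then be integrated (e.g. by a Frankot-Chellappa-style integrator) to produce the surface.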
canadian conference on computer and robot vision | 2004
Luz Abril Torres-Méndez; Gregory Dudek; P. Di Marco
This paper develops prior work that incrementally completes a sparse depth map based on inter-image statistical information. In that prior work, we observed that the pixel ordering of the incremental recovery is critical to the quality of the final results. In this paper we demonstrate improved performance using an information-driven recovery policy to determine this ordering. We have also observed that reconstruction across depth discontinuities was often problematic, as there was comparatively little constraint for probabilistic inference at those locations. Further, such locations are often identified with edges in both the range and intensity maps. We address this problem by deferring the reconstruction of voxels close to intensity or depth discontinuities, leading to improved results. We also show that color information can improve reconstruction quality. Experimental results are presented to demonstrate the quality of the recovery and to illustrate some new application domains such as deblurring and underwater scattering compensation.
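The edge-deferring ordering idea can be sketched with a simple policy: fill missing pixels far from intensity discontinuities first and those near edges last, using gradient magnitude as the edge measure. The paper's information-driven criterion is richer than this; the sketch below is an illustrative assumption.

```python
import numpy as np

def recovery_order(intensity, missing):
    """Sketch of an edge-deferring recovery policy: rank the missing
    pixels so that those far from intensity discontinuities are
    reconstructed first and those near edges are deferred.  Edge
    strength is approximated here by the gradient magnitude."""
    gy, gx = np.gradient(intensity.astype(float))
    edge = np.hypot(gx, gy)
    ys, xs = np.nonzero(missing)
    order = np.argsort(edge[ys, xs])  # weakest-edge pixels first
    return list(zip(ys[order], xs[order]))
```

In an incremental scheme, the edge map (and hence the ordering) would be re-evaluated as newly reconstructed depth adds constraints.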
intelligent robots and systems | 2004
Luz Abril Torres-Méndez; Gregory Dudek
We address the problem of computing dense range maps of indoor locations using only intensity images and partial depth. We allow a mobile robot to navigate the environment, capturing intensity images and a small amount of range data. Our method is based on interpolating the existing range data using statistical inferences learned from the available intensity image and from those (sparse) regions where both range and intensity information is present. The spatial relationships between the variations in intensity and range can be efficiently captured by the neighborhood system of a Markov random field (MRF). In contrast to classical approaches to depth recovery (i.e. stereo, shape from shading), we can afford to make only weak assumptions regarding specific surface geometries or surface reflectance functions, since we compute the relationship between the existing range data and the images we started with. Experimental results show the feasibility of our method.
International Journal of Intelligent Unmanned Systems | 2014
Edgar A. Martínez-García; Luz Abril Torres-Méndez; Mohan Rajesh Elara
Purpose – The purpose of this paper is to establish analytical and numerical solutions of a navigational law to estimate displacements of hyper-static multi-legged mobile robots, combining monocular vision (optical flow of regional invariants) and leg dynamics. Design/methodology/approach – In this study the authors propose a Euler-Lagrange equation that controls the legs' joints in order to control the robot's displacement. The robot's rotational and translational velocities are fed back through motion features of visual invariant descriptors. A general analytical solution of a derivative navigation law is proposed for hyper-static robots. The feedback is formulated with the local speed rate obtained from optical flow of visual regional invariants. The proposed formulation includes a data association algorithm aimed at correlating visual invariant descriptors detected in sequential images through monocular vision. The navigation law is constrained by a set of three kinematic equilibrium conditions for navigational scenarios: c...
Optical Engineering | 2011
Guangyi Chen; Gregory Dudek; Luz Abril Torres-Méndez
The acquisition of a three-dimensional (3-D) model in a real-world environment by scanning only sparsely can save a great amount of range-sensing time. We present a new method for inferring missing range data based on a given intensity image and sparse range data. It is assumed that the known range data are given on a number of scan lines one pixel wide. This assumption is natural for a range sensor acquiring range data in a 3-D real-world environment. Both edge information from the intensity image and linear interpolation of the range data are used. Experiments show that this method gives very good results in inferring missing range data. It outperforms both the previous method and bilinear interpolation when a very small percentage of range data is known.
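The combination of scan-line interpolation with intensity edges can be sketched as below: missing range values on a row are linearly interpolated between known samples, except where an intensity edge lies between them, since depth is likely to jump there. The function name and the simple boolean edge map are assumptions for illustration; the paper's edge handling is more elaborate.

```python
import numpy as np

def interp_scanline(rng, known, edges):
    """Sketch of edge-aware scan-line completion: linearly interpolate
    missing range values between known samples on each row, but skip
    any span that crosses an intensity edge (where depth likely
    changes discontinuously).  `edges` is a boolean edge map."""
    out = rng.astype(float)
    for y in range(rng.shape[0]):
        xs = np.nonzero(known[y])[0]
        for x0, x1 in zip(xs[:-1], xs[1:]):
            if edges[y, x0+1:x1].any():
                continue  # an edge lies between the samples; defer
            t = np.linspace(0.0, 1.0, x1 - x0 + 1)
            out[y, x0:x1+1] = (1 - t) * rng[y, x0] + t * rng[y, x1]
    return out
```

Spans skipped here would be filled by a second pass that interpolates each side of the edge separately, or by a learned prior as in the earlier range synthesis work.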
canadian conference on computer and robot vision | 2005
Guangyi Chen; Gregory Dudek; Luz Abril Torres-Méndez
This paper presents an approach to scene reconstruction that infers missing range data in a partial range map based on an intensity image and sparse initial range data. It is assumed that the initial known range data are given on a number of scan lines one pixel wide. This assumption is natural for a range sensor acquiring range data in a 3D real-world environment. Both edge information from the intensity image and linear interpolation of the range data are used. Experiments show that this method gives very good results in inferring missing range data. It outperforms both the previous method and bilinear interpolation when a very small percentage of range data is known.