Alejandro Pérez-Yus
University of Zaragoza
Publications
Featured research published by Alejandro Pérez-Yus.
european conference on computer vision | 2014
Alejandro Pérez-Yus; Gonzalo López-Nicolás; José Jesús Guerrero
In this paper we address the perception task of a wearable navigation assistant. Specifically, we focus on the detection of staircases because of the important role they play in indoor navigation: they make it possible to reach other floors, but they also pose a safety hazard, especially for people with visual impairments. We use the depth-sensing capabilities of modern RGB-D cameras to segment and classify the different elements of the scene, and then run a stair detection and modelling algorithm to retrieve all the information that might interest the user, i.e. the location and orientation of the staircase, the number of steps, and the step dimensions. Experiments show that the system runs in real time and works even under partial occlusions of the stairway.
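As a rough illustration of the segmentation stage described above, the sketch below (not the authors' implementation) iteratively extracts planes from an RGB-D point cloud with RANSAC and keeps the near-horizontal ones as step candidates. The choice of Open3D, the gravity axis, and all thresholds are assumptions made for illustration.

```python
# Minimal sketch: extract near-horizontal planes as step candidates.
import numpy as np
import open3d as o3d

def find_horizontal_planes(pcd, max_planes=8, dist_thresh=0.02,
                           vertical_tol_deg=10.0):
    """Return (plane_model, points) pairs for near-horizontal planes."""
    up = np.array([0.0, 1.0, 0.0])      # assumed gravity direction
    cos_tol = np.cos(np.deg2rad(vertical_tol_deg))
    rest, candidates = pcd, []
    for _ in range(max_planes):
        if len(rest.points) < 100:
            break
        model, inliers = rest.segment_plane(distance_threshold=dist_thresh,
                                            ransac_n=3, num_iterations=1000)
        normal = np.asarray(model)[:3]   # Open3D returns a unit normal
        plane = rest.select_by_index(inliers)
        rest = rest.select_by_index(inliers, invert=True)
        # keep only planes whose normal is close to the vertical axis
        if abs(normal @ up) > cos_tol:
            candidates.append((model, np.asarray(plane.points)))
    return candidates
```

The candidate planes would then be ordered by height and grouped into a staircase model when their dimensions and spacing are plausible for steps.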
Computer Vision and Image Understanding | 2017
Alejandro Pérez-Yus; Daniel Gutiérrez-Gómez; Gonzalo López-Nicolás; José Jesús Guerrero
Highlights:
- A novel method to detect stairs with an RGB-D camera is proposed.
- We obtain a fully measured model with pose to validate detections and aid navigation.
- The system is designed to be wearable and aims to assist the visually impaired.
- Visual odometry is computed to enhance the navigation system in video sequences.
- On-line stair measurements are used to correct the drift of the visual odometry.

Stairs are among the most common structures in human-made environments, but also among the most dangerous for people with vision problems. In this work we propose a complete method to detect, locate and parametrise stairs with a wearable RGB-D camera. Our algorithm uses the depth data to determine whether the horizontal planes in the scene are valid steps of a staircase, judging by their dimensions and relative positions. As a result we obtain a scaled model of the staircase with its spatial location and orientation with respect to the subject. Visual odometry is also estimated to continuously recover the current position and orientation of the user while moving. This enhances the system, giving it the ability to return to previously detected features and providing location awareness of the user during the climb. Simultaneously, the detection of the staircase during traversal is used to correct the drift of the visual odometry. A comparison of the stair detection results with other state-of-the-art algorithms was performed using a public dataset. Additional experiments were also carried out, recording our own natural scenes with a chest-mounted RGB-D camera in indoor scenarios. The algorithm is robust enough to work in real time, even under partial occlusions of the stairs.
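The step-validation idea lends itself to a compact illustration: given the heights of the candidate horizontal planes, consecutive gaps should look like plausible risers. The riser range below is an assumed typical value, not the paper's exact rule.

```python
# Illustrative staircase check from candidate plane heights (assumptions).
import numpy as np

def validate_staircase(plane_heights, riser_range=(0.10, 0.20)):
    """plane_heights: heights (m) of detected horizontal planes."""
    h = np.sort(np.asarray(plane_heights))
    risers = np.diff(h)                  # gaps between consecutive planes
    ok = (risers > riser_range[0]) & (risers < riser_range[1])
    return bool(ok.all()) and len(risers) >= 2, risers

valid, risers = validate_staircase([0.00, 0.17, 0.34, 0.50])
print(valid, risers)                     # True, risers of roughly 0.17 m
```

In the full system, each accepted staircase measurement also serves as a landmark of known geometry against which the accumulated visual-odometry drift can be corrected.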
international conference on pattern recognition | 2016
Alejandro Pérez-Yus; Gonzalo López-Nicolás; Josechu J. Guerrero
We introduce a novel hybrid camera configuration composed of a fisheye camera attached to an RGB-D system. Current RGB-D sensors provide the 3D information and scale of the scene, but they are limited by a small field of view. In contrast, wide-field-of-view cameras capture a larger portion of the scene, but provide highly distorted images that require specific algorithms. By coupling a fisheye camera to an RGB-D system we take advantage of both types of cameras while overcoming their drawbacks. The system provides a portion of the fisheye image with depth data, and we use this seed information to perform scaled operations on the complete image. We also present a calibration procedure that maps depth information to the wide-angle image. To this end, we propose a depth-fisheye calibration algorithm that builds on state-of-the-art camera models and methods. Several experiments test the accuracy of the system with real images.
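The core operation behind such a calibration is mapping 3D points from the depth camera into fisheye pixels. The sketch below uses a simple equidistant fisheye model (r = f·θ) with placeholder intrinsics and extrinsics; the paper fits richer state-of-the-art fisheye models, so treat this as an assumption-laden illustration.

```python
# Project depth-camera 3D points into a fisheye image (equidistant model).
import numpy as np

def project_equidistant(points_depth, R, t, f, cx, cy):
    """points_depth: (N,3) points in the depth frame -> (N,2) fisheye pixels.
    R, t: assumed depth-to-fisheye extrinsics; f, cx, cy: assumed intrinsics."""
    P = points_depth @ R.T + t                # transform into fisheye frame
    norm_xy = np.linalg.norm(P[:, :2], axis=1)
    theta = np.arctan2(norm_xy, P[:, 2])      # angle from the optical axis
    r = f * theta                             # equidistant mapping r = f*theta
    scale = np.where(norm_xy > 1e-9, r / norm_xy, 0.0)
    u = cx + scale * P[:, 0]
    v = cy + scale * P[:, 1]
    return np.stack([u, v], axis=1)
```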
european conference on computer vision | 2016
Alejandro Pérez-Yus; Gonzalo López-Nicolás; José Jesús Guerrero
Consumer RGB-D cameras have become very useful in recent years, but their field of view is too narrow for certain applications. We propose a new hybrid camera system composed of a conventional RGB-D camera and a fisheye camera to extend the field of view beyond 180°. With this system we have a region of the hemispherical image with depth certainty, and color data in the periphery that is used to extend the structural information of the scene. We have developed a new method to generate scaled layout hypotheses from relevant corners, combining the extraction of lines in the fisheye image with the depth information. Experiments with real images from different scenarios validate our layout recovery method and the advantages of this camera system, which is also able to overcome severe occlusions. As a result, we obtain a scaled 3D model that expands the original depth information with the wide scene reconstruction. Our proposal successfully expands the depth map by more than eleven times in a single shot.
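One way to see how depth lends scale to line-based layout hypotheses: a wall inferred from fisheye lines fixes the plane normal up to an unknown offset, and a few depth points in the overlap region resolve that offset, scaling the whole hypothesis. The snippet below is a toy version of this idea under those assumptions, not the authors' full hypothesis generator.

```python
# Toy scale recovery: fix a wall plane's offset from depth seed points.
import numpy as np

def scale_wall(n, depth_points_on_wall):
    """n: wall normal from line/vanishing-point reasoning (any length).
    depth_points_on_wall: (N,3) metric points from the RGB-D overlap region.
    Returns (n, d) for the plane n.X + d = 0."""
    n = n / np.linalg.norm(n)
    d = -np.median(depth_points_on_wall @ n)  # robust plane offset
    return n, d
```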
international conference on robotics and automation | 2018
Alejandro Pérez-Yus; Eduardo Fernandez-Moral; Gonzalo López-Nicolás; Josechu J. Guerrero; Patrick Rives
This letter presents a novel method to estimate the relative poses between RGB and depth cameras without requiring an overlapping field of view, thus providing flexibility to calibrate a variety of sensor configurations. This calibration problem is relevant to robotic applications that can benefit from using several cameras to increase the field of view. In our approach, we extract and match lines of the scene in the RGB and depth cameras, and impose geometric constraints to find the relative poses between the sensors. An analysis of the observability properties of the problem is presented. We have validated our method in both synthetic and real scenarios with different camera configurations, demonstrating that our approach achieves good accuracy and is very simple to apply, in contrast with previous methods based on trajectory matching using visual odometry or simultaneous localization and mapping.
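For intuition, the rotation part of such a calibration can be illustrated with a classical Kabsch/SVD alignment of matched line directions. The paper's actual formulation imposes line-based geometric constraints and also recovers translation and observability conditions; this sketch only conveys the flavour, under the assumption that consistent 3D line directions are available in both frames.

```python
# Kabsch alignment of matched line directions (rotation only, illustrative).
import numpy as np

def rotation_from_directions(d_rgb, d_depth):
    """d_rgb, d_depth: (N,3) matched unit line directions with consistent
    signs; returns R such that d_rgb ~ R @ d_depth."""
    H = d_depth.T @ d_rgb                      # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    # correct a possible reflection so that det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T
```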
International Workshop on Understanding Human Activities through 3D Sensors | 2016
Alejandro Pérez-Yus; Luis Puig; Gonzalo López-Nicolás; José Jesús Guerrero; Dieter Fox
Tracking the pose of objects is a relevant topic in computer vision, as it potentially allows the recovery of meaningful information for other applications such as task supervision, robot manipulation or activity recognition. In recent years, RGB-D cameras have been widely adopted for this problem with impressive results. However, there are certain objects whose surface properties or complex shapes prevent the depth sensor from returning good depth measurements, and only color-based methods can be applied. In this work, we show how the depth information of the surroundings of the object can still be useful for object pose tracking with RGB-D even in this situation. Specifically, we propose using the depth information to handle occlusions in a state-of-the-art region-based object pose tracking algorithm. Experiments with recordings of humans naturally interacting with difficult objects show the advantages of our contribution in several image sequences.
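The occlusion-handling idea can be sketched as a per-pixel test: where the measured depth is clearly closer to the camera than the depth of the rendered object model, an occluder is likely present, so a region-based tracker can ignore or down-weight those pixels in its color cost. The margin below is an assumed value, not the paper's exact parameter.

```python
# Per-pixel occlusion mask from measured vs. rendered depth (illustrative).
import numpy as np

def occlusion_mask(measured_depth, rendered_depth, margin=0.02):
    """Both depth maps in metres, same shape; True marks occluded pixels."""
    valid = (measured_depth > 0) & (rendered_depth > 0)
    occluded = valid & (measured_depth < rendered_depth - margin)
    return occluded
```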
arXiv: Computer Vision and Pattern Recognition | 2018
Clara Fernandez-Labrador; José M. Fácil; Alejandro Pérez-Yus; Cédric Demonceaux; Josechu J. Guerrero
Proceedings of the V Congreso Internacional de Turismo para Todos / VI Congreso Internacional de Diseño, Redes de Investigación y Tecnología para Todos (DRT4ALL 2015), ISBN 978-84-7993-277-0, pp. 285-312 | 2015
José Jesús Guerrero; A. Gutiérrez Gómez; Alejandro Rituerto; G. López-Nicolás; Alejandro Pérez-Yus
international conference on robotics and automation | 2018
Clara Fernandez-Labrador; Alejandro Pérez-Yus; Gonzalo López-Nicolás; José Jesús Guerrero
international conference on computer vision | 2017
Alejandro Pérez-Yus; Jesus Bermudez-Cameo; José Jesús Guerrero; Gonzalo López-Nicolás