Pablo Fernández Alcantarilla
University of Alcalá
Publications
Featured research published by Pablo Fernández Alcantarilla.
British Machine Vision Conference | 2013
Pablo Fernández Alcantarilla; Jesús Nuevo; Adrien Bartoli
We propose a novel and fast multiscale feature detection and description approach that exploits the benefits of nonlinear scale spaces. Previous attempts to detect and describe features in nonlinear scale spaces such as KAZE [1] and BFSIFT [6] are highly time consuming due to the computational burden of creating the nonlinear scale space. In this paper we propose to use recent numerical schemes called Fast Explicit Diffusion (FED) [3, 4] embedded in a pyramidal framework to dramatically speed-up feature detection in nonlinear scale spaces. In addition, we introduce a Modified-Local Difference Binary (M-LDB) descriptor that is highly efficient, exploits gradient information from the nonlinear scale space, is scale and rotation invariant and has low storage requirements. Our features are called Accelerated-KAZE (A-KAZE) due to the dramatic speed-up introduced by FED schemes embedded in a pyramidal framework.
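The FED schemes cited above replace many small explicit diffusion steps with cycles of varying step sizes whose total diffusion time grows quadratically with the cycle length, which is where the speed-up comes from. Below is a minimal numpy sketch of the standard FED step-size formula and one 1D diffusion cycle; it illustrates the numerical idea only and is not the authors' implementation, which builds a 2D nonlinear scale space.

```python
import numpy as np

def fed_tau(n, tau_max=0.5):
    """Step sizes of one Fast Explicit Diffusion (FED) cycle of n steps.

    tau_max is the stability limit of the underlying explicit scheme.
    Individual steps may exceed tau_max, but the cycle as a whole is
    stable, and the step sizes sum to tau_max * (n**2 + n) / 3.
    """
    j = np.arange(n)
    return tau_max / (2.0 * np.cos(np.pi * (2 * j + 1) / (4 * n + 2)) ** 2)

def fed_cycle_1d(u, n, tau_max=0.5):
    """Run one FED cycle of 1D linear diffusion u_t = u_xx (periodic)."""
    u = u.astype(float).copy()
    for tau in fed_tau(n, tau_max):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u  # discrete Laplacian
        u += tau * lap
    return u
```

With n = 8 and tau_max = 0.5, one cycle covers a diffusion time of 0.5 · (64 + 8) / 3 = 12, which would need 24 plain explicit steps at the stability limit. For reference, OpenCV ships an A-KAZE implementation as `cv2.AKAZE_create()`.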
Sensors | 2012
Alberto Rodríguez; J. Javier Yebes; Pablo Fernández Alcantarilla; Luis Miguel Bergasa; Javier Almazán; Andres F. Cela
This article focuses on the design of an obstacle detection system for assisting visually impaired people. A dense disparity map is computed from the images of a stereo camera carried by the user. By using the dense disparity map, potential obstacles can be detected in 3D in indoor and outdoor scenarios. A ground plane estimation algorithm based on RANSAC plus filtering techniques allows the robust detection of the ground in every frame. A polar grid representation is proposed to account for the potential obstacles in the scene. The design is completed with acoustic feedback to assist visually impaired users while approaching obstacles. Beep sounds with different frequencies and repetitions inform the user about the presence of obstacles. Audio bone-conducting technology is employed to play these sounds without preventing the visually impaired user from hearing other important sounds from the local environment. A user study with four visually impaired volunteers supports the proposed system.
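The RANSAC ground-plane step can be sketched as follows: repeatedly sample three 3D points from the disparity-derived point cloud, hypothesize the plane through them, and keep the hypothesis with the most inliers. This is a generic sketch under assumed parameter values (iteration count, inlier distance); the paper's additional temporal filtering is omitted.

```python
import numpy as np

def ransac_plane(points, iters=200, dist_thresh=0.05, rng=None):
    """Fit a plane n.x + d = 0 to an (N, 3) point cloud with RANSAC.

    Returns (unit normal, offset d, boolean inlier mask).
    """
    rng = np.random.default_rng(rng)
    best_inliers, best_model = None, None
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                 # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p[0]
        inliers = np.abs(points @ n + d) < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    n, d = best_model
    return n, d, best_inliers
```

Points off the recovered plane (the non-inliers above the ground) are the obstacle candidates accumulated into the polar grid.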
IEEE Intelligent Vehicles Symposium | 2008
Pablo Fernández Alcantarilla; Luis Miguel Bergasa; Pedro Jiménez; Miguel Ángel Sotelo; Ignacio Parra; D. Fernandez; S.S. Mayoral
In this paper we present an effective system for detecting vehicles in front of a camera-assisted vehicle (preceding vehicles traveling in the same direction and oncoming vehicles traveling in the opposite direction) during night-time driving conditions, in order to automatically switch the vehicle head lights between low beams and high beams and avoid glare for other drivers. Accordingly, high beams will be selected when no other traffic is present, and the system will switch to low beams when other vehicles are detected. Our system uses a B&W micro-camera mounted in the windshield area and looking forward from the vehicle. Digital image processing techniques are applied to analyze light sources and to detect vehicles in the images. The algorithm is efficient and able to run in real time. Some experimental results and conclusions are presented.
International Conference on Intelligent Transportation Systems | 2009
Sebastián Bronte; Luis Miguel Bergasa; Pablo Fernández Alcantarilla
In this document, a real-time fog detection system for a driving application, using an on-board low-cost B&W camera, is presented. The system is based on two cues: the visibility distance, which is estimated from the camera projection equations, and the blurring due to the fog. Because of the water particles floating in the air, sky light is diffused and bleeds into the road zone, which is normally one of the darkest regions of the image; the apparent effect is that part of the sky merges into the road. In foggy scenes, the edge strength in the upper part of the image is also reduced. These two sources of information are combined to make the system more robust. The final purpose of this system is to develop an automatic vision-based diagnostic system that warns ADAS of possible wrong working conditions. Some experimental results and the conclusions about this work are presented.
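The projection-based visibility cue rests on the standard flat-road pinhole relation: an image row v below the horizon row v_h maps to a ground distance d = H · f / (v − v_h), where H is the camera height and f the focal length in pixels. A minimal sketch under that assumption (the symbol names are ours, not necessarily the paper's):

```python
def row_to_distance(v, v_h, f_pix, cam_height):
    """Ground distance imaged at row v (rows grow downward, v > v_h),
    assuming a flat road and a pinhole camera: d = H * f / (v - v_h)."""
    if v <= v_h:
        raise ValueError("row is at or above the horizon")
    return cam_height * f_pix / (v - v_h)
```

The visibility distance can then be read off as the distance of the highest road row that is still distinguishable from the fog-washed sky.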
International Conference on Robotics and Automation | 2010
Pablo Fernández Alcantarilla; Luis Miguel Bergasa; Frank Dellaert
One of the main drawbacks of standard visual EKF-SLAM techniques is the assumption of a general camera motion model. Usually this motion model has been implemented in the literature as a constant linear and angular velocity model. Because of this, most approaches cannot deal with sudden camera movements, causing them to lose an accurate camera pose estimate and leading to a corrupted 3D scene map. In this work we propose increasing the robustness of EKF-SLAM techniques by replacing this general motion model with a visual odometry prior, which provides a real-time relative pose prior by tracking many hundreds of features from frame to frame. We perform fast pose estimation using the two-stage RANSAC-based approach from [1]: a two-point algorithm for rotation followed by a one-point algorithm for translation. Then we integrate the estimated relative pose into the prediction step of the EKF. In the measurement update step, we only incorporate a much smaller number of landmarks into the 3D map to maintain real-time operation. Incorporating the visual odometry prior in the EKF process yields better and more robust localization and mapping results when compared to the constant linear and angular velocity model case. Our experimental results, using a handheld stereo camera as the only sensor, clearly show the benefits of our method against the standard constant velocity model.
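The idea of feeding a relative-pose prior into the EKF prediction step can be illustrated on a planar toy problem: compose the current pose estimate with the odometry increment and propagate the covariance through the Jacobian of that composition. This is a 2D sketch of the principle only, not the paper's 6-DoF stereo system, and the noise model Q is an assumed placeholder.

```python
import numpy as np

def predict_with_vo(pose, P, vo_delta, Q):
    """EKF prediction for a 2D pose (x, y, theta) driven by a
    visual-odometry relative pose (dx, dy, dtheta) expressed in the
    previous body frame, instead of a constant-velocity model."""
    x, y, th = pose
    dx, dy, dth = vo_delta
    c, s = np.cos(th), np.sin(th)
    new_pose = np.array([x + c * dx - s * dy,
                         y + s * dx + c * dy,
                         th + dth])
    # Jacobian of the pose composition w.r.t. the previous pose
    F = np.array([[1.0, 0.0, -s * dx - c * dy],
                  [0.0, 1.0,  c * dx - s * dy],
                  [0.0, 0.0,  1.0]])
    P_new = F @ P @ F.T + Q   # propagate uncertainty, add motion noise
    return new_pose, P_new
```

Because the prediction now follows the measured inter-frame motion, a sudden camera movement shifts the predicted landmark search regions accordingly instead of being averaged away by a constant-velocity assumption.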
International Conference on Robotics and Automation | 2012
Pablo Fernández Alcantarilla; José Javier Yebes; Javier Almazán; Luis Miguel Bergasa
In this paper, we introduce the concept of dense scene flow for visual SLAM applications. Traditional visual SLAM methods assume static features in the environment and that a dominant part of the scene changes only due to camera egomotion. These assumptions make traditional visual SLAM methods prone to failure in crowded real-world dynamic environments with many independently moving objects, such as the typical environments for the visually impaired. By means of a dense scene flow representation, moving objects can be detected. In this way, the visual SLAM process can be improved considerably by not adding erroneous measurements into the estimation, yielding more consistent and improved localization and mapping results. We show large-scale visual SLAM results in challenging indoor and outdoor crowded environments with real visually impaired users. In particular, we performed experiments inside the Atocha railway station and in the city-center of Alcalá de Henares, both in Madrid, Spain. Our results show that the combination of visual SLAM and dense scene flow makes it possible to obtain an accurate localization, considerably improving on the results of traditional visual SLAM methods and GPS-based approaches.
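The gating idea can be reduced to a simple test: after compensating the camera egomotion (R, t), a static 3D point should land on its predicted position, so any point with a large residual 3D motion is flagged as moving and excluded from the SLAM update. A sparse numpy sketch of this test, with an assumed threshold; the paper operates on dense per-pixel scene flow.

```python
import numpy as np

def dynamic_point_mask(pts_prev, pts_curr, R, t, thresh=0.10):
    """Boolean mask of points whose residual 3D motion, after
    compensating camera egomotion (R, t), exceeds `thresh` metres.
    pts_prev, pts_curr: (N, 3) matched 3D points at frames k-1 and k."""
    predicted = pts_prev @ R.T + t                 # where static points should be
    residual = np.linalg.norm(pts_curr - predicted, axis=1)
    return residual > thresh
```

Measurements on masked points are simply dropped before the estimation step, which is what keeps independently moving pedestrians from corrupting the map.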
Machine Vision and Applications | 2011
Pablo Fernández Alcantarilla; Luis Miguel Bergasa; Pedro Jiménez; Ignacio Parra; David Fernández Llorca; Miguel Ángel Sotelo; S.S. Mayoral
In this article, we present an effective system for detecting vehicles in front of a camera-assisted vehicle (preceding vehicles traveling in the same direction and oncoming vehicles traveling in the opposite direction) during night-time driving conditions, in order to automatically switch the vehicle head lights between low beams and high beams and avoid glare for other drivers. Accordingly, high beams will be selected when no other traffic is present, and low beams will be selected when other vehicles are detected. In addition, low beams will be selected when the vehicle is in a well-lit or urban area. The LightBeam Controller is used to assist drivers in controlling the vehicle's beams, increasing their correct use, since normally drivers do not switch between high beams and low beams, or vice versa, when needed. Our system uses a B&W forward-looking micro-camera mounted in the windshield area of a C4-Picasso prototype car. Image processing techniques are applied to analyze light sources and to detect vehicles in the images. Furthermore, the system is able to distinguish vehicle lights from road-sign reflections or nuisance artifacts by means of support vector machines. The algorithm is efficient and able to run in real time. The system has been tested with different video sequences (more than 7 h of video) under real night driving conditions on different roads of Spain. Experimental results, a comparison with other representative state-of-the-art methods, and conclusions about the system performance are presented.
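The light-source analysis stage can be pictured as intensity thresholding followed by connected-component extraction: head lights and tail lights appear as compact bright blobs in the night image, and per-blob features (area, position, shape) then feed the SVM classifier. A dependency-free sketch of the blob extraction only, with assumed threshold values; the SVM stage is not shown.

```python
import numpy as np

def bright_blobs(img, thresh=128):
    """Return the 4-connected components of pixels brighter than
    `thresh`, as lists of (row, col) coordinates."""
    mask = img > thresh
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    blobs = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, pix = [(i, j)], []
                seen[i, j] = True
                while stack:                    # flood fill one component
                    y, x = stack.pop()
                    pix.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                                mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                blobs.append(pix)
    return blobs
```

Each blob's area and centroid would then be turned into a feature vector for the vehicle-light vs. reflection classifier.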
International Conference on Intelligent Transportation Systems | 2008
Pablo Fernández Alcantarilla; Miguel Ángel Sotelo; Luis Miguel Bergasa
This paper presents an automatic road traffic control and monitoring system for daytime sequences using a B&W camera. Important road traffic information such as mean speed, dimensions, and vehicle counts is obtained using computer vision methods. Firstly, moving objects are extracted from the scene by means of a frame-differencing algorithm and texture information based on grey-scale intensity. However, shadows of moving objects also belong to the foreground. Shadows are removed from the foreground objects using top-hat transformations and morphological operators. Finally, objects are tracked in a Kalman filtering process, and parameters such as position, dimensions, distance, and speed of moving objects are measured. Then, according to these parameters, moving objects are classified as vehicles (trucks or cars) or nuisance artifacts. For results visualization, a 3D model is projected onto the vehicles in the image plane. Some experimental results using real outdoor image sequences are shown. These results demonstrate the accuracy of the proposed system under daytime interurban traffic conditions.
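The foreground-extraction step described above can be sketched as frame differencing against a background image, followed by a morphological opening (erosion then dilation) that removes isolated noise pixels. A minimal numpy sketch with assumed thresholds and a 3x3 structuring element; the paper's texture cue, top-hat shadow removal, and Kalman tracking are not shown.

```python
import numpy as np

def erode3(mask):
    """3x3 binary erosion (image border treated as background)."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def dilate3(mask):
    """3x3 binary dilation."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask, dtype=bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def foreground(frame, background, diff_thresh=25):
    """Frame differencing followed by a morphological opening that
    suppresses single-pixel noise while keeping compact moving regions."""
    moving = np.abs(frame.astype(int) - background.astype(int)) > diff_thresh
    return dilate3(erode3(moving))
```

The surviving connected regions are the moving-object candidates that are subsequently tracked and classified.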
International Conference on Robotics and Automation | 2010
Pablo Fernández Alcantarilla; Sang Min Oh; Gian Luca Mariottini; Luis Miguel Bergasa; Frank Dellaert
We aim to perform robust and fast vision-based localization using a pre-existing large map of the scene. A key step in localization is associating the features extracted from the image with the map elements at the current location. Although the problem of data association has greatly benefited from recent advances in appearance-based matching methods, less attention has been paid to the effective use of the geometric relations between the 3D map and the camera in the matching process. In this paper we propose to exploit the geometric relationship between the 3D map and the camera pose to determine the visibility of the features. In our approach, we model the visibility of every map feature with respect to the camera pose using a non-parametric distribution model. We learn these non-parametric distributions during the 3D reconstruction process, and develop efficient algorithms to predict the visibility of features during localization. With this approach, the matching process only uses those map features with the highest visibility score, yielding a much faster algorithm and superior localization results. We demonstrate an integrated system based on the proposed idea and highlight its potential benefits for localization in large and cluttered environments.
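The spirit of the non-parametric visibility model can be illustrated with a kernel-weighted vote: each training pose at which the feature was (or was not) observed during reconstruction votes on the query pose, weighted by pose similarity. This toy sketch simplifies poses to 3D positions and uses an assumed Gaussian kernel width; the paper's formulation also accounts for camera orientation.

```python
import numpy as np

def visibility_score(query_pose, train_poses, seen, sigma=1.0):
    """Kernel-weighted estimate (in [0, 1]) of a map feature's visibility
    at a query camera position.

    train_poses: (N, 3) camera positions from the reconstruction.
    seen: (N,) array, 1 where the feature was observed, else 0.
    """
    sq_dist = np.sum((train_poses - query_pose) ** 2, axis=1)
    w = np.exp(-sq_dist / (2.0 * sigma ** 2))   # Gaussian kernel weights
    return float(w @ np.asarray(seen, dtype=float) / w.sum())
```

During localization, only features whose score exceeds a cutoff are passed to the appearance-based matcher, which is what shrinks the candidate set and speeds up data association.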
British Machine Vision Conference | 2014
Luca Carlone; Pablo Fernández Alcantarilla; Han-Pang Chiu; Frank Dellaert