Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Luis Miguel Bergasa is active.

Publication


Featured research published by Luis Miguel Bergasa.


IEEE Transactions on Intelligent Transportation Systems | 2007

Combination of Feature Extraction Methods for SVM Pedestrian Detection

Ignacio Parra Alonso; David Fernández Llorca; Miguel Ángel Sotelo; Luis Miguel Bergasa; Pedro Revenga De Toro; Jesús Nuevo; Manuel Ocaña; Miguel Ángel García Garrido

This paper describes a comprehensive combination of feature extraction methods for vision-based pedestrian detection in Intelligent Transportation Systems. The basic components of pedestrians are first located in the image and then combined with a support-vector-machine-based classifier. This poses the problem of pedestrian detection in real, cluttered road images. Candidate pedestrians are located using a subtractive clustering attention mechanism based on stereo vision. A components-based learning approach is proposed in order to better deal with pedestrian variability, illumination conditions, partial occlusions, and rotations. Extensive comparisons have been carried out using different feature extraction methods as a key to image understanding in real traffic conditions. A database containing thousands of pedestrian samples extracted from real traffic images, covering both daytime and nighttime, has been created for learning purposes. The results achieved to date show interesting conclusions that suggest a combination of feature extraction methods as an essential clue for enhanced detection performance.
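The core idea of combining several feature descriptors and feeding them to an SVM can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the descriptor names, the random data and the use of scikit-learn are assumptions.

```python
# Sketch: concatenate several per-window descriptors and classify with an SVM
# (illustrative only; not the paper's exact feature set or pipeline).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def combine_features(descriptor_list):
    """Concatenate several per-sample descriptors into one feature vector."""
    return np.hstack(descriptor_list)

# Hypothetical precomputed descriptors: rows are candidate windows.
rng = np.random.default_rng(0)
haar_like = rng.normal(size=(200, 64))
gradient_hist = rng.normal(size=(200, 36))
labels = rng.integers(0, 2, size=200)          # 1 = pedestrian, 0 = background

X = combine_features([haar_like, gradient_hist])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # SVM classifier over combined features
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```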


Autonomous Robots | 2004

A Color Vision-Based Lane Tracking System for Autonomous Driving on Unmarked Roads

Miguel Ángel Sotelo; Francisco Rodríguez; Luis Magdalena; Luis Miguel Bergasa; Luciano Boquete

This work describes a color vision-based system intended to perform stable autonomous driving on unmarked roads. Accordingly, this implies the development of an accurate road surface detection system that ensures vehicle stability. Although this topic has already been documented in the technical literature by different research groups, the vast majority of existing Intelligent Transportation Systems are devoted to assisted driving of vehicles on marked extra-urban roads and highways. The complete system was tested on the BABIECA prototype vehicle, which was autonomously driven for hundreds of kilometers accomplishing different navigation missions on a private circuit that emulates an urban quarter. During the tests, the navigation system demonstrated its robustness with regard to shadows, road texture, and changing weather and illumination conditions.
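As a rough illustration of color-based road surface detection on unmarked roads, the sketch below segments road-like pixels by color distance to a seed patch in front of the vehicle. The color space, threshold and seed location are assumptions, not the BABIECA implementation.

```python
# Sketch: segment the road surface by color similarity to a seed patch
# taken from the area directly in front of the vehicle (illustrative only).
import cv2
import numpy as np

def segment_road(bgr_image, max_distance=25.0):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, w = hsv.shape[:2]
    # Seed patch: small region at the bottom-centre of the image,
    # assumed to belong to the road surface.
    seed = hsv[int(0.85 * h):, int(0.4 * w):int(0.6 * w)]
    mean = seed.reshape(-1, 3).mean(axis=0)
    distance = np.linalg.norm(hsv - mean, axis=2)
    return (distance < max_distance).astype(np.uint8) * 255

if __name__ == "__main__":
    frame = cv2.imread("road_frame.png")   # hypothetical input frame
    if frame is not None:
        cv2.imwrite("road_mask.png", segment_road(frame))
```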


Sensors | 2012

Assisting the Visually Impaired: Obstacle Detection and Warning System by Acoustic Feedback

Alberto Rodríguez; J. Javier Yebes; Pablo Fernández Alcantarilla; Luis Miguel Bergasa; Javier Almazán; Andres F. Cela

This article focuses on the design of an obstacle detection system for assisting visually impaired people. A dense disparity map is computed from the images of a stereo camera carried by the user. Using the dense disparity map, potential obstacles can be detected in 3D in indoor and outdoor scenarios. A ground plane estimation algorithm based on RANSAC plus filtering techniques allows the robust detection of the ground in every frame. A polar grid representation is proposed to account for the potential obstacles in the scene. The design is completed with acoustic feedback to assist visually impaired users while approaching obstacles: beep sounds with different frequencies and repetitions inform the user about the presence of obstacles. Audio bone-conduction technology is employed to play these sounds without preventing the visually impaired user from hearing other important sounds in their local environment. A user study with four visually impaired volunteers supports the proposed system.
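The ground-plane step can be illustrated with a small RANSAC plane fit over 3D points reconstructed from the disparity map. This is a generic sketch under synthetic data, not the authors' filtering pipeline; thresholds and iteration counts are assumptions.

```python
# Sketch: RANSAC plane fit to find the ground in a 3D point cloud
# reconstructed from a dense disparity map (illustrative only).
import numpy as np

def ransac_ground_plane(points, iters=200, threshold=0.05, rng=None):
    """Return (normal, d) of the plane n.x + d = 0 with the most inliers."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = 0, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # degenerate sample, skip
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.sum(np.abs(points @ normal + d) < threshold)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    return best_model

# Synthetic cloud: a flat ground plus a box-shaped obstacle above it.
rng = np.random.default_rng(1)
ground = np.c_[rng.uniform(-5, 5, 2000), rng.normal(0, 0.02, 2000), rng.uniform(1, 10, 2000)]
obstacle = np.c_[rng.uniform(-0.5, 0.5, 300), rng.uniform(0.2, 1.5, 300), rng.uniform(3, 4, 300)]
normal, d = ransac_ground_plane(np.vstack([ground, obstacle]))
print("estimated ground normal:", np.round(normal, 3))
```

Points farther than the threshold from the fitted plane would then be binned into the polar grid as potential obstacles.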


Image and Vision Computing | 2000

Unsupervised and adaptive Gaussian skin-color model

Luis Miguel Bergasa; Manuel Mazo; Alfredo Gardel; Miguel Ángel Sotelo; Luciano Boquete

In this article, a real-time segmentation method for the facial skin of people of any race is described, working in an adaptive and unsupervised way and based on a Gaussian model of skin color (referred to as the Unsupervised and Adaptive Gaussian Skin-Color Model, UAGM). It is initialized by clustering and does not require the user to introduce any initial parameters. It works with complex color images with arbitrary backgrounds and is robust to lighting and background changes. The clustering method used, based on the Vector Quantization (VQ) algorithm, is compared with other optimal model-selection methods based on the EM algorithm, using synthetic data. Finally, results of the proposed method on real images and conclusions are presented.
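The underlying idea can be sketched as a single Gaussian in a chrominance space with a Mahalanobis-distance test and an exponential update to stay adaptive. This is a simplified stand-in for UAGM: the CbCr space, threshold and update rate are assumptions.

```python
# Sketch: adaptive Gaussian skin-colour model in CbCr chrominance space
# (a simplified stand-in for UAGM; threshold and update rate are assumed).
import numpy as np

class GaussianSkinModel:
    def __init__(self, mean, cov, alpha=0.05):
        self.mean = np.asarray(mean, float)
        self.cov = np.asarray(cov, float)
        self.alpha = alpha                       # adaptation rate

    def mahalanobis(self, cbcr):
        diff = cbcr - self.mean
        inv = np.linalg.inv(self.cov)
        return np.einsum("...i,ij,...j->...", diff, inv, diff)

    def classify(self, cbcr, threshold=6.0):
        """Return a boolean skin mask for an (H, W, 2) CbCr image."""
        return self.mahalanobis(cbcr) < threshold

    def update(self, skin_pixels):
        """Adapt mean and covariance with newly classified skin pixels."""
        if len(skin_pixels) == 0:
            return
        self.mean = (1 - self.alpha) * self.mean + self.alpha * skin_pixels.mean(axis=0)
        self.cov = (1 - self.alpha) * self.cov + self.alpha * np.cov(skin_pixels.T)

# Usage with a synthetic frame (values stand in for real CbCr data).
model = GaussianSkinModel(mean=[110.0, 150.0], cov=[[60.0, 0.0], [0.0, 60.0]])
frame = np.random.default_rng(2).uniform(80, 180, size=(120, 160, 2))
mask = model.classify(frame)
model.update(frame[mask])
```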


International Conference on Pattern Recognition | 2000

EOG guidance of a wheelchair using neural networks

Rafael Barea; Luciano Boquete; Manuel Mazo; Elena López; Luis Miguel Bergasa

This paper presents a method to control and guide mobile robots. In this case, commands are sent using electrooculography (EOG) techniques, so that control is performed by means of the ocular position (the eye's displacement within its orbit). A neural network is used to identify the inverse eye model, so that saccadic eye movements can be detected and the direction in which the user is looking can be determined. This control technique can be useful in multiple applications, but in this work it is used to guide an autonomous robot (a wheelchair) as a system to help people with severe disabilities. The system consists of a standard electric wheelchair with an on-board computer, sensors, and a graphical user interface.
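A minimal sketch of the inverse-eye-model idea: a small neural network regresses gaze angle from the EOG signal, and the estimated angle is mapped to a coarse wheelchair command. The synthetic data, scikit-learn's MLPRegressor and the command thresholds are assumptions, not the authors' design.

```python
# Sketch: learn an inverse eye model (EOG voltage -> gaze angle) with a small
# neural network, then turn the estimated angle into a wheelchair command.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
gaze_deg = rng.uniform(-40, 40, size=(500, 1))               # true horizontal gaze
eog_uv = 15.0 * gaze_deg + rng.normal(0, 20, size=(500, 1))  # synthetic EOG signal

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(eog_uv, gaze_deg.ravel())                          # inverse eye model

def command_from_eog(sample_uv, turn_threshold=15.0):
    """Map an EOG reading to a coarse wheelchair command (illustrative)."""
    angle = float(model.predict(np.array([[sample_uv]]))[0])
    if angle > turn_threshold:
        return "TURN_RIGHT"
    if angle < -turn_threshold:
        return "TURN_LEFT"
    return "FORWARD"

print(command_from_eog(450.0), command_from_eog(-450.0), command_from_eog(10.0))
```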


The International Journal of Robotics Research | 2003

Electro-Oculographic Guidance of a Wheelchair Using Eye Movements Codification

Rafael Barea; Luciano Boquete; Luis Miguel Bergasa; Elena López; Manuel Mazo

In this paper we present a new method to guide mobile robots. An eye-control device based on electro-oculography (EOG) is designed to develop a system for assisted mobility. Control is performed by means of eye movements detected using the electro-oculographic potential. Using an inverse eye model, saccadic eye movements can be detected and the direction in which the user is looking can be determined. This control technique can be useful in multiple applications, but in this work it is used to guide a wheelchair to help people with severe disabilities. The system consists of a standard electric wheelchair, an on-board computer, sensors and a graphical user interface. Finally, we comment on some experimental results and conclusions about electro-oculographic guidance using ocular commands.
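The codification idea can be illustrated by translating a detected sequence of saccade directions into wheelchair commands. The particular code table below is hypothetical; the paper defines its own codification of eye movements.

```python
# Sketch: codify detected saccadic eye movements into wheelchair commands.
# The code table is hypothetical; the paper uses its own codification.
from collections import deque

CODE_TABLE = {
    ("UP", "UP"): "FORWARD",
    ("DOWN", "DOWN"): "STOP",
    ("LEFT", "LEFT"): "TURN_LEFT",
    ("RIGHT", "RIGHT"): "TURN_RIGHT",
}

def codify(saccades, window=2):
    """Emit a command whenever the last `window` saccades match a code."""
    recent = deque(maxlen=window)
    commands = []
    for s in saccades:
        recent.append(s)
        command = CODE_TABLE.get(tuple(recent))
        if command:
            commands.append(command)
            recent.clear()
    return commands

print(codify(["UP", "UP", "LEFT", "RIGHT", "RIGHT", "DOWN", "DOWN"]))
```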


IEEE Transactions on Intelligent Transportation Systems | 2009

Real-Time Hierarchical Outdoor SLAM Based on Stereovision and GPS Fusion

David Schleicher; Luis Miguel Bergasa; Manuel Ocaña; Rafael Barea; María Elena López

This paper presents a new real-time hierarchical (topological/metric) simultaneous localization and mapping (SLAM) system. It can be applied to the robust localization of a vehicle in large-scale outdoor urban environments, improving on current vehicle navigation systems, most of which are based only on the Global Positioning System (GPS). It can therefore be used for autonomous vehicle guidance with recurrent trajectories (bus journeys, theme park internal journeys, etc.). It is exclusively based on the information provided by a low-cost, wide-angle stereo camera and a low-cost GPS. Our approach divides the whole map into local submaps identified by so-called fingerprints (vehicle poses). At this submap level (low-level SLAM), a metric approach is carried out: a 3-D sequential mapping of visual natural landmarks and the vehicle location/orientation are obtained using a top-down Bayesian method to model the dynamic behavior. GPS measurements are integrated within this low level, improving vehicle positioning. A higher topological level (high-level SLAM) based on fingerprints and the multilevel relaxation (MLR) algorithm has been added to reduce the global error within the map while keeping real-time constraints. This level provides nearly consistent estimation, with only small degradation when GPS is unavailable. Some experimental results for large-scale outdoor urban environments are presented, showing an almost constant processing time.
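The submap/fingerprint organisation can be illustrated with a tiny data structure that opens a new local submap whenever the vehicle travels far enough from the current fingerprint and blends a GPS fix into the pose estimate. This is a structural sketch only: the paper's low level is a Bayesian filter and its high level uses MLR, not the naive weighted average below; the radius and weight are assumptions.

```python
# Sketch: topological/metric map organisation with a naive GPS blend.
# Structural illustration only; the paper fuses GPS inside a Bayesian filter
# and closes loops with multilevel relaxation (MLR).
from dataclasses import dataclass, field

@dataclass
class Submap:
    fingerprint: tuple            # vehicle pose (x, y) where the submap starts
    landmarks: list = field(default_factory=list)

class HierarchicalMap:
    def __init__(self, submap_radius=50.0, gps_weight=0.2):
        self.submaps = [Submap(fingerprint=(0.0, 0.0))]
        self.pose = (0.0, 0.0)
        self.submap_radius = submap_radius
        self.gps_weight = gps_weight

    def update(self, odometry_pose, gps_fix=None):
        x, y = odometry_pose
        if gps_fix is not None:
            w = self.gps_weight
            x = (1 - w) * x + w * gps_fix[0]   # naive blend standing in for fusion
            y = (1 - w) * y + w * gps_fix[1]
        self.pose = (x, y)
        fx, fy = self.submaps[-1].fingerprint
        if ((x - fx) ** 2 + (y - fy) ** 2) ** 0.5 > self.submap_radius:
            self.submaps.append(Submap(fingerprint=self.pose))  # new fingerprint
        return self.pose

m = HierarchicalMap()
m.update((30.0, 0.0))
m.update((80.0, 5.0), gps_fix=(78.0, 4.0))
print(len(m.submaps), m.pose)
```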


Intelligent Vehicles Symposium | 2014

DriveSafe: An app for alerting inattentive drivers and scoring driving behaviors

Luis Miguel Bergasa; Daniel Almeria; Javier Almazán; J. Javier Yebes; Roberto Arroyo

This paper presents DriveSafe, a new driver safety app for iPhones that detects inattentive driving behaviors and gives corresponding feedback to drivers, scoring their driving and alerting them when their behavior is unsafe. It uses computer vision and pattern recognition techniques on the iPhone to assess whether the driver is drowsy or distracted, using the rear camera, the microphone, the inertial sensors and the GPS. We present the general architecture of DriveSafe and evaluate its performance using data from 12 drivers in two different studies. The first evaluates the detection of some inattentive driving behaviors, obtaining an overall precision of 82% at 92% recall. The second compares the scores given by DriveSafe and the commercial AXA Drive app, with DriveSafe obtaining a better assessment of its operation. DriveSafe is the first smartphone app based on built-in sensors able to detect inattentive behaviors while evaluating the quality of the driving at the same time. It represents a new disruptive technology because, on the one hand, it provides ADAS features similar to those found in luxury cars, and on the other hand, it presents a viable alternative to the “black boxes” installed in vehicles by insurance companies.
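A driving-score idea like the one described could look roughly like the sketch below, which penalises detected inattentive events per distance driven. The event names, weights and formula are hypothetical and are not DriveSafe's actual scoring model.

```python
# Sketch: score a trip from counts of detected inattentive events.
# Event names, weights and the formula are hypothetical, not DriveSafe's model.
EVENT_PENALTIES = {
    "drowsiness": 15.0,
    "distraction": 10.0,
}

def trip_score(event_counts, trip_km):
    """Return a 0-100 score, penalising inattentive events per 100 km."""
    penalty = sum(EVENT_PENALTIES.get(e, 0.0) * n for e, n in event_counts.items())
    penalty_per_100km = penalty / max(trip_km, 1.0) * 100.0
    return max(0.0, 100.0 - penalty_per_100km)

print(trip_score({"drowsiness": 1, "distraction": 2}, trip_km=120.0))
```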


IEEE Intelligent Vehicles Symposium | 2008

Night time vehicle detection for driving assistance lightbeam controller

Pablo Fernández Alcantarilla; Luis Miguel Bergasa; Pedro Jiménez; Miguel Ángel Sotelo; Ignacio Parra; D. Fernandez; S.S. Mayoral

In this paper we present an effective system for detecting vehicles in front of a camera-assisted vehicle (preceding vehicles traveling in the same direction and oncoming vehicles traveling in the opposite direction) during night-time driving conditions, in order to automatically switch the vehicle's headlights between low beams and high beams, avoiding glare for other drivers. Accordingly, high beams are selected when no other traffic is present and the lights are switched to low beams when other vehicles are detected. Our system uses a B&W micro-camera mounted in the windshield area and facing forward of the vehicle. Digital image processing techniques are applied to analyze light sources and to detect vehicles in the images. The algorithm is efficient and able to run in real time. Some experimental results and conclusions are presented.
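Light-source analysis can be sketched with a brightness threshold and blob extraction on the monochrome frame. The OpenCV calls and thresholds are assumptions; the paper's classification of headlamps versus tail lamps and its beam-switching rules are not reproduced here.

```python
# Sketch: find bright light sources in a night-time B&W frame and switch
# to low beams when any are present (illustrative simplification).
import cv2
import numpy as np

def detect_light_sources(gray_frame, brightness_thresh=220, min_area=4):
    _, bright = cv2.threshold(gray_frame, brightness_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) >= min_area]

def select_beam(gray_frame):
    return "LOW_BEAM" if detect_light_sources(gray_frame) else "HIGH_BEAM"

# Synthetic frame with two bright blobs standing in for oncoming headlights.
frame = np.zeros((240, 320), np.uint8)
cv2.circle(frame, (100, 120), 5, 255, -1)
cv2.circle(frame, (140, 120), 5, 255, -1)
print(select_beam(frame))
```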


IEEE Intelligent Vehicles Symposium | 2012

Vision-based drowsiness detector for real driving conditions

I. García; Sebastián Bronte; Luis Miguel Bergasa; Javier Almazán; José Javier Yebes

This paper presents a non-intrusive approach for drowsiness detection based on computer vision. It is installed in a car and is able to work under real operating conditions. An IR camera placed in the dashboard, in front of the driver, detects the driver's face and obtains drowsiness cues from eye closure. It works in a robust and automatic way, without prior calibration. The presented system is composed of three stages. The first is preprocessing, which includes face and eye detection and normalization. The second stage performs pupil position detection and characterization, combining it with adaptive lighting filtering to make the system capable of dealing with outdoor illumination conditions. The final stage computes PERCLOS from the eye closure information. In order to evaluate this system, an outdoor database was generated, consisting of several experiments carried out over more than 25 driving hours. A study of the performance of this proposal, showing results from this test bench, is presented.
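PERCLOS, the final stage, is the percentage of time the eyes are closed (beyond a closure threshold) within a sliding window. A minimal sketch follows, assuming a per-frame eye-openness estimate already exists; the window length and thresholds are illustrative, not the paper's values.

```python
# Sketch: compute PERCLOS over a sliding window of per-frame eye-openness
# values (0 = fully closed, 1 = fully open). Thresholds are illustrative.
from collections import deque

class PerclosMonitor:
    def __init__(self, fps=25, window_s=60, closed_thresh=0.2, alarm_perclos=0.15):
        self.frames = deque(maxlen=fps * window_s)   # closed/open flags per frame
        self.closed_thresh = closed_thresh
        self.alarm_perclos = alarm_perclos

    def add_frame(self, eye_openness):
        self.frames.append(eye_openness < self.closed_thresh)

    def perclos(self):
        return sum(self.frames) / len(self.frames) if self.frames else 0.0

    def drowsy(self):
        return self.perclos() > self.alarm_perclos

monitor = PerclosMonitor()
for openness in [0.9] * 80 + [0.1] * 20:       # 20% of frames with closed eyes
    monitor.add_frame(openness)
print(round(monitor.perclos(), 2), monitor.drowsy())
```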

Collaboration


Dive into Luis Miguel Bergasa's collaborations.
