
Publication


Featured research published by J. Javier Yebes.


Sensors | 2012

Assisting the Visually Impaired: Obstacle Detection and Warning System by Acoustic Feedback

Alberto Rodríguez; J. Javier Yebes; Pablo Fernández Alcantarilla; Luis Miguel Bergasa; Javier Almazán; Andres F. Cela

This article focuses on the design of an obstacle detection system for assisting visually impaired people. A dense disparity map is computed from the images of a stereo camera carried by the user. Using the dense disparity map, potential obstacles can be detected in 3D in indoor and outdoor scenarios. A ground plane estimation algorithm based on RANSAC plus filtering techniques allows the robust detection of the ground in every frame. A polar grid representation is proposed to account for the potential obstacles in the scene. The design is completed with acoustic feedback to assist visually impaired users while approaching obstacles: beep sounds with different frequencies and repetition rates inform the user about the presence of obstacles. Audio bone-conduction technology is employed to play these sounds without preventing the visually impaired user from hearing other important sounds in their local environment. A user study with four visually impaired volunteers supports the proposed system.
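
The pipeline described above (dense disparity, RANSAC ground-plane fitting, polar obstacle grid) can be illustrated with a few functions. The following is a minimal sketch, not the authors' implementation: it assumes a rectified stereo pair and the rectification matrix Q from OpenCV stereo calibration, uses SGBM as a stand-in matcher, and the thresholds and grid sizes are illustrative.

```python
import numpy as np
import cv2

def dense_disparity_to_points(left_gray, right_gray, Q):
    """Dense disparity from a rectified stereo pair (SGBM as a stand-in matcher),
    reprojected to 3D with the rectification matrix Q."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    points = cv2.reprojectImageTo3D(disparity, Q)
    return points[disparity > 0]                           # keep valid 3D points only

def ground_plane_ransac(points, iters=200, dist_thresh=0.05):
    """Fit a ground plane to 3D points (meters) with a basic RANSAC loop."""
    rng = np.random.default_rng(0)
    best_plane, best_inliers = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                                       # degenerate (collinear) sample
        n /= norm
        d = -n.dot(sample[0])
        inliers = np.abs(points @ n + d) < dist_thresh     # points close to the plane
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_plane, best_inliers = (n, d), inliers
    return best_plane, best_inliers

def polar_obstacle_grid(obstacle_points, n_angles=16, n_ranges=8, max_range=8.0):
    """Accumulate non-ground 3D points into a polar grid (bearing x range bins)."""
    x, z = obstacle_points[:, 0], obstacle_points[:, 2]    # camera frame: x right, z forward
    bearing = np.arctan2(x, z)
    dist = np.hypot(x, z)
    grid, _, _ = np.histogram2d(
        bearing, dist, bins=[n_angles, n_ranges],
        range=[[-np.pi / 2, np.pi / 2], [0.0, max_range]])
    return grid                                            # cell counts ~ obstacle evidence
```

Cells of the polar grid with high counts would then be mapped to beep frequency and repetition rate as described in the abstract.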


IEEE Intelligent Vehicles Symposium | 2014

DriveSafe: An app for alerting inattentive drivers and scoring driving behaviors

Luis Miguel Bergasa; Daniel Almeria; Javier Almazán; J. Javier Yebes; Roberto Arroyo

This paper presents DriveSafe, a new driver safety app for iPhones that detects inattentive driving behaviors and gives corresponding feedback to drivers, scoring their driving and alerting them when their behavior is unsafe. It uses computer vision and pattern recognition techniques on the iPhone to assess whether the driver is drowsy or distracted, using the rear camera, the microphone, the inertial sensors and the GPS. We present the general architecture of DriveSafe and evaluate its performance using data from 12 drivers in two different studies. The first evaluates the detection of some inattentive driving behaviors, obtaining an overall precision of 82% at 92% recall. The second compares the scores given by DriveSafe with those of the commercial AXA Drive app, with DriveSafe obtaining a better assessment of its operation. DriveSafe is the first smartphone app based on built-in sensors that is able to detect inattentive behaviors while evaluating driving quality at the same time. It represents a new disruptive technology because, on the one hand, it provides ADAS features similar to those found in luxury cars and, on the other hand, it presents a viable alternative to the "black boxes" installed in vehicles by insurance companies.


IEEE Transactions on Intelligent Transportation Systems | 2014

Text Detection and Recognition on Traffic Panels From Street-Level Imagery Using Visual Appearance

Álvaro González; Luis Miguel Bergasa; J. Javier Yebes

Traffic sign detection and recognition has been thoroughly studied for a long time. However, traffic panel detection and recognition still remains a challenge in computer vision due to the different panel types and the huge variability of the information depicted on them. This paper presents a method to detect traffic panels in street-level images and to recognize the information contained on them, as an application for intelligent transportation systems (ITS). The main purpose is to make an automatic inventory of the traffic panels located along a road to support road maintenance and to assist drivers. Our proposal extracts local descriptors at interest keypoints after applying blue and white color segmentation. Images are then represented as a "bag of visual words" and classified using Naïve Bayes or support vector machines. This visual appearance categorization method is a new approach to traffic panel detection in the state of the art. Finally, our own text detection and recognition method is applied to those images where a traffic panel has been detected, in order to automatically read and save the information depicted on the panels. We propose a language model partly based on a dynamic dictionary for a limited geographical area, obtained using a reverse geocoding service. Experimental results on real images from Google Street View prove the efficiency of the proposed method and open the way to using street-level images for different ITS applications.
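
The bag-of-visual-words classification step can be illustrated as follows. This is a minimal sketch under assumptions: SIFT is used as a stand-in for the paper's local descriptors, the vocabulary size and classifier parameters are illustrative, and `train_images`/`labels` are hypothetical inputs holding color-segmented candidate regions and their panel / no-panel labels.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

def describe(image_bgr):
    """Detect keypoints and extract local descriptors (SIFT as a stand-in)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, desc = cv2.SIFT_create().detectAndCompute(gray, None)
    return desc

def build_vocabulary(descriptor_list, k=200):
    """Cluster all local descriptors into k visual words."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.vstack(descriptor_list))

def bow_histogram(descriptors, vocabulary):
    """Represent one image as a normalized histogram of visual-word occurrences."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def train_classifier(train_images, labels, k=200, use_svm=True):
    """Train an SVM (or Naive Bayes) on bag-of-visual-words histograms."""
    desc_list = [describe(img) for img in train_images]
    vocab = build_vocabulary(desc_list, k)
    X = np.array([bow_histogram(d, vocab) for d in desc_list])
    clf = SVC(kernel="rbf", C=10.0) if use_svm else GaussianNB()
    clf.fit(X, labels)
    return vocab, clf
```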


IEEE Intelligent Vehicles Symposium | 2013

Full auto-calibration of a smartphone on board a vehicle using IMU and GPS embedded sensors

Javier Almazán; Luis Miguel Bergasa; J. Javier Yebes; Rafael Barea; Roberto Arroyo

Smartphones are widely used nowadays and are generally equipped with many sensors. In this paper we study how useful the low-cost embedded IMU and GPS can be for Intelligent Vehicles. The information given by the accelerometer and gyroscope is useful if the relations between the smartphone reference system, the vehicle reference system and the world reference system are known. Commonly, the magnetometer is used to determine the orientation of the smartphone, but its main drawback is its high sensitivity to electromagnetic interference. In view of this, we propose a novel automatic method to calibrate a smartphone on board a vehicle using its embedded IMU and GPS, based on the longitudinal vehicle acceleration. To the best of our knowledge, this is the first attempt to estimate the yaw angle of a smartphone relative to a vehicle in every case, even on non-zero-slope roads. Furthermore, in order to decrease the impact of IMU noise, an algorithm based on Kalman filtering and on fitting a mixture of Gaussians is introduced. The results show that the system achieves high accuracy, with a typical error of 1%, and is immune to electromagnetic interference.
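
One way to realize the core idea (inferring the phone-to-vehicle yaw from the direction of the horizontal acceleration measured during clear longitudinal accelerations detected via GPS speed) is sketched below. This is an illustrative simplification, not the authors' method: it assumes gravity-compensated, time-aligned accelerometer and GPS streams and omits the Kalman filtering and mixture-of-Gaussians fitting described in the paper.

```python
import numpy as np

def estimate_yaw_from_longitudinal_accel(acc_xy, gps_speed, dt, accel_thresh=0.8):
    """Sketch of yaw estimation: when GPS reports a clear longitudinal
    acceleration (speeding up or braking), the horizontal acceleration seen in
    the phone frame should point along the vehicle's longitudinal axis, so its
    direction gives the phone-to-vehicle yaw.
    acc_xy: (N, 2) gravity-compensated horizontal acceleration in phone frame.
    gps_speed: (N,) vehicle speed from GPS (m/s), sampled every dt seconds."""
    long_acc = np.gradient(gps_speed, dt)            # vehicle longitudinal acceleration
    mask = np.abs(long_acc) > accel_thresh           # keep strong accel/brake events only
    # Flip samples recorded while braking so all vectors point "forward".
    signed = acc_xy[mask] * np.sign(long_acc[mask])[:, None]
    yaw_samples = np.arctan2(signed[:, 1], signed[:, 0])
    # Average angles on the unit circle to avoid wrap-around issues.
    return np.arctan2(np.sin(yaw_samples).mean(), np.cos(yaw_samples).mean())
```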


Expert Systems With Applications | 2015

Expert video-surveillance system for real-time detection of suspicious behaviors in shopping malls

Roberto Arroyo; J. Javier Yebes; Luis Miguel Bergasa; Iván García Daza; Javier Almazán

Highlights: tracking-by-detection based on segmentation, Kalman predictions and LSAP association; occlusion management through an SVM kernel metric on GCH+LBP+HOG image features; overall performance near 85% while tracking under occlusions on the CAVIAR dataset; human behavior analysis (exits, loitering, etc.) in naturalistic shop scenes; real-time multi-camera performance with a processing capacity near 50 fps per camera.

Expert video-surveillance systems are a powerful tool applied in varied scenarios with the aim of automating the detection of different risk situations and helping human security officers to take appropriate decisions in order to enhance the protection of assets. In this paper, we propose a complete expert system focused on the real-time detection of potentially suspicious behaviors in shopping malls. Our video-surveillance methodology contributes several innovative proposals that compose a robust application able to efficiently track the trajectories of people and to discover questionable actions in a shop context. As a first step, our system applies image segmentation to locate the foreground objects in the scene. The most effective background subtraction algorithms of the state of the art are compared to find the most suitable one for our expert video-surveillance application. After the segmentation stage, the detected blobs may represent full or partial human bodies, so we have implemented a novel blob fusion technique to group the partial blobs into the final human targets. We then contribute an innovative tracking algorithm which is based not only on people's trajectories, as most state-of-the-art methods are, but also on people's appearance in occlusion situations. This tracking is carried out with a new two-step method: (1) the detections-to-tracks association is solved by Kalman filtering combined with a custom cost optimization for the Linear Sum Assignment Problem (LSAP); and (2) the occlusion management is based on SVM kernels used to compute distances between appearance features such as GCH, LBP and HOG. The application of these three features for recognizing human appearance provides great performance compared to other description techniques, because color, texture and gradient information are effectively combined to obtain a robust visual description of people. Finally, the trajectories obtained in the tracking stage are processed by our expert video-surveillance system to analyze human behaviors and identify potential shopping mall alarm situations, such as shop entry or exit of people, suspicious behaviors such as loitering, and unattended cash desk situations. With the aim of evaluating the performance of some of the main contributions of our proposal, we use the publicly available CAVIAR dataset to test the proposed tracking method, with a success rate near 85% in occlusion situations. According to this performance, the presented results corroborate that the precision and efficiency of our tracking method are comparable and slightly superior to the most recent state-of-the-art works. Furthermore, the alarms raised by our application are evaluated on a naturalistic private dataset, where it is shown that our expert video-surveillance system can effectively detect suspicious behaviors with a low computational cost in a shopping mall context.
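
The detections-to-tracks association step of the tracker can be sketched with SciPy's linear sum assignment solver. This is a simplified illustration, not the system's code: it uses Euclidean distance between Kalman-predicted and detected centroids as the cost, and the gating threshold is an arbitrary example value.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_detections_to_tracks(track_predictions, detections, max_cost=75.0):
    """One assignment step of tracking-by-detection: build a cost matrix from
    distances between Kalman-predicted track centroids and detected blob
    centroids, then solve the Linear Sum Assignment Problem.
    track_predictions: (T, 2) predicted (x, y) centroids.
    detections: (D, 2) detected (x, y) centroids."""
    if len(track_predictions) == 0 or len(detections) == 0:
        return [], list(range(len(track_predictions))), list(range(len(detections)))
    cost = np.linalg.norm(
        track_predictions[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    matches, unmatched_tracks, unmatched_dets = [], [], []
    for t in range(len(track_predictions)):
        if t not in rows:
            unmatched_tracks.append(t)
    for d in range(len(detections)):
        if d not in cols:
            unmatched_dets.append(d)
    for t, d in zip(rows, cols):
        # Gate improbable pairings; they spawn new tracks or mark lost tracks.
        if cost[t, d] > max_cost:
            unmatched_tracks.append(t)
            unmatched_dets.append(d)
        else:
            matches.append((t, d))
    return matches, unmatched_tracks, unmatched_dets
```

Unmatched tracks would then be handled by the appearance-based occlusion management described in the abstract.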


IEEE Intelligent Vehicles Symposium | 2014

Bidirectional loop closure detection on panoramas for visual navigation

Roberto Arroyo; Pablo F. Alcantarilla; Luis Miguel Bergasa; J. Javier Yebes; Sergio Gámez

Visual loop closure detection plays a key role in navigation systems for intelligent vehicles. Nowadays, state-of-the-art algorithms are focused on unidirectional loop closures, but there are situations where these are not sufficient for identifying previously visited places. Therefore, detecting bidirectional loop closures, when a place is revisited in a different direction, provides more robust visual navigation. We propose a novel approach for identifying bidirectional loop closures on panoramic image sequences. Our proposal combines global binary descriptors and a matching strategy based on cross-correlation of sub-panoramas, which are defined as the different parts of a panorama. A set of experiments considering several binary descriptors (ORB, BRISK, FREAK, LDB) is provided, where LDB stands out as the most suitable. The proposed matching provides reliable bidirectional loop closure detection, which is not efficiently solved in previous research. Our method is successfully validated and compared against FAB-MAP and BRIEF-Gist. The Ford Campus and the Oxford New College datasets are considered for evaluation.
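
The matching idea, comparing sub-panorama binary descriptors over circular shifts in both reading directions, can be sketched as follows. This is a simplified stand-in: a thresholded, downsampled patch replaces LDB, and reversing only the slice order is a rough approximation of revisiting a place in the opposite direction.

```python
import numpy as np
import cv2

def subpanorama_descriptors(panorama_bgr, n_subs=8, size=(64, 64)):
    """Split a panorama into n_subs vertical slices and compute one global
    binary descriptor per slice (a thresholded, downsampled patch as a simple
    stand-in for LDB)."""
    h, w = panorama_bgr.shape[:2]
    descs = []
    for i in range(n_subs):
        sub = panorama_bgr[:, i * w // n_subs:(i + 1) * w // n_subs]
        patch = cv2.resize(cv2.cvtColor(sub, cv2.COLOR_BGR2GRAY), size)
        bits = (patch > patch.mean()).astype(np.uint8)
        descs.append(np.packbits(bits.ravel()))
    return np.array(descs)

def panorama_distance(descs_a, descs_b):
    """Match two panoramas by cross-correlating sub-panorama descriptors over
    all circular shifts, in both directions, and keeping the best alignment.
    A small distance in the reversed direction suggests a bidirectional loop
    closure (place revisited the opposite way)."""
    n = len(descs_a)
    best = np.inf
    for reverse in (False, True):
        b = descs_b[::-1] if reverse else descs_b
        for shift in range(n):
            rolled = np.roll(b, shift, axis=0)
            # Total Hamming distance between packed binary descriptors.
            ham = np.unpackbits(np.bitwise_xor(descs_a, rolled), axis=1).sum()
            best = min(best, ham / n)
    return best
```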


Sensors | 2014

Fusion of Optimized Indicators from Advanced Driver Assistance Systems (ADAS) for Driver Drowsiness Detection

Iván García Daza; Luis Miguel Bergasa; Sebastián Bronte; J. Javier Yebes; Javier Almazán; Roberto Arroyo

This paper presents a non-intrusive approach for monitoring driver drowsiness using the fusion of several optimized indicators based on driver physical and driving performance measures, obtained from ADAS (Advanced Driver Assistance Systems) in simulated conditions. The paper focuses on real-time drowsiness detection technology rather than on long-term sleep/wake regulation prediction technology. We have developed our own vision system in order to obtain robust and optimized driver indicators that can be used in simulators and in future real environments. These indicators are principally based on driver physical and driving performance skills. The fusion of several indicators proposed in the literature is evaluated using a neural network and a stochastic optimization method to obtain the best combination. We propose a new method for ground-truth generation based on a supervised Karolinska Sleepiness Scale (KSS). An extensive evaluation of the indicators, derived from trials on a third-generation simulator with several test subjects during different driving sessions, was performed. The main conclusions about the performance of single indicators and of their best combinations are included, as well as the future work derived from this study.
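
The fusion and indicator-selection idea can be illustrated with a small neural network scored over randomly sampled indicator subsets, as a simple stand-in for the stochastic optimization described in the paper. The indicator matrix `X` and the KSS-derived labels `y` are hypothetical inputs.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def search_best_indicator_subset(X, y, n_trials=200, seed=0):
    """Randomly search over subsets of drowsiness indicators (columns of X)
    and score each subset with a small neural network via cross-validation.
    X: (samples, indicators) matrix; y: KSS-derived drowsy/alert labels."""
    rng = np.random.default_rng(seed)
    n_indicators = X.shape[1]
    best_score, best_subset = -np.inf, None
    for _ in range(n_trials):
        subset = np.flatnonzero(rng.random(n_indicators) < 0.5)   # random indicator mask
        if subset.size == 0:
            continue
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
        score = cross_val_score(clf, X[:, subset], y, cv=5).mean()
        if score > best_score:
            best_score, best_subset = score, subset
    return best_subset, best_score
```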


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2014

Fast and effective visual place recognition using binary codes and disparity information

Roberto Arroyo; Pablo F. Alcantarilla; Luis Miguel Bergasa; J. Javier Yebes; Sebastián Bronte

We present a novel approach for place recognition and loop closure detection based on binary codes and disparity information using stereo images. Our method (ABLE-S) applies the Local Difference Binary (LDB) descriptor in a global framework to obtain a robust global image description, which is initially based on intensity and gradient pairwise comparisons. LDB has higher descriptive power than other popular alternatives such as BRIEF, which relies only on intensity. In addition, we integrate disparity information into the binary descriptor (D-LDB). Disparity provides valuable information that decreases the effect of some typical problems in place recognition, such as perceptual aliasing. The KITTI Odometry dataset is mainly used to test our approach due to its varied environments, challenging situations and length. Additionally, a loop closure ground truth is introduced in this work for the KITTI Odometry benchmark with the aim of standardizing a robust evaluation methodology for comparing previous algorithms against our method and for future benchmarking of new proposals. According to the presented results, our method allows faster and more effective visual loop closure detection compared to state-of-the-art algorithms such as FAB-MAP, WI-SURF and BRIEF-Gist.
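
A simplified version of the global binary description with disparity (in the spirit of D-LDB, but not the original descriptor) and the pairwise Hamming-distance matrix used for loop closure detection could look like the sketch below; channel sizes and binarization are illustrative choices.

```python
import numpy as np
import cv2

def global_binary_descriptor(gray, disparity, size=(64, 64)):
    """Global image signature: binarize downsampled intensity, gradient and
    disparity channels and pack them into one binary code (a simplified
    stand-in for D-LDB)."""
    channels = []
    for img in (gray, cv2.Sobel(gray, cv2.CV_32F, 1, 0), disparity):
        patch = cv2.resize(img.astype(np.float32), size)
        channels.append((patch > patch.mean()).astype(np.uint8).ravel())
    return np.packbits(np.concatenate(channels))

def loop_closure_matrix(codes_a, codes_b):
    """Pairwise Hamming distances between two sequences of binary codes;
    low-distance diagonals reveal revisited places (loop closures)."""
    xor = np.bitwise_xor(codes_a[:, None, :], codes_b[None, :, :])
    return np.unpackbits(xor, axis=2).sum(axis=2)
```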


Sensors | 2013

Complete low-cost implementation of a teleoperated control system for a humanoid robot

Andres F. Cela; J. Javier Yebes; Roberto Arroyo; Luis Miguel Bergasa; Rafael Barea; Elena López

Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step towards further development and study of human motion and walking. A human suit is built, consisting of 8 sensors: 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements on a small humanoid robot. The data from the sensors are wirelessly transmitted via two configurable ZigBee RF modules, one installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent the robot from falling while executing actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot's balance. The humanoid robot is controlled by a medium-capacity processor, and a low computational cost is achieved for executing the different algorithms. Both the hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system.
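
The stability feedback relies on a Kalman-filtered tilt estimate from the accelerometer on the robot's back. A minimal scalar Kalman filter illustrating that smoothing step (with illustrative noise parameters, not the authors' tuning) is sketched below.

```python
import numpy as np

def kalman_filter_1d(measurements, q=1e-3, r=0.05):
    """Minimal scalar Kalman filter to smooth a noisy tilt-angle signal derived
    from the back accelerometer (process noise q, measurement noise r)."""
    x, p = measurements[0], 1.0
    filtered = []
    for z in measurements:
        p = p + q                      # predict (constant-angle model)
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # correct with the new measurement
        p = (1.0 - k) * p
        filtered.append(x)
    return np.array(filtered)

# Example usage (illustrative): tilt from raw accelerometer axes, then smoothing.
# tilt = np.arctan2(acc_forward, acc_vertical); smooth_tilt = kalman_filter_1d(tilt)
```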


IEEE Intelligent Vehicles Symposium | 2014

Supervised learning and evaluation of KITTI's cars detector with DPM

J. Javier Yebes; Luis Miguel Bergasa; Roberto Arroyo; Alberto Lázaro

This paper carries out a discussion on the supervised learning of a car detector built as a Discriminative Part-based Model (DPM) from images in the recently published KITTI benchmark suite, as part of the object detection and orientation estimation challenge. We present a wide set of experiments and many hints on the different ways to supervise and enhance the well-known DPM on a challenging and naturalistic urban dataset such as KITTI. The evaluation algorithm and metrics, the selection of a clean but representative subset of training samples, and the tuning of the DPM are key factors in learning an object detector in a supervised fashion. We provide evidence of subtle differences in performance depending on these aspects. Besides, the generalization of the trained models to an independent dataset is validated by 5-fold cross-validation.
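
The 5-fold cross-validation protocol mentioned at the end can be sketched generically. `train_fn` and `eval_fn` are hypothetical placeholders for DPM training and average-precision evaluation; this is not the authors' code.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate_detector(image_ids, train_fn, eval_fn, n_splits=5, seed=0):
    """5-fold cross-validation of a detector over KITTI-style training frames.
    train_fn(train_ids) returns a trained model; eval_fn(model, val_ids)
    returns a scalar score (e.g. average precision)."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, val_idx in kf.split(image_ids):
        model = train_fn([image_ids[i] for i in train_idx])
        scores.append(eval_fn(model, [image_ids[i] for i in val_idx]))
    return float(np.mean(scores)), float(np.std(scores))
```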
