Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David Valiente is active.

Publication


Featured research published by David Valiente.


Journal of Robotics | 2012

Visual Odometry through Appearance- and Feature-Based Method with Omnidirectional Images

David Valiente; Arturo Gil Aparicio

In the field of mobile autonomous robots, visual odometry entails the retrieval of a motion transformation between two consecutive poses of the robot by means of a camera sensor alone. Visual odometry provides essential information for trajectory estimation in problems such as localization and SLAM (Simultaneous Localization and Mapping). In this work we present a motion estimation approach based on a single omnidirectional camera. We exploited the maximized horizontal field of view provided by this camera, which allows us to encode large amounts of scene information into a single image. The motion transformation between two poses is computed incrementally, since only the processing of two consecutive omnidirectional images is required. In particular, we exploited the versatility of the information gathered by omnidirectional images to perform both an appearance-based and a feature-based method to obtain visual odometry results. We carried out a set of experiments in real indoor environments to test the validity and suitability of both methods. The data used in the experiments consist of large sets of omnidirectional images captured along the robot's trajectory in three different real scenarios. Experimental results demonstrate the accuracy of the estimations and the capability of both methods to work in real time.
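As a minimal sketch of the incremental scheme described above: in planar robotics, each pairwise estimate (dx, dy, dtheta) between consecutive omnidirectional images is composed onto the previous absolute pose. The composition function below is an illustrative assumption, not code from the paper.

import math

def compose(pose, delta):
    # Compose a relative motion (dx, dy, dtheta), expressed in the robot
    # frame, onto an absolute planar pose (x, y, theta).
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            (th + dth + math.pi) % (2 * math.pi) - math.pi)  # wrap angle

# Accumulate a trajectory from pairwise visual odometry estimates
# (values below are hypothetical, for illustration only).
pose = (0.0, 0.0, 0.0)
for delta in [(0.5, 0.0, 0.10), (0.5, 0.02, 0.05)]:
    pose = compose(pose, delta)
print(pose)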


Robotics and Autonomous Systems | 2014

A comparison of EKF and SGD applied to a view-based SLAM approach with omnidirectional images

David Valiente; Arturo Gil; Lorenzo Fernández; Oscar Reinoso

The problem of Simultaneous Localization and Mapping (SLAM) is essential in mobile robotics. Obtaining a feasible map of the environment poses a complex challenge, since the presence of noise is a major problem which may gravely affect the estimated solution. Consequently, a SLAM algorithm has to cope not only with this issue but also with the data association problem. The Extended Kalman Filter (EKF) is one of the most traditionally implemented algorithms in visual SLAM. It linearizes the motion and observation models to provide an effective online estimation. This solution is highly sensitive to non-linear observation models, such as the omnidirectional visual model. Stochastic Gradient Descent (SGD) emerges in this work as an offline alternative that minimizes the non-linear effects which deteriorate and compromise the convergence of traditional estimators. This paper compares both methods applied to the same approach: a navigation robot supported by an efficient map model, established by a reduced set of omnidirectional image views. We present a series of real-data experiments to assess the behavior and effectiveness of both methods in terms of accuracy, robustness against errors and speed of convergence.
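To make the SGD side of the comparison concrete, here is a minimal, self-contained sketch of the core idea: each iteration samples one relative-pose constraint and nudges the two involved poses along the residual. This is a 1D toy with a classic 1/t step size, purely illustrative of the technique, not the paper's implementation.

import random

def sgd_1d(poses, constraints, iters=500):
    # poses: list of scalar positions; constraints: (i, j, measured offset).
    for t in range(1, iters + 1):
        lr = 1.0 / t                        # diminishing step size
        i, j, z = random.choice(constraints)
        residual = (poses[j] - poses[i]) - z
        poses[i] += lr * residual / 2       # split the correction
        poses[j] -= lr * residual / 2
    return poses

poses = [0.0, 0.9, 2.3]                     # noisy initial guess
constraints = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.0)]
print(sgd_1d(poses, constraints))           # relative offsets approach 1 and 2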


Information Sciences | 2014

A modified stochastic gradient descent algorithm for view-based SLAM using omnidirectional images

David Valiente; Arturo Gil; Lorenzo Fernández; Oscar Reinoso

This paper describes an approach to the problem of Simultaneous Localization and Mapping (SLAM) based on Stochastic Gradient Descent (SGD) and using omnidirectional images. In the field of mobile robot applications, SGD techniques had not previously been evaluated with information gathered by visual sensors. This work proposes an SGD algorithm within a SLAM system which makes use of the beneficial characteristics of a single omnidirectional camera. The nature of the sensor has led to a modified version of the standard SGD that adapts it to omnidirectional geometry. In addition, the angular, unscaled observation measurement needs to be considered. This upgraded SGD approach minimizes the non-linear effects which impair and compromise the convergence of traditional estimators. Moreover, we suggest a strategy to improve the convergence speed of the SLAM solution, which feeds several constraints into the SGD algorithm simultaneously, in contrast to former SGD approaches, which process each constraint independently. In particular, we focus on an efficient map model, established by a reduced set of image views. We present a series of experiments obtained with both simulated and real data. We validate the new SGD approach, compare its efficiency with that of a standard SGD, and demonstrate the suitability and reliability of the approach to support real applications.
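The "angular, unscaled observation" mentioned above is characteristic of a single omnidirectional camera: it constrains bearing, not distance, so residuals must be evaluated on the circle. A hedged sketch of one plausible form of such a residual (names and form are assumptions for illustration):

import math

def bearing_residual(pose_i, pose_j, measured_bearing):
    # Angular residual between the bearing of pose_j as seen from pose_i
    # and the measured bearing, wrapped to (-pi, pi).
    xi, yi, thi = pose_i
    xj, yj, _ = pose_j
    predicted = math.atan2(yj - yi, xj - xi) - thi
    err = measured_bearing - predicted
    return math.atan2(math.sin(err), math.cos(err))  # wrap on the circle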


Sensors | 2017

Improved Omnidirectional Odometry for a View-Based Mapping Approach

David Valiente; Arturo Gil; Oscar Reinoso; Miguel Juliá; Mathew Holloway

This work presents an improved visual odometry using omnidirectional images. The main purpose is to generate a reliable prior input which enhances the SLAM (Simultaneous Localization and Mapping) estimation tasks within the framework of navigation in mobile robotics, in place of the internal odometry data. Standard SLAM approaches generally make extensive use of odometry data as the main prior input to localize the robot. They also tend to rely on sensory data acquired with GPS, laser or digital camera sensors, the most commonly acknowledged sources, to re-estimate the solution. Nonetheless, the modeling of the main prior is crucial, and sometimes especially challenging when it comes to non-systematic terms, such as those associated with the internal odometer, which ultimately turn out to be considerably harmful and compromise the convergence of the system. This omnidirectional odometry relies on adaptive feature point matching through the propagation of the current uncertainty of the system. Ultimately, it is fused as the main prior input in an EKF (Extended Kalman Filter) view-based SLAM system, together with the adaptation of the epipolar constraint to the omnidirectional geometry. Several improvements have been added to the initial visual odometry proposal so as to produce better performance. We present real-data experiments to test the validity of the proposal and to demonstrate its benefits, in contrast to the internal odometry. Furthermore, SLAM results are included to assess its robustness and accuracy when using the proposed prior omnidirectional odometry.
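A compact way to picture "adaptive feature point matching through the propagation of the current uncertainty" is to project the pose covariance into image space and size the match-search window accordingly. The Jacobian values, the covariance and the 2-sigma bound below are illustrative assumptions, not the paper's settings.

import numpy as np

def search_radius(J, P, n_sigma=2.0):
    # J: 2x3 measurement Jacobian (pose -> pixel), P: 3x3 pose covariance.
    S = J @ P @ J.T                          # 2x2 image-space covariance
    return n_sigma * np.sqrt(np.max(np.linalg.eigvalsh(S)))

# A confident filter yields a tight window; an uncertain one searches wider.
J = np.array([[120.0, 0.0, 15.0],
              [0.0, 120.0, 10.0]])
P = np.diag([0.01, 0.01, 0.002])
print(search_radius(J, P))                   # pixel radius for matching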


Sensors | 2018

Visual Information Fusion through Bayesian Inference for Adaptive Probability-Oriented Feature Matching

David Valiente; Luis Payá; Luis M. Jiménez; José M. Sebastián; Oscar Reinoso

This work presents a visual information fusion approach for robust probability-oriented feature matching. It relies on omnidirectional imaging and is tested in a visual localization framework in mobile robotics. General visual localization methods have been extensively studied and optimized in terms of performance. However, one of the main threats that jeopardizes the final estimation is the presence of outliers. In this paper, we present several contributions to deal with that issue. First, 3D information associated with SURF (Speeded-Up Robust Features) points detected in the images is inferred under the Bayesian framework established by Gaussian processes (GPs). Such information represents a probability distribution for the feature points' existence, which is successively fused and updated throughout the robot's poses. Secondly, this distribution can be properly sampled and projected onto the next 2D image frame at t+1, by means of a filter-motion prediction. This strategy permits obtaining relevant areas in the image reference system, from which probable matches can be detected in terms of the accumulated probability of feature existence. This approach entails an adaptive, probability-oriented matching search which accounts for significant areas of the image, but it also considers unseen parts of the scene, thanks to an internal modulation of the probability distribution domain, computed in terms of the current uncertainty of the system. The main outcomes confirm a robust feature matching, which permits producing consistent localization estimates, aided by the odometer's prior to estimate the scale factor. Publicly available datasets have been used to validate the design and operation of the approach. Moreover, the proposal has been compared, firstly with a standard feature matching and secondly with a localization method based on an inverse depth parametrization. The results confirm the validity of the approach in terms of feature matching, localization accuracy and time consumption.
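As a rough, runnable illustration of the core idea (a probability-like score of feature existence regressed over image coordinates and used to focus matching), here is a Gaussian-process sketch built on scikit-learn; the data, kernel and length scale are invented for the example and are not the paper's settings.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# (u, v) pixel locations of past SURF matches (label 1) and misses (label 0).
X = np.array([[100, 120], [105, 118], [300, 200], [310, 210]], dtype=float)
y = np.array([1.0, 1.0, 0.0, 0.0])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=30.0)).fit(X, y)

# Score candidate pixels of the next frame: a high mean marks a promising
# region for the adaptive, probability-oriented matching search.
candidates = np.array([[102.0, 119.0], [305.0, 205.0]])
mean, std = gp.predict(candidates, return_std=True)
print(mean, std)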


Robotics and Autonomous Systems | 2015

Information-based view initialization in visual SLAM with a single omnidirectional camera

David Valiente; Maani Ghaffari Jadidi; Jaime Valls Miro; Arturo Gil; Oscar Reinoso

This paper presents a novel mechanism to initiate new views within the map building process for an EKF-based visual SLAM (Simultaneous Localization and Mapping) approach using omnidirectional images. In the presence of non-linearities, the EKF is very likely to compromise the final estimation. In particular, the omnidirectional observation model induces non-linear errors, so it becomes a potential source of uncertainty. To deal with this issue we propose a novel mechanism for view initialization which accounts for information gain and losses more efficiently. The main outcome of this contribution is the reduction of the map uncertainty and thus a higher consistency of the final estimation. Its basis relies on a Gaussian process to infer an information distribution model from sensor data. This model represents feature point existence probabilities, and the analysis of their information content leads to the proposed view initialization scheme. To demonstrate the suitability and effectiveness of the approach we present a series of real-data experiments conducted with a robot equipped with a camera sensor and a map model based solely on omnidirectional views. The results reveal a beneficial reduction not only in the uncertainty but also in the error of the pose and map estimates.

Highlights:
- Novel mechanism for the view initialization process in an EKF-based visual SLAM approach.
- An efficient strategy which accounts for information gain and losses.
- Probabilistic representation of features and correlation learning by Gaussian regression.
- Bounding the uncertainty mitigates the non-linear effects which compromise the solution.
- Accuracy and robustness comparison versus a traditional EKF-based SLAM approach.
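As a rough illustration of what an information-driven view initialization rule can look like, the sketch below thresholds the differential entropy of the current Gaussian estimate. The entropy formula is standard; the decision rule and threshold values are illustrative assumptions, not the paper's exact scheme.

import math

def gaussian_entropy(cov_det, dim=3):
    # Differential entropy of a Gaussian with covariance determinant cov_det.
    return 0.5 * math.log(((2 * math.pi * math.e) ** dim) * cov_det)

def should_init_view(cov_det, n_matches, h_max=2.0, min_matches=20):
    # Add a new view when uncertainty grows or matches become scarce.
    return gaussian_entropy(cov_det) > h_max or n_matches < min_matches

print(should_init_view(cov_det=1e-3, n_matches=35))  # False: map still informative
print(should_init_view(cov_det=5e-1, n_matches=12))  # True: initialize a view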


Robot | 2014

Visual Hybrid SLAM: An Appearance-Based Approach to Loop Closure

Lorenzo Fernández; Luis Payá; Oscar Reinoso; Arturo Gil; David Valiente

This paper proposes an appearance-based method to detect loop closure in visual SLAM (Simultaneous Localization and Mapping). To solve this problem, we make use of omnidirectional images and the internal odometry captured by a robot in a real indoor environment. We build an appearance-based model and, subsequently, two maps of the environment are constructed, one metric and the other topological, with relationships between them. These relationships are updated in each step of our hybrid approach. The topological map is a graph built from the appearance information in the scenes. A new node is added when the new visual information is sufficiently different from the previous information. At the same time, we check for a possible topological loop closure with previous nodes. On the other hand, we estimate the metric position of the new pose using a Monte Carlo approach, with the aim of building a metric map. The experimental results demonstrate the reasonable performance of our method.
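A minimal sketch of the two appearance tests just described: a node is added when the current global descriptor differs enough from the existing nodes, and a loop closure is hypothesized when it is close to an earlier one. The descriptor, distance metric and thresholds are illustrative assumptions.

import numpy as np

def update_topological_map(nodes, desc, t_new=0.4, t_loop=0.15):
    # nodes: list of global-appearance descriptors, one per graph node.
    dists = [float(np.linalg.norm(desc - n)) for n in nodes]
    loop = int(np.argmin(dists)) if dists and min(dists) < t_loop else None
    if not dists or min(dists) > t_new:
        nodes.append(desc)                  # scene is new enough: add a node
    return nodes, loop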


Archive | 2014

Visual SLAM Based on Single Omnidirectional Views

David Valiente; Arturo Gil; Lorenzo Fernández; Oscar Reinoso

This chapter focuses on the problem of Simultaneous Localization and Mapping (SLAM) using visual information from the environment. We exploit the versatility of a single omnidirectional camera to carry out this task. Traditionally, visual SLAM approaches concentrate on the estimation of a set of visual 3D points of the environment, denoted as visual landmarks. As the number of visual landmarks increases, the computation of the map becomes more complex. In this work we suggest a different representation of the environment which simplifies the computation of the map and provides a more compact representation. In particular, the map is composed of a reduced set of omnidirectional images, denoted as views, acquired at certain poses of the environment. Each view consists of a position and orientation in the map and a set of 2D interest points extracted in the image reference frame. The information gathered by these views is stored to find corresponding points between the current view, captured at the current robot pose, and the views stored in the map. Once a set of corresponding points is found, a motion transformation can be computed to retrieve the relative position of both views. This allows us to estimate the current pose of the robot and build the map. Moreover, with the intention of producing a more reliable approach, we propose a new method to find correspondences, since this is a troublesome issue in this framework. Its basis relies on the generation of a Gaussian distribution to propagate the current error in the map to the matching process. We present a series of experiments with real data to validate the ideas and the SLAM solution proposed in this work.
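One plausible shape for the "view" map entry the chapter describes, a pose plus the 2D interest points of one omnidirectional image; field names and array sizes are assumptions for illustration only.

from dataclasses import dataclass, field
import numpy as np

@dataclass
class View:
    x: float                                # view position in the map frame
    y: float
    theta: float                            # view orientation
    keypoints: np.ndarray = field(default_factory=lambda: np.empty((0, 2)))
    descriptors: np.ndarray = field(default_factory=lambda: np.empty((0, 64)))

# The whole map reduces to a short list of such views:
omni_map = [View(0.0, 0.0, 0.0)]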


Advanced Concepts for Intelligent Vision Systems | 2017

Omnidirectional Localization in vSLAM with Uncertainty Propagation and Bayesian Regression

David Valiente; Oscar Reinoso; Arturo Gil; Luis Payá; Mónica Ballesta

This article presents a visual localization technique based solely on the use of omnidirectional images, within the framework of mobile robotics. The proposal makes use of the epipolar constraint, adapted to the omnidirectional reference, in order to deal with matching point detection, which ultimately determines a motion transformation for localizing the robot. The principal contributions lie in the propagation of the current uncertainty to the matching process. In addition, a Bayesian regression technique is also implemented in order to reinforce robustness. As a result, we provide a reliable adaptive matching which proves its stability and consistency against non-linear and dynamic effects affecting the image frame, and consequently the final application. In particular, the search area for matching points is greatly reduced, thus speeding up the search and avoiding false correspondences. The final outcome is reflected in real-data experiments, which confirm the benefit of these contributions and also test the suitability of the localization when it is embedded in a vSLAM application.
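For reference, the epipolar constraint the localization adapts reads b2^T E b1 = 0 with E = [t]x R, for unit bearing vectors b1, b2 taken from the omnidirectional projection model. A hedged residual check for one candidate correspondence follows; the acceptance threshold would in practice be scaled by the propagated uncertainty.

import numpy as np

def skew(t):
    # Cross-product matrix [t]x of a 3-vector t.
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_error(b1, b2, R, t):
    # Residual of b2^T E b1 with E = [t]x R; near zero for a true match.
    E = skew(t) @ R
    return float(b2 @ E @ b1)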


International Conference on Informatics in Control, Automation and Robotics | 2014

Visual odometry using the global-appearance of omnidirectional images

Francisco Amorós; Luis Payá; David Valiente; Arturo Gil; Oscar Reinoso

This work presents a purely visual topological odometry system for robot navigation. Our system is based on a multi-scale analysis that allows us to estimate the relative displacement between consecutive omnidirectional images. This analysis uses global appearance techniques to describe the scenes. The visual odometry system also makes use of global appearance descriptors of panoramic images to estimate the phase lag between consecutive images and to detect loop closures. When a previously mapped area is recognized during navigation, the system re-estimates the pose of the scenes included in the map, reducing the error of the path. The algorithm is validated using our own database, captured in an indoor environment under real dynamic conditions. The results demonstrate that our system estimates the path followed by the robot accurately with respect to the real route.
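The phase lag between two panoramic images can be pictured as the circular column shift that best aligns them. One simple global-appearance choice (not necessarily the paper's descriptor) is circular cross-correlation of row-averaged intensities:

import numpy as np

def phase_lag(pan_a, pan_b):
    # Return the relative orientation (radians) as the circular column shift
    # of panorama pan_b that best matches panorama pan_a.
    sig_a = pan_a.mean(axis=0)              # collapse rows into 1D signatures
    sig_b = pan_b.mean(axis=0)
    corr = np.fft.ifft(np.fft.fft(sig_a) * np.conj(np.fft.fft(sig_b))).real
    shift = int(np.argmax(corr))            # best circular alignment
    return 2.0 * np.pi * shift / sig_a.size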

Collaboration


Dive into David Valiente's collaboration.

Top Co-Authors

Oscar Reinoso
Universidad Miguel Hernández de Elche

Arturo Gil
Universidad Miguel Hernández de Elche

Luis Payá
Universidad Miguel Hernández de Elche

Lorenzo Fernández
Universidad Miguel Hernández de Elche

Adrián Peidró
Universidad Miguel Hernández de Elche

José M. Sebastián
Technical University of Madrid

José María Marín
Universidad Miguel Hernández de Elche

Miguel Juliá
Universidad Miguel Hernández de Elche

Mónica Ballesta
Universidad Miguel Hernández de Elche