Publication


Featured research published by Dorian Gálvez-López.


IEEE Transactions on Robotics | 2012

Bags of Binary Words for Fast Place Recognition in Image Sequences

Dorian Gálvez-López; Juan D. Tardós

We propose a novel method for visual place recognition using a bag of words obtained from FAST (features from accelerated segment test) keypoints and BRIEF descriptors. For the first time, we build a vocabulary tree that discretizes a binary descriptor space, and we use the tree to speed up correspondences for geometrical verification. We present competitive results with no false positives on very different datasets, using exactly the same vocabulary and settings. The whole technique, including feature extraction, requires 22 ms per frame in a sequence with 26,300 images, which is one order of magnitude faster than previous approaches.
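
The core ideas here, clustering binary descriptors into a hierarchical vocabulary and quantizing new descriptors by descending the tree under Hamming distance, can be sketched compactly. The Python sketch below is illustrative only and is not the authors' DBoW2 implementation; the helper names, the k-medians update and the parameter values are assumptions.

```python
# Illustrative sketch only, not the authors' DBoW2 library: a vocabulary tree over
# binary descriptors, built by hierarchical k-medians under Hamming distance, and a
# quantizer that maps a descriptor to a visual word by descending the tree.
import numpy as np

def hamming(a, b):
    # a, b: uint8 arrays of packed bits (e.g. 32 bytes for a 256-bit BRIEF descriptor)
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def binary_medoid(descs):
    # Bitwise majority vote: the "centre" of a set of binary descriptors.
    bits = np.unpackbits(descs, axis=1)
    return np.packbits(2 * bits.sum(axis=0) >= len(descs))

def build_tree(descs, k=10, depth=6, rng=np.random.default_rng(0)):
    # descs: (N, 32) uint8 array of packed binary descriptors (assumed layout).
    if depth == 0 or len(descs) <= k:
        return {"leaf": True}
    centers = descs[rng.choice(len(descs), size=k, replace=False)]
    for _ in range(5):  # a few k-medians iterations are enough for a sketch
        labels = np.array([min(range(k), key=lambda i: hamming(d, centers[i])) for d in descs])
        centers = np.array([binary_medoid(descs[labels == i]) if np.any(labels == i)
                            else centers[i] for i in range(k)])
    children = [build_tree(descs[labels == i], k, depth - 1, rng) for i in range(k)]
    return {"leaf": False, "centers": centers, "children": children}

def quantize(tree, desc, path=()):
    # The path of branch choices from root to leaf identifies the visual word.
    if tree["leaf"]:
        return path
    i = min(range(len(tree["centers"])), key=lambda j: hamming(desc, tree["centers"][j]))
    return quantize(tree["children"][i], desc, path + (i,))
```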


IEEE/RSJ International Conference on Intelligent Robots and Systems | 2011

Towards semantic SLAM using a monocular camera

Javier Civera; Dorian Gálvez-López; Luis Riazuelo; Juan D. Tardós; J. M. M. Montiel

Monocular SLAM systems have mainly focused on producing geometric maps composed only of points or edges, without any associated meaning or semantic content. In this paper, we propose a semantic SLAM algorithm that merges traditional meaningless points with known objects in the estimated map. The non-annotated map is built using only the information extracted from a monocular image sequence. The known object models are automatically computed from a sparse set of images gathered by cameras that may be different from the SLAM camera. The models include both visual appearance and three-dimensional information. The semantic or annotated part of the map (the objects) is estimated using the information in the image sequence and the precomputed object models.
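
As a rough illustration of what such a hybrid map might contain, the sketch below defines a container holding both anonymous geometric points and recognized object instances; all type and field names are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only (not the paper's implementation): a semantic map that
# stores anonymous geometric points alongside recognized object instances, each
# object carrying an identity from a precomputed model database and a 3D pose.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class MapPoint:
    position: np.ndarray                  # 3D position in the world frame

@dataclass
class MapObject:
    model_id: str                         # identity in the precomputed object database
    pose: np.ndarray                      # 4x4 object-to-world transform
    anchor_points: list = field(default_factory=list)   # indices of map points on the object

@dataclass
class SemanticMap:
    points: list = field(default_factory=list)          # meaningless geometry
    objects: list = field(default_factory=list)         # annotated, semantic part

    def annotate(self, model_id, pose, point_indices):
        # Promote a set of map points to an object instance once it is recognized.
        obj = MapObject(model_id, pose, list(point_indices))
        self.objects.append(obj)
        return obj
```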


IEEE/RSJ International Conference on Intelligent Robots and Systems | 2011

Real-time loop detection with bags of binary words

Dorian Gálvez-López; Juan D. Tardós

We present a method for detecting revisited places in an image sequence in real time by using efficient features. We introduce three important novelties to the bag-of-words plus geometrical checking approach. We use FAST keypoints and BRIEF descriptors, which are binary and very fast to compute (less than 20 µs per point). To perform image comparisons, we make use of a bag of words that discretises the binary descriptor space, together with an inverted index. We also introduce a direct index that exploits the bag of words to obtain correspondence points between two images efficiently, avoiding matching of complexity Θ(n²). To detect loop closure candidates, we propose managing matches in groups to increase the reliability of the candidates returned by the bag of words. We present results on three real and public datasets, with 0.7–1.7 km long trajectories. We obtain high precision and recall rates, spending 16 ms on average per image for the feature computation and the whole loop detection process in sequences with 19,000 images, one order of magnitude less than other similar techniques.
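
The inverted index (word to images) and the direct index (image to features grouped by word) can be illustrated with a small sketch. This is a simplification based on the description above, not the DBoW2 API; the function and variable names are assumptions.

```python
# Simplified sketch: an inverted index for retrieving loop-closure candidates and a
# direct index for cheap feature correspondences, both keyed by visual-word id.
from collections import defaultdict

inverted_index = defaultdict(set)    # word id -> set of image ids containing that word
direct_index = {}                    # image id -> {word id: [feature indices in that image]}

def add_image(image_id, words):
    # words[i] is the visual word assigned to feature i of the image.
    per_word = defaultdict(list)
    for feat_idx, w in enumerate(words):
        per_word[w].append(feat_idx)
        inverted_index[w].add(image_id)
    direct_index[image_id] = per_word

def candidate_images(words):
    # Only images sharing at least one word can be loop-closure candidates.
    cands = set()
    for w in set(words):
        cands |= inverted_index[w]
    return cands

def tentative_matches(image_a, image_b):
    # Compare only features that fall into the same vocabulary node,
    # avoiding the quadratic all-pairs descriptor matching.
    pairs = []
    common = direct_index[image_a].keys() & direct_index[image_b].keys()
    for w in common:
        for i in direct_index[image_a][w]:
            for j in direct_index[image_b][w]:
                pairs.append((i, j))
    return pairs
```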


IEEE Transactions on Robotics | 2012

Robust Place Recognition With Stereo Sequences

Cesar Cadena; Dorian Gálvez-López; Juan D. Tardós; José L. Neira

We propose a place recognition algorithm for simultaneous localization and mapping (SLAM) systems using stereo cameras that considers both appearance and geometric information of points of interest in the images. Both near and far scene points provide information for the recognition process. Hypotheses about loop closings are generated using a fast appearance-only technique based on the bag-of-words (BoW) method. We propose several important improvements to BoW that profit from the fact that, in this problem, images are provided in sequence. Loop closing candidates are evaluated using a novel normalized similarity score that measures similarity in the context of recent images in the sequence. In cases where similarity is not sufficiently clear, loop closing verification is carried out using a method based on conditional random fields (CRFs). We build on CRF matching with two main novelties: we use both image and 3-D geometric information, and we carry out inference on a minimum spanning tree (MST) instead of a densely connected graph. Our results show that MSTs provide an adequate representation of the problem, with the additional advantages that exact inference is possible and that the computational cost of inference is limited. We compare our system with the state of the art using visual indoor and outdoor data from three different locations and show that our system can attain full precision (no false positives) at higher recall (fewer false negatives).
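
One plausible reading of the normalized similarity score, following the description in the abstract (the exact expression is given in the paper), is to divide the raw BoW score of a candidate by the score of the current image against a recent image in the sequence:

```python
# Hedged sketch of the idea, not the paper's exact expression: normalize the raw
# bag-of-words similarity between the current image and a candidate by the similarity
# between the current image and a recent image of the sequence, so that the score is
# comparable across scenes with different amounts of texture.
def normalized_similarity(s_current_candidate, s_current_recent, eps=1e-9):
    return s_current_candidate / max(s_current_recent, eps)

def accept_candidate(s_current_candidate, s_current_recent, alpha):
    # Accept the loop-closure hypothesis only if the normalized score clears a threshold.
    return normalized_similarity(s_current_candidate, s_current_recent) >= alpha
```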


IEEE/RSJ International Conference on Intelligent Robots and Systems | 2010

Robust place recognition with stereo cameras

Cesar Cadena; Dorian Gálvez-López; Fabio Ramos; Juan D. Tardós; José L. Neira

Place recognition is a challenging task in any SLAM system. Algorithms based on visual appearance are becoming popular for detecting locations already visited, also known as loop closures, because cameras are easily available and provide rich scene detail. These algorithms typically output pairs of images considered to depict the same location. To avoid mismatches, most of them rely on epipolar geometry to check spatial consistency. In this paper we present an alternative system that makes use of stereo vision and combines two complementary techniques: bag-of-words to detect loop closing candidate images, and conditional random fields to discard those that are not geometrically consistent. We evaluate this system on public indoor and outdoor datasets from the Rawseeds project, with trajectories hundreds of metres long. Our system achieves more robust results than using spatial consistency based on epipolar geometry.
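
The two-stage structure described above can be summarized in a short sketch; the parameter names and callable interfaces are assumptions, not the authors' code.

```python
# Structural sketch of the two-stage pipeline: bag-of-words retrieval proposes
# loop-closure candidates, and a conditional-random-field consistency check on the
# stereo data discards geometrically inconsistent ones.
def detect_loop(query_id, bow_query, crf_consistent, min_score):
    """bow_query(query_id) -> iterable of (candidate_id, appearance_score), best first;
    crf_consistent(query_id, candidate_id) -> True if the stereo/CRF check passes."""
    for candidate_id, score in bow_query(query_id):
        if score < min_score:
            break                      # candidates are sorted, the rest score lower
        if crf_consistent(query_id, candidate_id):
            return candidate_id        # accepted loop closure
    return None                        # no revisited place detected
```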


IEEE Transactions on Automation Science and Engineering | 2015

RoboEarth Semantic Mapping: A Cloud Enabled Knowledge-Based Approach

Luis Riazuelo; Moritz Tenorth; Daniel Di Marco; Marta Salas; Dorian Gálvez-López; Lorenz Mösenlechner; Lars Kunze; Michael Beetz; Juan D. Tardós; Luis Montano; J. M. Martínez Montiel

The vision of the RoboEarth project is to design a knowledge-based system that provides web and cloud services able to transform a simple robot into an intelligent one. In this work, we describe the RoboEarth semantic mapping system. The semantic map is composed of: 1) an ontology to code the concepts and relations in maps and objects and 2) a SLAM map providing the scene geometry and the object locations with respect to the robot. We propose to ground the terminological knowledge in the robot's perceptions by means of the SLAM map of objects. RoboEarth boosts mapping by providing: 1) a subdatabase of object models relevant for the task at hand, obtained by semantic reasoning, which improves recognition by reducing computation and the false positive rate; 2) the sharing of semantic maps between robots; and 3) software as a service to externalize the more intensive mapping computations in the cloud, while meeting the mandatory hard real-time constraints of the robot. To demonstrate the RoboEarth cloud mapping system, we investigate two action recipes that embody semantic map building in a simple mobile robot. The first recipe enables semantic map building for a novel environment while exploiting available prior information about the environment. The second recipe searches for a novel object, with the efficiency boosted thanks to reasoning on a semantically annotated map. Our experimental results demonstrate that, by using RoboEarth cloud services, a simple robot can reliably and efficiently build the semantic maps needed to perform its quotidian tasks. In addition, we show the synergetic relation between the SLAM map of objects and the terminological knowledge coded in the ontology, which the map grounds.
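
To illustrate how semantic reasoning can narrow the object-model database for a given task, here is a toy sketch with a hypothetical is-a hierarchy; it does not use RoboEarth's actual ontology or reasoning infrastructure, and every concept and model name below is invented for the example.

```python
# Toy sketch: semantic reasoning narrows the full object-model database to the subset
# relevant for the current task, so the recognizer searches fewer models (fewer
# computations, fewer false positives).
ONTOLOGY = {
    # concept -> parent concept (a toy is-a hierarchy standing in for the real ontology)
    "Mug": "Tableware", "Plate": "Tableware", "Tableware": "KitchenObject",
    "Bed": "BedroomObject", "KitchenObject": "HouseholdObject",
    "BedroomObject": "HouseholdObject",
}

def is_a(concept, ancestor):
    while concept is not None:
        if concept == ancestor:
            return True
        concept = ONTOLOGY.get(concept)
    return False

def relevant_models(model_db, task_concept):
    # model_db: {model_id: concept}; keep only models whose class matters for the task.
    return {m: c for m, c in model_db.items() if is_a(c, task_concept)}

# Example: a kitchen task only needs kitchen objects.
models = {"mug_017": "Mug", "plate_003": "Plate", "bed_001": "Bed"}
print(relevant_models(models, "KitchenObject"))   # -> mug_017 and plate_003 only
```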


Robotics and Autonomous Systems | 2016

Real-time monocular object SLAM

Dorian Gálvez-López; Marta Salas; Juan D. Tardós; J. M. M. Montiel

We present a real-time object-based SLAM system that leverages the largest object database to date. Our approach comprises two main components: (1) a monocular SLAM algorithm that exploits object rigidity constraints to improve the map and find its real scale, and (2) a novel object recognition algorithm based on bags of binary words, which provides live detections with a database of 500 3D objects. The two components work together and benefit each other: the SLAM algorithm accumulates information from the observations of the objects, anchors object features to special map landmarks and sets constraints on the optimization. At the same time, objects partially or fully located within the map are used as a prior to guide the recognition algorithm, achieving higher recall. We evaluate our proposal on five real environments, showing improvements in the accuracy of the map and in efficiency with respect to other state-of-the-art techniques. Highlights: a complete monocular visual SLAM system that creates maps of points and 3D objects; a novel SLAM back-end formulation that optimizes map points, object poses and map scale; a novel object detection algorithm that manages large-scale object databases in real time; SLAM and object detection working together to enhance mapping and object recognition; accuracy results that outperform other state-of-the-art RGB and RGB-D SLAM approaches.
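
A hedged sketch of the kind of rigidity constraint such a back-end might use (a simplified formulation of my own, not the paper's residuals): a SLAM map point anchored to an object should, once the map scale is applied, coincide with the corresponding point of the metric object model expressed in the world frame.

```python
# Simplified, illustrative residual: terms like this tie together map points, the
# object pose and the global map scale in the optimization, which is how observing a
# rigid object of known size can recover the real scale of a monocular map.
import numpy as np

def anchor_residual(map_point_w, model_point_o, R_wo, t_wo_m, scale):
    """map_point_w: 3D point from monocular SLAM, in arbitrary map units.
    model_point_o: corresponding model point, in metres, in the object frame.
    R_wo, t_wo_m: object-to-world rotation and translation (translation in metres).
    scale: metres per map unit, estimated jointly with the rest of the map."""
    return scale * np.asarray(map_point_w) - (R_wo @ np.asarray(model_point_o) + t_wo_m)
```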


IEEE International Conference on Robotics and Automation | 2012

Creating and using RoboEarth object models

Daniel Di Marco; Andreas Koch; Oliver Zweigle; Kai Häussermann; Björn Schiessle; Paul Levi; Dorian Gálvez-López; Luis Riazuelo; Javier Civera; J. M. M. Montiel; Moritz Tenorth; Alexander Clifford Perzylo; Markus Waibel; René van de Molengraft

This paper presents an approach for creating 3D object models for robotic and vision applications in a fast and inexpensive way compared to established approaches. By storing the created object models in the RoboEarth system, users have world-wide access to the data and can reuse a model as soon as it has been created and uploaded. The approach is generally applicable to different kinds of cameras, which is demonstrated by two example implementations of the object recognition process. The quality of the recognition can be verified in the accompanying video. Combined with the knowledge stored in the RoboEarth database, the objects can also be properly classified.


IEEE/RSJ International Conference on Intelligent Robots and Systems | 2011

Adaptive appearance based loop-closing in heterogeneous environments

Andras Majdik; Dorian Gálvez-López; Gheorghe Lazea; José A. Castellanos

The work described in this paper concerns the problem of detecting loop-closure situations whenever an autonomous vehicle returns to previously visited places in the navigation area. An appearance-based perspective is adopted, using images gathered by the on-board vision sensors during navigation in heterogeneous environments characterized by the presence of buildings and urban furniture together with pedestrians and different types of vegetation. We propose a novel probabilistic on-line weight updating algorithm for the bag-of-words description of the gathered images which takes into account both prior knowledge derived from an off-line learning stage and the accuracy of the decisions taken by the algorithm over time. An intuitive measure of the ability of a given word to contribute to the detection of a correct loop closure is presented. The proposed strategy is extensively tested on well-known datasets from challenging large-scale environments, which emphasize its large improvement in performance over previously reported works in the literature.
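
As a rough illustration of an on-line word-weight update that balances an off-line prior against decision feedback, consider the sketch below; the update rule, rates and names are my own simplifications and do not reproduce the paper's probabilistic formulation.

```python
# Illustrative on-line update only: each visual word keeps a weight that starts at an
# off-line prior and is nudged up or down depending on whether the word supported
# loop-closure decisions that were later confirmed correct.
def update_word_weight(weight, supported_correct_closure, prior_weight,
                       learning_rate=0.05, prior_pull=0.01):
    target = 1.0 if supported_correct_closure else 0.0
    weight += learning_rate * (target - weight)      # evidence from the latest decision
    weight += prior_pull * (prior_weight - weight)   # keep some influence of the off-line prior
    return weight
```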


IEEE International Conference on Robotics and Automation | 2011

RoboEarth - A World Wide Web for Robots

Markus Waibel; Michael Beetz; Raffaello D'Andrea; Rob Janssen; Moritz Tenorth; Javier Civera; Jos Elfring; Dorian Gálvez-López; Kai Häussermann; J. M. M. Montiel; Alexander Clifford Perzylo; Björn Schießle; Oliver Zweigle; René van de Molengraft

Collaboration


Dive into Dorian Gálvez-López's collaboration.

Top Co-Authors

Andreas Koch

University of Stuttgart
