
Publication


Featured research published by Cesar Cadena.


IEEE Transactions on Robotics | 2016

Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age

Cesar Cadena; Luca Carlone; Henry Carrillo; Yasir Latif; Davide Scaramuzza; José L. Neira; Ian D. Reid; John J. Leonard

Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a representation of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. The paper serves as a tutorial for the non-expert reader. It is also a position paper: by looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors’ take on two questions that often animate discussions during robotics conferences: do robots need SLAM? Is SLAM solved?
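The "de-facto standard formulation" the survey refers to is maximum-a-posteriori estimation over a factor graph of poses and measurements. As a rough illustration (not code from the paper), here is a toy one-dimensional pose graph with two odometry edges and one loop closure, solved as a weighted linear least-squares problem; all poses, measurements, and weights are invented for the example.

```python
import numpy as np

# Minimal 1D pose-graph sketch of the MAP formulation: poses x0..x2 on a
# line, two odometry edges and one loop closure, solved as a linear
# least-squares problem, with x0 pinned by a strong prior (gauge freedom).
# Each measurement says: x_j - x_i ~ z, with information weight w.
edges = [
    (0, 1, 1.0, 1.0),   # odometry: x1 - x0 ~ 1.0
    (1, 2, 1.1, 1.0),   # odometry: x2 - x1 ~ 1.1
    (0, 2, 2.0, 2.0),   # loop closure: x2 - x0 ~ 2.0, higher confidence
]

n = 3
H = np.zeros((n, n))    # information matrix (J^T W J)
b = np.zeros(n)         # information vector
H[0, 0] += 1e6          # strong prior pinning x0 = 0

for i, j, z, w in edges:
    # Jacobian of the residual (x_j - x_i - z) w.r.t. (x_i, x_j) is (-1, +1).
    H[i, i] += w; H[j, j] += w
    H[i, j] -= w; H[j, i] -= w
    b[i] -= w * z
    b[j] += w * z

x = np.linalg.solve(H, b)   # MAP estimate; the linear case needs one solve
```

In the real (nonlinear, 3D) problem the same normal equations are rebuilt and re-solved at each Gauss-Newton iteration; the loop closure pulls x2 toward 2.0, away from the pure-odometry estimate of 2.1.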


The International Journal of Robotics Research | 2013

Robust loop closing over time for pose graph SLAM

Yasir Latif; Cesar Cadena; José L. Neira

Long-term autonomous mobile robot operation requires considering place recognition decisions with great caution. A single incorrect decision that is not detected and reconsidered can corrupt the environment model that the robot is trying to build and maintain. This work describes a consensus-based approach to robust place recognition over time that takes into account all the available information to detect and remove past incorrect loop closures. The main novelties of our work are: (1) the ability to realize, in light of new evidence, that an incorrect past loop-closing decision has been made, and to remove the incorrect information with a novel algorithm, thus recovering the correct estimate; (2) extending our proposal to incremental operation; and (3) handling multi-session, spatially related or unrelated scenarios in a unified manner. We demonstrate our proposal, the RRR algorithm, on different odometry systems, e.g. visual or laser, using different front-end loop-closing techniques. For our experiments we use the efficient graph optimization framework g2o as back-end. We back our claims up with several experiments carried out on real data, in single- and multi-session settings, showing better results than those obtained by state-of-the-art methods, against which we also present comparisons.
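The consensus idea can be reduced to a small sketch: a cluster of loop-closure hypotheses is kept only if, after optimization, its joint chi-square error is statistically plausible. The threshold below uses the Wilson-Hilferty approximation to the 95% chi-square quantile; the per-edge degrees of freedom and the clustering itself are simplifications of what RRR actually does.

```python
import math

def chi2_95(dof):
    """Approximate 95% chi-square quantile (Wilson-Hilferty approximation)."""
    z = 1.6449  # standard-normal 95% quantile
    a = 2.0 / (9.0 * dof)
    return dof * (1.0 - a + z * math.sqrt(a)) ** 3

def consistent(residuals_chi2, dof_per_edge=3):
    """Accept a cluster of loop closures only if their joint chi-square
    error (after optimisation) stays below an approximate 95% bound,
    the kind of consistency test applied to clusters of loop-closure
    hypotheses. dof_per_edge = 3 assumes planar SE(2) constraints."""
    total = sum(residuals_chi2)
    return total < chi2_95(dof_per_edge * len(residuals_chi2))
```

A cluster of small residuals passes; a cluster dominated by a gross outlier fails and would be removed from the optimization.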


IEEE Transactions on Robotics | 2012

Robust Place Recognition With Stereo Sequences

Cesar Cadena; Dorian Gálvez-López; Juan D. Tardós; José L. Neira

We propose a place recognition algorithm for simultaneous localization and mapping (SLAM) systems using stereo cameras that considers both appearance and geometric information of points of interest in the images. Both near and far scene points provide information for the recognition process. Hypotheses about loop closings are generated using a fast appearance-only technique based on the bag-of-words (BoW) method. We propose several important improvements to BoWs that profit from the fact that, in this problem, images are provided in sequence. Loop closing candidates are evaluated using a novel normalized similarity score that measures similarity in the context of recent images in the sequence. In cases where similarity is not sufficiently clear, loop closing verification is carried out using a method based on conditional random fields (CRFs). We build on CRF matching with two main novelties: We use both image and 3-D geometric information, and we carry out inference on a minimum spanning tree (MST), instead of a densely connected graph. Our results show that MSTs provide an adequate representation of the problem, with the additional advantages that exact inference is possible and that the computational cost of the inference process is limited. We compare our system with the state of the art using visual indoor and outdoor data from three different locations and show that our system can attain at least full precision (no false positives) for a higher recall (fewer false negatives).
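The "normalized similarity score" measures how similar a loop-closure candidate is relative to how similar the query already is to recent images in the sequence. A minimal sketch under that interpretation (the exact score in the paper differs in its details):

```python
import numpy as np

def bow_similarity(v1, v2):
    """Cosine similarity between two bag-of-words histogram vectors."""
    denom = np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12
    return float(np.dot(v1, v2) / denom)

def normalized_score(v_query, v_candidate, v_recent):
    """Similarity of a loop-closure candidate, normalised by how similar
    the query already is to a recent image in the sequence. This is a
    sketch of the 'expected similarity' idea, not the paper's exact score."""
    baseline = bow_similarity(v_query, v_recent)
    if baseline < 1e-6:          # uninformative query (e.g. featureless scene)
        return 0.0
    return bow_similarity(v_query, v_candidate) / baseline
```

Scores near or above 1.0 mean the candidate looks as similar as a temporally adjacent frame, a strong loop-closure cue; ambiguous scores are what the paper then passes to the CRF verification stage.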


Robotics and Autonomous Systems | 2013

Real-time 6-DOF multi-session visual SLAM over large-scale environments

John McDonald; Michael Kaess; Cesar Cadena; José L. Neira; John J. Leonard

This paper describes a system for performing real-time multi-session visual mapping in large-scale environments. Multi-session mapping considers the problem of combining the results of multiple simultaneous localisation and mapping (SLAM) missions performed repeatedly over time in the same environment. The goal is to robustly combine multiple maps in a common metrical coordinate system, with consistent estimates of uncertainty. Our work employs incremental smoothing and mapping (iSAM) as the underlying SLAM state estimator and uses an improved appearance-based method for detecting loop closures within single mapping sessions and across multiple sessions. To stitch together pose graph maps from multiple visual mapping sessions, we employ spatial separator variables, called anchor nodes, to link together multiple relative pose graphs. The system architecture consists of a separate front-end for computing visual odometry and windowed bundle adjustment on individual sessions, in conjunction with a back-end for performing the place recognition and multi-session mapping. We provide experimental results for real-time multi-session visual mapping on wheeled and handheld datasets in the MIT Stata Center. These results demonstrate key capabilities that will serve as a foundation for future work in large-scale persistent visual mapping.
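The anchor-node idea keeps each session's pose graph in its own frame and adds one variable per session that places that frame in a common global frame; a global pose is then just the composition of the anchor with the session-local pose. A minimal SE(2) sketch with invented numbers:

```python
import math

def compose(a, b):
    """SE(2) pose composition a (+) b, with poses as (x, y, theta)."""
    ax, ay, ath = a
    bx, by, bth = b
    c, s = math.cos(ath), math.sin(ath)
    return (ax + c * bx - s * by,
            ay + s * bx + c * by,
            ath + bth)

# Hypothetical two-session example: each session's poses live in its own
# frame; anchor nodes place those frames in a common global frame, so a
# single inter-session loop closure is enough to relate the sessions.
anchor_s1 = (0.0, 0.0, 0.0)            # session 1 defines the global frame
anchor_s2 = (10.0, 5.0, math.pi / 2)   # session 2's frame, e.g. from a loop closure

local_pose = (2.0, 0.0, 0.0)           # a robot pose in session 2's frame
global_pose = compose(anchor_s2, local_pose)
```

In the full system the anchors are estimated variables inside iSAM rather than fixed constants, so new inter-session loop closures shift whole sessions consistently.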


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2010

Robust place recognition with stereo cameras

Cesar Cadena; Dorian Gálvez-López; Fabio Ramos; Juan D. Tardós; José L. Neira

Place recognition is a challenging task in any SLAM system. Algorithms based on visual appearance are becoming popular to detect locations already visited, also known as loop closures, because cameras are easily available and provide rich scene detail. These algorithms typically result in pairs of images considered depicting the same location. To avoid mismatches, most of them rely on epipolar geometry to check spatial consistency. In this paper we present an alternative system that makes use of stereo vision and combines two complementary techniques: bag-of-words to detect loop closing candidate images, and conditional random fields to discard those which are not geometrically consistent. We evaluate this system in public indoor and outdoor datasets from the Rawseeds project, with hundred-metre long trajectories. Our system achieves more robust results than using spatial consistency based on epipolar geometry.


IEEE International Conference on Robotics and Automation (ICRA) | 2014

Semantic segmentation with heterogeneous sensor coverages

Cesar Cadena; Jana Kosecka

We propose a new approach to semantic parsing, which can seamlessly integrate evidence from multiple sensors with overlapping but possibly different fields of view (FOV), account for missing data, and predict semantic labels over the spatial union of sensor coverages. The existing approaches typically carry out semantic segmentation using only one modality, incorrectly interpolate measurements of other modalities, or at best assign semantic labels only to the spatial intersection of the coverages of different sensors. In this work we remedy these problems by proposing an effective and efficient strategy for inducing the graph structure of the Conditional Random Field used for inference, and a novel method for computing the sensor-domain-dependent potentials. We focus on RGB cameras and 3D data from lasers or depth sensors. The proposed approach achieves superior performance compared to the state of the art and obtains labels for the union of the spatial coverages of both sensors, while effectively using appearance or 3D cues when they are available. The efficiency of the approach is amenable to real-time implementation. We quantitatively validate our proposal on two publicly available datasets from real indoor and outdoor environments. The obtained semantic understanding of the acquired sensory information can enable higher-level tasks for autonomous mobile robots and facilitate semantic mapping of the environments.
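The key behaviour, predicting labels over the union of sensor coverages, can be illustrated with a drastically simplified fusion step: where a sensor has no coverage its potential is uniform, so it contributes nothing and the other modality decides alone. This sketch keeps only unary potentials and drops the CRF's pairwise terms and graph-induction strategy entirely; all scores are invented.

```python
import numpy as np

# Hypothetical fusion of per-node class scores (log-potentials) from two
# sensors with different fields of view: outside a sensor's coverage its
# potential is uniform (log-potential 0), so the other modality alone
# decides the label there. No pairwise CRF terms in this simplification.
rgb_logp = np.array([[2.0, 0.0, 0.0],   # nodes 0-1 seen by the camera
                     [0.0, 2.0, 0.0],
                     [0.0, 0.0, 0.0],   # nodes 2-3 outside camera FOV: uniform
                     [0.0, 0.0, 0.0]])
depth_logp = np.array([[0.0, 0.0, 0.0], # node 0 outside depth FOV: uniform
                       [0.0, 0.0, 1.0],
                       [0.0, 0.0, 2.0],
                       [1.5, 0.0, 0.0]])

# Labels over the union of both coverages, not just their intersection.
labels = np.argmax(rgb_logp + depth_logp, axis=1)
```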


The International Journal of Robotics Research | 2015

Semantic parsing for priming object detection in indoors RGB-D scenes

Cesar Cadena; Jana Kosecka

The semantic mapping of the environment requires simultaneous segmentation and categorization of the acquired stream of sensory information. The existing methods typically consider the semantic mapping as the final goal and differ in the number and types of considered semantic categories. We envision semantic understanding of the environment as an on-going process and seek representations which can be refined and adapted depending on the task and robot’s interaction with the environment. In this work we propose a novel and efficient method for semantic parsing, which can be adapted to the task at hand and enables localization of objects of interest in indoor environments. For basic mobility tasks we demonstrate how to obtain initial semantic segmentation of the scene into ground, structure, furniture and props categories which constitute the first level of hierarchy. Then, we propose a simple and efficient method for predicting locations of objects that based on their size afford a manipulation task. In our experiments we use the publicly available NYU V2 dataset and obtain better or comparable results than the state of the art at a fraction of the computational cost. We show the generalization of our approach on two more publicly available datasets.


IEEE International Conference on Robotics and Automation (ICRA) | 2017

From perception to decision: A data-driven approach to end-to-end motion planning for autonomous ground robots

Mark Pfeiffer; Michael Schaeuble; Juan I. Nieto; Roland Siegwart; Cesar Cadena

Learning from demonstration for motion planning is an ongoing research topic. In this paper we present a model that is able to learn the complex mapping from raw 2D-laser range findings and a target position to the required steering commands for the robot. To the best of our knowledge, this work presents the first approach that learns a target-oriented end-to-end navigation model for a robotic platform. The supervised model training is based on expert demonstrations generated in simulation with an existing motion planner. We demonstrate that the learned navigation model is directly transferable to previously unseen virtual and, more interestingly, real-world environments. It can safely navigate the robot through obstacle-cluttered environments to reach the provided targets. We present an extensive qualitative and quantitative evaluation of the neural network-based motion planner, and compare it to a grid-based global approach, both in simulation and in real-world experiments.
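As a shape-level illustration only, the planner can be thought of as a function from a laser scan plus a relative goal to a velocity command. The untrained toy network below stands in for the learned model; its architecture, sizes, and random weights are all invented and bear no relation to the network in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for the learned end-to-end planner: a tiny MLP
# mapping a laser scan plus a relative goal (distance, angle) to a
# translational and rotational velocity. Sizes and weights are made up;
# the real model is trained on expert demonstrations from a motion planner.
n_beams, n_hidden = 36, 16
W1 = rng.normal(0, 0.1, (n_hidden, n_beams + 2))  # +2 goal inputs
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (2, n_hidden))            # outputs: (v, omega)
b2 = np.zeros(2)

def plan(scan, goal):
    """Forward pass: concatenated perception + goal in, velocity command out."""
    x = np.concatenate([scan, goal])
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

cmd = plan(np.full(n_beams, 5.0), np.array([3.0, 0.2]))
```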


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2014

Robust graph SLAM back-ends: A comparative analysis

Yasir Latif; Cesar Cadena; José L. Neira

In this work, we provide an in-depth analysis of several recent robust Simultaneous Localization And Mapping (SLAM) back-end techniques that aim to recover the correct graph estimate in the presence of outliers in loop closure constraints. We present a benchmark dataset for evaluation of such methods by augmenting the KITTI Vision Benchmark with ground truth as well as generated loop closure hypotheses, and present a detailed analysis of recently proposed robust SLAM methods using this benchmark. We also look into how these methods achieve the desired robustness and what the implications are for the SLAM problem. We discuss the issues involved in using the output of these robust back-ends for tasks such as path planning and how they can be addressed. The problem of robustness needs to be addressed adequately in order to have a complete and reliable solution to the SLAM problem.
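One family of robust back-ends covered by this kind of comparison down-weights suspicious constraints instead of rejecting them outright. Dynamic Covariance Scaling (DCS) is a representative example; its scaling factor can be written in a few lines (the formula is standard, but treating it as a stand-alone weight is a simplification of how it enters the optimizer):

```python
def dcs_weight(chi2, phi=1.0):
    """Dynamic Covariance Scaling factor s = min(1, 2*phi / (phi + chi2)).
    Inlier-like constraints (small chi-square error) keep full weight,
    while gross outliers are smoothly down-weighted rather than being
    hard-rejected; phi tunes how aggressive the down-weighting is."""
    return min(1.0, 2.0 * phi / (phi + chi2))
```

In a real back-end this factor scales the loop closure's information matrix at every iteration, so an outlying edge loses influence without being deleted from the graph.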


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2012

Realizing, reversing, recovering: Incremental robust loop closing over time using the iRRR algorithm

Yasir Latif; Cesar Cadena; José L. Neira

The ability to reconsider information over time makes it possible to detect failures and is crucial for long-term robust autonomous robot operation. This applies to loop closure decisions in localization and mapping systems. This paper describes a method to analyze all the information available to date in order to robustly remove past incorrect loop closures from the optimization process. The main novelties of our algorithm are: (1) incrementally reconsidering loop closures, and (2) handling multi-session, spatially related or unrelated experiments. We validate our proposal in real multi-session experiments, showing better results than those obtained by state-of-the-art methods.

Collaboration


Dive into Cesar Cadena's collaborations.

Top Co-Authors

Ian D. Reid, University of Adelaide
John J. Leonard, Massachusetts Institute of Technology
Yasir Latif, University of Zaragoza