Network

Latest external collaborations at the country level.

Hotspot


Research topics where Elena Stumm is active.

Publication


Featured research published by Elena Stumm.


Intelligent Robots and Systems | 2013

Probabilistic place recognition with covisibility maps

Elena Stumm; Christopher Mei; Simon Lacroix

In order to diminish the influence of pose choice during appearance-based mapping, a more natural representation of location models is established using covisibility graphs. As the robot moves through the environment, visual landmarks are detected, and connected if seen as covisible. The introduction of a novel generative model allows relevant subgraphs of the covisibility map to be compared to a given query without needing to normalize over all previously seen locations. The use of probabilistic methods provides a unified framework to incorporate sensor error, perceptual aliasing, decision thresholds, and multiple location matches. The system is evaluated and compared with other state-of-the-art methods.
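As a rough sketch of the covisibility idea (not the paper's generative model, and with made-up landmark IDs): landmarks observed in the same frame are linked, and a query expands its matched landmarks into a connected subgraph that serves as the location model.

```python
from collections import defaultdict

class CovisibilityGraph:
    """Landmarks become nodes; two landmarks are linked when observed together."""
    def __init__(self):
        self.edges = defaultdict(set)

    def add_frame(self, landmark_ids):
        # Connect every pair of landmarks covisible in this frame.
        for a in landmark_ids:
            for b in landmark_ids:
                if a != b:
                    self.edges[a].add(b)

    def location_subgraph(self, seed_ids):
        # Expand matched landmarks into a candidate location by
        # taking their covisible neighbours (one hop).
        cluster = set(seed_ids)
        for lid in seed_ids:
            cluster |= self.edges[lid]
        return cluster

g = CovisibilityGraph()
g.add_frame([1, 2, 3])   # landmarks 1-3 seen together
g.add_frame([3, 4])      # landmark 3 links the two frames
print(sorted(g.location_subgraph([3])))  # [1, 2, 3, 4]
```

Because locations emerge from the graph rather than from fixed keyframe poses, the candidate subgraph adapts to whatever the query actually matched.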


The International Journal of Robotics Research | 2012

Tensor-voting-based navigation for robotic inspection of 3D surfaces using lidar point clouds

Elena Stumm; Andreas Breitenmoser; François Pomerleau; Cédric Pradalier; Roland Siegwart

This paper describes a solution to robot navigation on curved 3D surfaces. The navigation system is composed of three successive subparts: a perception and representation, a path planning, and a control subsystem. The environment structure is modeled from noisy lidar point clouds using a tool known as tensor voting. Tensor voting propagates structural information from points within a point cloud in order to estimate the saliency and orientation of surfaces or curves found in the environment. A specialized graph-based planner establishes connectivities between robot states iteratively, while considering robot kinematics as well as structural constraints inferred by tensor voting. The resulting sparse graph structure eliminates the need to generate an explicit surface mesh, yet allows for efficient planning of paths along the surface, while remaining feasible and safe for the robot to traverse. The control scheme eventually transforms the path from 3D space into 2D space by projecting movements into local surface planes, allowing for 2D trajectory tracking. All three subparts of our navigation system are evaluated on simulated as well as real data. The methods are further implemented on the MagneBike climbing robot, and validated in several physical experiments related to the scenario of industrial inspection for power plants.


International Conference on Robotics and Automation | 2016

Point cloud descriptors for place recognition using sparse visual information

Titus Cieslewski; Elena Stumm; Abel Gawel; Mike Bosse; Simon Lynen; Roland Siegwart

Place recognition is a core component in simultaneous localization and mapping (SLAM), limiting positional drift over space and time to unlock precise robot navigation. Determining which previously visited places belong together continues to be a highly active area of research as robotic applications demand increasingly higher accuracies. A large number of place recognition algorithms have been proposed, capable of consuming a variety of sensor data including laser, sonar and depth readings. The best performing solutions, however, have utilized visual information by either matching entire images or parts thereof. Most commonly, vision based approaches are inspired by information retrieval and utilize 3D-geometry information about the observed scene as a post-verification step. In this paper we propose to use the 3D-scene information from sparse-visual feature maps directly at the core of the place recognition pipeline. We propose a novel structural descriptor which aggregates sparse triangulated landmarks from SLAM into a compact signature. The resulting 3D-features provide a discriminative fingerprint to recognize places over seasonal and viewpoint changes which are particularly challenging for approaches based on sparse visual descriptors. We evaluate our system on publicly available datasets and show how its complementary nature can provide an improvement over visual place recognition.
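The paper's actual descriptor is not reproduced here, but the underlying idea, aggregating sparse triangulated landmarks into a compact, viewpoint-tolerant signature, can be sketched with a histogram of pairwise landmark distances (synthetic data; `structure_signature` and the bin settings are illustrative choices):

```python
import numpy as np

def structure_signature(landmarks, bins=8, max_dist=10.0):
    """Aggregate sparse 3D landmarks into a compact signature: a
    normalized histogram of pairwise distances, invariant to
    rotation and translation of the viewpoint."""
    pts = np.asarray(landmarks, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    upper = d[np.triu_indices(len(pts), k=1)]
    hist, _ = np.histogram(upper, bins=bins, range=(0.0, max_dist))
    return hist / max(hist.sum(), 1)

def similarity(sig_a, sig_b):
    # Histogram intersection: 1.0 for identical signatures.
    return float(np.minimum(sig_a, sig_b).sum())

rng = np.random.default_rng(1)
place = rng.uniform(0, 5, (40, 3))
# Same place revisited from a rotated viewpoint (90 degrees about z).
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
revisit = place @ R.T
other = rng.uniform(0, 5, (40, 3)) * np.array([2.0, 0.5, 1.0])
```

Because the signature depends only on inter-landmark geometry, the revisited place matches perfectly despite the viewpoint change, which is the property the paper exploits.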


Computer Vision and Pattern Recognition | 2016

Robust Visual Place Recognition with Graph Kernels

Elena Stumm; Christopher Mei; Simon Lacroix; Juan I. Nieto; Marco Hutter; Roland Siegwart

A novel method for visual place recognition is introduced and evaluated, demonstrating robustness to perceptual aliasing and observation noise. This is achieved by increasing discrimination through a more structured representation of visual observations. Estimation of observation likelihoods is based on graph kernel formulations, utilizing both the structural and visual information encoded in covisibility graphs. The proposed probabilistic model is able to circumvent the typically difficult and expensive posterior normalization procedure by exploiting the information available in visual observations. Furthermore, the place recognition complexity is independent of the size of the map. Results show improvements over the state-of-the-art on a diverse set of both public datasets and novel experiments, highlighting the benefit of the approach.
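The kernel formulation in the paper is not reproduced here, but how a graph kernel can combine structure and visual labels is easy to sketch with a one-step Weisfeiler-Lehman label refinement over small covisibility graphs (the graphs and visual-word labels below are invented):

```python
from collections import Counter

def wl_features(adjacency, labels):
    """One Weisfeiler-Lehman refinement step: relabel each node by its
    own label plus the sorted multiset of neighbour labels, then count
    the refined labels to get a feature histogram."""
    refined = {
        node: (labels[node], tuple(sorted(labels[n] for n in nbrs)))
        for node, nbrs in adjacency.items()
    }
    return Counter(refined.values())

def graph_kernel(adj_a, lab_a, adj_b, lab_b):
    """Similarity = dot product of the two feature histograms."""
    fa, fb = wl_features(adj_a, lab_a), wl_features(adj_b, lab_b)
    return sum(fa[k] * fb[k] for k in fa)

# Two small covisibility graphs with visual-word labels on landmarks.
g1 = {0: [1, 2], 1: [0], 2: [0]}
w1 = {0: "tree", 1: "door", 2: "door"}
g2 = {0: [1, 2], 1: [0], 2: [0]}
w2 = {0: "tree", 1: "door", 2: "window"}
print(graph_kernel(g1, w1, g1, w1), graph_kernel(g1, w1, g2, w2))  # 5 2
```

Identical topology with differing labels scores lower than self-similarity, showing how the kernel discriminates on both structure and appearance.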


International Conference on Robotics and Automation | 2017

SegMatch: Segment based place recognition in 3D point clouds

Renaud Dubé; Daniel Dugas; Elena Stumm; Juan I. Nieto; Roland Siegwart; Cesar Cadena

Place recognition in 3D data is a challenging task that has been commonly approached by adapting image-based solutions. Methods based on local features suffer from ambiguity and lack robustness to environment changes, while methods based on global features are viewpoint dependent. We propose SegMatch, a reliable place recognition algorithm based on the matching of 3D segments. Segments provide a good compromise between local and global descriptions, incorporating their strengths while reducing their individual drawbacks. SegMatch does not rely on assumptions of ‘perfect segmentation’, or on the existence of ‘objects’ in the environment, which allows for reliable execution in large-scale, unstructured environments. We quantitatively demonstrate that SegMatch can achieve accurate localization at a frequency of 1 Hz on the largest sequence of the KITTI odometry dataset. We furthermore show how this algorithm can reliably detect and close loops in real-time, during online operation. In addition, the source code for the SegMatch algorithm is made publicly available.
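A minimal sketch of segment-based matching, under simplifying assumptions (the released SegMatch uses richer descriptors and a geometric-consistency stage; the segments below are synthetic):

```python
import numpy as np

def segment_descriptor(points):
    """Toy eigenvalue-based segment descriptor: sorted covariance
    eigenvalues, scale-normalized, so a segment's shape class
    (planar, linear, scattered) survives viewpoint changes."""
    pts = np.asarray(points, dtype=float)
    c = pts - pts.mean(axis=0)
    evals = np.linalg.eigvalsh(c.T @ c / len(pts))[::-1]
    return evals / evals.sum()

def match_segments(query_segs, map_segs, thresh=0.05):
    """Pair each query segment with its closest map descriptor."""
    matches = []
    for qi, q in enumerate(query_segs):
        dists = [np.linalg.norm(segment_descriptor(q) - segment_descriptor(m))
                 for m in map_segs]
        best = int(np.argmin(dists))
        if dists[best] < thresh:
            matches.append((qi, best))
    return matches

# Hypothetical map: a planar segment and a linear segment; the query
# re-observes the planar one from a rotated viewpoint.
rng = np.random.default_rng(2)
plane = np.c_[rng.uniform(-1, 1, (50, 2)), np.zeros(50)]
line = np.c_[rng.uniform(-1, 1, 50), np.zeros((50, 2))]
R = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
matches = match_segments([plane @ R.T], [line, plane])  # -> [(0, 1)]
```

Matching whole segments rather than individual keypoints is what buys the compromise between local ambiguity and global viewpoint dependence described in the abstract.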


International Conference on Robotics and Automation | 2017

Visual place recognition with probabilistic voting

Mathias Gehrig; Elena Stumm; Timo Hinzmann; Roland Siegwart

We propose a novel scoring concept for visual place recognition based on nearest neighbor descriptor voting and demonstrate how the algorithm naturally emerges from the problem formulation. Based on the observation that the number of votes for matching places can be evaluated using a binomial distribution model, loop closures can be detected with high precision. By casting the problem into a probabilistic framework, we not only remove the need for commonly employed heuristic parameters but also provide a powerful score to classify matching and non-matching places. We present methods for both a 2D-2D image matching and a 2D-3D landmark matching based on the above scoring. The approach maintains accuracy while being efficient enough for online application through the use of compact (low-dimensional) descriptors and fast nearest neighbor retrieval techniques. The proposed methods are evaluated on several challenging datasets in varied environments, showing state-of-the-art results with high precision and high recall.
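The binomial observation at the heart of the scoring can be sketched directly: under a null model where each query descriptor votes for one of the mapped places uniformly at random, a vote count far above the binomial expectation signals a genuine loop closure (the numbers and the threshold below are illustrative, not the paper's):

```python
from math import comb

def binom_sf(k, n, p):
    """Survival function P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

def is_loop_closure(votes, n_query_descriptors, n_places, alpha=1e-6):
    """Accept a candidate place when its vote count would be this
    extreme under chance voting with probability below alpha."""
    p_chance = 1.0 / n_places
    return binom_sf(votes, n_query_descriptors, p_chance) < alpha

# 100 query descriptors voting over 50 places: about 2 chance votes
# per place are expected, so 15 votes is overwhelming evidence while
# 3 votes is easily explained by chance.
print(is_loop_closure(15, 100, 50), is_loop_closure(3, 100, 50))
```

Replacing a hand-tuned vote threshold with this tail probability is what removes the heuristic parameters mentioned in the abstract.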


International Conference on Robotics and Automation | 2017

Map quality evaluation for visual localization

Hamza Merzic; Elena Stumm; Marcin Dymczyk; Roland Siegwart; Igor Gilitschenski

A variety of end-user devices involving keypoint-based mapping systems are about to hit the market e.g. as part of smartphones, cars, robotic platforms, or virtual and augmented reality applications. Thus, the generated map data requires automated evaluation procedures that do not require experienced personnel or ground truth knowledge of the underlying environment. A particularly important question enabling commercial applications is whether a given map is of sufficient quality for localization. This paper proposes a framework for predicting localization performance in the context of visual landmark-based mapping. Specifically, we propose an algorithm for predicting performance of vision-based localization systems from different poses within the map. To achieve this, a metric is defined that assigns a score to a given query pose based on the underlying map structure. The algorithm is evaluated on two challenging datasets involving indoor data generated using a handheld device and outdoor data from an autonomous fixed-wing unmanned aerial vehicle (UAV). Using these, we are able to show that the score provided by our method is highly correlated to the true localization performance. Furthermore, we demonstrate how the predicted map quality can be used within a belief based path planning framework in order to provide reliable trajectories through high-quality areas of the map.
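As a toy stand-in for such a map-quality metric (the paper's score is richer; `localizability_score`, its parameters, and the data are all illustrative), a query pose can be scored by how many map landmarks fall inside its viewing cone:

```python
import numpy as np

def localizability_score(query_pos, view_dir, landmarks,
                         max_range=10.0, fov_cos=0.5, needed=20):
    """Score a query pose by the number of map landmarks inside its
    viewing cone, saturating at the count assumed sufficient for
    reliable localization."""
    offsets = landmarks - query_pos
    dists = np.linalg.norm(offsets, axis=1)
    bearings = offsets / np.maximum(dists[:, None], 1e-12)
    visible = (dists < max_range) & (bearings @ view_dir > fov_cos)
    return min(visible.sum() / needed, 1.0)

# Hypothetical map: landmarks clustered ahead of the origin along +x.
rng = np.random.default_rng(3)
landmarks = np.c_[rng.uniform(2, 8, 100), rng.uniform(-2, 2, (100, 2))]
origin = np.zeros(3)
facing_map = localizability_score(origin, np.array([1.0, 0.0, 0.0]), landmarks)
facing_away = localizability_score(origin, np.array([-1.0, 0.0, 0.0]), landmarks)
```

A planner can then prefer trajectories through poses with high scores, mirroring the belief-based planning use case in the paper.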


Intelligent Robots and Systems | 2016

Erasing bad memories: Agent-side summarization for long-term mapping

Marcin Dymczyk; Thomas Schneider; Igor Gilitschenski; Roland Siegwart; Elena Stumm

Precisely estimating the pose of an agent in a global reference frame is a crucial goal that unlocks a multitude of robotic applications, including autonomous navigation and collaboration. In order to achieve this, current state-of-the-art localization approaches collect data provided by one or more agents and create a single, consistent localization map, maintained over time. However, with the introduction of lengthier sorties and the growing size of the environments, data transfers between the backend server where the global map is stored and the agents are becoming prohibitively large. While some existing methods partially address this issue by building compact summary maps, the data transfer from the agents to the backend can still easily become unmanageable. In this paper, we propose a method that is designed to reduce the amount of data that needs to be transferred from the agent to the backend, functioning in large-scale, multi-session mapping scenarios. Our approach is based upon a landmark selection method that exploits information coming from multiple, possibly weak and correlated, landmark utility predictors, fused using learned feature coefficients. Such a selection yields a drastic reduction in data transfer while maintaining localization performance and the ability to efficiently summarize environments over time. We evaluate our approach on a dataset that was autonomously collected in a dynamic indoor environment over a period of several months.
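The selection step can be sketched in a few lines, assuming the fusion coefficients have already been learned offline (the predictors, weights, and budget below are invented for illustration):

```python
import numpy as np

def select_landmarks(predictor_matrix, coefficients, budget):
    """Fuse weak per-landmark utility predictors with learned
    coefficients and keep the top `budget` landmarks, shrinking
    the map data sent to the backend."""
    scores = predictor_matrix @ coefficients
    keep = np.argsort(scores)[::-1][:budget]
    return sorted(keep.tolist())

# Hypothetical predictors per landmark: [times observed, viewpoint
# spread, descriptor distinctiveness]; weights as if learned offline.
predictors = np.array([
    [12, 0.9, 0.8],   # stable, distinctive landmark
    [ 2, 0.1, 0.3],   # rarely seen
    [ 9, 0.7, 0.9],
    [ 1, 0.2, 0.1],   # weak on every predictor
])
weights = np.array([0.1, 1.0, 1.5])
print(select_landmarks(predictors, weights, budget=2))  # [0, 2]
```

Only the selected landmarks (and their descriptors) need to leave the agent, which is where the data-transfer savings come from.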


arXiv: Robotics | 2016

SegMatch: Segment based loop-closure for 3D point clouds.

Renaud Dubé; Daniel Dugas; Elena Stumm; Juan I. Nieto; Roland Siegwart; Cesar Cadena


Intelligent Robots and Systems | 2016

Appearance-based landmark selection for efficient long-term visual localization

Mathias Bürki; Igor Gilitschenski; Elena Stumm; Roland Siegwart; Juan I. Nieto

Collaboration


An overview of Elena Stumm's collaborations.

Top Co-Authors