Marcin Dymczyk
ETH Zurich
Publications
Featured research published by Marcin Dymczyk.
international conference on robotics and automation | 2015
Marcin Dymczyk; Simon Lynen; Titus Cieslewski; Michael Bosse; Roland Siegwart; Paul Timothy Furgale
Robust, scalable place recognition is a core competency for many robotic applications. However, when revisiting places over and over, many state-of-the-art approaches exhibit reduced performance in terms of computation and memory complexity and in terms of accuracy. For successful deployment of robots over long time scales, we must develop algorithms that get better with repeated visits to the same environment, while still working within a fixed computational budget. This paper presents and evaluates an algorithm that alternates between online place recognition and offline map maintenance with the goal of producing the best performance with a fixed map size. At the core of the algorithm is the concept of a Summary Map, a reduced map representation that includes only the landmarks that are deemed most useful for place recognition. To assign landmarks to the map, we use a scoring function that ranks the utility of each landmark and a sampling policy that selects the landmarks for each place. The Summary Map can then be used by any descriptor-based inference method for constant-complexity online place recognition. We evaluate a number of scoring functions and sampling policies and show that it is possible to build and maintain maps of a constant size and that place-recognition performance improves over multiple visits.
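Below is a minimal sketch, not the paper's exact implementation, of the Summary Map idea described above: each landmark gets an illustrative utility score, and a sampling policy keeps a fixed number of landmarks per place so the map size stays constant. The scoring form, the sampling-by-score policy, and all data are assumptions for illustration.

```python
# Sketch: constant-size place summaries via landmark scoring and sampling.
import numpy as np

def score_landmarks(observation_counts, descriptor_distinctiveness):
    """Illustrative scoring: landmarks that are matched often and have
    distinctive descriptors are assumed to be more useful for place recognition."""
    return observation_counts * descriptor_distinctiveness

def summarize_place(landmark_ids, scores, budget, rng):
    """Sampling policy: keep `budget` landmarks per place, drawn with
    probability proportional to their utility score."""
    if len(landmark_ids) <= budget:
        return list(landmark_ids)
    p = scores / scores.sum()
    return list(rng.choice(landmark_ids, size=budget, replace=False, p=p))

rng = np.random.default_rng(0)
ids = np.arange(200)                       # landmarks observed at one place
counts = rng.integers(1, 20, size=200)     # how often each landmark was matched
distinct = rng.random(200)                 # hypothetical distinctiveness in [0, 1]
kept = summarize_place(ids, score_landmarks(counts, distinct), budget=50, rng=rng)
print(len(kept), "landmarks retained for this place")
```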
intelligent robots and systems | 2015
Marcin Dymczyk; Simon Lynen; Michael Bosse; Roland Siegwart
Robust, scalable localization unlocks path planning, obstacle avoidance, and manipulation, and is thus a core competency for many robotic applications. However, as we leave the lab and move out into the world, models of the environment no longer span distances of meters but of kilometers. Now, gigabytes instead of megabytes of memory are required to hold the model of the environment required for localization. Discarding data and keeping the map representation compact is thus essential for any meaningful application. This paper presents and evaluates a map compression algorithm that approaches this data reduction as a constrained optimization problem. At the core of the algorithm is the concept of a Summary Map, a reduced map representation that includes only the landmarks that are deemed most useful for place recognition. To assign landmarks to the map, we have to satisfy the conflicting goals of map coverage and localizability as well as a tight memory budget. While using an optimization approach for compression is not novel, in this paper we propose adaptations that drastically reduce the computational requirements. Our approach improves scalability from the trajectories of a few tens of meters manageable by the state of the art to virtually unlimited dataset sizes. We evaluate the performance of various compression levels as well as several methods for selecting the best localization landmarks from outdoor datasets.
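The paper formulates compression as a constrained optimization; the sketch below is only a greedy approximation of that idea, assuming illustrative coverage and localizability terms: first guarantee a minimum number of landmarks per place (coverage), then spend the remaining budget on the globally best-scoring landmarks.

```python
# Sketch: budgeted landmark selection as a greedy stand-in for the paper's
# constrained optimization. Scores, place assignments, and thresholds are illustrative.
import numpy as np

def compress_map(landmark_scores, landmark_to_place, num_places, budget, min_per_place=5):
    """Greedily keep high-scoring landmarks, but first ensure every place
    retains at least `min_per_place` landmarks (coverage constraint)."""
    selected = set()
    # Pass 1: coverage -- per place, keep that place's best landmarks.
    for p in range(num_places):
        members = [l for l, pl in landmark_to_place.items() if pl == p]
        members.sort(key=lambda l: landmark_scores[l], reverse=True)
        selected.update(members[:min_per_place])
    # Pass 2: localizability -- fill the rest of the budget with the globally best landmarks.
    for l in sorted(landmark_to_place, key=lambda l: landmark_scores[l], reverse=True):
        if len(selected) >= budget:
            break
        selected.add(l)
    return selected

rng = np.random.default_rng(1)
scores = rng.random(1000)
place_of = {l: int(rng.integers(0, 20)) for l in range(1000)}
kept = compress_map(scores, place_of, num_places=20, budget=200)
print(len(kept), "of", len(scores), "landmarks kept")
```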
international conference on robotics and automation | 2017
Antonio Loquercio; Marcin Dymczyk; Bernhard Zeisl; Simon Lynen; Igor Gilitschenski; Roland Siegwart
Many robotics and Augmented Reality (AR) systems that use sparse keypoint-based visual maps operate in large and highly repetitive environments, where pose tracking and localization are challenging tasks. Additionally, these systems usually face further challenges, such as limited computational power or insufficient memory for storing large maps of the entire environment. Thus, developing compact map representations and improving retrieval is of considerable interest for enabling large-scale visual place recognition and loop closure. In this paper, we propose a novel approach to compress descriptors while increasing their discriminability and matchability, based on recent advances in neural networks. At the same time, we target resource-constrained robotics applications in our design choices. The main contributions of this work are twofold. First, we propose a linear projection from descriptor space to a lower-dimensional Euclidean space, based on a novel supervised learning strategy employing a triplet loss. Second, we show the importance of incorporating contextual appearance information into the visual feature in order to improve matching under strong viewpoint, illumination, and scene changes. Through detailed experiments on three challenging datasets, we demonstrate significant gains in performance over state-of-the-art methods.
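A minimal sketch of the first contribution follows: a linear projection to a lower-dimensional descriptor space trained with a triplet loss. The use of PyTorch, the dimensions, the margin, and the randomly generated triplets are assumptions for illustration; the paper's training data, mining strategy, and context features are not reproduced here.

```python
# Sketch: learn a linear descriptor projection with a triplet loss.
import torch
import torch.nn as nn

in_dim, out_dim = 512, 32                     # raw keypoint descriptor -> compact code (illustrative)
projection = nn.Linear(in_dim, out_dim, bias=False)
criterion = nn.TripletMarginLoss(margin=0.5)  # pulls matching descriptors together, pushes others apart
optimizer = torch.optim.Adam(projection.parameters(), lr=1e-3)

for step in range(100):
    # Placeholder triplets: anchor/positive should describe the same landmark,
    # negative a different one. Real training would mine these from feature matches.
    anchor = torch.randn(64, in_dim)
    positive = anchor + 0.05 * torch.randn(64, in_dim)
    negative = torch.randn(64, in_dim)

    loss = criterion(projection(anchor), projection(positive), projection(negative))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final triplet loss:", float(loss))
```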
intelligent robots and systems | 2016
Timo Hinzmann; Thomas Schneider; Marcin Dymczyk; Amir Melzer; Thomas Mantel; Roland Siegwart; Igor Gilitschenski
Accurate and robust real-time map generation onboard a fixed-wing UAV is essential for obstacle avoidance, path planning, and critical maneuvers such as autonomous take-off and landing. Due to computational constraints and the required robustness and reliability, it remains a challenge to deploy a fixed-wing UAV with an online-capable, accurate, and robust map generation framework. While photogrammetric approaches rely on underlying assumptions about the structure and the view of the camera, generic simultaneous localization and mapping (SLAM) approaches are computationally demanding. This paper presents a framework that uses the autopilot's state estimate as a prior for sliding window bundle adjustment and map generation. Our approach outputs an accurate geo-referenced dense point cloud, which was validated in simulation on a synthetic dataset and in two real-world scenarios based on ground control points.
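To make the prior idea concrete, here is a toy sketch, not the authors' implementation, of a sliding-window adjustment in which autopilot pose estimates enter as prior residuals alongside measurement residuals. The 2D bearing-only setup, the weights, and all data are illustrative assumptions.

```python
# Sketch: sliding-window least squares with an autopilot pose prior (toy 2D example).
import numpy as np
from scipy.optimize import least_squares

def residuals(x, bearings, landmarks, prior_poses, w_prior):
    poses = x.reshape(-1, 3)                 # each window pose: [x, y, heading]
    res = []
    # Measurement residuals: predicted vs. measured bearing from pose i to landmark j.
    for (i, j, meas) in bearings:
        px, py, th = poses[i]
        lx, ly = landmarks[j]
        pred = np.arctan2(ly - py, lx - px) - th
        res.append(np.arctan2(np.sin(pred - meas), np.cos(pred - meas)))
    # Prior residuals: keep each pose close to the autopilot estimate.
    res.extend((w_prior * (poses - prior_poses)).ravel())
    return np.asarray(res)

landmarks = np.array([[5.0, 2.0], [3.0, -1.0]])
prior_poses = np.array([[0.0, 0.0, 0.0], [1.0, 0.1, 0.05], [2.0, 0.0, 0.0]])
bearings = [(i, j, np.arctan2(landmarks[j][1] - prior_poses[i][1],
                              landmarks[j][0] - prior_poses[i][0]) - prior_poses[i][2])
            for i in range(3) for j in range(2)]
x0 = (prior_poses + 0.05).ravel()            # perturbed initial guess
sol = least_squares(residuals, x0, args=(bearings, landmarks, prior_poses, 0.5))
print(sol.x.reshape(-1, 3))
```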
international conference on robotics and automation | 2017
Hamza Merzic; Elena Stumm; Marcin Dymczyk; Roland Siegwart; Igor Gilitschenski
A variety of end-user devices involving keypoint-based mapping systems are about to hit the market, e.g., as part of smartphones, cars, robotic platforms, or virtual and augmented reality applications. Thus, the generated map data requires automated evaluation procedures that do not require experienced personnel or ground-truth knowledge of the underlying environment. A particularly important question enabling commercial applications is whether a given map is of sufficient quality for localization. This paper proposes a framework for predicting localization performance in the context of visual landmark-based mapping. Specifically, we propose an algorithm for predicting the performance of vision-based localization systems from different poses within the map. To achieve this, a metric is defined that assigns a score to a given query pose based on the underlying map structure. The algorithm is evaluated on two challenging datasets involving indoor data generated using a handheld device and outdoor data from an autonomous fixed-wing unmanned aerial vehicle (UAV). Using these, we are able to show that the score provided by our method is highly correlated with the true localization performance. Furthermore, we demonstrate how the predicted map quality can be used within a belief-based path-planning framework in order to provide reliable trajectories through high-quality areas of the map.
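The sketch below illustrates the general idea of scoring a query pose from map structure alone: count well-observed landmarks that fall inside the camera's field of view from that pose. The field-of-view test, the observation-count weighting, and all data are illustrative assumptions, not the paper's metric.

```python
# Sketch: a simple localizability score for a query pose, computed from the map only.
import numpy as np

def pose_score(query_pos, query_dir, landmark_pos, landmark_obs_count,
               max_range=30.0, fov_cos=np.cos(np.radians(45))):
    rel = landmark_pos - query_pos                      # vectors from pose to landmarks
    dist = np.linalg.norm(rel, axis=1)
    in_range = dist < max_range
    cos_angle = (rel @ query_dir) / np.maximum(dist, 1e-9)
    in_fov = cos_angle > fov_cos                        # inside the camera's viewing cone
    visible = in_range & in_fov
    # Weight visible landmarks by how often they were observed during mapping.
    return float(np.sum(landmark_obs_count[visible]))

rng = np.random.default_rng(2)
landmarks = rng.uniform(-50, 50, size=(5000, 3))
obs_counts = rng.integers(1, 30, size=5000)
score = pose_score(np.zeros(3), np.array([1.0, 0.0, 0.0]), landmarks, obs_counts)
print("predicted localizability score:", score)
```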
international symposium on visual computing | 2016
Timo Hinzmann; Thomas Schneider; Marcin Dymczyk; Andreas Schaffner; Simon Lynen; Roland Siegwart; Igor Gilitschenski
Precise real-time information about the position and orientation of robotic platforms, as well as locally consistent point clouds, is essential for control, navigation, and obstacle avoidance. For years, GPS has been the central source of navigational information in airborne applications, yet as we aim for robotic operations close to the terrain and in urban environments, alternatives to GPS need to be found. Fusing data from cameras and inertial measurement units in a nonlinear recursive estimator has been shown to allow precise estimation of 6-Degree-of-Freedom (DoF) motion without relying on GPS signals. While related methods have been shown to work under lab conditions for several years, real-world robotic applications using visual-inertial state estimation have only recently found wider adoption. Due to computational constraints and the required robustness and reliability, it remains a challenge to employ a visual-inertial navigation system in the field. This paper presents our tightly integrated system, involving hardware and software efforts, to provide an accurate visual-inertial navigation system for low-altitude fixed-wing unmanned aerial vehicles (UAVs) without relying on GPS or visual beacons. In particular, we present a sliding-window-based visual-inertial Simultaneous Localization and Mapping (SLAM) algorithm which provides real-time 6-DoF estimates for control. We demonstrate the performance on a small unmanned aerial vehicle and compare the estimated trajectory to a GPS-based reference solution.
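As a small illustration of the inertial side of such an estimator, the sketch below integrates gyroscope and accelerometer measurements to predict position, velocity, and orientation between camera frames. It is only the standard IMU propagation step under simplifying assumptions (no biases, no noise model, no camera update) and is not the paper's estimator.

```python
# Sketch: first-order IMU state propagation between camera frames.
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def propagate(p, v, R, gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    # Orientation: first-order integration of the body angular rate.
    R_new = R @ (np.eye(3) + skew(gyro) * dt)
    # Velocity and position: rotate the specific force to the world frame, add gravity.
    a_world = R @ accel + g
    v_new = v + a_world * dt
    p_new = p + v * dt + 0.5 * a_world * dt**2
    return p_new, v_new, R_new

p, v, R = np.zeros(3), np.zeros(3), np.eye(3)
for _ in range(200):                      # 1 s of IMU data at 200 Hz
    p, v, R = propagate(p, v, R, gyro=np.array([0.0, 0.0, 0.1]),
                        accel=np.array([0.0, 0.0, 9.81]), dt=0.005)
print("predicted position after 1 s:", p)
```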
intelligent robots and systems | 2016
Marcin Dymczyk; Thomas Schneider; Igor Gilitschenski; Roland Siegwart; Elena Stumm
Precisely estimating the pose of an agent in a global reference frame is a crucial goal that unlocks a multitude of robotic applications, including autonomous navigation and collaboration. In order to achieve this, current state-of-the-art localization approaches collect data provided by one or more agents and create a single, consistent localization map that is maintained over time. However, with the introduction of lengthier sorties and the growing size of the environments, data transfers between the backend server where the global map is stored and the agents are becoming prohibitively large. While some existing methods partially address this issue by building compact summary maps, the data transfer from the agents to the backend can still easily become unmanageable. In this paper, we propose a method designed to reduce the amount of data that needs to be transferred from the agent to the backend, functioning in large-scale, multi-session mapping scenarios. Our approach is based upon a landmark selection method that exploits information coming from multiple, possibly weak and correlated, landmark utility predictors, fused using learned feature coefficients. Such a selection yields a drastic reduction in data transfer while maintaining localization performance and the ability to efficiently summarize environments over time. We evaluate our approach on a dataset that was autonomously collected in a dynamic indoor environment over a period of several months.
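The sketch below illustrates fusing several weak per-landmark utility predictors with learned coefficients and then transmitting only the top-scoring landmarks. The choice of logistic regression, the feature set, and the placeholder labels are assumptions for illustration, not the paper's exact learner or features.

```python
# Sketch: learned fusion of weak landmark-utility predictors, then budgeted transfer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
# Per-landmark features, e.g. number of observations, track length, viewpoint spread.
features = np.column_stack([rng.integers(1, 30, n),
                            rng.random(n),
                            rng.random(n)])
# Training labels: whether the landmark proved useful for localization in
# previous sessions (random placeholders here).
useful = (features[:, 0] * features[:, 1] + rng.normal(0, 1, n) > 5).astype(int)

model = LogisticRegression().fit(features, useful)
utility = model.predict_proba(features)[:, 1]      # fused utility score per landmark

budget = 300                                       # landmarks the agent may transmit
keep = np.argsort(utility)[-budget:]
print("transmitting", len(keep), "of", n, "landmarks")
```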
international conference on robotics and automation | 2015
Titus Cieslewski; Simon Lynen; Marcin Dymczyk; Stéphane Magnenat; Roland Siegwart
international conference on robotics and automation | 2018
Thomas Schneider; Marcin Dymczyk; Marius Fehr; Kevin Egger; Simon Lynen; Igor Gilitschenski; Roland Siegwart
international conference on robotics and automation | 2016
Marius Fehr; Marcin Dymczyk; Simon Lynen; Roland Siegwart