Narunas Vaskevicius
Jacobs University Bremen
Publications
Featured research published by Narunas Vaskevicius.
IEEE Transactions on Robotics | 2010
Kaustubh Pathak; Andreas Birk; Narunas Vaskevicius; Jann Poppinga
We present a robot-pose-registration algorithm, which is entirely based on large planar-surface patches extracted from point clouds sampled from a three-dimensional (3-D) sensor. This approach offers an alternative to the traditional point-to-point iterative-closest-point (ICP) algorithm, its point-to-plane variant, as well as newer grid-based algorithms, such as the 3-D normal distribution transform (NDT). The simpler case of known plane correspondences is tackled first by deriving expressions for least-squares pose estimation considering plane-parameter uncertainty computed during plane extraction. Closed-form expressions for covariances are also derived. To round off the solution, we present a new algorithm, called minimally uncertain maximal consensus (MUMC), which determines the unknown plane correspondences by maximizing geometric consistency, i.e., by minimizing the uncertainty volume in configuration space. Experimental results from three 3-D sensors, viz., Swiss-Ranger, University of South Florida Odetics Laser Detection and Ranging, and an actuated SICK S300, are given. The first two have small fields of view (FOV) and moderate ranges, while the third has a much larger FOV and range. Experimental results show that this approach is not only more robust than point- or grid-based approaches in plane-rich environments, but it is also faster, requires significantly less memory, and offers a less-cluttered planar-patches-based visualization.
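The "known correspondences" step described above can be sketched in a few lines: with planes written as n·x = d, the rotation follows from aligning the matched normals (Wahba's problem, solved via SVD) and each pair then contributes one linear constraint on the translation. This is only an illustrative reconstruction; the paper additionally weights by plane-parameter covariances, and the function name is an assumption.

```python
import numpy as np

def register_from_planes(normals_a, d_a, normals_b, d_b):
    """Least-squares pose from matched planes n . x = d.

    Illustrative sketch only: the published method also folds in the
    plane-parameter uncertainties, which are omitted here.
    """
    # Rotation: align frame-A normals with frame-B normals (Kabsch/SVD).
    H = normals_a.T @ normals_b               # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T                        # maps frame-A normals to frame B
    # Translation: a plane transforms as d_b = d_a + n_b . t, so each
    # matched pair gives one linear constraint n_b . t = d_b - d_a.
    t, *_ = np.linalg.lstsq(normals_b, d_b - d_a, rcond=None)
    return R, t
```

At least three planes with linearly independent normals are needed for the translation to be fully constrained, which is why plane-rich environments suit the approach.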
Intelligent Robots and Systems | 2008
Jann Poppinga; Narunas Vaskevicius; Andreas Birk; Kaustubh Pathak
A fast but nevertheless accurate approach for surface extraction from noisy 3D point clouds is presented. It consists of two parts, namely a plane-fitting and a polygonalization step. Both exploit the sequential nature of 3D data acquisition on mobile robots in the form of range images. For the plane fitting, this is used to revise the standard mathematical formulation into an incremental version, which allows a linear-time computation. For the polygonalization, the neighborhood relation in range images is exploited. Experiments are presented using a time-of-flight range camera in the form of a Swissranger SR-3000. Results include lab scenes as well as data from two runs of the rescue robot league at the RoboCup German Open 2007 with 1,414 and 2,343 sensor snapshots, respectively. The 36×10⁶ and 59×10⁶ points from the two point clouds are reduced to about 14×10³ and 23×10³ planes, respectively, with only about 0.2 sec of total computation time per snapshot while the robot moves along. An uncertainty analysis of the computed plane parameters is presented as well.
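The incremental reformulation of plane fitting can be sketched with running sums: keeping Σp and Σpp^T makes adding a point O(1), and the best-fit plane (smallest eigenvector of the scatter matrix) can be recovered at any time without revisiting the points. A minimal sketch under that assumption; the class and member names are illustrative, not the paper's.

```python
import numpy as np

class IncrementalPlaneFit:
    """Running-sum least-squares plane fit: adding a point is O(1)."""

    def __init__(self):
        self.n = 0
        self.s = np.zeros(3)        # running sum of points
        self.m = np.zeros((3, 3))   # running sum of outer products p p^T

    def add(self, p):
        p = np.asarray(p, float)
        self.n += 1
        self.s += p
        self.m += np.outer(p, p)

    def plane(self):
        # Centroid and scatter matrix come from the running sums alone.
        c = self.s / self.n
        scatter = self.m - np.outer(self.s, self.s) / self.n
        w, v = np.linalg.eigh(scatter)   # eigenvalues in ascending order
        normal = v[:, 0]                 # smallest-eigenvalue eigenvector
        d = normal @ c                   # plane equation: normal . x = d
        mse = w[0] / self.n              # mean squared orthogonal distance
        return normal, d, mse
```

The returned mean squared error doubles as the region-growing acceptance criterion: a candidate point is only kept if the fit stays flat enough.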
Intelligent Robots and Systems | 2009
Kaustubh Pathak; Narunas Vaskevicius; Jann Poppinga; Max Pfingsthorn; Sören Schwertfeger; Andreas Birk
This article addresses fast 3D mapping by a mobile robot in a predominantly planar environment. It rests on a novel pose-registration algorithm that matches features composed of plane segments extracted from point clouds sampled from a 3D sensor. The approach has advantages in terms of robustness, speed, and storage compared to voxel-based approaches. Unlike previous approaches, the uncertainty in the plane parameters is utilized to compute the uncertainty in the pose obtained by scan registration. The algorithm is illustrated by creating a full 3D model of a multi-level robot testing arena.
Advanced Robotics | 2010
Narunas Vaskevicius; Andreas Birk; Kaustubh Pathak; Sören Schwertfeger
Good situational awareness is an absolute must when operating mobile robots for planetary exploration. Three-dimensional (3-D) sensing and modeling data gathered by the robot are, hence, crucial for the operator. However, standard methods based on stereo vision have their limitations, especially in scenarios where there is no or only very limited visibility, e.g., due to extreme light conditions. Three-dimensional laser range finders (3-D-LRFs) provide an interesting alternative, especially as they can provide very accurate, high-resolution data at very high sampling rates. However, the more 3-D range data are acquired, the harder it becomes to transmit the data to the operator station. Here, a fast and robust method to fit planar surface patches into the data is presented. The usefulness of the approach is demonstrated in two different sets of experiments. The first set is based on data from our participation at the European Space Agency Lunar Robotics Challenge 2008. The second one is based on data from a Velodyne 3-D-LRF in a high-fidelity simulation with ground truth data from Mars.
International Symposium on Safety, Security, and Rescue Robotics | 2007
Narunas Vaskevicius; Andreas Birk; Kaustubh Pathak; Jann Poppinga
3D sensing and modeling are increasingly important for mobile robotics in general and for safety, security, and rescue robotics (SSRR) in particular. To reduce the data and to allow for efficient processing, e.g., with computational-geometry algorithms, it is necessary to extract surface data from the 3D point clouds delivered by range sensors. A significant amount of work on this topic exists in the computer graphics community, but it relies on relatively exact point-cloud data. As also shown by others, sensors suited for mobile robots are very noise-prone, and standard approaches that use local processing on surface normals are doomed to fail. Hence, plane fitting has been suggested as a solution by the robotics community. Here, a novel approach for this problem is presented. Its main feature is that it is based on region growing and that the underlying mathematics has been reformulated such that an incremental fit can be done, i.e., the best-fit surface does not have to be completely recomputed each time a new point is investigated in the region-growing process. The worst-case complexity is O(n log n), but as shown in experiments, it tends to scale linearly with typical data. Results with real-world data from a Swissranger time-of-flight camera are presented, where surface polygons are always successfully extracted within about 0.3 sec.
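The region-growing idea can be sketched as a breadth-first flood fill over the range-image grid: a neighbour pixel joins the region if its 3D point lies close to the region's current best-fit plane, which is maintained from running sums so no refit from scratch is ever needed. This is an illustrative sketch with an assumed distance criterion, not the paper's exact acceptance test.

```python
import numpy as np
from collections import deque

def grow_planar_region(points, seed, eps=0.01):
    """Grow one planar region over a range image (H x W x 3 array of 3D points).

    Sketch only: a pixel is accepted if it lies within `eps` of the region's
    current best-fit plane, updated incrementally from running sums.
    """
    h, w, _ = points.shape
    visited = np.zeros((h, w), bool)
    n, s, m = 0, np.zeros(3), np.zeros((3, 3))
    region, queue = [], deque([seed])
    visited[seed] = True
    while queue:
        r, c = queue.popleft()
        p = points[r, c]
        if n >= 3:
            # Current plane from the running sums: smallest-eigenvector normal.
            scatter = m - np.outer(s, s) / n
            _, v = np.linalg.eigh(scatter)
            normal, centroid = v[:, 0], s / n
            if abs(normal @ (p - centroid)) > eps:
                continue  # too far from the plane: reject this pixel
        n += 1
        s = s + p
        m = m + np.outer(p, p)
        region.append((r, c))
        # Exploit the range-image neighborhood: only 4-connected pixels.
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not visited[rr, cc]:
                visited[rr, cc] = True
                queue.append((rr, cc))
    return region
```

Because only grid neighbours are ever examined, no spatial search structure is needed, which is one reason the method scales roughly linearly in practice.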
Intelligent Robots and Systems | 2010
Kaustubh Pathak; Andreas Birk; Narunas Vaskevicius
Surface-patch-based 3D mapping in a real-world underwater scenario is presented. It is based on a six-degrees-of-freedom registration of sonar data. Planar surfaces are fitted to the sonar data, and the subsequent registration method maximizes the overall geometric consistency within a search space to determine correspondences between the planes. This approach has previously only been used on high-quality range data from sensors on land robots, such as laser range finders. It is shown here that the algorithm is also applicable to very noisy, coarse sonar data. The 3D map presented is of a large underwater structure, namely the Lesumer Sperrwerk, a flood gate north of the city of Bremen, Germany. It is generated from 18 scans collected using a Tritech Eclipse sonar.
International Conference on Robotics and Automation | 2015
Martin Magnusson; Narunas Vaskevicius; Todor Stoyanov; Kaustubh Pathak; Andreas Birk
Given that 3D scan matching is such a central part of the perception pipeline for robots, thorough and large-scale investigations of scan-matching performance are still surprisingly few. A crucial part of the scientific method is to perform experiments that can be replicated by other researchers in order to compare different results. In light of this fact, this paper presents a thorough comparison of 3D scan-registration algorithms using a recently published benchmark protocol that makes use of a publicly available, challenging data set covering a wide range of environments. In particular, we evaluate two types of recent 3D registration algorithms: one local and one global. Both approaches take local surface structure into account, rather than matching individual points. After well over 100,000 individual tests, we conclude that algorithms using the normal distributions transform (NDT) provide more accurate results than a modern implementation of the iterative closest point (ICP) method when faced with scan data that have little overlap and weak geometric structure. We also demonstrate that the minimally uncertain maximal consensus (MUMC) algorithm provides accurate results in structured environments without needing an initial guess, and that it provides useful measures to detect whether it has succeeded or not. We also propose two amendments to the experimental protocol in order to provide more valuable results in future implementations.
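For contrast with the surface-structure-aware methods evaluated above, the point-matching baseline can be summarized in a few lines: ICP alternates nearest-neighbour association with a closed-form rigid alignment. A deliberately minimal, brute-force sketch (not the "modern implementation" benchmarked in the paper, which uses accelerated search and outlier rejection):

```python
import numpy as np

def icp_point_to_point(src, dst, iters=20):
    """Minimal point-to-point ICP: illustrative brute-force sketch."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # Nearest neighbour in dst for every source point (O(n*m) matching).
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Closed-form least-squares rigid alignment (Kabsch/SVD).
        mu_s, mu_d = moved.mean(0), matched.mean(0)
        H = (moved - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
        dR = Vt.T @ D @ U.T
        # Compose the incremental correction with the running estimate.
        R, t = dR @ R, dR @ t + mu_d - dR @ mu_s
    return R, t
```

The sketch makes the benchmark's point concrete: with little overlap or weak structure, the per-point association step is exactly where such methods degrade, which motivates matching surface structure instead.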
Intelligent Robots and Systems | 2010
Kaustubh Pathak; Dorit Borrmann; Jan Elseberg; Narunas Vaskevicius; Andreas Birk; Andreas Nüchter
The recently introduced minimally uncertain maximal consensus (MUMC) algorithm for 3D scene registration using planar patches is tested in a large outdoor urban setting without any prior motion estimate whatsoever. With the aid of a new overlap metric based on unmatched patches, the algorithm is shown to work successfully in most cases. The absolute accuracy of its computed result is corroborated for the first time by ground truth obtained using reflective markers. A few scan pairs failed to register; these are analyzed for the cause of failure by formulating two kinds of overlap metrics: one based on the actual overlapping surface area and another based on the extent of agreement of range-image pixels. We conclude that neither metric in isolation is able to predict all failures, but that both taken together are able to predict the difficulty level of a scan pair vis-à-vis registration by MUMC.
IEEE Robotics & Automation Magazine | 2009
Andreas Birk; Narunas Vaskevicius; Kaustubh Pathak; Sören Schwertfeger; Jann Poppinga; Heiko Bülow
In the context of the 2008 Lunar Robotics Challenge (LRC) of the European Space Agency (ESA), the Jacobs Robotics team investigated three-dimensional (3D) perception and modeling as an important basis of autonomy in unstructured domains. Concretely, the efficient modeling of the terrain via a 3D laser range finder (LRF) is addressed. The underlying fast extraction of planar surface patches can be used to improve the situational awareness of an operator or for path planning. 3D perception and modeling is an important basis for mobile-robot operations in planetary exploration scenarios, as it supports good situation awareness for motion-level teleoperation as well as higher-level intelligent autonomous functions. It is hence desirable to get long-range 3D data with high resolution, a large field of view, and very fast update rates. 3D LRFs have a high potential in this respect. In addition, 3D LRFs can operate under conditions where standard vision-based methods fail, e.g., under extreme light conditions. However, it is nontrivial to transmit the huge amount of data delivered by a 3D LRF to an operator station or to use this point-cloud data as a basis for higher-level intelligent functions. Based on our participation in the LRC of the ESA, it is shown how the huge amount of 3D point-cloud data from a 3D LRF can be tremendously reduced. Concretely, large sets of points are replaced by planar surface patches that are fitted into the data in an optimal way. The underlying computations are very efficient and hence suited for online computation onboard the robot.
Robotics and Autonomous Systems | 2015
Răzvan-George Mihalyi; Kaustubh Pathak; Narunas Vaskevicius; Tobias Fromm; Andreas Birk
An approach for generating textured 3D models of objects without the need for complex infrastructure, such as turn-tables or high-end sensors on precisely controlled rails, is presented. The method is inexpensive, as it uses only a low-cost RGBD sensor, e.g., a Microsoft Kinect or ASUS Xtion, and Augmented Reality (AR) markers printed on paper sheets. The sensor can be moved by hand by an untrained person, and the AR markers can be placed arbitrarily in the scene, thus allowing the modeling of objects over a wide range of sizes. Due to the use of the simple AR markers, the method is significantly more robust than using the RGBD sensor or a monocular camera alone, and it hence avoids the manual post-processing typically needed by alternative approaches such as KinectFusion, 123D Catch, or Photosynth. This article has two main contributions: first, the development of a simple, inexpensive method for the quick and easy digitization of physical objects; second, the development of an uncertainty model for AR-marker pose estimation. The latter is of interest beyond the object-modeling application presented here. The uncertainty model is used in a graph-based relaxation method to improve model consistency. Realistic modeling of various objects, such as parcels, sport balls, coffee sacks, and human dolls, is experimentally demonstrated. Good model accuracy is shown for several ground-truth objects with simple geometries and known dimensions. Furthermore, it is shown that the models obtained using the uncertainty model have fewer errors than the ones obtained without it.
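The benefit of an uncertainty model can be illustrated with the standard information-form fusion of several uncertain observations of the same quantity (e.g. a marker position seen from several views): each observation is weighted by its inverse covariance. This is only a sketch of the general principle, not the paper's graph-based relaxation; the function name is an assumption.

```python
import numpy as np

def fuse_estimates(means, covs):
    """Fuse several 3D estimates of the same quantity, weighted by
    their inverse covariances (information-form averaging)."""
    info = np.zeros((3, 3))   # accumulated information matrix
    vec = np.zeros(3)         # accumulated information vector
    for x, c in zip(means, covs):
        w = np.linalg.inv(c)  # information matrix of this observation
        info += w
        vec += w @ x
    fused_cov = np.linalg.inv(info)
    return fused_cov @ vec, fused_cov
```

A confident observation (small covariance) dominates the fused estimate, while a noisy one barely shifts it, which is precisely why modeling the AR-marker pose uncertainty yields more consistent models than unweighted averaging.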