Publications

Featured research published by Jann Poppinga.


IEEE Transactions on Robotics | 2010

Fast Registration Based on Noisy Planes With Unknown Correspondences for 3-D Mapping

Kaustubh Pathak; Andreas Birk; Narunas Vaskevicius; Jann Poppinga

We present a robot-pose-registration algorithm that is based entirely on large planar-surface patches extracted from point clouds sampled from a three-dimensional (3-D) sensor. This approach offers an alternative to the traditional point-to-point iterative-closest-point (ICP) algorithm, its point-to-plane variant, as well as newer grid-based algorithms, such as the 3-D normal distribution transform (NDT). The simpler case of known plane correspondences is tackled first by deriving expressions for least-squares pose estimation that take into account the plane-parameter uncertainty computed during plane extraction. Closed-form expressions for the covariances are also derived. To round off the solution, we present a new algorithm, called minimally uncertain maximal consensus (MUMC), which determines the unknown plane correspondences by maximizing geometric consistency, i.e., by minimizing the uncertainty volume in configuration space. Experimental results from three 3-D sensors are given, viz., a Swiss-Ranger, the University of South Florida Odetics Laser Detection and Ranging, and an actuated SICK S300. The first two have small fields of view (FOV) and moderate ranges, while the third has a much larger FOV and range. The experiments show that this approach is not only more robust than point- or grid-based approaches in plane-rich environments, but that it is also faster, requires significantly less memory, and offers a less-cluttered planar-patches-based visualization.
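In the known-correspondence case, the decoupling the abstract alludes to is easy to see: the rotation that aligns corresponding unit normals is a Wahba/Kabsch problem solvable in closed form via SVD, and the translation then follows from a small linear system over the plane offsets. The following NumPy sketch illustrates this under simplifying assumptions; it omits the plane-parameter covariance weighting and the covariance expressions derived in the paper, and the function name and the plane convention n·p = d are illustrative choices, not the paper's notation.

```python
import numpy as np

def register_from_planes(n_a, d_a, n_b, d_b):
    """Estimate (R, t) with p_b = R p_a + t from corresponding planes
    n^T p = d, given as N x 3 unit-normal arrays and length-N offsets."""
    # Rotation: Kabsch/Wahba fit aligning normals, R n_a[i] ~ n_b[i].
    H = n_a.T @ n_b                                   # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                # proper rotation, det = +1
    # Translation: the plane n^T p = d maps to m^T p = d + m^T t with m = R n,
    # so the offsets give the linear system n_b @ t = d_b - d_a.
    t, *_ = np.linalg.lstsq(n_b, d_b - d_a, rcond=None)
    return R, t
```

A unique translation needs at least three planes with linearly independent normals; in the paper, the MUMC stage supplies the correspondences and each pair is weighted by its extraction uncertainty.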


Intelligent Robots and Systems (IROS) | 2008

Fast plane detection and polygonalization in noisy 3D range images

Jann Poppinga; Narunas Vaskevicius; Andreas Birk; Kaustubh Pathak

A fast yet accurate approach for surface extraction from noisy 3D point clouds is presented. It consists of two parts, namely a plane-fitting and a polygonalization step. Both exploit the sequential nature of 3D data acquisition on mobile robots in the form of range images. For the plane fitting, this is used to revise the standard mathematical formulation into an incremental version, which allows computation in linear time. For the polygonalization, the neighborhood relation in range images is exploited. Experiments are presented using a time-of-flight range camera, a Swissranger SR-3000. Results include lab scenes as well as data from two runs of the rescue robot league at the RoboCup German Open 2007 with 1,414 and 2,343 sensor snapshots, respectively. The 36×10⁶ and 59×10⁶ points from the two point clouds are reduced to about 14×10³ and 23×10³ planes, respectively, with only about 0.2 s of total computation time per snapshot while the robot moves along. An uncertainty analysis of the computed plane parameters is presented as well.
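The incremental reformulation can be made concrete: a total-least-squares plane depends on the data only through three running moment sums, so absorbing one more point is a constant-time update and fitting n points is linear overall. A minimal sketch, with an illustrative class name and interface:

```python
import numpy as np

class IncrementalPlaneFit:
    """Running total-least-squares plane fit; add() is O(1) because only
    the moment sums are updated, so a fit can grow point by point without
    being recomputed from scratch."""

    def __init__(self):
        self.n = 0
        self.s = np.zeros(3)          # running sum of points
        self.ss = np.zeros((3, 3))    # running sum of outer products p p^T

    def add(self, p):
        p = np.asarray(p, dtype=float)
        self.n += 1
        self.s += p
        self.ss += np.outer(p, p)

    def plane(self):
        """Return (normal, d, mse) of the plane n^T x = d fitted so far."""
        mean = self.s / self.n
        cov = self.ss / self.n - np.outer(mean, mean)   # scatter about the mean
        w, v = np.linalg.eigh(cov)                      # ascending eigenvalues
        normal = v[:, 0]                                # direction of least scatter
        return normal, normal @ mean, w[0]              # w[0] = mean sq. residual
```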


Intelligent Robots and Systems (IROS) | 2009

Fast 3D mapping by matching planes extracted from range sensor point-clouds

Kaustubh Pathak; Narunas Vaskevicius; Jann Poppinga; Max Pfingsthorn; Sören Schwertfeger; Andreas Birk

This article addresses fast 3D mapping by a mobile robot in a predominantly planar environment. It rests on a novel pose-registration algorithm that works entirely by matching features composed of plane segments extracted from point clouds sampled from a 3D sensor. The approach has advantages in terms of robustness, speed, and storage compared to voxel-based approaches. Unlike previous approaches, the uncertainty in the plane parameters is used to compute the uncertainty in the pose obtained by scan registration. The algorithm is illustrated by creating a full 3D model of a multi-level robot testing arena.
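The propagation of plane-parameter uncertainty into pose uncertainty follows the usual first-order least-squares pattern. As a generic sketch (not the paper's closed-form decoupled expressions), the pose covariance is the inverse of the Gauss-Newton information matrix built from the stacked residual Jacobian and the measurement covariance:

```python
import numpy as np

def pose_covariance(J, meas_cov):
    """First-order covariance of a least-squares pose estimate: J is the
    residual Jacobian stacked over all matched planes, meas_cov the
    (block-diagonal) covariance of the extracted plane parameters, and
    cov(pose) = (J^T meas_cov^-1 J)^-1."""
    info = J.T @ np.linalg.solve(meas_cov, J)   # information matrix
    return np.linalg.inv(info)
```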


Intelligent Robots and Systems (IROS) | 2007

3D forward sensor modeling and application to occupancy grid based sensor fusion

Kaustubh Pathak; Andreas Birk; Jann Poppinga; Sören Schwertfeger

This paper presents a new technique for updating a probabilistic spatial occupancy grid map using a forward sensor model. Unlike the currently popular inverse sensor models, forward sensor models can be determined experimentally and represent sensor characteristics better. The formulation is applicable to both 2D and 3D range sensors and avoids some of the theoretical and practical problems of current approaches that use forward models. As an illustration of the procedure, a new prototype 3D forward sensor model is derived using a beam represented as a spherical sector. Furthermore, this model is used for the fusion of point clouds obtained from different 3D sensors, in particular time-of-flight sensors (Swissranger, laser range finders) and stereo vision cameras. Several techniques are described for an efficient data-structure representation and implementation. The range beams from the different sensors are fused in a common local Cartesian occupancy map. Experimental results of this fusion are presented and evaluated using a Hough transform performed on the grid.
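The inverse/forward distinction can be made concrete with a toy beam update: an inverse model writes a posterior occupancy for each cell directly from the reading z, while a forward model scores how likely z is under each hypothesis about the cell and updates with that likelihood ratio. The sketch below is a deliberately simplified 1-D stand-in, with independent cells, a Gaussian hit model, and a constant random-measurement floor; it is not the spherical-sector model derived in the paper.

```python
import numpy as np

def forward_update(logodds, idx, r_cells, z, sigma=0.05, p_rand=0.05, p_miss=0.2):
    """Update the log-odds of the cells (indices idx, at ranges r_cells along
    the beam) for a beam that returned range z, via the likelihood ratio
    P(z | cell occupied) / P(z | cell free) of a toy forward model."""
    hit = np.exp(-0.5 * ((z - r_cells) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    # An occupied cell well before the return should have blocked the beam, so
    # observing z there is unlikely (detection-failure probability p_miss);
    # near the return it explains z well; beyond it the evidence is neutral.
    p_occ = np.where(r_cells < z - 3 * sigma, p_rand * p_miss, p_rand + hit)
    p_free = p_rand                    # a free cell never explains z by itself
    logodds[idx] += np.log(p_occ / p_free)
    return logodds
```

Note the asymmetry this buys: cells beyond the return receive almost no update, which falls out of the likelihood ratio automatically instead of being hand-coded as in typical inverse models.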


Journal of Field Robotics | 2008

Hough-based Terrain Classification for Realtime Detection of Drivable Ground

Jann Poppinga; Andreas Birk; Kaustubh Pathak

The usability of mobile robots for surveillance, search, and rescue missions can be significantly improved by intelligent functionalities that decrease the cognitive load on the operator or even allow autonomous operation, e.g., when communication fails. Mobility in this regard is not only a mechatronic problem but also a perception, modeling, and planning challenge. Here, the perception problem of detecting drivable ground is addressed, an important capability for safety, security, and rescue robots, which have to operate in a vast range of unstructured, challenging environments. The simple yet efficient approach is based on the Hough transform of planes. The idea is to design the parameter space such that drivable surfaces can be easily detected by the number of hits in the bins corresponding to drivability. A decision tree over the bin properties increases robustness, as it makes it possible to handle uncertainties, especially sensor noise. In addition to the binary distinction of drivable/non-drivable ground, a classification of terrain types is possible. The algorithm is applied to 3D data obtained from two different sensors, namely a time-of-flight camera and a stereo camera. Experimental results are presented for indoor and outdoor terrains, demonstrating robust realtime detection of drivable ground. Seven datasets recorded under widely varying conditions are used, with about 6,800 snapshots of range data processed in total. It is shown that drivability can be robustly detected with success rates between 83% and 100%. Computation is extremely fast, on the order of 5 to 50 ms.
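The bin-design idea fits in a few lines: restrict the Hough parameter space to near-horizontal plane normals, let every point vote with its signed distance along each candidate normal, and call the patch drivable when one bin collects almost all the points. In the sketch below, the tilt/yaw grid, the thresholds, and the single min_fraction test are illustrative stand-ins for the paper's decision tree over bin properties; z is assumed to point up.

```python
import numpy as np

def is_drivable(points, tilts=(0, 5, 10, 15), yaws=range(0, 360, 30),
                d_res=0.05, min_fraction=0.8):
    """Hough-style drivability test over an N x 3 point array: vote signed
    distances d = n . p into bins for each near-vertical normal n and accept
    if the fullest bin holds at least min_fraction of the points."""
    pts = np.asarray(points, dtype=float)
    best = 0
    for tilt in np.radians(tilts):
        for yaw in np.radians(list(yaws) if tilt else [0.0]):
            n = np.array([np.sin(tilt) * np.cos(yaw),     # near-vertical normal
                          np.sin(tilt) * np.sin(yaw),
                          np.cos(tilt)])
            d = pts @ n                                   # signed plane offsets
            edges = np.arange(d.min(), d.max() + 2 * d_res, d_res)
            hist, _ = np.histogram(d, bins=edges)
            best = max(best, hist.max())
    return best >= min_fraction * len(pts)
```

Terrain classes beyond the binary decision can then be read off the same accumulator, e.g., from how the votes spread over neighboring bins.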


International Symposium on Safety, Security, and Rescue Robotics | 2007

Fast Detection of Polygons in 3D Point Clouds from Noise-Prone Range Sensors

Narunas Vaskevicius; Andreas Birk; Kaustubh Pathak; Jann Poppinga

3D sensing and modeling is increasingly important for mobile robotics in general and for safety, security, and rescue robotics (SSRR) in particular. To reduce the data and to allow efficient processing, e.g., with computational geometry algorithms, it is necessary to extract surface data from the 3D point clouds delivered by range sensors. A significant amount of work on this topic exists in the computer graphics community, but it relies on relatively exact point cloud data. As also shown by others, the sensors suited for mobile robots are very noise-prone, and standard approaches that use local processing on surface normals are doomed to fail. Plane fitting has hence been suggested as a solution by the robotics community. Here, a novel approach to this problem is presented. Its main feature is that it is based on region growing and that the underlying mathematics has been reformulated such that an incremental fit can be done, i.e., the best-fit surface does not have to be completely recomputed the moment a new point is investigated in the region-growing process. The worst-case complexity is O(n log(n)), but as shown in experiments it tends to scale linearly with typical data. Results with real-world data from a Swissranger time-of-flight camera are presented in which surface polygons are always successfully extracted within about 0.3 s.
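Combined with region growing, the incremental fit gives a breadth-first surface extraction over an organized range image (an H x W x 3 array of points): a 4-neighbor is absorbed while its orthogonal distance to the region's running best-fit plane stays below a threshold, and the plane is refit in O(1) from moment sums after every absorption. A sketch with illustrative names and a deliberately simple acceptance test; the paper's variant treats sensor noise and the O(n log(n)) bound more carefully.

```python
import numpy as np
from collections import deque

def grow_region(pts, seed, tau=0.02):
    """Grow one planar region from pixel `seed` in an H x W x 3 point image;
    returns a boolean H x W membership mask."""
    H, W, _ = pts.shape
    member = np.zeros((H, W), dtype=bool)
    n, s, ss = 0, np.zeros(3), np.zeros((3, 3))     # moment sums of the region
    queue = deque([seed])
    member[seed] = True
    while queue:
        r, c = queue.popleft()
        p = pts[r, c]
        n += 1; s += p; ss += np.outer(p, p)        # O(1) incremental update
        mean = s / n
        cov = ss / n - np.outer(mean, mean)
        _, v = np.linalg.eigh(cov)
        normal, d = v[:, 0], v[:, 0] @ mean         # current best-fit plane
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < H and 0 <= nc < W and not member[nr, nc]:
                if n < 3 or abs(normal @ pts[nr, nc] - d) < tau:
                    member[nr, nc] = True
                    queue.append((nr, nc))
    return member
```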


Robot Soccer World Cup (RoboCup) | 2010

A characterization of 3D sensors for response robots

Jann Poppinga; Andreas Birk; Kaustubh Pathak

Sensors that measure range information in more than a single plane are becoming increasingly important for mobile robots, especially for applications in unstructured environments such as response missions, where 3D perception and 3D mapping are of interest. Three such sensors are characterized here, namely a Hokuyo URG-04LX laser scanner actuated by a servo in a pitching motion, a Videre STOC stereo camera, and a Swissranger SR-3000. The three devices serve as prototypical examples of the corresponding technologies, i.e., 3D laser scanners, stereo vision, and time-of-flight cameras.


IEEE Robotics & Automation Magazine | 2009

3-D perception and modeling

Andreas Birk; Narunas Vaskevicius; Kaustubh Pathak; Sören Schwertfeger; Jann Poppinga; Heiko Bülow

In the context of the 2008 Lunar Robotics Challenge (LRC) of the European Space Agency (ESA), the Jacobs Robotics team investigated three-dimensional (3-D) perception and modeling as an important basis of autonomy in unstructured domains. Concretely, the efficient modeling of terrain via a 3D laser range finder (LRF) is addressed. The underlying fast extraction of planar surface patches can be used to improve the situational awareness of an operator or for path planning. 3D perception and modeling is an important basis for mobile robot operations in planetary exploration scenarios, as it supports good situation awareness for motion-level teleoperation as well as higher-level intelligent autonomous functions. It is hence desirable to obtain long-range 3D data with high resolution, a large field of view, and very fast update rates. 3D LRFs have high potential in this respect. In addition, they can operate under conditions where standard vision-based methods fail, e.g., under extreme lighting conditions. However, it is nontrivial to transmit the huge amount of data delivered by a 3D LRF to an operator station, or to use this point cloud data as the basis for higher-level intelligent functions. Based on our participation in the LRC of the ESA, it is shown how the huge amount of 3D point cloud data from a 3D LRF can be tremendously reduced: large sets of points are replaced by planar surface patches that are fitted into the data in an optimal way. The underlying computations are very efficient and hence suited for online computation on board the robot.


Intelligent Robots and Systems (IROS) | 2008

Sub-pixel depth accuracy with a time of flight sensor using multimodal Gaussian analysis

Kaustubh Pathak; Andreas Birk; Jann Poppinga

Pixel-array-based 3D range sensors such as time-of-flight cameras (e.g., the Swissranger) are commonly used for spatial mapping. Analogous to a laser range finder beam, each pixel measures distances within its conical field of view. However, unlike laser range finders, these sensors use incoherent visible or near-visible light for range measurement. This, combined with their relatively long maximum ranges, means that the convenient thin-beam assumption can no longer be used to update an occupancy grid map with a probabilistic sensor model. We present an analysis of a scenario where a pixel beam is intersected by more than one object and where a unimodal probabilistic distribution assumption causes spurious objects to appear in the detected scene. This adversely affects path planning based on such maps, because perfectly good escape routes are blocked off. A two-step procedure based on a multimodal Gaussian mixture model is presented, which is able to detect multiple obstacles per pixel and hence ameliorates the problem. The results of experiments with a time-of-flight sensor are presented.
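The failure mode and the fix are easy to reproduce with a tiny one-dimensional mixture fit: averaging depth samples from a foreground edge at 1.0 m and a wall at 2.5 m yields a phantom obstacle near 1.7 m, whereas a two-component Gaussian mixture recovers both surfaces. Below is a generic textbook EM sketch for such a per-pixel fit, not the paper's two-step procedure.

```python
import numpy as np

def fit_depth_modes(z, k=2, iters=50):
    """Fit a k-component 1-D Gaussian mixture to the depth samples z that
    fall within one pixel's cone; returns (weights, means, std devs)."""
    z = np.asarray(z, dtype=float)
    mu = np.quantile(z, np.linspace(0.1, 0.9, k))       # spread initial means
    var = np.full(k, z.var() / k + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        lik = pi * np.exp(-0.5 * (z[:, None] - mu) ** 2 / var) \
              / np.sqrt(2 * np.pi * var)
        r = lik / (lik.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: re-estimate weights, means, and variances.
        nk = r.sum(axis=0) + 1e-12
        pi = nk / len(z)
        mu = (r * z[:, None]).sum(axis=0) / nk
        var = (r * (z[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return pi, mu, np.sqrt(var)
```

For example, on z = np.r_[np.random.normal(1.0, 0.03, 200), np.random.normal(2.5, 0.03, 100)], the fitted means land near 1.0 and 2.5, so both obstacles survive into the map instead of one phantom in between.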


Künstliche Intelligenz | 2010

Surface Representations for 3D Mapping

Andreas Birk; Kaustubh Pathak; Narunas Vaskevicius; Max Pfingsthorn; Jann Poppinga; Sören Schwertfeger

Point clouds, i.e., sets of 3D coordinates of surface point samples from obstacles, are the predominant representation for 3D mapping. They are the raw data format of most 3D sensors and the basis for state-of-the-art algorithms for 3D scan registration. It is argued here that point clouds have severe limitations, and a case is made for a necessary paradigm shift to surface-based representations. In addition to several conceptual arguments, it is shown how a surface-based approach can be used for fast and robust registration of 3D data without the need for robot motion estimates from other sensors. Concretely, a short overview of our own work, dubbed 3D Plane SLAM, is presented. It features the extraction of planes with uncertainties from 3D range scans. Two scans can then be registered by determining the correspondence set that maximizes the global rigid-body-motion constraint while finding the related optimal decoupled rotations and translations with their underlying uncertainties. The registered scans are embedded in pose-graph SLAM for loop closing and relaxation.
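The global rigid-body-motion constraint used during correspondence search has a compact core: a rotation preserves angles, so two candidate plane pairings can only both be correct if the angle between the two normals is the same in both scans. A greedy consensus over that pairwise test, shown below with illustrative names, is a far simpler stand-in for the uncertainty-volume-minimizing search of 3D Plane SLAM:

```python
import numpy as np

def angle(u, v):
    return np.arccos(np.clip(float(u @ v), -1.0, 1.0))

def greedy_consensus(n_a, n_b, cands, eps=np.radians(3.0)):
    """Given unit normals n_a, n_b of the two scans and candidate
    correspondences cands = [(i, j), ...] (plane i in scan A matches plane j
    in scan B), keep the candidates pairwise angle-consistent with the
    best-supported one."""
    def consistent(c1, c2):
        return abs(angle(n_a[c1[0]], n_a[c2[0]])
                   - angle(n_b[c1[1]], n_b[c2[1]])) < eps
    support = [[c2 for c2 in cands if c2 != c1 and consistent(c1, c2)]
               for c1 in cands]
    best = max(range(len(cands)), key=lambda i: len(support[i]))
    return [cands[best]] + support[best]
```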

Collaboration


An overview of Jann Poppinga's collaborations.

Top Co-Authors

Andreas Birk

Jacobs University Bremen

Heiko Bülow

Jacobs University Bremen
