
Publications

Featured research published by Frank Neuhaus.


Emerging Technologies and Factory Automation | 2009

Terrain drivability analysis in 3D laser range data for autonomous robot navigation in unstructured environments

Frank Neuhaus; Denis Dillenberger; Johannes Pellenz; Dietrich Paulus

Three-dimensional laser range finders provide autonomous systems with vast amounts of information. However, autonomous robots navigating in unstructured environments are usually not interested in every geometric detail of their surroundings. Instead, they require real-time information about the location of obstacles and the condition of drivable areas. In this paper, we first present grid-based algorithms for classifying regions as either drivable or not. In a subsequent step, drivable regions are further examined using a novel algorithm that determines the local terrain roughness. This information can be used by a path planning algorithm to decide whether to prefer a rough, muddy area or a plain street, which would not be possible with binary drivability information alone.
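
The two-stage idea above can be sketched as follows; this is a minimal illustration, not the paper's implementation, and the cell size and step-height threshold are assumed values:

```python
import numpy as np

def grid_drivability(points, cell=0.5, max_step=0.15):
    """Classify grid cells as drivable from the height spread of the 3D
    points falling into each cell, and attach a continuous roughness value.
    Cell size and step threshold are illustrative, not the paper's values."""
    cells = {}
    for x, y, z in points:
        key = (int(np.floor(x / cell)), int(np.floor(y / cell)))
        cells.setdefault(key, []).append(z)
    result = {}
    for key, zs in cells.items():
        zs = np.asarray(zs)
        spread = zs.max() - zs.min()              # cheap step-height proxy
        result[key] = {
            "drivable": bool(spread < max_step),  # binary obstacle test
            "roughness": float(zs.std()),         # graded terrain roughness
        }
    return result
```

A flat patch yields a drivable cell with low roughness, while a cell containing a tall step is rejected outright; the roughness value lets a planner rank the drivable cells.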


Robotics and Autonomous Systems | 2013

Probabilistic terrain classification in unstructured environments

Marcel Häselich; Marc Arends; Nicolai Wojke; Frank Neuhaus; Dietrich Paulus

Autonomous navigation in unstructured environments is a complex task and an active area of research in mobile robotics. Unlike urban areas with lanes, road signs, and maps, the environment around our robot is unknown and unstructured. Such an environment requires careful examination, as it is random and continuous and the number of possible perceptions and actions is infinite. We describe a terrain classification approach for our autonomous robot based on Markov random fields (MRFs) on fused 3D laser and camera image data. Our primary data structure is a 2D grid whose cells carry information extracted from sensor readings. All cells within the grid are classified and their surface is analyzed with regard to negotiability for wheeled robots. Knowledge of our robot's egomotion allows fusion of previous classification results with current sensor data in order to fill data gaps and regions outside the visibility of the sensors. We estimate egomotion by integrating information from an IMU, GPS measurements, and wheel odometry in an extended Kalman filter. In our experiments we achieve a recall of about 90% for detecting streets and obstacles. We show that our approach is fast enough to be used on autonomous mobile robots in real time.
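
The recursive fusion of previous classification results with current sensor data can be illustrated with a per-cell Bayes update; this is a minimal sketch with a hypothetical three-class labeling, not the paper's MRF formulation:

```python
import numpy as np

def fuse(prior, likelihood):
    """One recursive Bayes step: fuse a grid cell's previous class
    probabilities with the likelihood from the current fused laser/camera
    measurement (classes here are a hypothetical [street, rough, obstacle])."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Repeated observations of the same cell sharpen its class estimate.
belief = np.full(3, 1.0 / 3.0)           # uninformed prior
street_like = np.array([0.7, 0.2, 0.1])  # measurement favouring "street"
for _ in range(5):
    belief = fuse(belief, street_like)
```

After a handful of consistent measurements the cell's belief concentrates on one class, which is what lets previously seen regions stay classified once they leave the sensors' field of view.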


International Symposium on Safety, Security, and Rescue Robotics | 2010

Real-time 3D mapping of rough terrain: A field report from Disaster City

Johannes Pellenz; Dagmar Lang; Frank Neuhaus; Dietrich Paulus

Mobile systems for mapping and terrain classification are often tested on datasets of intact environments only. The behavior of the algorithms in unstructured environments is mostly unknown. In safety, security and rescue environments, robots have to handle much rougher terrain. Therefore, there is a need for 3D test data that also covers disaster scenarios. During the Response Robot Evaluation Exercise in March 2010 in Disaster City, College Station, Texas (USA), a comprehensive dataset was recorded containing the data of a 3D laser range finder, a GPS receiver, an IMU, and a color camera. We tested our algorithms for terrain classification and 3D mapping on the dataset, and will make the data available to give other researchers the chance to do the same. We believe that this data, captured at a well-known location, provides a valuable dataset for the USAR robotics community and increases the chances of obtaining comparable results.


International Conference on Image Processing | 2016

Localization and pose estimation of textureless objects for autonomous exploration missions

Nicolai Wojke; Frank Neuhaus; Dietrich Paulus

In this paper we describe an approach for the detection and pose estimation of colored objects with few or no textural features. The approach consists of two separate stages. First, we perform vision-based object detection and hypothesis filtering. Then, we estimate and validate the object's pose in 3D laser scans. For object detection we integrate image segmentation results from multiple viewpoints in a set-theoretic filter that provides a probabilistically sound estimate of the number of objects and their respective locations. For validation and pose estimation we search for the best pose by sampling from a geometric measurement model. The system has been validated during autonomous exploration missions in unstructured and space-like environments.
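
The pose-sampling idea can be sketched in miniature; here a 2D pose and a nearest-neighbour score stand in for the paper's geometric measurement model:

```python
import numpy as np

def score_pose(model, scan, pose):
    """Score a candidate object pose (here a 2D x, y, theta for brevity) by
    the mean nearest-neighbour distance between the transformed model points
    and the laser scan -- a simple stand-in for the measurement model."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    moved = model @ R.T + np.array([x, y])
    d = np.linalg.norm(moved[:, None, :] - scan[None, :, :], axis=2)
    return d.min(axis=1).mean()

def best_pose(model, scan, candidates):
    """Pick the sampled pose with the lowest model-to-scan distance."""
    return min(candidates, key=lambda p: score_pose(model, scan, p))
```

Sampling many candidate poses and keeping the best-scoring one both validates a detection (a good score exists) and estimates its pose (where that score occurs).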


Robot Soccer World Cup | 2011

Mixed 2D/3D perception for autonomous robots in unstructured environments

Johannes Pellenz; Frank Neuhaus; Denis Dillenberger; David Gossow; Dietrich Paulus

Autonomous robots in real-world applications have to deal with a complex 3D environment, but are often equipped only with standard 2D laser range finders (LRFs). By using the 2D LRF both for 2D localization and mapping (which can be done efficiently and precisely) and for 3D obstacle detection (which makes the robot move safely), a completely autonomous robot can be built with affordable 2D LRFs. We use the 2D LRF to perform particle filter based SLAM to generate a 2D occupancy grid, and the same LRF (moved by two servo motors) to acquire 3D scans and detect obstacles not visible in the 2D scans. The 3D data is analyzed with a recursive method based on principal component analysis (PCA), and the detected obstacles are recorded in a separate obstacle map. This obstacle map and the occupancy map are merged for path planning. Our solution was tested on our mobile system Robbie during the RoboCup Rescue competitions in 2008 and 2009, winning the mapping challenge at the world championship in 2008 and at the German Open in 2009. This shows that the benefit of a sensor can be increased dramatically by controlling it actively, and that mixed 2D/3D perception can be achieved efficiently with a standard 2D sensor.
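
The PCA-based obstacle test on a local patch of 3D points can be sketched as follows; the tilt threshold is an assumed value, and the recursive subdivision of the full method is omitted:

```python
import numpy as np

def patch_is_obstacle(patch, max_tilt_deg=30.0):
    """PCA on a local patch of 3D points: the direction of least variance
    approximates the surface normal; a normal tilted far from vertical marks
    the patch as an obstacle (the tilt threshold is an assumed value)."""
    centered = patch - patch.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                                    # smallest-variance axis
    tilt = np.degrees(np.arccos(min(1.0, abs(normal[2]))))
    return tilt > max_tilt_deg
```

A horizontal ground patch yields a near-vertical normal and passes, while a wall-like patch yields a horizontal normal and is flagged; flagged patches would then be written into the separate obstacle map.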


IEEE Intelligent Vehicles Symposium | 2015

Advanced 3-D trailer pose estimation for articulated vehicles

Christian Fuchs; Frank Neuhaus; Dietrich Paulus

When crafting driver assistance systems designed for truck/trailer combinations, knowledge about the position and orientation of a truck relative to the attached trailer is a vital prerequisite for any kinematic calculation and trajectory estimation. An advanced optical sensor system measuring the 3-D state of an attached two-axle trailer is proposed in this publication. It uses a Kalman filter for enhanced pose estimation and is evaluated against previous versions of the sensor system for the same purpose.
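
A minimal version of the Kalman filtering step might look like this; the constant-velocity model and the noise levels are illustrative assumptions, not the published calibration:

```python
import numpy as np

def kf_step(x, P, z, dt=0.1, q=1e-3, r=0.05):
    """One predict/update cycle of a Kalman filter on the trailer
    articulation angle (state [angle, angular_rate]); model and noise
    values q, r are illustrative, not the paper's."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # only the angle is measured
    # predict
    x = F @ x
    P = F @ P @ F.T + q * np.eye(2)
    # update with the optical angle measurement z
    S = (H @ P @ H.T).item() + r            # innovation covariance
    K = (P @ H.T).ravel() / S               # Kalman gain
    x = x + K * (z - H @ x).item()
    P = (np.eye(2) - np.outer(K, H.ravel())) @ P
    return x, P
```

Feeding the filter a steady measured hitch angle drives the state estimate to that angle with near-zero rate, smoothing out the frame-to-frame noise of the raw optical measurement.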


Pattern Recognition and Image Analysis | 2015

Geometric features for robust registration of point clouds

A. Mützel; Frank Neuhaus; Dietrich Paulus

Several feature detectors for 3D point data have been proposed in the literature and applied to various problems in computer vision and robotics. We use them to solve two fundamental problems in real-time robotics: the registration of laser scans and the detection of loops and places. We extend and modify existing feature detectors, combine them in a smart way, and create a system that solves these problems efficiently and better than existing solutions. We evaluate our system on datasets provided by other groups as well as our own data, and we compare our results to those obtained with other algorithms.
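
Once feature correspondences between two scans are established, the rigid registration step reduces to a least-squares fit; a standard Kabsch/SVD solution (a textbook building block, not the paper's full pipeline) looks like this:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid motion (R, t) mapping matched points src -> dst
    via the Kabsch/SVD algorithm. In the paper's pipeline the matches would
    come from the extended 3D feature detectors; here they are given."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

With noise-free correspondences this recovers the scan-to-scan motion exactly; with real feature matches it is typically wrapped in an outlier-rejection loop.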


Robotics and Biomimetics | 2014

Markov random field terrain classification of large-scale 3D maps

Marcel Häselich; Benedikt Jobgen; Frank Neuhaus; Dagmar Lang; Dietrich Paulus

Simultaneous localization and mapping, drivability classification of the terrain, and path planning represent three major research areas in the field of autonomous outdoor robotics. Unstructured environments in particular require careful examination, as they are unknown and continuous and the number of possible actions for the robot is infinite. We present an approach to create a semantic 3D map with drivability information for wheeled robots using a terrain classification algorithm. Our robot is equipped with a 3D laser range finder, a Velodyne HDL-64E, as its primary sensor. For the registration of the point clouds, we use a featureless 3D correlative scan matching algorithm, an adaptation of the 2D algorithm presented by Olson. Every 3D laser scan is additionally classified with a Markov random field based terrain classification algorithm. Our data structure for the terrain classification approach is a 2D grid whose cells carry information extracted from the laser range finder data. All cells within the grid are classified and their surface is analyzed regarding its drivability for wheeled robots. The main contribution of this work is the novel combination of these two algorithms, which yields classified 3D maps with obstacle and drivability information. The newly created semantic map is thereby well suited for generic path planning applications for all kinds of wheeled robots. We evaluate our algorithms on large datasets with more than 137 million 3D points annotated by multiple human experts. All datasets are published online and provided for the community.
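
The map-building step, which combines the scan-matching pose with per-scan classification results, can be sketched as a majority vote per map cell; class names and cell size here are illustrative:

```python
from collections import Counter
import numpy as np

def integrate_scan(label_map, points, labels, R, t, cell=0.5):
    """Insert one terrain-classified scan into a global map: points are moved
    into the map frame with the scan-matching pose (R, t), each cell keeps
    per-class counts, and its label is the majority vote (simplified sketch)."""
    world = points @ R.T + t
    for p, lab in zip(world, labels):
        key = (int(np.floor(p[0] / cell)), int(np.floor(p[1] / cell)))
        label_map.setdefault(key, Counter())[lab] += 1
    return {k: c.most_common(1)[0][0] for k, c in label_map.items()}
```

Accumulating counts instead of overwriting labels makes the map robust to occasional misclassifications in individual scans.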


International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications | 2018

Combining 2D to 2D and 3D to 2D Point Correspondences for Stereo Visual Odometry

Stephan Manthe; Adrian Carrio; Frank Neuhaus; Pascual Campoy; Dietrich Paulus

Self-localization and motion estimation are requisite skills for autonomous robots, enabling them to navigate without relying on external positioning systems. Autonomous navigation can be achieved with a stereo camera on board the robot. In this work a stereo visual odometry algorithm is developed that uses FAST features in combination with the rotated BRIEF descriptor and an approach for feature tracking. For motion estimation we utilize 3D to 2D point correspondences as well as 2D to 2D point correspondences. First we estimate an initial relative pose by decomposing the essential matrix. We then refine the initial motion estimate by solving an optimization problem that minimizes the reprojection error together with a cost function based on the epipolar constraint. The second cost function lets us also exploit useful information from 2D to 2D point correspondences. Finally, we evaluate the implemented algorithm on the well-known KITTI and EuRoC datasets.
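
The two cost terms of the refinement can be illustrated for a single correspondence; this sketch uses normalized image coordinates and the standard Sampson approximation, which may differ in detail from the implemented cost:

```python
import numpy as np

def skew(v):
    """Cross-product matrix, used to build E = [t]x R from a relative pose."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def epipolar_residual(E, x1, x2):
    """Sampson (first-order) epipolar error for one normalized 2D-to-2D
    correspondence x1 <-> x2 in homogeneous coordinates."""
    Ex1, Etx2 = E @ x1, E.T @ x2
    num = (x2 @ E @ x1) ** 2
    den = Ex1[0]**2 + Ex1[1]**2 + Etx2[0]**2 + Etx2[1]**2
    return num / den

def reprojection_residual(R, t, X, uv):
    """Squared reprojection error of a triangulated 3D point X against its
    observation uv in normalized image coordinates: the 3D-to-2D cost term."""
    p = R @ X + t
    return float(np.sum((p[:2] / p[2] - uv) ** 2))
```

For a correspondence consistent with the true relative pose both residuals vanish; summing them over all matches yields the combined objective that is minimized during refinement.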


International Conference on Machine Vision | 2018

High-resolution hyperspectral ground mapping for robotic vision

Christian Fuchs; Frank Neuhaus; Dietrich Paulus

Recently released hyperspectral cameras use large, mosaiced filter patterns to capture different ranges of the light’s spectrum in each of the camera’s pixels. Spectral information is sparse, as it is not fully available in each location. We propose an online method that avoids explicit demosaicing of camera images by fusing raw, unprocessed, hyperspectral camera frames inside an ego-centric ground surface map. It is represented as a multilayer heightmap data structure, whose geometry is estimated by combining a visual odometry system with either dense 3D reconstruction or 3D laser data. We use a publicly available dataset to show that our approach is capable of constructing an accurate hyperspectral representation of the surface surrounding the vehicle. We show that in many cases our approach increases spatial resolution over a demosaicing approach, while providing the same amount of spectral information.
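
The demosaicing-free fusion can be sketched as a running per-band mean in each ground cell; the projection of pixels onto the ground surface is assumed given, and the grid resolution and band indexing are illustrative:

```python
import numpy as np

def fuse_raw_samples(surface, ground_xy, values, band_ids, cell=0.1):
    """Fuse raw mosaiced pixels into an ego-centric ground map without
    demosaicing: each pixel carries one spectral band sample, and the ground
    cell it projects onto keeps a running (count, mean) per band."""
    for (x, y), v, b in zip(ground_xy, values, band_ids):
        key = (int(np.floor(x / cell)), int(np.floor(y / cell)))
        bands = surface.setdefault(key, {})
        n, mean = bands.get(b, (0, 0.0))
        bands[b] = (n + 1, mean + (v - mean) / (n + 1))  # incremental mean
    return surface
```

Because every raw pixel lands in a cell at its true ground location, spectral information accumulates at the map's spatial resolution rather than being averaged away by an explicit demosaicing step.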

Collaboration

Frank Neuhaus's top co-authors.

Top Co-Authors (all University of Koblenz and Landau):

Dietrich Paulus
Christian Fuchs
Johannes Pellenz
Christian Winkens
Dagmar Lang
Denis Dillenberger
Marcel Häselich
Nicolai Wojke
Stephan Manthe
A. Mützel