Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Michael Himmelsbach is active.

Publication


Featured research published by Michael Himmelsbach.


Proceedings of the IEEE | 2012

Autonomous Ground Vehicles—Concepts and a Path to the Future

Thorsten Luettel; Michael Himmelsbach; Hans-Joachim Wuensche

Autonomous vehicles promise numerous improvements to vehicular traffic: an increase in both highway capacity and traffic flow because of faster response times, less fuel consumption and pollution thanks to more foresighted driving, and hopefully fewer accidents thanks to collision avoidance systems. In addition, drivers can save time for more useful activities. In order for these vehicles to safely operate in everyday traffic or in harsh off-road environments, a multitude of problems in perception, navigation, and control have to be solved. This paper gives an overview of the most current trends in autonomous vehicles, highlighting the concepts common to most successful systems as well as their differences. It concludes with an outlook into the promising future of autonomous vehicles.


IEEE Intelligent Vehicles Symposium | 2010

Fast segmentation of 3D point clouds for ground vehicles

Michael Himmelsbach; Felix v. Hundelshausen; Hans-Joachim Wuensche

This paper describes a fast method for segmentation of large-size long-range 3D point clouds that especially lends itself to later classification of objects. Our approach is targeted at high-speed autonomous ground robot mobility, so real-time performance of the segmentation method plays a critical role. This is especially true as segmentation is considered only a necessary preliminary for the more important task of object classification, which is itself computationally very demanding. Efficiency is achieved in our approach by splitting the segmentation problem into two simpler subproblems of lower complexity: local ground plane estimation followed by fast 2D connected components labeling. The method's performance is evaluated on real data acquired in different outdoor scenes, and the results are compared to those of existing methods. We show that our method requires less runtime while at the same time yielding segmentation results that are better suited for later classification of the identified objects.
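
The two-stage split can be sketched roughly as follows. This is a minimal toy version with a flat-ground assumption and hypothetical cell sizes and thresholds, not the paper's local ground plane estimation:

```python
# Toy two-stage segmentation: (1) a trivial ground estimate (everything
# near z = 0 is ground; the paper instead fits local ground planes),
# then (2) 4-connected components labeling of obstacle points on a
# coarse 2D grid. All parameters are illustrative.
from collections import deque

def segment_points(points, cell=0.5, ground_max_z=0.2):
    """points: list of (x, y, z). Returns {label: [point, ...]}."""
    # 1) trivial ground removal
    obstacles = [p for p in points if p[2] > ground_max_z]

    # 2) rasterize obstacle points into 2D grid cells
    grid = {}
    for p in obstacles:
        key = (int(p[0] // cell), int(p[1] // cell))
        grid.setdefault(key, []).append(p)

    # 3) 4-connected components labeling over occupied cells (BFS)
    labels, segments, next_label = {}, {}, 0
    for start in grid:
        if start in labels:
            continue
        queue = deque([start])
        labels[start] = next_label
        while queue:
            cx, cy = queue.popleft()
            segments.setdefault(next_label, []).extend(grid[(cx, cy)])
            for nb in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if nb in grid and nb not in labels:
                    labels[nb] = next_label
                    queue.append(nb)
        next_label += 1
    return segments
```

Splitting the work this way keeps the per-point cost low: ground filtering is linear, and the components labeling touches only occupied 2D cells rather than individual 3D points.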


Künstliche Intelligenz | 2011

Autonomous Off-Road Navigation for MuCAR-3 Improving the Tentacles Approach: Integral Structures for Sensing and Motion

Michael Himmelsbach; Thorsten Luettel; Falk Hecker; Felix von Hundelshausen; Hans-Joachim Wuensche

This report gives an overview of the autonomous navigation approach developed for the ground robot MuCAR-3, partly as a satellite project in the CoTeSys cluster of excellence. Safe robot navigation in general demands that the navigation approach also cope with situations where GPS data is noisy or even absent, and hence great care must be taken when using global map information. In such situations, choosing a safe action should be tightly coupled to the perception of the immediate surroundings. The tentacles approach developed earlier in the project efficiently deals with these issues by introducing integral structures for sensing and motion. This report presents the extensions and improvements made to the tentacles approach during the progress of the project and the results obtained at various challenging robot competitions.
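
Geometrically, the "tentacles" are precomputed arcs probing the space ahead of the vehicle. A minimal sketch of generating such an arc set (hypothetical curvature range, length, and sampling; the real system uses speed-dependent tentacle sets):

```python
# Generate a fan of circular-arc tentacles in the vehicle frame.
# Each tentacle is a list of (x, y) points along an arc of constant
# curvature k; curvature is varied uniformly across the set.
import math

def make_tentacles(n=5, length=10.0, max_curvature=0.2, samples=20):
    """Returns a list of tentacles, each a list of (x, y) points."""
    tentacles = []
    for i in range(n):
        k = -max_curvature + 2 * max_curvature * i / (n - 1)
        points = []
        for j in range(1, samples + 1):
            s = length * j / samples        # arc length travelled
            if abs(k) < 1e-9:               # straight tentacle
                points.append((s, 0.0))
            else:                           # constant-curvature arc
                points.append((math.sin(k * s) / k,
                               (1 - math.cos(k * s)) / k))
        tentacles.append(points)
    return tentacles
```

Because the same point sets serve both as sensing windows (which obstacles fall on a tentacle) and as motion primitives (which arc to drive), perception and action share one structure.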


IEEE Intelligent Vehicles Symposium | 2010

Fusing vision and LIDAR - Synchronization, correction and occlusion reasoning

Sebastian Schneider; Michael Himmelsbach; Thorsten Luettel; Hans-Joachim Wuensche

Autonomous navigation in unstructured environments like forest or country roads with dynamic objects remains a challenging task, particularly with respect to the perception of the environment using multiple different sensors.


IEEE Intelligent Vehicles Symposium | 2009

A model-based object following system

A. Muller; M. Manz; Michael Himmelsbach; Hans-Joachim Wünsche

In this paper we describe an object following system for ground robot mobility which incorporates LIDAR-based object perception and model-based lane estimation into control signal generation. The approach enables our autonomous ground vehicle MuCAR-3 (see Fig. 1) to safely follow an object even on curved, narrow roads without using GPS or any prior environmental information at all, and to push the follower vehicle backwards in case of dead ends or blocked roads. The effectiveness of this approach originates from a tight coupling between object recognition and control signal generation. Objects are detected, classified and tracked using a unique combination of 3D point clouds and a 2½D occupancy grid. With the object information gained, a Kalman filter is used for lane estimation. Furthermore, to cope with the problem of local obstacle avoidance, a set of drivable primitives, called tentacles, is integrated into the system. Using parameters from both, a controller generates an appropriate control signal for the underlying vehicle control circuits. With this approach we are able to demonstrate smooth steering behavior at speeds up to 20 m/s while following an object even in rough terrain with high precision. The system was tested in various urban and non-urban scenarios such as inner-city traffic with crossings including stop lights, as well as roundabouts and pedestrian areas, all of which require accurate lane estimation.
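
The lane estimation step relies on a Kalman filter. A minimal 1D constant-velocity filter of that general kind (illustrative noise values and time step, not the MuCAR-3 parameters) looks like:

```python
# 1D constant-velocity Kalman filter: state is (position, velocity),
# measurements are noisy positions of the tracked object. The predict
# step applies F = [[1, dt], [0, 1]]; the update step corrects with a
# scalar position measurement (H = [1, 0]).

class KalmanCV1D:
    def __init__(self, q=0.01, r=0.5):
        self.x, self.v = 0.0, 0.0           # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # covariance
        self.q, self.r = q, r               # process / measurement noise

    def step(self, z, dt=0.1):
        # predict with constant-velocity model: P <- F P F^T + Q
        self.x += self.v * dt
        P = self.P
        P00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        P01 = P[0][1] + dt * P[1][1]
        P10 = P[1][0] + dt * P[1][1]
        P11 = P[1][1] + self.q
        # update with position measurement z
        S = P00 + self.r                    # innovation covariance
        k0, k1 = P00 / S, P10 / S           # Kalman gains
        y = z - self.x                      # innovation
        self.x += k0 * y
        self.v += k1 * y
        self.P = [[(1 - k0) * P00, (1 - k0) * P01],
                  [P10 - k1 * P00, P11 - k1 * P01]]
        return self.x
```

Fed position measurements of a leader vehicle, the filter smooths them and supplies a velocity estimate the controller can act on.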


IEEE/RSJ International Conference on Intelligent Robots and Systems | 2011

Detection and tracking of road networks in rural terrain by fusing vision and LIDAR

Michael Manz; Michael Himmelsbach; Thorsten Luettel; Hans-Joachim Wuensche

The ability to perceive a robot's local environment is one of the main challenges in the development of mobile ground robots. Here, we present a robust model-based approach for detection and tracking of road networks in rural terrain. To get a rich environment representation, we fuse the complementary data provided by a 3D LIDAR and an active camera platform into an accumulated, colored 3D elevation map of the terrain. Additionally, we use commercially available GIS data to get a rough idea of the geometry of the road network ahead of the robot. This way, the system is able to dynamically adjust the geometric model used within a particle filter framework for both detection and estimation of the road network's geometry. The estimation process makes use of edge- and region-based image features as well as obstacle information, all supplied by the dense terrain map. Instead of tuning the likelihood functions used within the particle filter's cue fusion concept by hand, as is commonly done, we apply supervised learning techniques to derive an appropriate weighting of all features. We finally show that the proposed approach enables our ground robot MuCAR-3 to autonomously navigate on rural- and dirt-road networks.
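
The weighted cue fusion inside such a particle filter can be sketched as a log-linear combination of per-cue scores. The feature names and weights below are hypothetical; in the paper the weighting is learned from labeled training data:

```python
# Weight particles by combining several cue likelihoods, each raised to
# a learned importance weight (equivalently, a weighted sum of
# log-likelihoods), then normalize the weights to sum to one.
import math

def weight_particles(particles, cues, weights):
    """particles: list of road hypotheses;
    cues: dict name -> score function in (0, 1];
    weights: dict name -> learned importance.
    Returns normalized particle weights."""
    raw = []
    for p in particles:
        log_w = sum(weights[name] * math.log(max(score(p), 1e-9))
                    for name, score in cues.items())
        raw.append(math.exp(log_w))
    total = sum(raw)
    return [w / total for w in raw]
```

Learning the weights rather than hand-tuning them lets cues that discriminate well (e.g. a strong edge response) dominate the posterior automatically.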


IEEE International Conference on Intelligent Transportation Systems | 2011

GIS-based topological robot localization through LIDAR crossroad detection

Andre Mueller; Michael Himmelsbach; Thorsten Luettel; Felix von Hundelshausen; Hans-Joachim Wuensche

While navigating in areas with weak or erroneous GPS signals such as forests or urban canyons, correct map localization is impeded by contradicting position hypotheses. Thus, instead of just utilizing GPS positions improved by the robot's ego-motion, this paper's approach incorporates crossroad measurements given by the robot's perception system, together with topological information associated with crossroads within a pre-defined road network, into the localization process. We thus propose a new algorithm for crossroad detection in LIDAR data that examines the free space between obstacles in an occupancy grid, in combination with a Kalman filter for data association and tracking. Hence, rather than correcting a robot's position by just incorporating the robot's ego-motion in the absence of GPS signals, our method aims at data association and correspondence finding between detected real-world structures and their counterparts in predefined, possibly even handcrafted, digital maps.
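
The free-space idea can be illustrated on a toy occupancy grid (hypothetical encoding: 0 = free, 1 = occupied; the actual detector works on LIDAR occupancy grids and tracks detections over time): cast rays in all directions from the robot cell and group long free rays into corridors, where three or more corridors suggest a crossroad.

```python
# Count free-space corridors around a grid cell by ray casting.
# A corridor is a contiguous (circular) group of rays that stay free
# for at least `min_range` cells.
import math

def count_corridors(grid, cx, cy, min_range=3, n_rays=72):
    rows, cols = len(grid), len(grid[0])
    free = []
    for i in range(n_rays):
        a = 2 * math.pi * i / n_rays
        dist = 0
        while True:
            x = int(round(cx + (dist + 1) * math.cos(a)))
            y = int(round(cy + (dist + 1) * math.sin(a)))
            if not (0 <= x < cols and 0 <= y < rows) or grid[y][x]:
                break
            dist += 1
        free.append(dist >= min_range)
    # count circular groups of adjacent long rays
    corridors = sum(1 for i in range(n_rays) if free[i] and not free[i - 1])
    if corridors == 0 and all(free):
        corridors = 1                      # fully open surroundings
    return corridors
```

Each detected branch count then serves as a topological observation that can be matched against crossroads in the digital road map.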


Autonome Mobile Systeme | 2009

Fusing LIDAR and Vision for Autonomous Dirt Road Following

Michael Manz; Michael Himmelsbach; Thorsten Luettel; Hans-Joachim Wuensche

In this paper we describe how visual features can be incorporated into the well-known tentacles approach [1], which up to now has only used LIDAR and GPS data and was therefore limited to scenarios with significant obstacles or non-flat surfaces along roads. In addition, we present a visual feature considering only color intensity which can be used to visually rate tentacles. The presented sensor fusion and color-based feature were both applied with great success at the C-ELROB 2009 robotic competition.
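
A toy version of such a color-intensity rating (hypothetical image representation and road reference value; the actual feature operates on camera images registered to the tentacle geometry): each tentacle is scored by how closely the pixels it covers match a reference road intensity, and the best-scoring tentacle is preferred.

```python
# Rate tentacles by mean deviation of their pixels from a reference
# road intensity; a score of 1.0 means a perfect match.

def rate_tentacles(image, tentacles, road_intensity=0.5):
    """image: 2D list of intensities in [0, 1];
    tentacles: dict name -> list of (row, col) pixels it covers.
    Returns (best_name, scores)."""
    scores = {}
    for name, pixels in tentacles.items():
        diffs = [abs(image[r][c] - road_intensity) for r, c in pixels]
        scores[name] = 1.0 - sum(diffs) / len(diffs)
    best = max(scores, key=scores.get)
    return best, scores
```

This lets flat, obstacle-free dirt roads that are invisible to a LIDAR-only rating still attract the vehicle through their appearance.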


IEEE Intelligent Vehicles Symposium | 2016

A new geometric 3D LiDAR feature for model creation and classification of moving objects

Michael Kusenbach; Michael Himmelsbach; Hans-Joachim Wuensche

In this paper, we introduce a new geometric 3D feature combined with a clustering approach. Besides 3D data provided by a LiDAR point cloud, reflectivity information is used to further enhance the descriptiveness of the feature. The proposed feature can be extracted and compared in real-time. Similar parts of an object, such as features belonging to an automobile headlight, are automatically clustered in an object model without explicit specification. Additionally, we provide a method for autonomous vehicles to automatically learn the shapes of observed moving objects and use them for real-time classification. The resulting object models, consisting of the extracted feature clusters, are interpretable by humans.
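
An illustrative greedy clustering of feature descriptors (hypothetical distance threshold and descriptor layout; the paper clusters its geometric LiDAR/reflectivity features into reusable object-model parts): each descriptor joins the first cluster whose centroid is within `tau`, otherwise it starts a new cluster.

```python
# Greedy online clustering: descriptors of similar parts (e.g. the two
# headlights of a car) end up in the same cluster without any explicit
# part labels being given.

def cluster_features(descriptors, tau=1.0):
    """descriptors: list of equal-length tuples.
    Returns a list of clusters, each {'centroid': ..., 'members': [...]}."""
    clusters = []
    for d in descriptors:
        for c in clusters:
            dist = sum((a - b) ** 2 for a, b in zip(d, c["centroid"])) ** 0.5
            if dist <= tau:
                c["members"].append(d)
                n = len(c["members"])
                # recompute the centroid as the mean of all members
                c["centroid"] = tuple(sum(m[i] for m in c["members"]) / n
                                      for i in range(len(d)))
                break
        else:
            clusters.append({"centroid": d, "members": [d]})
    return clusters
```

Because the clusters are formed in descriptor space, the resulting model parts remain inspectable: one can look at which measurements each cluster groups together.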


IEEE International Conference on Robotics and Automation | 2012

Active perception for autonomous vehicles

Alois Unterholzner; Michael Himmelsbach; Hans-Joachim Wuensche

Precise perception of a vehicle's surroundings is crucial for safe autonomous driving. It requires a high sensor resolution and a large field of view. Active perception, i.e. the redirection of a sensor's focus of attention, is an approach that provides both. With active perception, however, the selection of an appropriate sensor orientation becomes necessary. This paper presents a method for determining the sensor orientation in urban traffic scenarios based on three criteria: the importance of traffic participants w.r.t. the current situation, the available information about traffic participants when considering alternative sensor orientations, as well as sensor coverage of the vehicle's relevant surrounding area.
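
Selecting a gaze direction from candidate orientations can be sketched as a weighted multi-criteria score (hypothetical criterion values and weights; the paper's criteria are the situation-dependent importance of traffic participants, the information available under each orientation, and area coverage):

```python
# Score each candidate sensor orientation as a weighted sum of its
# three criterion values and return the best one.

def select_orientation(candidates, weights=(0.5, 0.3, 0.2)):
    """candidates: dict angle -> (importance, info_gain, coverage),
    each criterion normalized to [0, 1]. Returns the best angle."""
    def score(angle):
        return sum(w * c for w, c in zip(weights, candidates[angle]))
    return max(candidates, key=score)
```

In this sketch, a candidate pointing at a highly important traffic participant wins even if another direction offers slightly better coverage.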

Collaboration


Dive into Michael Himmelsbach's collaboration.

Top Co-Authors


Thorsten Luettel

Bundeswehr University Munich
