Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Michael Beetz is active.

Publication


Featured research published by Michael Beetz.


International Conference on Robotics and Automation (ICRA) | 2009

Fast Point Feature Histograms (FPFH) for 3D registration

Radu Bogdan Rusu; Nico Blodow; Michael Beetz

In our recent work [1], [2], we proposed Point Feature Histograms (PFH) as robust multi-dimensional features which describe the local geometry around a point p for 3D point cloud datasets. In this paper, we modify their mathematical expressions and perform a rigorous analysis on their robustness and complexity for the problem of 3D registration for overlapping point cloud views. More concretely, we present several optimizations that reduce their computation times drastically by either caching previously computed values or by revising their theoretical formulations. The latter results in a new type of local features, called Fast Point Feature Histograms (FPFH), which retain most of the discriminative power of the PFH. Moreover, we propose an algorithm for the online computation of FPFH features for real-time applications. To validate our results, we demonstrate their efficiency for 3D registration and propose a new sample consensus based method for bringing two datasets into the convergence basin of a local non-linear optimizer: SAC-IA (SAmple Consensus Initial Alignment).
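
The same coarse-to-fine idea can be reproduced with off-the-shelf tools. Below is a minimal sketch using the Open3D library, whose FPFH implementation follows this paper and whose feature-based RANSAC registration plays the role of SAC-IA; the input files, voxel size, and thresholds are illustrative assumptions, and the call signatures correspond to recent Open3D releases.

# Minimal sketch, assuming Open3D (pip install open3d); file names and parameters are illustrative.
import open3d as o3d

def preprocess(pcd, voxel=0.05):
    """Downsample, estimate normals, and compute FPFH descriptors."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return down, fpfh

source = o3d.io.read_point_cloud("view_a.pcd")   # hypothetical input clouds
target = o3d.io.read_point_cloud("view_b.pcd")
src, src_fpfh = preprocess(source)
tgt, tgt_fpfh = preprocess(target)

# Feature-based RANSAC plays the role of SAC-IA: it brings the two views into the
# convergence basin of a local optimizer, here point-to-plane ICP.
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src, tgt, src_fpfh, tgt_fpfh, True, 0.15,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(0.15)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
fine = o3d.pipelines.registration.registration_icp(
    src, tgt, 0.05, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
print(fine.transformation)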


The International Journal of Robotics Research | 2000

Probabilistic Algorithms and the Interactive Museum Tour-Guide Robot Minerva

Sebastian Thrun; Michael Beetz; Wolfram Burgard; Armin B. Cremers; Frank Dellaert; Dieter Fox; Dirk Hähnel; Charles R. Rosenberg; Nicholas Roy; Jamieson Schulte; Dirk Schulz

This paper describes Minerva, an interactive tour-guide robot that was successfully deployed in a Smithsonian museum. Minerva’s software is pervasively probabilistic, relying on explicit representations of uncertainty in perception and control. During 2 weeks of operation, the robot interacted with thousands of people, both in the museum and through the Web, traversing more than 44 km at speeds of up to 163 cm/sec in the unmodified museum.
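
As a purely illustrative aside (this is not Minerva's software), the following sketch shows the kind of explicit uncertainty representation the paper refers to: a discrete Bayes filter that keeps a probability distribution over the robot's position and updates it with a motion model and a sensor likelihood.

# Discrete Bayes filter over a 10-cell corridor; not Minerva's code, just the principle.
import numpy as np

belief = np.full(10, 0.1)                      # uniform prior over the robot's position

def predict(belief, shift=1, noise=0.1):
    """Motion update: shift the belief and mix in noise to model uncertain motion."""
    moved = np.roll(belief, shift)
    return (1 - noise) * moved + noise / belief.size

def correct(belief, likelihood):
    """Measurement update: reweight by the sensor likelihood and renormalize."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

likelihood = np.full(10, 0.1)
likelihood[3] = 0.9                            # sensor reading: "probably near cell 3"
belief = correct(predict(belief), likelihood)
print(belief.round(3))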


Robotics and Autonomous Systems | 2008

Towards 3D Point cloud based object maps for household environments

Radu Bogdan Rusu; Zoltan-Csaba Marton; Nico Blodow; Mihai Emanuel Dolha; Michael Beetz

This article investigates the problem of acquiring 3D object maps of indoor household environments, in particular kitchens. The objects modeled in these maps include cupboards, tables, drawers and shelves, which are of particular importance for a household robotic assistant. Our mapping approach is based on PCD (point cloud data) representations. Sophisticated interpretation methods operating on these representations eliminate noise and resample the data without deleting the important details, and interpret the improved point clouds in terms of rectangular planes and 3D geometric shapes. We detail the steps of our mapping approach and explain the key techniques that make it work. The novel techniques include statistical analysis, persistent histogram features estimation that allows for a consistent registration, resampling with additional robust fitting techniques, and segmentation of the environment into meaningful regions.
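
Two of the steps named above, statistical noise removal and robust plane fitting, can be sketched with Open3D as a stand-in for the authors' point cloud processing pipeline; the input file and thresholds below are illustrative assumptions.

# Minimal sketch, assuming Open3D; the input file and thresholds are illustrative.
import open3d as o3d

pcd = o3d.io.read_point_cloud("kitchen_scan.pcd")          # hypothetical kitchen scan

# Statistical analysis: drop points whose mean neighbor distance is an outlier.
filtered, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Robustly fit the dominant rectangular plane (e.g. a table top or cupboard face).
plane_model, inliers = filtered.segment_plane(distance_threshold=0.01,
                                              ransac_n=3,
                                              num_iterations=1000)
a, b, c, d = plane_model
print(f"plane {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0 with {len(inliers)} inliers")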


International Conference on Intelligent Robots and Systems (IROS) | 2008

Aligning point cloud views using persistent feature histograms

Radu Bogdan Rusu; Nico Blodow; Zoltan-Csaba Marton; Michael Beetz

In this paper we investigate the usage of persistent point feature histograms for the problem of aligning point cloud data views into a consistent global model. Given a collection of noisy point clouds, our algorithm estimates a set of robust 16D features which describe the geometry of each point locally. By analyzing the persistence of the features at different scales, we extract an optimal set which best characterizes a given point cloud. The resulting persistent features are used in an initial alignment algorithm to estimate a rigid transformation that approximately registers the input datasets. The algorithm provides good starting points for iterative registration algorithms such as ICP (Iterative Closest Point), by transforming the datasets into its convergence basin. We show that our approach is invariant to pose and sampling density, and can cope well with noisy data coming from both indoor and outdoor laser scans.
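
The persistence analysis can be approximated as follows: compute a local feature at several radii and keep only the points whose descriptors remain salient at every scale. In this hedged sketch, Open3D's FPFH descriptor stands in for the paper's 16D feature, and the radii and saliency threshold are illustrative.

# Sketch of the persistence analysis; FPFH stands in for the paper's 16D feature.
import numpy as np
import open3d as o3d

def persistent_indices(pcd, radii=(0.03, 0.05, 0.08), alpha=1.0):
    """Keep indices whose feature is far from the mean feature at every radius."""
    keep = None
    for r in radii:
        feat = np.asarray(o3d.pipelines.registration.compute_fpfh_feature(
            pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=r, max_nn=50)).data).T
        dist = np.linalg.norm(feat - feat.mean(axis=0), axis=1)
        salient = set(np.where(dist > dist.mean() + alpha * dist.std())[0])
        keep = salient if keep is None else keep & salient   # persistent across scales
    return sorted(keep)

pcd = o3d.io.read_point_cloud("view.pcd")                    # hypothetical input
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
print(len(persistent_indices(pcd)), "persistent feature points")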


International Conference on Intelligent Robots and Systems (IROS) | 2009

KNOWROB — knowledge processing for autonomous personal robots

Moritz Tenorth; Michael Beetz

Knowledge processing is an essential technique for enabling autonomous robots to do the right thing to the right object in the right way. Using knowledge processing, robots can achieve more flexible and general behavior and better performance. While knowledge representation and reasoning has been a well-established research field in Artificial Intelligence for several decades, little work has been done to design and realize knowledge processing mechanisms for use in the context of robotic control. In this paper, we report on KNOWROB, a knowledge processing system particularly designed for autonomous personal robots. KNOWROB is a first-order knowledge representation based on description logics that provides specific mechanisms and tools for action-centered representation, for the automated acquisition of grounded concepts through observation and experience, for reasoning about and managing uncertainty, and for fast inference — knowledge processing features that are particularly necessary for autonomous robot control.
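
KNOWROB itself is implemented on top of Prolog and description logics; purely as a conceptual illustration (not its actual API), the toy taxonomy below shows the kind of subsumption inference an action-centered knowledge base relies on.

# Toy taxonomy with transitive subsumption; conceptual only, not KNOWROB's API.
SUBCLASS_OF = {
    "Cup": "Container",
    "Container": "PhysicalObject",
    "PouringAction": "ManipulationAction",
}

def is_a(cls, ancestor):
    """Transitive subclass check: walk up the taxonomy until the ancestor is found."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = SUBCLASS_OF.get(cls)
    return False

print(is_a("Cup", "PhysicalObject"))        # True: a cup is a physical object
print(is_a("Cup", "ManipulationAction"))    # False: a cup is not an action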


The International Journal of Robotics Research | 2013

KnowRob: A knowledge processing infrastructure for cognition-enabled robots

Moritz Tenorth; Michael Beetz

Autonomous service robots will have to understand vaguely described tasks, such as “set the table” or “clean up”. Performing such tasks as intended requires robots to fully, precisely, and appropriately parameterize their low-level control programs. We propose knowledge processing as a computational resource for enabling robots to bridge the gap between vague task descriptions and the detailed information needed to actually perform those tasks in the intended way. In this article, we introduce the KnowRob knowledge processing system that is specifically designed to provide autonomous robots with the knowledge needed for performing everyday manipulation tasks. The system allows the realization of “virtual knowledge bases”: collections of knowledge pieces that are not explicitly represented but computed on demand from the robot’s internal data structures, its perception system, or external sources of information. This article gives an overview of the different kinds of knowledge, the different inference mechanisms, and interfaces for acquiring knowledge from external sources, such as the robot’s perception system, observations of human activities, Web sites on the Internet, as well as Web-based knowledge bases for information exchange between robots. We evaluate the system’s scalability and present different integrated experiments that show its versatility and comprehensiveness.
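
The "virtual knowledge base" idea can be sketched as follows: some relations are stored as facts, while others are computed on demand from the robot's internal data structures. All class and predicate names in this sketch are hypothetical and do not reflect KnowRob's actual interfaces.

# Hypothetical sketch of a "virtual knowledge base": stored triples plus
# predicates computed on demand; names do not reflect KnowRob's actual interfaces.
from typing import Callable, Dict, List, Tuple

Triple = Tuple[str, str, str]

class VirtualKB:
    def __init__(self):
        self.facts: List[Triple] = []                       # explicitly stored triples
        self.computables: Dict[str, Callable[[], List[Triple]]] = {}

    def register_computable(self, predicate: str, fn: Callable[[], List[Triple]]):
        """Attach a function that produces triples for `predicate` only when queried."""
        self.computables[predicate] = fn

    def query(self, predicate: str) -> List[Triple]:
        stored = [t for t in self.facts if t[1] == predicate]
        computed = self.computables[predicate]() if predicate in self.computables else []
        return stored + computed

kb = VirtualKB()
kb.facts.append(("cup1", "type", "Cup"))
# "currentPose" is never stored; it is derived from the (simulated) perception system.
kb.register_computable("currentPose", lambda: [("cup1", "currentPose", "(0.4, 1.2, 0.9)")])
print(kb.query("currentPose"))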


International Conference on Computer Vision (ICCV) | 2009

The TUM Kitchen Data Set of everyday manipulation activities for motion tracking and action recognition

Moritz Tenorth; Jan Bandouch; Michael Beetz

We introduce the publicly available TUM Kitchen Data Set as a comprehensive collection of activity sequences recorded in a kitchen environment equipped with multiple complementary sensors. The recorded data consists of observations of naturally performed manipulation tasks as encountered in everyday activities of human life. Several instances of a table-setting task were performed by different subjects, involving the manipulation of objects and the environment. We provide the original video sequences, full-body motion capture data recorded by a markerless motion tracker, RFID tag readings and magnetic sensor readings from objects and the environment, as well as corresponding action labels. In this paper, we both describe how the data was computed, in particular the motion tracker and the labeling, and give examples of what it can be used for. We present first results of an automatic method for segmenting the observed motions into semantic classes, and describe how the data can be integrated into a knowledge-based framework for reasoning about the observations.
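
Purely for illustration, the snippet below shows how per-frame action labels of the kind shipped with the data set can be collapsed into contiguous action segments; the label sequence is made up, and this is not the segmentation method evaluated in the paper.

# Collapse per-frame action labels into contiguous segments; labels are made up.
from itertools import groupby

frame_labels = ["idle", "idle", "reach", "reach", "reach", "carry", "carry", "place"]

segments, frame = [], 0
for label, run in groupby(frame_labels):
    length = len(list(run))
    segments.append((label, frame, frame + length - 1))   # (action, first frame, last frame)
    frame += length

print(segments)   # [('idle', 0, 1), ('reach', 2, 4), ('carry', 5, 6), ('place', 7, 7)]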


International Conference on Intelligent Robots and Systems (IROS) | 2009

Close-range scene segmentation and reconstruction of 3D point cloud maps for mobile manipulation in domestic environments

Radu Bogdan Rusu; Nico Blodow; Zoltan-Csaba Marton; Michael Beetz

In this paper we present a framework for 3D geometric shape segmentation for close-range scenes used in mobile manipulation and grasping, out of sensed point cloud data. Our approach provides a robust geometric mapping pipeline for large input datasets that extracts relevant objects useful for a personal robotic assistant to perform manipulation tasks. The objects are segmented out from partial views and a reconstructed model is computed by fitting geometric primitive classes such as planes, spheres, cylinders, and cones. The geometric shape coefficients are then used to reconstruct missing data. Residual points are resampled and triangulated, to create smooth decoupled surfaces that can be manipulated. The resulting map is represented as a hybrid concept and comprises 3D shape coefficients and triangular meshes used for collision avoidance in manipulation routines.
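
A rough sketch of the hybrid-map idea, with Open3D standing in for the authors' pipeline: planar supports are peeled off with RANSAC and kept only as shape coefficients, and the residual points are triangulated into a mesh for collision checking. The input file, the number of planes, and all thresholds are illustrative assumptions.

# Minimal sketch, assuming Open3D; file name, plane count, and thresholds are illustrative.
import open3d as o3d

scene = o3d.io.read_point_cloud("close_range_scene.pcd")    # hypothetical table scene
planes, rest = [], scene

for _ in range(2):                                          # peel off the two largest planes
    model, inliers = rest.segment_plane(distance_threshold=0.01, ransac_n=3,
                                        num_iterations=1000)
    planes.append(model)                                    # keep only the shape coefficients
    rest = rest.select_by_index(inliers, invert=True)       # residual (object) points

# Triangulate what is left, e.g. objects on the table, for collision checking.
rest.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.03, max_nn=30))
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(rest, depth=8)
print(len(planes), "plane models,", len(mesh.triangles), "residual triangles")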


IEEE-RAS International Conference on Humanoid Robots | 2011

Robotic roommates making pancakes

Michael Beetz; Ulrich Klank; Ingo Kresse; Alexis Maldonado; Lorenz Mösenlechner; Dejan Pangercic; Thomas Rühr; Moritz Tenorth

In this paper we report on a recent public experiment that shows two robots making pancakes using web instructions. In the experiment, the robots retrieve instructions for making pancakes from the World Wide Web and generate robot action plans from the instructions. This task is jointly performed by two autonomous robots: the first robot opens and closes cupboards and drawers, takes a pancake mix from the refrigerator, and hands it to the second robot. The second robot cooks and flips the pancakes, and then delivers them back to the first robot. While the robot plans in the scenario are all percept-guided, they are also limited in different ways and rely on manually implemented sub-plans for parts of the task. We thus discuss the potential of the underlying technologies as well as the research challenges raised by the experiment.


International Conference on Intelligent Robots and Systems (IROS) | 2010

CRAM — A Cognitive Robot Abstract Machine for everyday manipulation in human environments

Michael Beetz; Lorenz Mösenlechner; Moritz Tenorth

This paper describes CRAM (Cognitive Robot Abstract Machine) as a software toolbox for the design, the implementation, and the deployment of cognition-enabled autonomous robots performing everyday manipulation activities. CRAM equips autonomous robots with lightweight reasoning mechanisms that can infer control decisions rather than requiring the decisions to be preprogrammed. This way CRAM-programmed autonomous robots are much more flexible, reliable, and general than control programs that lack such cognitive capabilities. CRAM does not require the whole domain to be stated explicitly in an abstract knowledge base. Rather, it grounds symbolic expressions in the knowledge representation into the perception and actuation routines and into the essential data structures of the control programs. In the accompanying video, we show complex mobile manipulation tasks performed by our household robot that were realized using the CRAM infrastructure.
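
CRAM itself is realized in Lisp (the CRAM Plan Language); as a conceptual illustration only, the sketch below shows the grounding idea: a symbolic object description stays abstract in the plan and is resolved against current perception data at execution time. All names and data are hypothetical.

# Conceptual grounding of a symbolic object description at run time; all data hypothetical.
from typing import Callable, Dict, Optional, Tuple

perceived_objects = [{"type": "mug", "pose": (0.5, 0.2, 0.8)},
                     {"type": "plate", "pose": (0.7, -0.1, 0.8)}]

def resolve_object(description: Dict[str, str]) -> Optional[dict]:
    """Ground a description like {'type': 'mug'} in the current perception results."""
    for obj in perceived_objects:
        if all(obj.get(k) == v for k, v in description.items()):
            return obj
    return None

def pick_up(description: Dict[str, str], grasp: Callable[[Tuple[float, float, float]], None]):
    obj = resolve_object(description)       # the control decision is inferred at run time,
    if obj is None:                         # not preprogrammed into the plan
        raise RuntimeError(f"no object matching {description} perceived")
    grasp(obj["pose"])

pick_up({"type": "mug"}, grasp=lambda pose: print("grasping at", pose))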

Collaboration


Dive into Michael Beetz's collaboration.

Top Co-Authors

Lars Kunze

University of Birmingham
