Publication


Featured research published by Martin Günther.


Intelligent Robots and Systems (IROS) | 2013

Building semantic object maps from sparse and noisy 3D data

Martin Günther; Thomas Wiemann; Sven Albrecht; Joachim Hertzberg

We present an approach to create a semantic map of an indoor environment, based on a series of 3D point clouds captured by a mobile robot using a Kinect camera. The proposed system reconstructs the surfaces in the point clouds, detects different types of furniture and estimates their poses. The result is a consistent mesh representation of the environment, enriched by CAD models corresponding to the detected pieces of furniture. We evaluate our approach on two datasets totaling over 800 frames, processing each individual frame directly.
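One step of the pipeline sketched above (finding candidate furniture surfaces in a point cloud) can be illustrated with a toy height histogram. This is a crude illustrative stand-in under stated assumptions, not the authors' method; all names and parameters are hypothetical.

```python
# Toy stand-in for surface detection: group points by height and report
# heights where many points cluster, suggesting a horizontal surface
# (e.g. a table top). Illustrative only, not the paper's algorithm.
from collections import defaultdict

def horizontal_surfaces(points, bin_size=0.1, min_points=3):
    """points: list of (x, y, z) tuples. Returns candidate surface heights."""
    bins = defaultdict(int)
    for (_, _, z) in points:
        bins[round(z / bin_size)] += 1
    # a bin with many points suggests a horizontal surface
    return sorted(b * bin_size for b, n in bins.items() if n >= min_points)

cloud = [(0.1, 0.2, 0.72), (0.3, 0.1, 0.73), (0.5, 0.4, 0.71),  # table top
         (0.2, 0.2, 0.00), (0.6, 0.1, 0.01)]                    # floor points
print(horizontal_surfaces(cloud))
```

A real system would of course fit oriented planes and match CAD models against them, as the abstract describes; the histogram merely shows where surface hypotheses come from.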


Künstliche Intelligenz | 2014

The RACE Project: Robustness by Autonomous Competence Enhancement

Joachim Hertzberg; Jianwei Zhang; Liwei Zhang; Sebastian Rockel; Bernd Neumann; Jos Lehmann; Krishna Sandeep Reddy Dubba; Anthony G. Cohn; Alessandro Saffiotti; Federico Pecora; Masoumeh Mansouri; Štefan Konečný; Martin Günther; Sebastian Stock; Luís Seabra Lopes; M. Oliveira; Gi Hyun Lim; Hamidreza Kasaei; Vahid Mokhtari; Lothar Hotz; Wilfried Bohlken

This paper reports on the aims, the approach, and the results of the European project RACE. The project aim was to enhance the behavior of an autonomous robot by having the robot learn from conceptualized experiences of previous performance, based on initial models of the domain and its own actions in it. This paper introduces the general system architecture; it then sketches some results in detail regarding hybrid reasoning and planning used in RACE, and instances of learning from the experiences of real robot task execution. Enhancement of robot competence is operationalized in terms of performance quality and description length of the robot instructions, and such enhancement is shown to result from the RACE system.


KI'11: Proceedings of the 34th Annual German Conference on Advances in Artificial Intelligence | 2011

Model-based object recognition from 3D laser data

Martin Günther; Thomas Wiemann; Sven Albrecht; Joachim Hertzberg

This paper presents a method for recognizing objects in 3D point clouds. Based on a structural model of these objects, we generate hypotheses for the location and 6DoF pose of these models and verify them by matching a CAD model of the object into the point cloud. Our method only needs a CAD model of each object class; no previous training is required.


Artificial Intelligence | 2017

Model-based furniture recognition for building semantic object maps

Martin Günther; Thomas Wiemann; Sven Albrecht; Joachim Hertzberg

This paper presents an approach to creating a semantic map of an indoor environment incrementally and in closed loop, based on a series of 3D point clouds captured by a mobile robot using an RGB-D camera. Based on a semantic model about furniture objects (represented in an OWL-DL ontology with rules attached), we generate hypotheses for locations and 6DoF poses of object instances and verify them by matching a geometric model of the object (given as a CAD model) into the point cloud. The result, in addition to the registered point cloud, is a consistent mesh representation of the environment, further enriched by object models corresponding to the detected pieces of furniture. We demonstrate the robustness of our approach against occlusion and aperture limitations of the RGB-D frames, and against differences between the CAD models and the real objects. We evaluate the complete system on two challenging datasets featuring partial visibility and totaling over 800 frames. The results show complementary strengths and weaknesses of processing each frame directly vs. processing the fully registered scene, which accord with intuitive expectations.


IFAC Proceedings Volumes | 2012

Hybrid Reasoning in Perception: A Case Study

Martin Günther; Joachim Hertzberg; Masoumeh Mansouri; Federico Pecora; Alessandro Saffiotti

Robots operating in a complex human-inhabited environment need to represent and reason about different kinds of knowledge, including ontological, spatial, causal, temporal and resource knowledge. Often, these reasoning tasks are not mutually independent, but need to be integrated with each other. Integrated reasoning is especially important when dealing with knowledge derived from perception, which may be intrinsically incomplete or ambiguous. For instance, the non-observable property that a dish has been used and should therefore be washed can be inferred from the observable properties that it was full before and that it is empty now. In this paper, we present a hybrid reasoning framework that makes it easy to integrate different kinds of reasoners. We demonstrate the suitability of our approach by integrating two kinds of reasoners, for ontological reasoning and for temporal reasoning, and using them to recognize temporally and ontologically defined object properties in point cloud data captured using an RGB-D camera.
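The dish example in the abstract can be made concrete with a toy combination of the two reasoner kinds: an ontological check (is this object a container?) gated by a temporal rule (full at some point, empty later). This is a minimal sketch of the idea, not the RACE framework's API; the tiny ontology and rule are assumptions for illustration.

```python
# Toy illustration of hybrid (ontological + temporal) reasoning:
# infer the non-observable property "used" from an object's class
# and its observed fill states over time. Not the paper's framework.

IS_A = {"dish": "container", "cup": "container"}  # tiny hypothetical ontology

def is_container(obj_type):
    """Ontological reasoner: walk the subclass chain up to 'container'."""
    t = obj_type
    while t is not None:
        if t == "container":
            return True
        t = IS_A.get(t)
    return False

def infer_used(obj_type, observations):
    """Temporal rule: 'full' at some time, 'empty' at a later time -> used.
    observations: time-ordered list of 'full' / 'empty' fill states."""
    if not is_container(obj_type):
        return False
    for i, state in enumerate(observations):
        if state == "full" and "empty" in observations[i + 1:]:
            return True
    return False

print(infer_used("dish", ["full", "full", "empty"]))  # a used dish
print(infer_used("dish", ["empty", "empty"]))         # never observed full
```

The point of the integration is that neither reasoner alone suffices: the temporal pattern is meaningless without the ontological class, and vice versa.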


Künstliche Intelligenz | 2017

iMRK: Demonstrator for Intelligent and Intuitive Human–Robot Collaboration in Industrial Manufacturing

José de Gea Fernández; Dennis Mronga; Martin Günther; Malte Wirkus; Martin Schröer; Stefan Stiene; Elsa Andrea Kirchner; Vinzenz Bargsten; Timo Bänziger; Johannes Teiwes; Thomas Krüger; Frank Kirchner

This report describes an intelligent and intuitive dual-arm robotic system for industrial human–robot collaboration which provides the basis for further work between DFKI (Robotics Innovation Center) and Volkswagen Group (Smart Production Lab) in the field of intuitive and safe collaborative robotics in manufacturing scenarios. The final robot demonstrator developed in a pilot project possesses multiple sensor modalities for environment monitoring and is equipped with the ability for online collision-free dual-arm manipulation in a shared human–robot workspace. Moreover, the robot can be controlled via simple human gestures. The capabilities of the robotic system were validated at a mockup of a gearbox assembly station at a Volkswagen factory.


Robotics and Autonomous Systems | 2018

Context-aware 3D object anchoring for mobile robots

Martin Günther; J.R. Ruiz-Sarmiento; Cipriano Galindo; Javier Gonzalez-Jimenez; Joachim Hertzberg

A world model representing the elements in a robot's environment needs to maintain a correspondence between the objects being observed and their internal representations, which is known as the anchoring problem. Anchoring is a key aspect of intelligent robot operation, since it enables high-level functions such as task planning and execution. This work presents an anchoring system that continually integrates new observations from a 3D object recognition algorithm into a probabilistic world model. Our system takes advantage of the contextual relations inherent to human-made spaces in order to improve the classification results of the baseline object recognition system. To achieve that, the system builds a graph-based world model containing the objects in the scene (both in the current and previously perceived observations), which is exploited by a Probabilistic Graphical Model (PGM) in order to leverage contextual information during recognition. The world model also enables the system to exploit information about objects beyond the current field of view of the robot sensors. Most importantly, this is done in an online fashion, overcoming both the disadvantages of single-shot recognition systems (e.g., limited sensor aperture) and offline recognition systems that require prior registration of all frames of a scene (e.g., dynamic scenes, unsuitability for plan-based robot control). We also propose a novel way to include the outcome of local object recognition methods in the PGM, which results in a decrease in the usually high model learning complexity and an increase in the system performance. The system performance has been assessed with a dataset collected by a mobile robot from restaurant-like settings, obtaining positive results for both its data association and object recognition capabilities. The system has been successfully used in the RACE robotic architecture.
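The data-association core of anchoring (matching each new detection to an existing world-model object or creating a new one) can be sketched with a nearest-neighbor rule. This is a minimal illustrative sketch, not the paper's probabilistic system; the distance threshold and data structures are assumptions.

```python
# Minimal sketch of anchoring as data association: match each detection to
# the nearest existing anchor within max_dist, else create a new anchor.
# The paper uses a PGM with contextual relations; this toy version does not.
import math

def anchor(world, detections, max_dist=0.5):
    """world: dict id -> (x, y, z, label); detections: list of (x, y, z, label)."""
    next_id = max(world, default=-1) + 1
    for (x, y, z, label) in detections:
        best, best_d = None, max_dist
        for oid, (ox, oy, oz, _) in world.items():
            d = math.dist((x, y, z), (ox, oy, oz))
            if d < best_d:
                best, best_d = oid, d
        if best is not None:
            world[best] = (x, y, z, label)     # update the matched anchor
        else:
            world[next_id] = (x, y, z, label)  # create a new anchor
            next_id += 1
    return world
```

In the actual system, the association decision is also informed by contextual relations between objects (via the PGM) rather than by geometry alone.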


Joint German/Austrian Conference on Artificial Intelligence (Künstliche Intelligenz) | 2015

Abducing Hypotheses About Past Events from Observed Environment Changes

Ann-Katrin Becker; Jochen Sprickerhof; Martin Günther; Joachim Hertzberg

Humans perform abductive reasoning routinely. We hypothesize about what happened in the past to explain an observation made in the present. This is frequently needed to model the present, too.


Advanced Robotics and its Social Impacts (ARSO) | 2011

On the impact of embedded knowledge-based systems

Joachim Hertzberg; Thomas Wiemann; Sven Albrecht; Martin Günther; Kai Lingemann; Florian Otte; Stephan Scheuren; Thomas Schüler; Jochen Sprickerhof; Stefan Stiene

Autonomous mobile robots are well visible both in science and in popular media. Significant progress has been made lately in that branch of robotics in all involved aspects of science, technology and engineering. Interesting studies and forecasts have been made how robots, based on the science and technology of mobile robotics, have started to become parts of our lives and will increasingly do so in the future. However, we think that the fascination with the idea of robots - preferably humanoid ones - both in science and in the public has been outshining the topic that is actually making the impact on society here: The science and technology behind these advanced robots, in particular embedded knowledge-based systems (EKBSs). So the question about the societal impact of robotics is actually the question of the impact of EKBSs.


Archive | 2009

Factoring General Games

Martin Günther; Stephan Schiffel; Michael Thielscher

Collaboration


Dive into Martin Günther's collaborations.

Top Co-Authors

Stefan Stiene

University of Osnabrück

Sven Albrecht

University of Osnabrück

Thomas Wiemann

University of Osnabrück

Kai Lingemann

University of Osnabrück
