Michael Suppa
German Aerospace Center
Publications
Featured research published by Michael Suppa.
European Conference on Computer Vision | 2010
Elmar Mair; Gregory D. Hager; Darius Burschka; Michael Suppa; Gerhard Hirzinger
The efficient detection of interesting features is a crucial step for various tasks in computer vision. Corners are favored cues due to their two-dimensional constraint and the fast algorithms available to detect them. Recently, a novel corner detection approach, FAST, was presented which outperforms previous algorithms in both computational performance and repeatability. We show how the accelerated segment test, which underlies FAST, can be significantly improved by making it more generic while increasing its performance. We do so by finding the optimal decision tree in an extended configuration space and by demonstrating how specialized trees can be combined to yield an adaptive and generic accelerated segment test. The resulting method provides high performance for arbitrary environments and so, unlike FAST, does not have to be adapted to a specific scene structure. We also discuss how different test patterns affect the corner response of the accelerated segment test.
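The core idea of the accelerated segment test can be illustrated with a naive implementation: a pixel is a corner if enough contiguous pixels on a surrounding circle are all brighter or all darker than the center. This sketch is illustrative only; the paper's contribution is an optimal decision tree over an extended configuration space, not this brute-force scan.

```python
# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_corner(img, x, y, t=20, n=9):
    """Return True if at least n contiguous circle pixels are all brighter
    than center + t or all darker than center - t (the segment test)."""
    center = img[y][x]
    # Classify each circle pixel: +1 brighter, -1 darker, 0 similar.
    states = []
    for dx, dy in CIRCLE:
        p = img[y + dy][x + dx]
        if p > center + t:
            states.append(1)
        elif p < center - t:
            states.append(-1)
        else:
            states.append(0)
    # Look for a run of n equal non-zero states on the circular ring.
    doubled = states + states
    run, prev = 0, 0
    for s in doubled:
        if s != 0 and s == prev:
            run += 1
        else:
            run = 1 if s != 0 else 0
        prev = s
        if run >= n:
            return True
    return False
```

A decision tree, as learned in the paper, answers the same question while examining as few circle pixels as possible per candidate.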
IEEE Robotics & Automation Magazine | 2012
Teodor Tomic; Korbinian Schmid; Philipp Lutz; Andreas Dömel; Michael Kassecker; Elmar Mair; Iris Lynne Grixa; Felix Ruess; Michael Suppa; Darius Burschka
Urban search and rescue missions place special requirements on robotic systems. Small aerial systems provide essential support to human task forces in situation assessment and surveillance. As external infrastructure for navigation and communication is usually not available, robotic systems must be able to operate autonomously. The limited payload of small aerial systems poses a great challenge to the system design: the optimal tradeoff between flight performance, sensors, and computing resources has to be found. Communication to external computers cannot be guaranteed; therefore, all processing and decision making has to be done on board. In this article, we present an unmanned aircraft system design fulfilling these requirements. The components of our system are structured into groups to encapsulate their functionality and interfaces. We use both laser and stereo vision odometry to enable seamless indoor and outdoor navigation. The odometry is fused with an inertial measurement unit in an extended Kalman filter. Navigation is supported by a module that recognizes known objects in the environment. A distributed computation approach is adopted to address the computational requirements of the algorithms used. The capabilities of the system are validated in flight experiments using a quadrotor.
IEEE-RAS International Conference on Humanoid Robots | 2006
Christian Ott; Oliver Eiberger; Werner Friedl; Berthold Bäuml; Ulrich Hillenbrand; Christoph Borst; Alin Albu-Schäffer; Bernhard Brunner; Heiko Hirschmüller; Simon Kielhöfer; Rainer Konietschke; Michael Suppa; Franziska Zacharias; Gerhard Hirzinger
This paper presents a humanoid two-arm system developed as a research platform for studying dexterous two-handed manipulation. The system is based on the modular DLR-Lightweight-Robot-III and the DLR-Hand-II. Two arms and hands are combined with a three-degrees-of-freedom movable torso and a visual system to form a complete humanoid upper body. In this paper, we present the design considerations and give an overview of the different sub-systems. Then we describe the requirements on the software architecture. Finally, the applied control methods for two-armed manipulation and the vision algorithms used for scene analysis are discussed.
ISRR | 2011
Sami Haddadin; Michael Suppa; Stefan Fuchs; Tim Bodenmüller; Alin Albu-Schäffer; Gerd Hirzinger
Recently, robots have gained capabilities in both sensing and actuation that enable operation in the proximity of humans. Even direct physical interaction has become possible without suffering a decrease in speed and payload. The DLR Lightweight Robot III (LWR-III), whose technology is currently being transferred to the robot manufacturer KUKA Roboter GmbH, is such a device, capable of realizing various features crucial for direct interaction with humans. Impedance control and collision detection with adequate reaction are key components for enabling "soft and safe" robotics. The implementation of a sensor-based robotic co-worker that brings robots closer to humans in industrial settings and achieves close cooperation is an important goal in robotics. Despite being a common vision in robotics, it has not become reality yet, as various open questions remain to be answered. In this paper, a sound concept and a prototype implementation of a co-worker scenario are developed in order to demonstrate that state-of-the-art technology is now mature enough to reach this aspiring aim. We support our ideas by addressing the industrially relevant bin-picking problem with the LWR-III, which is equipped with a Time-of-Flight camera for object recognition and the DLR 3D-Modeller for generating accurate environment models. The paper describes the sophisticated control schemes of the robot in combination with robust computer vision algorithms, which lead to a reliable solution for the addressed problem. Strategies are devised for safe interaction with the human during task execution, state-dependent robot behavior, and the appropriate mechanisms to realize robustness in partially unstructured environments.
Intelligent Robots and Systems | 2013
Korbinian Schmid; Teodor Tomic; Felix Ruess; Heiko Hirschmüller; Michael Suppa
We introduce our new quadrotor platform for realizing autonomous navigation in unknown indoor/outdoor environments. Autonomous waypoint navigation, obstacle avoidance, and flight control are implemented on board. The system does not require a special environment, artificial markers, or an external reference system. We developed a monolithic, mechanically damped perception unit equipped with a stereo camera pair, an inertial measurement unit (IMU), two processor boards, and an FPGA board. Stereo images are processed on the FPGA by the Semi-Global Matching algorithm. Keyframe-based stereo odometry is fused with IMU data, compensating for the time delays induced by the vision pipeline. The system state estimate is used for control and on-board 3D mapping. An operator can set waypoints in the map, while the quadrotor autonomously plans its path avoiding obstacles. We show experiments with the quadrotor flying from inside a building to the outside and vice versa, traversing a window and a door, respectively. A video of the experiments is part of this work. To the best of our knowledge, this is the first autonomously flying system with complete on-board processing that performs waypoint navigation with obstacle avoidance in geometrically unconstrained, complex indoor/outdoor environments.
International Conference on Robotics and Automation | 2007
Michael Suppa; Simon Kielhöfer; Jörg Langwald; Franz Hacker; Klaus H. Strobl; Gerd Hirzinger
This paper deals with the concept and implementation of a multi-purpose vision platform. In robotics, numerous applications require perception, yet a multi-purpose vision platform suited for object recognition, cultural heritage preservation, and visual servoing at the same time has been missing. In this work, we draw attention to the design principles for such a vision platform and present its implementation, the 3D-Modeller. In specifying and combining multiple sensors (laser-range scanner, laser-stripe profiler, and stereo vision), we derive the required mechanical and electrical hardware design. The concepts for synchronization and communication round off our approach. Precision and frame rate are reported. We illustrate the versatility of the 3D-Modeller by addressing four applications: 3D modeling, exploration, tracking, and object recognition. Due to its low weight and generic mechanical interface, it can be mounted on industrial robots or humanoids, or guided by hand. The 3D-Modeller is flexibly applicable, not only in research but also in industry, especially in small-batch assembly.
Journal of Real-Time Image Processing | 2015
Simon Kriegel; Christian Rink; Tim Bodenmüller; Michael Suppa
This work focuses on autonomous surface reconstruction of small-scale objects with a robot and a 3D sensor. The aim is a high-quality surface model allowing for robotic applications such as grasping and manipulation. Our approach comprises the generation of next-best-scan (NBS) candidates and selection criteria, error minimization between scan patches, and termination criteria. NBS candidates are determined iteratively by boundary detection and surface trend estimation on the acquired model. To account for both fast and high-quality model acquisition, the candidate that maximizes a utility function integrating an exploration component and a mesh-quality component is selected as the NBS. The modeling and scan-planning methods are evaluated on an industrial robot with a high-precision laser striper system. While a new laser scan is performed, data are integrated on the fly into both a triangle mesh and a probabilistic voxel space. The efficiency of the system in the fast acquisition of high-quality 3D surface models is demonstrated on various cultural heritage, household, and industrial objects.
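The candidate selection described above can be sketched as maximizing a weighted sum of the two components. The function and field names below are hypothetical; the paper's actual utility terms (expected unexplored volume, mesh quality of poorly sampled regions) would be computed from the voxel space and the triangle mesh.

```python
# Hypothetical sketch of utility-based next-best-scan (NBS) selection:
# each candidate carries normalized scores in [0, 1] for how much unknown
# volume it would explore and how much it would improve mesh quality.

def select_nbs(candidates, w_explore=0.5):
    """Return the candidate maximizing the combined utility."""
    def utility(c):
        return (w_explore * c['exploration']
                + (1.0 - w_explore) * c['mesh_quality'])
    return max(candidates, key=utility)
```

Shifting `w_explore` trades off fast coverage of unknown space against refinement of already-seen surface, which is the tradeoff the abstract describes.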
International Conference on Robotics and Automation | 2004
Klaus H. Strobl; Wolfgang Sepp; Eric Wahl; Tim Bodenmüller; Michael Suppa; Javier F. Seara; Gerd Hirzinger
This paper presents the DLR Laser Stripe Profiler as a component of the DLR multisensory Hand-Guided Device for 3D modeling. After modeling the reconstruction process, we propose a novel method for laser-plane self-calibration based on assessing the deformations that miscalibration leads to. In addition, the requirement for the absence of optical filtering implies the development of a robust stripe segmentation algorithm. Experiments demonstrate the validity and applicability of the approaches.
International Conference on Robotics and Automation | 2011
Simon Kriegel; Tim Bodenmüller; Michael Suppa; Gerd Hirzinger
Manually generating a 3D model of an object is very time consuming for a human operator. Next-best-view (NBV) planning is an important aspect of automating this procedure in a robotic environment. We propose a surface-based NBV approach that creates a triangle surface from a real-time data stream and determines viewpoints similar to human intuition. Boundaries in the surface are detected, and a quadratic patch is estimated for each boundary. Then several viewpoint candidates are calculated that look perpendicular to the surface and overlap with previous sensor data. An NBV is selected with the goal of filling areas that are occluded. This approach focuses on the completion of a 3D model of an unknown object; the search space for the viewpoints is not restricted to a cylinder or sphere. Our NBV determination proves to be very fast and is evaluated in an experiment on test objects, applying an industrial robot and a laser range scanner.
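The quadratic patch estimation mentioned above can be illustrated as a least-squares fit of z = f(x, y) to points near a detected boundary. This is a generic stand-in under assumed coordinates; the paper's exact patch parametrization may differ.

```python
import numpy as np

def fit_quadratic_patch(points):
    """Fit z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to an Nx3 point set
    via linear least squares; returns (a, b, c, d, e, f)."""
    pts = np.asarray(points, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    # Design matrix: one column per monomial of the quadratic.
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs
```

The fitted patch extrapolates the surface trend beyond the boundary, which is what lets the planner predict where unseen surface continues and place the next viewpoint perpendicular to it.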
Intelligent Robots and Systems | 2010
Stefan Fuchs; Sami Haddadin; Maik Keller; Sven Parusel; Andreas Kolb; Michael Suppa
Because bin-picking effectively mirrors great challenges in robotics, it has been a relevant robotic showpiece application for several decades. In this paper, we describe the computer vision algorithms in combination with the sophisticated control schemes of the robot and demonstrate a reliable and robust solution to the chosen problem. The paper approaches the bin-picking issue by applying the latest state-of-the-art hardware components, namely an impedance-controlled lightweight robot and a Time-of-Flight camera. Lightweight robots have gained new capabilities in both sensing and actuation without suffering a decrease in speed and payload. Time-of-Flight cameras are superior to common proximity sensors in that they provide depth and intensity images at video frame rate, independent of textures. The bin-picking solution presented in this paper extends the classical bin-picking problem by incorporating an environment model and allowing for physical human-robot interaction during the entire process. Imprecisions in Time-of-Flight camera measurements and environment uncertainties are compensated by the compliant behavior of the robot. The overall process is implemented in a generic state machine that also monitors the entire bin-picking process.
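A generic state machine for such a process can be sketched as a small set of states with explicit transitions. The state names and the `bin_empty` condition below are hypothetical; the abstract does not specify the actual states or the monitoring logic.

```python
from enum import Enum, auto

class State(Enum):
    ACQUIRE_IMAGE = auto()     # capture a Time-of-Flight frame
    RECOGNIZE_OBJECT = auto()  # locate a graspable part in the bin
    GRASP = auto()             # compliant grasp via impedance control
    PLACE = auto()             # deposit the part and return
    DONE = auto()

def step(state, bin_empty):
    """Advance one transition; process monitoring (e.g. collision
    detection) would wrap calls to this function."""
    if state is State.ACQUIRE_IMAGE:
        return State.DONE if bin_empty else State.RECOGNIZE_OBJECT
    if state is State.RECOGNIZE_OBJECT:
        return State.GRASP
    if state is State.GRASP:
        return State.PLACE
    if state is State.PLACE:
        return State.ACQUIRE_IMAGE  # loop until the bin is empty
    return State.DONE
```

Keeping the cycle explicit makes it easy to insert monitoring hooks, for instance pausing in any state when the collision detection of the compliant robot triggers.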