Publication


Featured research published by Stefan Fuchs.


Computer Vision and Pattern Recognition | 2008

Extrinsic and depth calibration of ToF-cameras

Stefan Fuchs; Gerd Hirzinger

Recently, ToF-cameras have attracted attention because of their ability to generate a full 2 1/2D depth image at video frame rates. Thus, ToF-cameras are suitable for real-time 3D tasks such as tracking, visual servoing or object pose estimation. The usability of such systems mainly depends on an accurate camera calibration. In this work a calibration process for ToF-cameras with respect to the intrinsic parameters, the depth measurement distortion and the pose of the camera relative to a robot's end effector is described. The calibration process is based not only on the camera's monochromatic images but also on its depth values, which are generated from a chequer-board pattern. The robustness and precision of the presented method are assessed by applying it to randomly selected shots and comparing the calibrated measurements to a ground truth obtained from a laser scanner.
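The abstract does not spell out the form of the depth-distortion model, so the sketch below only illustrates the general idea, assuming a hypothetical polynomial correction fitted to paired measured and reference depths (here stand-in synthetic data) rather than the calibration the paper actually performs.

```python
# Minimal sketch (not the paper's method): fitting a depth correction from
# chequer-board observations, given pairs of measured ToF depths and
# reference depths derived from the known board pose.
import numpy as np

def fit_depth_correction(measured_mm: np.ndarray,
                         reference_mm: np.ndarray,
                         degree: int = 3) -> np.poly1d:
    """Fit a polynomial mapping measured depth -> corrected depth."""
    coeffs = np.polyfit(measured_mm, reference_mm, degree)
    return np.poly1d(coeffs)

def apply_depth_correction(correction: np.poly1d,
                           depth_image_mm: np.ndarray) -> np.ndarray:
    """Apply the fitted correction to a full depth image."""
    return correction(depth_image_mm)

if __name__ == "__main__":
    # Synthetic stand-in for board observations: a smooth systematic error
    # plus measurement noise on top of the true depths.
    rng = np.random.default_rng(0)
    reference = rng.uniform(500.0, 3000.0, size=2000)          # true depths [mm]
    systematic = 25.0 * np.sin(reference / 800.0)               # toy distance-dependent bias
    measured = reference + systematic + rng.normal(0.0, 3.0, reference.shape)

    corr = fit_depth_correction(measured, reference)
    residual = apply_depth_correction(corr, measured) - reference
    print("RMS error before: %.1f mm, after: %.1f mm"
          % (np.sqrt(np.mean((measured - reference) ** 2)),
             np.sqrt(np.mean(residual ** 2))))
```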


Intelligent Robots and Systems | 2009

Robust 3D-mapping with time-of-flight cameras

Stefan May; David Droeschel; Stefan Fuchs; Dirk Holz; Andreas Nüchter

Time-of-flight cameras constitute a smart and fast technology for 3D perception but lack measurement precision and robustness. The authors present a comprehensive approach for 3D environment mapping based on this technology. Imprecisions in the depth measurements are properly handled by calibration and the application of several filters. Robust registration is performed by a novel extension to the Iterative Closest Point algorithm. Remaining registration errors are reduced by global relaxation after loop closure and by surface smoothing. A laboratory ground-truth evaluation is provided, as well as 3D mapping experiments in a larger indoor environment.
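The novel ICP extension itself is not described in the abstract; as a point of reference, the following is a minimal sketch of the plain point-to-point ICP loop (nearest-neighbour matching plus closed-form SVD alignment) that such an extension builds on. The function names and the use of SciPy's k-d tree are illustrative assumptions.

```python
# Baseline point-to-point ICP sketch, not the authors' extended variant.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Closed-form least-squares R, t such that dst ~ R @ src + t (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source: np.ndarray, target: np.ndarray, iters: int = 30):
    """Register source (Nx3) onto target (Mx3); returns accumulated R, t."""
    tree = cKDTree(target)
    current = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(current)                  # nearest-neighbour matches
        R, t = best_rigid_transform(current, target[idx])
        current = current @ R.T + t                   # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```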


ISRR | 2011

Towards the Robotic Co-Worker

Sami Haddadin; Michael Suppa; Stefan Fuchs; Tim Bodenmüller; Alin Albu-Schäffer; Gerd Hirzinger

Recently, robots have gained capabilities in both sensing and actuation that enable operation in the proximity of humans. Even direct physical interaction has become possible without suffering a decrease in speed and payload. The DLR Lightweight Robot III (LWR-III), whose technology is currently being transferred to the robot manufacturer KUKA Roboter GmbH, is such a device capable of realizing various features crucial for direct interaction with humans. Impedance control and collision detection with adequate reaction are key components for enabling “soft and safe” robotics. The implementation of a sensor-based robotic co-worker that brings robots closer to humans in industrial settings and achieves close cooperation is an important goal in robotics. Despite being a common vision in robotics, it has not become reality yet, as there are various open questions still to be answered. In this paper, a sound concept and a prototype implementation of a co-worker scenario are developed in order to demonstrate that state-of-the-art technology is now mature enough to reach this aspiring aim. We support our ideas by addressing the industrially relevant bin-picking problem with the LWR-III, which is equipped with a Time-of-Flight camera for object recognition and the DLR 3D-Modeller for generating accurate environment models. The paper describes the sophisticated control schemes of the robot in combination with robust computer vision algorithms, which lead to a reliable solution for the addressed problem. Strategies are devised for safe interaction with the human during task execution, state-dependent robot behavior, and the appropriate mechanisms to realize robustness in partially unstructured environments.


International Journal of Intelligent Systems Technologies and Applications | 2008

Calibration and registration for precise surface reconstruction with Time-Of-Flight cameras

Stefan Fuchs; Stefan May

This paper presents a method for precise surface reconstruction with Time-Of-Flight (TOF) cameras. A novel calibration approach, which simplifies the calibration task and doubles the camera's precision, is developed and compared to current calibration methods. Remaining errors are tackled by applying filtering and error-distribution methods. Thus, a reference object is circumferentially reconstructed with an overall mean precision of approximately 3 mm in translation and 3° in rotation. The presented way of quantifying the achievable reconstruction precision with TOF cameras discloses the potential of localising and grasping objects with robot manipulators. This is a major criterion for the potential analysis of this sensor technology in robotics, which is first demonstrated within this work.
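Which filters the paper applies is not stated in the abstract; one common ToF filter that fits the description is jump-edge removal, sketched below with an assumed discontinuity threshold.

```python
# Illustrative ToF jump-edge filter (an assumption, not necessarily the
# paper's filter): discard pixels on depth discontinuities, where ToF
# measurements mix foreground and background returns.
import numpy as np

def jump_edge_mask(depth_m: np.ndarray, threshold_m: float = 0.05) -> np.ndarray:
    """Boolean mask that is False for pixels adjacent to large depth jumps."""
    mask = np.ones_like(depth_m, dtype=bool)
    dx = np.abs(np.diff(depth_m, axis=1))   # horizontal neighbour differences
    dy = np.abs(np.diff(depth_m, axis=0))   # vertical neighbour differences
    mask[:, :-1] &= dx < threshold_m
    mask[:, 1:]  &= dx < threshold_m
    mask[:-1, :] &= dy < threshold_m
    mask[1:, :]  &= dy < threshold_m
    return mask

# Usage: depth[~jump_edge_mask(depth)] = np.nan   # drop unreliable edge pixels
```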


Intelligent Robots and Systems | 2010

Cooperative bin-picking with Time-of-Flight camera and impedance controlled DLR lightweight robot III

Stefan Fuchs; Sami Haddadin; Maik Keller; Sven Parusel; Andreas Kolb; Michael Suppa

Because bin-picking effectively mirrors great challenges in robotics, it has been a relevant robotic showpiece application for several decades. In this paper we describe the computer vision algorithms in combination with the sophisticated control schemes of the robot and demonstrate a reliable and robust solution to the chosen problem. This paper approaches the bin-picking issue by applying the latest state-of-the-art hardware components, namely an impedance-controlled lightweight robot and a Time-of-Flight camera. Lightweight robots have gained new capabilities in both sensing and actuation without suffering a decrease in speed and payload. Time-of-Flight cameras are superior to common proximity sensors in the sense that they provide depth and intensity images at video frame rate, independent of textures. The bin-picking solution presented in this paper aims at extending the classical bin-picking problem by incorporating an environment model and allowing for physical human-robot interaction during the entire process. Existing imprecisions in Time-of-Flight camera measurements and environment uncertainties are compensated by the compliant behavior of the robot. The overall process is implemented in a generic state machine that also monitors the entire bin-picking process.
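The abstract only says that a generic state machine drives and monitors the process; the sketch below is a hypothetical reduction of such a machine, with invented state names and a contact-preemption rule that mirrors the physical human-robot interaction the abstract describes.

```python
# Hypothetical monitored bin-picking state machine; state names and
# transitions are illustrative, not taken from the paper.
from enum import Enum, auto

class State(Enum):
    SCAN_BIN = auto()
    LOCALIZE_PART = auto()
    GRASP = auto()
    PLACE = auto()
    REACT_TO_CONTACT = auto()
    DONE = auto()

def next_state(state: State, part_found: bool, contact_detected: bool) -> State:
    """Single transition step; a detected contact (e.g. with a human) always preempts."""
    if contact_detected and state not in (State.REACT_TO_CONTACT, State.DONE):
        return State.REACT_TO_CONTACT
    if state is State.SCAN_BIN:
        return State.LOCALIZE_PART
    if state is State.LOCALIZE_PART:
        return State.GRASP if part_found else State.DONE
    if state is State.GRASP:
        return State.PLACE
    if state is State.PLACE:
        return State.SCAN_BIN                # fetch the next part
    if state is State.REACT_TO_CONTACT:
        return State.SCAN_BIN                # resume once the contact is cleared
    return State.DONE
```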


Intelligent Robots and Systems | 2006

Hierarchical Featureless Tracking for Position-Based 6-DoF Visual Servoing

Wolfgang Sepp; Stefan Fuchs; Gerhard Hirzinger

Classical position-based visual servoing approaches rely on the presence of distinctive features in the image such as corners and edges. In this contribution we exploit a hierarchical approach for object detection, initial pose estimation, and real-time tracking, based first on colour distribution and subsequently on shape and texture information. The shape model of the object is not limited to surface primitives but allows for any free-form surface not subject to self-occlusion. We evaluate the approach as part of a handshake scenario in which a 7-DoF robot takes a freely moving object over from a human.
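As a rough illustration of the coarse, colour-based first stage (the shape- and texture-based refinement is not reproduced here), the following sketch back-projects a hue histogram of a reference patch onto an image; the binning and value ranges are assumptions, not the paper's implementation.

```python
# Hue-histogram back-projection: highlights image regions whose colour
# matches a reference object patch (hue assumed in [0, 180), OpenCV-style).
import numpy as np

def hue_histogram(hue_patch: np.ndarray, bins: int = 32) -> np.ndarray:
    """Normalised hue histogram of a reference object patch."""
    hist, _ = np.histogram(hue_patch, bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def back_project(hue_image: np.ndarray, hist: np.ndarray) -> np.ndarray:
    """Per-pixel likelihood that the pixel's hue belongs to the reference object."""
    bins = hist.shape[0]
    idx = np.clip((hue_image.astype(int) * bins) // 180, 0, bins - 1)
    return hist[idx]

# The peak of a smoothed back-projection map would give a coarse object
# position, which the finer shape/texture tracker could then refine.
```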


International Conference on Computer Vision Systems | 2013

Compensation for multipath in ToF camera measurements supported by photometric calibration and environment integration

Stefan Fuchs; Michael Suppa; Olaf Hellwich

Multipath is a prominent phenomenon in Time-of-Flight camera images and distorts the measurements by several centimetres. It troubles applications that demand high accuracy, such as robotic manipulation or mapping. This paper addresses the photometric processes that cause multipath interference. It formulates an improved multipath model and designs a compensation process to correct the multipath-related errors. A calibration of the ToF illumination supports the process. The proposed approach, moreover, allows an environment model to be included. The positive impact of this process is demonstrated.
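The improved multipath model itself is not given in the abstract; the toy phasor model below merely illustrates how a superimposed indirect return biases the decoded distance, and how subtracting a predicted indirect component (here simply assumed known, as an environment model would provide) corrects it. The modulation frequency and amplitudes are made-up values.

```python
# Toy phasor model of ToF multipath, for illustration only: the camera
# measures the phase of the sum of the direct and indirect returns.
import numpy as np

C = 3.0e8            # speed of light [m/s]
F_MOD = 20.0e6       # assumed modulation frequency [Hz]

def phasor(amplitude: float, distance_m: float) -> complex:
    """Complex return for a path of the given one-way distance (round-trip phase)."""
    phase = 4.0 * np.pi * F_MOD * distance_m / C
    return amplitude * np.exp(1j * phase)

def decode_distance(signal: complex) -> float:
    """Distance the camera would report from the measured phase."""
    phase = np.angle(signal) % (2.0 * np.pi)
    return phase * C / (4.0 * np.pi * F_MOD)

direct = phasor(1.0, 1.50)        # true surface at 1.5 m
indirect = phasor(0.3, 2.10)      # e.g. light bounced off a nearby wall
distorted = decode_distance(direct + indirect)
# Compensation idea: predict the indirect phasor from an environment model
# and subtract it before decoding the distance.
corrected = decode_distance(direct + indirect - phasor(0.3, 2.10))
print(f"distorted: {distorted:.3f} m, corrected: {corrected:.3f} m")
```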


Advanced Concepts for Intelligent Vision Systems | 2012

Information-gain view planning for free-form object reconstruction with a 3d ToF camera

Sergi Foix; Simon Kriegel; Stefan Fuchs; Guillem Alenyà; Carme Torras

Active view planning for gathering data from an unexplored, complex 3D scenario is a hard and still open problem in the computer vision community. In this paper, we present a general task-oriented approach based on information-gain maximization that easily deals with such a problem. Our approach consists of ranking a given set of possible actions, based on their task-related gains, and then executing the best-ranked action to move the required sensor. An example of how our approach behaves is demonstrated by applying it to 3D raw data for real-time volume modelling of complex-shaped objects. Our setting includes a calibrated 3D time-of-flight (ToF) camera mounted on a 7-degrees-of-freedom (DoF) robotic arm. Noise in the sensor data acquisition, which is too often ignored, is here explicitly taken into account by computing an uncertainty matrix for each point and refining this matrix each time the point is seen again. Results show that, by always choosing the most informative view, a complete model of a 3D free-form object is acquired, and also that our method achieves a good compromise between speed and precision.
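As a simplified illustration of the ranking step, the sketch below scores hypothetical candidate views by the entropy of the occupancy-grid voxels they would observe and picks the best one; the grid, the visibility masks, and the simplification that an observation fully resolves a voxel are assumptions, and the paper's per-point uncertainty matrices are not modelled here.

```python
# Greedy next-best-view selection by expected entropy reduction over an
# occupancy grid (simplified relative to the paper's task-related gains).
import numpy as np

def voxel_entropy(p_occupied: np.ndarray) -> np.ndarray:
    """Binary entropy of each voxel's occupancy probability."""
    p = np.clip(p_occupied, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def information_gain(p_occupied: np.ndarray, visible: np.ndarray) -> float:
    """Expected gain of a view: current entropy of the voxels it would observe
    (visible is a boolean mask), assuming an observation fully resolves them."""
    return float(voxel_entropy(p_occupied)[visible].sum())

def best_view(p_occupied: np.ndarray, candidate_visibility: list[np.ndarray]) -> int:
    """Index of the candidate view with the highest expected information gain."""
    gains = [information_gain(p_occupied, vis) for vis in candidate_visibility]
    return int(np.argmax(gains))
```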


at - Automatisierungstechnik | 2010

Konzepte für den Roboterassistenten der Zukunft

Sami Haddadin; Michael Suppa; Stefan Fuchs; Tim Bodenmüller; Alin Albu-Schäffer; Gerd Hirzinger

Zusammenfassung: The realization of a sensor-based robot assistant that allows humans and robots to cooperate intensively in an industrial environment is an important goal of robotics. In this contribution, a concept for such a robot assistant is presented and demonstrated on an example application, bin-picking (Griff-in-die-Kiste). Available standard components are used in order to show that the state of the art is already advanced enough to realize such complex scenarios. Sensor-based reaction strategies for safe human-robot interaction are integrated into a state-based robot behavior and are furthermore used to obtain the robustness required for partially unstructured environments. The bin-picking application is validated on a DLR Lightweight Robot III (LBR-III) equipped with the DLR 3D-Modeller for generating environment models and a Time-of-Flight camera for object recognition.

Abstract: The realization of a sensor-based robotic co-worker that brings robots closer to humans in industrial settings and achieves close cooperation is an important goal in robotics. In this paper, a solid concept and a prototype realization of a co-worker scenario are developed in order to demonstrate that state-of-the-art technology is now mature enough to reach this aspiring aim. We support our ideas by addressing the industrially relevant bin-picking problem with the DLR Lightweight Robot (LWR-III), which is equipped with a Time-of-Flight camera for object recognition and the DLR 3D-Modeller for generating accurate environment models. Strategies are devised for safe interaction with the human during task execution, state-dependent robot behavior, and the appropriate mechanisms to realize robustness in partially unstructured environments.


Journal of Field Robotics | 2009

Three-dimensional mapping with time-of-flight cameras

Stefan May; David Droeschel; Dirk Holz; Stefan Fuchs; Ezio Malis; Andreas Nüchter; Joachim Hertzberg

Collaboration


Dive into Stefan Fuchs's collaborations.

Top Co-Authors
Klaus Arbter

German Aerospace Center
