
Publications


Featured research published by Klaus Arbter.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1990

Application of affine-invariant Fourier descriptors to recognition of 3-D objects

Klaus Arbter; Wesley E. Snyder; Hans Burkhardt; Gerd Hirzinger

The method of Fourier descriptors is extended to produce a set of normalized coefficients which are invariant under any affine transformation (translation, rotation, scaling, and shearing). The method is based on a parameterized boundary description which is transformed to the Fourier domain and normalized there to eliminate dependencies on the affine transformation and on the starting point. Invariance to affine transforms allows considerable robustness when applied to images of objects which rotate in all three dimensions, as is demonstrated by processing silhouettes of aircraft maneuvering in three-space.
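As a rough illustration of the underlying idea (a sketch, not the authors' exact normalization), the snippet below forms determinant ratios of Fourier coefficient pairs of the two boundary coordinate functions: under a linear map each determinant picks up only the factor det(A), so the ratios cancel it, and taking magnitudes removes the phase factor introduced by a shifted starting point. The uniform contour sampling is an assumed simplification; the paper's affine-invariant arc-length parameterization is not reproduced here.

    import numpy as np

    def affine_invariant_descriptors(boundary, num_invariants=8):
        """Determinant-ratio Fourier invariants of a closed 2-D contour.

        boundary: (N, 2) array of contour points, assumed uniformly sampled
        (a simplification; the paper uses an affine-invariant arc length).
        Translation only affects the DC coefficient, which is never used.
        """
        x = np.fft.fft(boundary[:, 0])
        y = np.fft.fft(boundary[:, 1])

        def det(k, m):                    # determinant of the 2x2 coefficient pair
            return x[k] * y[m] - x[m] * y[k]

        ref = det(1, 2)                   # reference pair, cancels det(A) in ratios
        ks = range(3, 3 + num_invariants)
        # Magnitudes discard the common phase caused by a shifted start point.
        return np.array([abs(det(1, k)) / abs(ref) for k in ks])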


American Journal of Surgery | 1999

Self-guided robotic camera control for laparoscopic surgery compared with human camera control.

Kazuhiko Omote; Hubertus Feussner; A. Ungeheuer; Klaus Arbter; Guo-Qing Wei; J. Rüdiger Siewert; Gerd Hirzinger

BACKGROUND In laparoscopic surgery, the surgeon no longer has direct visual control of the operation area, and a camera assistant who maneuvers the laparoscope is essential. Problems of cooperation between the two naturally arise, and a robotic assistant that automatically controls the laparoscope can offer a highly desirable alternative to this situation. METHODS A self-guided robotic camera control system (SGRCCS) based on a color tracking method was developed, and its use was evaluated in 20 cases of laparoscopic cholecystectomy and compared with human camera control. RESULTS In 83% of the patients the procedures were successfully completed with the SGRCCS. Set-up time for the robot averaged 21 minutes, and the surgical time with and without the robot averaged 54 and 60 minutes, respectively. Using the robot instead of a human camera assistant significantly reduced both the frequency of camera corrections (2.2 per hour compared with 15.3 per hour) and the frequency of lens cleaning (1.0 per hour compared with 6.8 per hour). Subjective assessment by the surgeon revealed that the robot performed better than the human assistant in 71% of the cases. CONCLUSIONS In laparoscopic surgery, the SGRCCS offered optimal camera guidance and helped to maintain the surgeon's concentration during the operation.


international conference on robotics and automation | 1998

Active self-calibration of robotic eyes and hand-eye relationships with model identification

Guo-Qing Wei; Klaus Arbter; Gerd Hirzinger

We first review research results on camera self-calibration achieved in photogrammetry, robotics and computer vision. Then we propose a method for self-calibration of robotic hand cameras by means of active motion. By tracking a set of world points of unknown coordinates during robot motion, the internal parameters of the cameras (including distortions), the mounting parameters and the coordinates of the world points are estimated. The approach is fully autonomous, in that no initial guesses of the unknown parameters need to be provided by a human for the solution of the set of nonlinear equations. Sufficient conditions for a unique solution are derived in terms of controlled motion sequences. Methods to improve accuracy and robustness are proposed by means of best model identification and motion planning. Experimental results in both simulated and real environments are reported.
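To make the flavor of such a joint estimation concrete, here is a hedged sketch (not the paper's algorithm): a reprojection-error residual over the camera intrinsics, the hand-eye mounting transform and the unknown world points, handed to a generic nonlinear least-squares solver. The distortion-free pinhole model, the parameter packing and the frame conventions are assumptions for illustration; note also that a generic solver needs a starting point, whereas the paper derives its initialization automatically.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def residuals(params, hand_poses, observations, n_points):
        """Reprojection residuals for joint hand-camera self-calibration.

        params packs [fx, fy, cx, cy, hand->camera rotvec (3), translation (3),
        world points (3 * n_points)]. hand_poses[s] = (R, t) is the hand pose in
        the robot base frame at station s (known from robot kinematics), and
        observations maps (station, point index) to the tracked pixel (u, v).
        """
        fx, fy, cx, cy = params[:4]
        R_hc = Rotation.from_rotvec(params[4:7]).as_matrix()  # hand -> camera frame
        t_hc = params[7:10]
        pts = params[10:].reshape(n_points, 3)                # unknown world points

        res = []
        for (s, j), (u, v) in observations.items():
            R_bh, t_bh = hand_poses[s]
            p_hand = R_bh.T @ (pts[j] - t_bh)                 # point in hand frame
            p_cam = R_hc @ p_hand + t_hc                      # point in camera frame
            res.append(fx * p_cam[0] / p_cam[2] + cx - u)     # pinhole projection error
            res.append(fy * p_cam[1] / p_cam[2] + cy - v)
        return np.asarray(res)

    # solution = least_squares(residuals, x0, args=(hand_poses, observations, n_points))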


CVRMed-MRCAS '97 Proceedings of the First Joint Conference on Computer Vision, Virtual Reality and Robotics in Medicine and Medical Robotics and Computer-Assisted Surgery | 1997

Automatic tracking of laparoscopic instruments by color coding

Guo-Qing Wei; Klaus Arbter; Gerd Hirzinger

In this paper we describe an autonomous laparoscope guidance system for laparoscopic surgery. By analyzing the color histogram of typical laparoscopic images, we propose to code the instrument with a color that does not appear in the scene. In terms of image processing, we segment the color mark in the video image and use the location information to control a robot such that the tip of the color-coded surgical instrument is continuously shown in the central area of the monitor and, in the stereo case, a constant distance is maintained. The system is implemented using commercially available image-processing hardware and a robot. Clinical tests show the robustness and efficacy of the system.
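A minimal sketch of the image-processing side, using OpenCV purely for illustration: segment the marker color in HSV space, take the centroid of the segmented region, and turn its offset from the image center into a proportional camera-motion command. The HSV bounds and the gain are placeholders, not values from the paper.

    import cv2
    import numpy as np

    # Placeholder HSV range for the color mark; in the paper the coding color is
    # chosen from the image histogram so that it does not occur in the scene.
    LOWER, UPPER = np.array([95, 80, 80]), np.array([125, 255, 255])
    GAIN = 0.002  # assumed proportional gain, pixels -> normalized pan/tilt command

    def marker_offset(bgr_image):
        """Return the (dx, dy) pixel offset of the color mark from the image center."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)
        m = cv2.moments(mask)
        if m["m00"] == 0:                    # mark not visible
            return None
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        h, w = mask.shape
        return cx - w / 2.0, cy - h / 2.0

    def camera_command(bgr_image):
        """Proportional command that re-centers the instrument tip in the image."""
        offset = marker_offset(bgr_image)
        if offset is None:
            return 0.0, 0.0                  # hold position if the mark is lost
        return -GAIN * offset[0], -GAIN * offset[1]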


Archive | 2008

Motion Tracking for Minimally Invasive Robotic Surgery

Martin Groeger; Klaus Arbter; Gerd Hirzinger

Minimally invasive surgery is a modern surgical technique in which the instruments are inserted into the patient through small incisions. An endoscopic camera provides the view to the site of surgery inside the patient. While the patient benefits from strongly reduced tissue traumatisation, the surgeon has to cope with a number of disadvantages. These drawbacks arise from the fact that, in contrast to open surgery, direct contact and view to the field of surgery are lost in minimally invasive scenarios. A sophisticated robotic system can compensate for the increased demands posed to the surgeon and provide assistance for the complicated tasks. To enable the robotic system to provide assistance through partly autonomous tasks, e.g. by guiding the surgeon to a preoperatively planned situs or by moving the camera along the changing focus of surgery, knowledge of intraoperative changes inside the patient becomes important. Two main types of targets can be identified in endoscopic video images, namely instruments and organs. Depending on the type, different motion tracking strategies become advantageous. Tracking of image motion from endoscopic video images can be based solely on structure information provided by the object itself or can involve artificial landmarks to aid the tracking process. In the first case, the use of natural landmarks refers to the fact that the genuine structure of the target is used to find reference positions which can be tracked. This can involve intensity- or feature-based tracking strategies. In the second case of artificial landmarks, markers with a special geometry or colour can be used. This enables particular tracking strategies, making use of the distinctive property of these markers. This chapter describes different motion tracking strategies used to accomplish the task of motion detection in minimally invasive surgical environments. Two example scenarios are provided for which two different motion tracking strategies have been successfully implemented. Both are partly autonomous task scenarios, providing automated camera guidance for laparoscopic surgery and motion compensation of the beating heart.
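For the natural-landmark case, one common intensity- and feature-based approach (used here purely as an illustration, not necessarily the chapter's implementation) is to detect corner features on the target and follow them frame to frame with a pyramidal Lucas-Kanade tracker:

    import cv2

    def track_natural_landmarks(prev_gray, next_gray, prev_points=None):
        """Track corner features between two grayscale endoscopic frames.

        Detects Shi-Tomasi corners on the first frame (if none are given) and
        follows them with pyramidal Lucas-Kanade optical flow. Parameter values
        are illustrative defaults, not tuned ones.
        """
        if prev_points is None:
            prev_points = cv2.goodFeaturesToTrack(
                prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
        next_points, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, next_gray, prev_points, None,
            winSize=(21, 21), maxLevel=3)
        ok = status.ravel() == 1
        return prev_points[ok], next_points[ok]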


intelligent robots and systems | 1994

Task directed programming of sensor based robots

Bernhard Brunner; Klaus Arbter; Gerhard Hirzinger

We propose the so-called TeleSensor programming concept that uses sensory perception to achieve local autonomy in robotic manipulation. Sensor-based robot tasks are used to define elemental moves within a high-level programming environment. This approach is applicable in both the real robot's world and the simulated one. Besides the graphical off-line programming concept, the range of application lies especially in the field of teleoperation with large time delays. A shared autonomy concept is proposed that distributes intelligence between man and machine. The feasibility of graphically simulating the robot within its environment is extended by emulating different sensor functions to achieve a copy of the real system behaviour that is as faithful as possible. The programming paradigm is supported by a sophisticated graphical man-machine interface. Sensor fusion aspects with respect to autonomous sensor-controlled task execution are discussed, as well as the interaction between the real and the simulated system.
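To make the notion of a sensor-based elemental move concrete, here is a purely hypothetical sketch of how such a primitive could be expressed as a programming abstraction; the class, method and sensor names are invented for illustration and are not taken from the TeleSensor programming system.

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class ElementalMove:
        """Hypothetical primitive: a nominal motion paired with a sensor-based
        termination rule, so task programs can be composed from such moves and
        executed against either the real or the emulated (simulated) sensors."""
        name: str
        nominal_motion: Callable[[], None]         # commanded motion step
        success_condition: Callable[[Dict], bool]  # predicate on current sensor data

        def execute(self, read_sensors: Callable[[], Dict], max_steps: int = 1000) -> bool:
            for _ in range(max_steps):
                self.nominal_motion()
                if self.success_condition(read_sensors()):
                    return True
            return False

    # A task program is then a sequence of such primitives, e.g. (placeholders):
    # approach = ElementalMove("approach", step_towards_goal,
    #                          lambda s: s["distance_to_goal"] < 0.01)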


intelligent robots and systems | 2008

Robotic assembly of complex planar parts: An experimental evaluation

Paolo Robuffo Giordano; Andreas Stemmer; Klaus Arbter; Alin Albu-Schäffer

In this paper we present an experimental evaluation of automatic robotic assembly of complex planar parts. The torque-controlled DLR light-weight robot, equipped with an on-board camera (eye-in-hand configuration), is tasked with looking for given parts on a table, picking them up, and inserting them into the corresponding holes on a movable plate. Visual servoing techniques are used for fine positioning over the selected part/hole, while insertion is based on active compliance control of the robot and robust assembly planning in order to align the parts automatically with the hole. Execution of the complete task is validated through extensive experiments, and the performance of humans and of the robot is compared in terms of overall execution time.
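For the fine-positioning step, a standard image-based visual servoing law can be written as v = -λ L⁺ e, where e is the image-feature error and L the interaction matrix of the point features. The sketch below is a generic textbook version of that law, not the controller actually used on the DLR robot; point depths are assumed known.

    import numpy as np

    def ibvs_velocity(features, desired, depths, gain=0.5):
        """Classical image-based visual servoing: v = -gain * pinv(L) @ error.

        features, desired: (N, 2) normalized image coordinates (x, y) of point features.
        depths: (N,) estimated depths Z of the points (assumed available).
        Returns a 6-vector camera velocity (vx, vy, vz, wx, wy, wz).
        """
        L = []
        for (x, y), Z in zip(features, depths):
            # Interaction-matrix rows of an image point (x, y) at depth Z.
            L.append([-1 / Z, 0, x / Z, x * y, -(1 + x * x), y])
            L.append([0, -1 / Z, y / Z, 1 + y * y, -x * y, -x])
        error = (np.asarray(features) - np.asarray(desired)).reshape(-1)
        return -gain * np.linalg.pinv(np.asarray(L)) @ error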


Advanced Robotics | 2004

A micro-rover navigation and control system for autonomous planetary exploration

Klaus Landzettel; Bernhard-Michael Steinmetz; Bernhard Brunner; Klaus Arbter; Marc Pollefeys; Maarten Vergauwen; Ronny Moreas; Fuyi Xu; Leif Steinicke; Bernard Fontaine

This paper describes an end-to-end control system for autonomous navigation of a small vehicle at a remote place, e.g. in space for planetary exploration. Due to the realistic background of this study, the proposed method has to deal with limited knowledge about the environment as well as limited system resources and operational boundary conditions, especially a very large time delay in the communication between the ground control station and the space segment. To overcome these constraints the remote system has to act in a very autonomous way. Ground support minimizes the computational load of the remote system, and high-level information interchange reduces the communication bandwidth requirements.


international conference on robotics and automation | 1997

Active self-calibration of hand cameras and hand-eye relationships with motion planning

Guo-Qing Wei; Klaus Arbter; Gerd Hirzinger

In this paper we propose a method for self-calibration of robotic hand cameras by means of active motion of the robot. By tracking a set of world points of unknown coordinates, the internal parameters of the cameras (including lens distortions), the mounting parameters and the coordinates of the world points are estimated. The approach is fully autonomous, in that no initial guesses of the unknown parameters need to be provided by a human for the solution of the set of nonlinear equations. Sufficient conditions for a unique solution are derived in terms of controlled motion sequences. To improve the robustness of the calibration, we propose to identify the best lens-distortion model using an F-test. Furthermore, a method is proposed to minimize the effect of robot motion uncertainties by motion planning. Experimental results in both simulated and real environments are reported.
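The model-identification step can be illustrated with a generic nested-model F-test (an assumed form; the paper's exact statistic and thresholds may differ): the richer distortion model is accepted only if its reduction in residual error is statistically significant given the extra parameters it spends.

    from scipy.stats import f as f_dist

    def prefer_richer_model(rss_simple, p_simple, rss_rich, p_rich, n_obs, alpha=0.05):
        """F-test between nested calibration models (e.g., lens-distortion orders).

        rss_*: residual sums of squares after fitting each model.
        p_*:   number of free parameters of each model (p_rich > p_simple).
        n_obs: number of scalar measurement equations.
        Returns True if the richer model yields a significant improvement.
        """
        numerator = (rss_simple - rss_rich) / (p_rich - p_simple)
        denominator = rss_rich / (n_obs - p_rich)
        f_value = numerator / denominator
        threshold = f_dist.ppf(1.0 - alpha, p_rich - p_simple, n_obs - p_rich)
        return f_value > threshold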


international symposium on experimental robotics | 1997

Towards a new Robot Generation

Gerd Hirzinger; Klaus Arbter; Bernhard Brunner; Reinhard Koeppe

Key items in the development of a new smart robot generation are explained using DLR's recent activities in robotics research as examples. These items are the design of multisensory gripper and articulated hand systems, ultra-lightweight links and joint drive systems with integrated joint torque control, learning and self-improvement of the dynamical behaviour, modelling the environment using sensor fusion, and new sensor-based off-line programming techniques based on teaching by showing in a virtual environment.

Collaboration


Dive into Klaus Arbter's collaborations.

Top Co-Authors

Stefan Fuchs, German Aerospace Center

Guo-Qing Wei, German Aerospace Center

Wesley E. Snyder, North Carolina State University