Christian Goerick
Honda
Publications
Featured research published by Christian Goerick.
ieee-ras international conference on humanoid robots | 2005
Michael Gienger; Herbert Janssen; Christian Goerick
We present a whole-body motion control algorithm for humanoid robots. It is based on the framework of Liegeois and solves the redundant inverse kinematics problem on the velocity level. We control the hand positions as well as the hand and head attitude. The attitude is described with a novel 2-DOF description suited for symmetrical problems. Task-specific command elements can be assigned to the command vector at any time, enabling the system to control one or multiple effectors and to seamlessly switch between such modes while generating a smooth, natural motion. Further, kinematic constraints can be assigned to individual degrees of freedom. The underlying kinematic model does not consider the leg joints explicitly. Nevertheless, the method can be used in combination with an independent balance or walking control system, which reduces the complexity of the overall system control. We show how to incorporate walking into this control scheme and present experimental results on ASIMO.
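The core of such velocity-level redundancy resolution in the Liegeois framework is the pseudoinverse solution plus a null-space term for a secondary objective. A minimal sketch (the matrices, gain, and mid-range objective here are illustrative, not the paper's actual model):

```python
import numpy as np

def redundant_ik_step(J, dx, q, q_mid, alpha=0.1):
    """One velocity-level IK step for a redundant manipulator:
    primary task via the pseudoinverse, secondary joint-centering
    objective projected into the task null space (Liegeois-style)."""
    J_pinv = np.linalg.pinv(J)
    grad_H = -(q - q_mid)                    # gradient of a joint mid-range objective
    N = np.eye(len(q)) - J_pinv @ J          # null-space projector of the task Jacobian
    return J_pinv @ dx + alpha * N @ grad_H  # commanded joint velocities
```

Because the secondary term lives in the null space of J, the primary task velocity dx is reproduced exactly whenever it is feasible.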
ieee-ras international conference on humanoid robots | 2005
Christian Goerick; Heiko Wersing; Inna Mikhailova; Mark Dunn
This work is concerned with a framework for visual object recognition in real-world tasks. Our approach is motivated by biological findings on the representation of the space around the body, the so-called peripersonal space. We show that the principles behind those findings can lead to a natural structuring of object recognition tasks in artificial systems. We demonstrate this by the supervised learning and recognition of 20 complex-shaped objects from unsegmented visual input.
international conference on robotics and automation | 2008
Behzad Dariush; Michael Gienger; Bing Jian; Christian Goerick; Kikuo Fujimura
Many advanced motion control strategies developed in robotics use captured human motion data as a valuable source of examples to simplify the process of programming or learning complex robot motions. Direct and online control of robots from observed human motion has several inherent challenges. The most important may be the representation of the large number of mechanical degrees of freedom involved in the execution of movement tasks. Attempting to map all such degrees of freedom from a human to a humanoid is a formidable task from an instrumentation and sensing point of view. More importantly, such an approach is incompatible with mechanisms in the central nervous system that are believed to organize or simplify the control of these degrees of freedom during motion execution and motor learning. Rather than specifying the desired motion of every degree of freedom for the purpose of motion control, it is important to describe motion by low-dimensional motion primitives that are defined in Cartesian (or task) space. In this paper, we formulate the human-to-humanoid retargeting problem as a task space control problem. The control objective is to track desired task descriptors while satisfying constraints such as joint limits, velocity limits, collision avoidance, and balance. The retargeting algorithm generates the joint space trajectories that are commanded to the robot. We present experimental and simulation results of the retargeting control algorithm on the Honda humanoid robot ASIMO.
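A task-space retargeting loop of this kind can be sketched in a few lines: track a Cartesian task descriptor and enforce joint and velocity limits on the resulting joint command. This is a simplified stand-in (gains, clamping strategy, and the omission of collision and balance constraints are assumptions of the sketch, not the paper's formulation):

```python
import numpy as np

def retarget_step(J, x_des, x_cur, q, q_min, q_max, dq_max, dt=0.01, kp=5.0):
    """One step of a simplified task-space retargeting loop:
    track a desired task descriptor x_des while respecting
    joint position and velocity limits."""
    dx = kp * (x_des - x_cur)                   # task-space tracking velocity
    dq = np.linalg.pinv(J) @ dx                 # least-squares joint velocity
    dq = np.clip(dq, -dq_max, dq_max)           # enforce velocity limits
    q_new = np.clip(q + dq * dt, q_min, q_max)  # enforce joint limits
    return q_new
```

In the full problem, hard clamping would be replaced by constraint handling inside the control law, so that limits shape the motion rather than truncate it.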
international conference on robotics and automation | 2009
Manuel Mühlig; Michael Gienger; Sven Hellbach; Jochen J. Steil; Christian Goerick
Recent advances in the field of humanoid robotics increase the complexity of the tasks that such robots can perform. This makes it increasingly difficult and inconvenient to program these tasks manually. Furthermore, humanoid robots, in contrast to industrial robots, will eventually have to behave within a social environment. Therefore, it must be possible to extend the robot's abilities in an easy and natural way. To address these requirements, this work investigates the topic of imitation learning of motor skills. The focus lies on providing a humanoid robot with the ability to learn new bi-manual tasks through the observation of object trajectories. For this, an imitation learning framework is presented, which allows the robot to learn the important elements of an observed movement task by application of probabilistic encoding with Gaussian Mixture Models. The learned information is used to initialize an attractor-based movement generation algorithm that optimizes the reproduced movement towards the fulfillment of additional criteria, such as collision avoidance. Experiments performed with the humanoid robot ASIMO show that the proposed system is suitable for transferring information from a human demonstrator to the robot. These results provide a good starting point for more complex and interactive learning tasks.
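The probabilistic encoding step can be illustrated with a small sketch: several noisy demonstrations of an object trajectory are stacked as (time, position) samples and fit with a Gaussian Mixture Model. The synthetic data, feature choice, and component count below are assumptions for illustration only:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Five noisy demonstrations of the same (time, position) trajectory.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
demos = [np.column_stack([t, np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(100)])
         for _ in range(5)]
data = np.vstack(demos)  # (500, 2) samples of (time, position)

# Encode the demonstrations with a GMM; each component captures the
# local mean and covariance (i.e. the allowed variability) of a
# trajectory segment.
gmm = GaussianMixture(n_components=4, covariance_type="full").fit(data)
```

For reproduction, such a model is typically queried by conditioning position on time (Gaussian Mixture Regression), which yields a mean trajectory plus a variability envelope that a movement optimizer can exploit.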
International Journal of Humanoid Robotics | 2009
Behzad Dariush; Michael Gienger; Arjun Arumbakkam; Youding Zhu; Bing Jian; Kikuo Fujimura; Christian Goerick
Transferring motion from a human demonstrator to a humanoid robot is an important step toward developing robots that are easily programmable and that can replicate or learn from observed human motion. The so-called motion retargeting problem has been well studied, and several off-line solutions exist based on optimization approaches that rely on pre-recorded human motion data collected from a marker-based motion capture system. From the perspective of human-robot interaction, there is a growing interest in online motion transfer, particularly without using markers. Such requirements have placed stringent demands on retargeting algorithms and limited the potential use of off-line and pre-recorded methods. To address these limitations, we present an online task space control theoretic retargeting formulation to generate robot joint motions that adhere to the robot's joint limit constraints, joint velocity constraints, and self-collision constraints. The inputs to the proposed method include low-dimensional normalized human motion descriptors, detected and tracked using a vision-based key-point detection and tracking algorithm. The proposed vision algorithm does not rely on markers placed on anatomical landmarks, nor does it require special instrumentation or calibration. The current implementation requires a depth image sequence, which is collected from a single time-of-flight imaging device. The feasibility of the proposed approach is shown by means of online experimental results on the Honda humanoid robot ASIMO.
intelligent robots and systems | 2006
Tobias Rodemann; Martin Heckmann; Frank Joublin; Christian Goerick; Björn Schölling
We present a sound localization system that operates in real time, calculates three binaural cues (IED, IID, and ITD) and integrates them in a biologically inspired fashion into a combined localization estimate. Position information is furthermore integrated over frequency channels and time. The localization system controls a head motor to fovealize on and track the dominant sound source. Due to an integrated noise-reduction module, the system shows robust localization capabilities even in noisy conditions. Real-time performance is achieved by multi-threaded parallel operation across different machines, using a timestamp-based synchronization scheme to compensate for processing delays.
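Of the three cues, the interaural time difference (ITD) is the easiest to sketch: it can be estimated from the peak of the cross-correlation between the two ear signals. The function below is a minimal single-channel illustration, not the paper's per-frequency-channel processing:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (ITD) in seconds from
    the cross-correlation peak of two microphone signals. Positive
    values mean the right signal lags the left one."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # sample lag at the correlation peak
    return -lag / fs
```

In a localization system the resulting time difference would then be mapped to an azimuth angle via the head geometry, and fused with the intensity- and envelope-based cues.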
systems man and cybernetics | 2008
Jens Schmudderich; Volker Willert; Julian Eggert; Sven Rebhan; Christian Goerick; Gerhard Sagerer; Edgar Körner
For the interaction of a mobile robot with a dynamic environment, the estimation of object motion is desired while the robot is walking and/or turning its head. In this paper, we describe a system that accomplishes this task by combining depth from a stereo camera with the camera movement computed from the robot kinematics in order to stabilize the camera images. Moving objects are detected by applying optical flow to the stabilized images, followed by a filtering method that incorporates both prior knowledge about the accuracy of the measurement and the uncertainties of the measurement process itself. The efficiency of this system is demonstrated in a dynamic real-world scenario with a walking humanoid robot.
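The stabilize-then-detect idea can be reduced to a toy example: undo the known camera motion (here an integer-pixel translation, standing in for the kinematics-based warp) before differencing frames, so that only independently moving objects remain. The real system uses stereo depth, full 3D ego-motion, and optical flow; everything below is a deliberately planar sketch:

```python
import numpy as np

def detect_motion(prev, cur, cam_shift, thresh=0.2):
    """Compensate a known integer-pixel camera translation between two
    grayscale frames, then flag pixels whose residual difference
    exceeds a threshold as candidate moving objects."""
    stabilized = np.roll(prev, shift=cam_shift, axis=(0, 1))  # undo ego-motion
    diff = np.abs(cur - stabilized)                           # residual change
    return diff > thresh                                      # moving-pixel mask
```

Static scene structure cancels out after stabilization, so the mask responds only to objects that moved on their own between the two frames.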
ieee-ras international conference on humanoid robots | 2007
Marc Toussaint; Michael Gienger; Christian Goerick
In this paper, we propose a novel method to generate optimal robot motion based on a sequence of attractor dynamics in task space. This is motivated by the biological evidence that movements in the motor cortex of animals are encoded in a similar fashion, and by the need for compact movement representations on which efficient optimization can be performed. We represent the motion as a sequence of attractor points acting in the task space of the motion. Based on this compact and robust representation, we present a scheme to generate optimal movements. Unlike traditional optimization techniques, this optimization is performed on the low-dimensional representation of the attractor points and includes the underlying control loop itself as subject to optimization. We incorporate optimality criteria such as the smoothness of the motion, collision distance measures, or joint limit avoidance. The optimization problem is solved efficiently by employing the analytic equations of the overall system. Due to the fast convergence, the method is suited for dynamic environments, including the interaction with humans. We present the details of the optimization scheme and give a description of the chosen optimization criteria. Simulation and experimental results on the humanoid robot ASIMO underline the potential of the proposed approach.
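The attractor-point representation can be sketched with first-order dynamics: the task-space state is repeatedly pulled toward the currently active attractor, and the sequence of attractors shapes the whole trajectory. The gain, timing, and first-order form below are illustrative assumptions, not the paper's dynamics:

```python
import numpy as np

def attractor_rollout(x0, attractors, steps_per_attractor=50, k=0.2):
    """Roll out a task-space trajectory through a sequence of attractor
    points: at each step the state moves a fraction k of the remaining
    distance toward the active attractor (first-order dynamics)."""
    x, traj = np.asarray(x0, dtype=float), []
    for a in attractors:
        target = np.asarray(a, dtype=float)
        for _ in range(steps_per_attractor):
            x = x + k * (target - x)  # converge exponentially toward the attractor
            traj.append(x.copy())
    return np.array(traj)
```

The key property exploited by the paper is that the motion is fully determined by the few attractor points, so an optimizer only has to search over this low-dimensional parameterization.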
International Journal of Neural Systems | 2007
Heiko Wersing; Stephan Kirstein; Michael Götting; Holger Brandl; Mark Dunn; Inna Mikhailova; Christian Goerick; Jochen J. Steil; Helge Ritter; Edgar Körner
We present a biologically motivated architecture for object recognition that is capable of online learning of several objects based on interaction with a human teacher. The system combines biological principles such as appearance-based representation in topographical feature detection hierarchies and context-driven transfer between different levels of object memory. Training can be performed in an unconstrained environment by presenting objects in front of a stereo camera system and labeling them by speech input. The learning is fully online and thus avoids an artificial separation of the interaction into training and test phases. We demonstrate the performance on a challenging ensemble of 50 objects.
intelligent robots and systems | 2009
Manuel Mühlig; Michael Gienger; Jochen J. Steil; Christian Goerick
Previous work [1] shows that the movement representation in task spaces offers many advantages for learning object-related and goal-directed movement tasks through imitation. It allows reducing the dimensionality of the learned data and simplifies the correspondence problem that results from the different kinematic structures of teacher and robot. Further, the task space representation provides a first generalization, for example with respect to differing absolute positions, if bi-manual movements are represented relative to each other. Although task spaces are widely used, even if they are not mentioned explicitly, they are mostly defined a priori. This work is a step towards an automatic selection of task spaces. Observed movements are mapped into a pool of possibly even conflicting task spaces, and we present methods that analyze this task space pool in order to acquire the task space descriptors that match the observation best. As statistical measures cannot explain importance for all kinds of movements, the presented selection scheme incorporates additional criteria such as an attention-based measure. Further, we introduce methods that make a significant step from purely statistically driven task space selection towards model-based movement analysis using a simulation of a complex human model. Effort and discomfort of the human teacher are analyzed and used as hints for important task elements. All methods are validated with real-world data, gathered using color tracking with a stereo vision system and a VICON motion capture system.
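The statistical part of such a selection scheme can be sketched simply: task spaces in which repeated demonstrations vary least are taken as most relevant for the task. This is a purely variance-based stand-in for the richer criteria (attention, effort/discomfort) the paper adds on top; the data layout is an assumption of the sketch:

```python
import numpy as np

def rank_task_spaces(demos_by_space):
    """Rank candidate task spaces by inter-demonstration consistency.
    demos_by_space maps a task-space name to an array of shape
    (n_demos, n_steps, dim); lower variance across demonstrations
    is read as higher task relevance."""
    scores = {name: float(np.mean(np.var(d, axis=0)))
              for name, d in demos_by_space.items()}
    return sorted(scores, key=scores.get)  # most consistent space first
```

For a bi-manual task, for instance, the relative hand position typically varies far less across demonstrations than the absolute positions, so the relative task space would be ranked first.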