

Publications


Featured research published by Herbert Janssen.


IEEE-RAS International Conference on Humanoid Robots | 2005

Task-oriented whole body motion for humanoid robots

Michael Gienger; Herbert Janssen; Christian Goerick

We present a whole body motion control algorithm for humanoid robots. It is based on the framework of Liegeois and solves the redundant inverse kinematics problem at the velocity level. We control the hand positions as well as the hand and head attitude. The attitude is described with a novel 2-dof description suited for symmetrical problems. Task-specific command elements can be assigned to the command vector at any time, thus enabling the system to control one or multiple effectors and to seamlessly switch between such modes while generating a smooth, natural motion. Further, kinematic constraints can be assigned to individual degrees of freedom. The underlying kinematic model does not consider the leg joints explicitly. Nevertheless, the method can be used in combination with an independent balance or walking control system, thus reducing the complexity of the complete system control. We show how to incorporate walking into this control scheme and present experimental results on ASIMO.
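The velocity-level redundancy resolution described here (the Liegeois framework) amounts to a pseudoinverse task solution plus a cost descent projected into the task nullspace. A minimal sketch follows; the function name `redundant_ik_step`, the joint-space cost gradient `grad_H`, and the gain `alpha` are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def redundant_ik_step(J, x_dot, grad_H, alpha=0.1):
    """One velocity-level IK step with nullspace optimization.

    J      : (m, n) task Jacobian, m < n (redundant robot)
    x_dot  : (m,) desired task-space velocity
    grad_H : (n,) gradient of a joint-space cost H(q) to descend
    """
    J_pinv = np.linalg.pinv(J)
    # Particular solution: realize the commanded task velocity.
    q_dot_task = J_pinv @ x_dot
    # Homogeneous solution: project the cost descent into the task
    # nullspace, so it cannot disturb the commanded hand/head motion.
    N = np.eye(J.shape[1]) - J_pinv @ J
    q_dot_null = N @ (-alpha * grad_H)
    return q_dot_task + q_dot_null
```

Because the second term lies in the nullspace of `J`, it shapes the joint posture without changing the achieved task velocity.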


IEEE Transactions on Robotics | 2011

Active 3D Object Localization Using a Humanoid Robot

Alexander Andreopoulos; Stephan Hasler; Heiko Wersing; Herbert Janssen; John K. Tsotsos; Edgar Körner

We study the problem of actively searching for an object in a three-dimensional (3-D) environment under the constraint of a maximum search time, using a visually guided humanoid robot with 26 degrees of freedom. The inherent intractability of the problem is discussed, and a greedy strategy for selecting the best next viewpoint is employed. We describe a target probability updating scheme approximating the optimal solution to the problem, providing an efficient solution to the selection of the best next viewpoint. We employ a hierarchical recognition architecture, inspired by human vision, that uses contextual cues for attending to the view-tuned units at the proper intrinsic scales and for active control of the robotic platform's sensor coordinate frame, which also gives us control of the extrinsic image scale and achieves the proper sequence of pathognomonic views of the scene. The recognition model makes no particular assumptions on shape properties like texture and is trained by showing the object by hand to the robot. Our results demonstrate the feasibility of using state-of-the-art vision-based systems for efficient and reliable object localization in an indoor 3-D environment.
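The greedy next-best-view idea with a target probability update can be illustrated on a discretized map. This is a simplification of the paper's scheme; `update_target_map`, `best_next_view`, and the detection probability `p_detect` are hypothetical names and values:

```python
import numpy as np

def update_target_map(prior, observed, p_detect=0.9):
    """Bayesian update of a target-presence probability map.

    prior    : (n,) probability that the target is in each cell
    observed : (n,) boolean mask of cells inside the current view
    After a view with no detection, observed cells are down-weighted
    by the miss probability (1 - p_detect); the map is renormalized.
    """
    post = prior.copy()
    post[observed] *= (1.0 - p_detect)
    return post / post.sum()

def best_next_view(prob_map, views):
    """Greedy step: pick the view covering the most remaining mass."""
    gains = [prob_map[v].sum() for v in views]
    return int(np.argmax(gains))
```

Each failed observation shifts probability mass away from the inspected cells, so the greedy choice naturally moves the sensor toward unexplored regions.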


IEEE-RAS International Conference on Humanoid Robots | 2006

Real-Time Self Collision Avoidance for Humanoids by means of Nullspace Criteria and Task Intervals

Hisashi Sugiura; Michael Gienger; Herbert Janssen; Christian Goerick

We describe a new method for real-time collision avoidance for humanoid robots. Instead of explicitly modifying the commands, our method influences the control system by means of a nullspace criterion and a task interval. The nullspace criterion is driven by a virtual force acting on a joint center vector that defines the minimum of a potential function in joint space. The task interval defines the target constraints in task coordinates and allows the avoidance system to specify deviations from the given target position. The advantages of this indirect method are that smooth trajectories can be achieved and that the underlying motion control may use any trajectory generation method able to satisfy the constraints given by the collision avoidance. It is most useful for highly redundant robots like typical humanoids. The method assures smooth collision-free movement on the humanoid robot ASIMO in real-time interaction, even in cases where the dynamical constraints of legged walking apply.
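The two mechanisms named in the abstract can be sketched in a few lines: the virtual force shifts the joint-space potential minimum, and the task interval lets the controller track a range rather than a point. The function names and the gain are placeholder assumptions, not the paper's implementation:

```python
import numpy as np

def shifted_joint_center(q_center, jt_force, gain=0.5):
    """Shift the joint center (the minimum of the joint-space potential)
    by a virtual repulsive force mapped into joint space.

    q_center : nominal joint center vector
    jt_force : J^T f, the virtual force at the closest point between two
               body segments, transposed into joint coordinates
    """
    return q_center + gain * jt_force

def clamp_to_task_interval(x_target, lo, hi):
    """Task interval: the avoidance may move the target anywhere within
    [lo, hi], so the motion control tracks the interval, not a point."""
    return np.minimum(np.maximum(x_target, lo), hi)
```

Because both mechanisms act on the inputs of the motion controller rather than on the commanded trajectory itself, the resulting motion stays smooth.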


IEEE-RAS International Conference on Humanoid Robots | 2008

Expectation-driven autonomous learning and interaction system

Bram Bolder; Holger Brandl; Martin Heracles; Herbert Janssen; Inna Mikhailova; Jens Schmüdderich; Christian Goerick

We introduce our latest autonomous learning and interaction system instance ALIS 2. It comprises different sensing modalities for visual (depth blobs, planar surfaces, motion) and auditory (speech, localization) signals and self-collision free behavior generation on the robot ASIMO. The system design emphasizes the split into a completely autonomous reactive layer and an expectation generation layer. Different feature channels can be classified and named with arbitrary speech labels in on-line learning sessions. The feasibility of the proposed approach is shown by interaction experiments.


IEEE-RAS International Conference on Humanoid Robots | 2007

Towards incremental hierarchical behavior generation for humanoids

Christian Goerick; Bram Bolder; Herbert Janssen; Michael Gienger; Hisashi Sugiura; Mark Dunn; Inna Mikhailova; Tobias Rodemann; Heiko Wersing; Stephan Kirstein

The contribution of this paper is twofold. First, we present a new conceptual framework for modeling incremental hierarchical behavior control systems for humanoids. The biological motivation and the key elements are discussed. Second, we show our current instance of such a behavior control system, called ALIS. It is designed according to the concepts presented within the framework. The system is integrated with the humanoid ASIMO and comprises visual saliency computation and auditory source localization for gaze selection, a visual proto-object based fixation and short term memory of the current visual field of view, the online learning of visual appearances of such proto-objects and an interaction oriented control of the humanoid body including walking. Humans can freely interact with the system in real-time. Experiments show the feasibility of the chosen ansatz.


IEEE-RAS International Conference on Humanoid Robots | 2009

Interactive online multimodal association for internal concept building in humanoids

Christian Goerick; Jens Schmüdderich; Bram Bolder; Herbert Janssen; Michael Gienger; Achim Bendig; Martin Heckmann; Tobias Rodemann; Holger Brandl; Xavier Domont; Inna Mikhailova

In this paper we report the results of our research on learning and developing cognitive systems. The results are integrated into ALIS 3, our Autonomous Learning and Interacting System version 3, realized on the humanoid robot ASIMO. The results presented address crucial issues in autonomously acquiring mental concepts in artifacts. The major contributions are the following: We researched distributed learning in various modalities in which the local learning decisions mutually support each other. Associations between the different modalities (speech, vision, behavior) are learnt online, thus addressing the issue of grounding semantics. The data from the different modalities is uniformly represented in a hybrid data representation for global decisions and local novelty detection. On the behavior generation side, proximity-sensor-driven reflexive grasping and releasing have been integrated with a planning approach based on whole body motion control. The feasibility of the chosen approach is demonstrated in interactive experiments with the integrated system. The system interactively learns visually defined classes like "left", "right", "up", "down", "large", "small", learns corresponding auditory labels and creates associations linking the auditory labels to the visually defined classes or basic behaviors for building internal concepts.


IEEE-RAS International Conference on Humanoid Robots | 2008

Organizing multimodal perception for autonomous learning and interactive systems

Jens Schmuedderich; Holger Brandl; Bram Bolder; Martin Heracles; Herbert Janssen; Inna Mikhailova; Christian Goerick

A stable perception of the environment is a crucial prerequisite for researching the learning of semantics from human-robot interaction and also for the generation of behavior relying on the robot's perception. In this paper, we propose several contributions to this research field. To organize visual perception, the concept of proto-objects is used for the representation of scene elements. These proto-objects are created by several different sources and can be combined to provide the means for interactive autonomous behavior generation. They are also processed by several classifiers, extracting different visual properties. The robot learns to associate speech labels with these properties by using the outcome of the classifiers for online training of a speech recognition system. To ease the combination of visual and speech classifier outputs, a necessity for the online training and a basis for future learning of semantics, a common representation for all classifier results is used. This uniform handling of multimodal information provides the necessary flexibility for further extension. We show the feasibility of the proposed approach by interactive experiments with the humanoid robot ASIMO.


Robot and Human Interactive Communication | 2009

Teaching a humanoid robot: Headset-free speech interaction for audio-visual association learning

Martin Heckmann; Holger Brandl; Jens Schmuedderich; Xavier Domont; Bram Bolder; Inna Mikhailova; Herbert Janssen; Michael Gienger; Achim Bendig; Tobias Rodemann; Mark Dunn; Frank Joublin; Christian Goerick

Based on inspirations from infant development, we present a system which learns associations between acoustic labels and visual representations in interaction with its tutor. The system is integrated with a humanoid robot. Except for a few trigger phrases to start learning, all acoustic representations are learned online and in interaction. Similarly, in the visual domain the clusters are not predefined and are fully learned online. In contrast to other interactive systems, the interaction with the acoustic environment is solely based on the two microphones mounted on the robot's head. In this paper we give an overview of all key elements of the system and focus on the challenges arising from the headset-free learning of speech labels. In particular, we present a mechanism for auditory attention integrating bottom-up and top-down information for the segmentation of the acoustic stream. The performance of the system is evaluated based on offline tests of individual parts of the system and an analysis of the online behavior.


Autonome Mobile Systeme | 2007

Predictive Behavior Generation — A Sensor-Based Walking and Reaching Architecture for Humanoid Robots

Michael Gienger; Bram Bolder; Mark Dunn; Hisashi Sugiura; Herbert Janssen; Christian Goerick

This paper presents a sensor-based walking and reaching architecture for humanoid robots. It enables the robot to interact with its environment using a smooth whole body motion control driven by stabilized visual targets. Interactive selection mechanisms are used to switch between behavior alternatives for searching or tracking objects as well as different whole body motion strategies for reaching. The decision between different motion strategies is made based on internal predictions that are evaluated by parallel running instances of virtual whole-body controllers. The results show robust object tracking and a smooth interaction behavior that includes a large variety of whole-body postures.
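The prediction-based decision between motion strategies can be caricatured as evaluating each candidate with an internal rollout and choosing the cheapest; `select_strategy` and `simulate_cost` are placeholder names, a stand-in for the paper's parallel-running virtual whole-body controllers:

```python
def select_strategy(strategies, simulate_cost):
    """Evaluate each candidate whole-body strategy with an internal
    prediction (a virtual controller rollout) and pick the cheapest.

    strategies    : iterable of strategy identifiers
    simulate_cost : callable mapping a strategy to a predicted cost
    """
    costs = {name: simulate_cost(name) for name in strategies}
    return min(costs, key=costs.get)
```

Running the candidate controllers only virtually means the real robot commits to a strategy before moving, which is what allows smooth switching between, e.g., reaching while standing and reaching with a step.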


Intelligent Robots and Systems | 2009

Decentralized planning for dynamic motion generation of multi-link robotic systems

Yuichi Tazaki; Hisashi Sugiura; Herbert Janssen; Christian Goerick

This paper presents a decentralized planning method for generating dynamic whole body motions of multi-link robots, including humanoids. First, a robotic system is modeled as a general multi-body dynamical system. The planning problem of a multi-body system is then formulated as a constraint resolution problem. The problem is solved by means of an extended Gauss-Seidel method, which is capable of handling multiple constraint groups with different priorities. The method is demonstrated in whole-body motion generation tasks of a humanoid, both in numerical simulations and in experiments with a real humanoid robot.
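The extended Gauss-Seidel resolution can be illustrated with a minimal projection sweep over linear equality constraints. The function name, the fixed sweep count, and the "higher priority projected last" ordering are simplifying assumptions for illustration; the paper's method handles general multi-body constraints:

```python
import numpy as np

def prioritized_gauss_seidel(constraint_groups, x0, iters=100):
    """Iteratively resolve linear equality constraints a . x = b,
    grouped by priority.

    constraint_groups : list of priority groups, ordered from lowest to
                        highest priority; each group is a list of (a, b)
                        pairs. Within a sweep, later (higher-priority)
                        projections dominate.
    x0                : initial solution estimate
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        for group in constraint_groups:        # low priority first
            for a, b in group:
                a = np.asarray(a, dtype=float)
                # Project x onto the hyperplane a . x = b.
                x = x + a * (b - a @ x) / (a @ a)
    return x
```

For consistent constraints the sweeps converge to a point satisfying all of them; for conflicting ones, the last-projected (highest-priority) group wins, which mimics the priority handling described above.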

Collaboration


Dive into Herbert Janssen's collaboration.

Top Co-Authors

Volker Hinrichsen (Technische Universität Darmstadt)
C. Drefke (Technische Universität Darmstadt)
Hermann Winner (Technische Universität Darmstadt)
J. Stegner (Technische Universität Darmstadt)