Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Stefan Ulbrich is active.

Publications


Featured research published by Stefan Ulbrich.


Simulation, Modeling, and Programming for Autonomous Robots | 2010

OpenGRASP: a toolkit for robot grasping simulation

Beatriz León; Stefan Ulbrich; Rosen Diankov; Gustavo Puche; Markus Przybylski; Antonio Morales; Tamim Asfour; Sami Moisio; Jeannette Bohg; James J. Kuffner; Rüdiger Dillmann

Simulation is essential for various robotic research fields such as mobile robotics, motion planning, and grasp planning. For grasping in particular, there is no software simulation package that provides a holistic environment covering the variety of aspects associated with this problem. These aspects include the development and testing of new algorithms and the modeling of environments and robots, including the modeling of actuators, sensors, and contacts. In this paper, we present a new simulation toolkit for grasping and dexterous manipulation called OpenGRASP, addressing those aspects in addition to extensibility, interoperability, and public availability. OpenGRASP is based on a modular architecture that supports the creation and addition of new functionality and the integration of existing and widely used technologies and standards. In addition, a dedicated editor has been created for the generation and migration of such models. We demonstrate the current state of OpenGRASP's development and its application in a grasp evaluation environment.


IEEE-RAS International Conference on Humanoid Robots | 2014

Master Motor Map (MMM) — Framework and toolkit for capturing, representing, and reproducing human motion on humanoid robots

Ömer Terlemez; Stefan Ulbrich; Christian Mandery; Martin Do; Nikolaus Vahrenkamp; Tamim Asfour

We present an extended version of our work on the design and implementation of a reference model of the human body, the Master Motor Map (MMM), which serves as a unifying framework for capturing human motion, representing it in standard data structures and formats, and reproducing it on humanoid robots. The MMM combines a comprehensive kinematic and dynamic model of the human body with 104 DoF, including hands and feet, with procedures and tools for the unified capturing of human motion. We present online motion converters for mapping human and object motions to the MMM model, taking into account subject-specific anthropometric data, as well as for mapping MMM motion to a target robot kinematics. Experimental evaluation of the approach on VICON motion recordings demonstrates the benefits of the MMM as an important step towards standardized human motion representation and mapping to humanoid robots.


International Conference on Intelligent Autonomous Systems (IAS) | 2013

Simox: A Robotics Toolbox for Simulation, Motion and Grasp Planning

Nikolaus Vahrenkamp; Manfred Kröhnert; Stefan Ulbrich; Tamim Asfour; Giorgio Metta; Rüdiger Dillmann; Giulio Sandini

Software development plays a major role, alongside hardware setup and mechanical design, when it comes to building complex robots such as mobile manipulators or humanoids. Different requirements have to be addressed depending on the application. A low-level controller, for example, must be implemented for real-time use, whereas a task-planning component interacts with the robot at a higher abstraction level. Hence, developing robotics software is subject to several constraints, such as performance and robustness.


IEEE/RSJ International Conference on Intelligent Robots and Systems | 2011

The OpenGRASP benchmarking suite: An environment for the comparative analysis of grasping and dexterous manipulation

Stefan Ulbrich; Daniel Kappler; Tamim Asfour; Nikolaus Vahrenkamp; Alexander Bierbaum; Markus Przybylski; Rüdiger Dillmann

In this work, we present a new software environment for the comparative evaluation of algorithms for grasping and dexterous manipulation. The key aspect of its development is to provide a tool that allows the reproduction of well-defined experiments in real-life scenarios in every laboratory and, hence, benchmarks that pave the way for objective comparison and competition in the field of grasping. To achieve this, experiments are performed on a sound open-source software platform with an extendable structure, so that a wider range of benchmarks defined by robotics researchers can be included. The environment is integrated into the OpenGRASP toolkit, which is built upon the OpenRAVE project and includes grasp-specific extensions and a tool for the creation and integration of new robot models. Currently, benchmarks for grasp and motion planning are included as case studies, as well as a library of models of everyday domestic objects and a real-life scenario that features a humanoid robot acting in a kitchen.


IEEE-RAS International Conference on Humanoid Robots | 2009

Rapid learning of humanoid body schemas with Kinematic Bézier Maps

Stefan Ulbrich; Vicente Ruiz de Angulo; Tamim Asfour; Carme Torras; Rüdiger Dillmann

This paper addresses the problem of hand-eye coordination and, more specifically, tool-eye recalibration of humanoid robots. Inspired by results from neuroscience, a novel method to learn the forward kinematics model as part of the body schema of humanoid robots is presented. By making extensive use of techniques borrowed from the field of computer-aided geometry, the proposed Kinematic Bézier Maps (KB-Maps) permit reducing this complex problem to a linearly solvable, although high-dimensional, one. Therefore, in the absence of noise, an exact kinematic model is obtained. This leads to rapid learning which, unlike in other approaches, is combined with good extrapolation capabilities. These promising theoretical advantages have been validated in simulation, and the applicability of the method to real hardware has been demonstrated through experiments on the humanoid robot ARMAR-IIIa.
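The central claim above, that kinematics learning becomes a linearly solvable problem, can be illustrated with a toy sketch. This is not the authors' KB-Map implementation; it only shows how a model that is linear in its unknown control parameters (the role played by the Bézier control points) can be recovered exactly by least squares from noise-free samples:

```python
import numpy as np

# Toy 1-DoF "kinematics": the end-effector position as a function of a
# joint angle q, expressed as a linear combination of fixed basis
# functions (simple trigonometric features here). The unknown weights w
# stand in for the Bezier control points of a KB-Map.
def features(q):
    return np.stack([np.ones_like(q), np.cos(q), np.sin(q)], axis=1)

rng = np.random.default_rng(0)
w_true = np.array([0.1, 0.7, 0.4])        # ground-truth parameters
q = rng.uniform(-np.pi, np.pi, size=20)   # 20 sampled robot movements
x = features(q) @ w_true                  # noise-free observations

# Learning reduces to a linear least-squares problem; without noise,
# the exact model is recovered.
w_est, *_ = np.linalg.lstsq(features(q), x, rcond=None)
print(np.allclose(w_est, w_true))
```

In the actual paper the basis is built from Bézier/Bernstein constructions over many degrees of freedom, but the "fit by linear least squares" structure is the same.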


Proceedings of the 2015 Joint MORSE/VAO Workshop on Model-Driven Robot Software Engineering and View-based Software-Engineering | 2015

A Domain-Specific Language (DSL) for Integrating Neuronal Networks in Robot Control

Georg Hinkel; Henning Groenda; Lorenzo Vannucci; Oliver Denninger; Nino Cauli; Stefan Ulbrich

Although robotics has made progress with respect to adaptability and interaction in natural environments, it cannot yet match the capabilities of biological systems. A promising approach to this problem is to create biologically plausible robot controllers that use detailed neuronal networks. However, this approach leaves a large gap between the neuronal network and its connection to the robot on the one side and the technical implementation on the other. Existing approaches neglect to bridge this gap between the disciplines and their different abstraction layers, and instead hand-craft the simulations manually. This makes tight technical integration cumbersome and error-prone, impairing round-trip validation and academic progress. Our approach maps the problem to model-driven engineering techniques and defines a domain-specific language (DSL) for integrating biologically plausible neuronal networks into robot control algorithms. It provides different levels of abstraction and sets an interface standard for integration. Our approach is implemented in the Neurorobotics Platform (NRP) of the Human Brain Project (HBP). Its practical applicability is validated in a minimalist experiment inspired by the Braitenberg vehicles, based on the simulation of a four-wheeled Husky robot controlled by a neuronal network.
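To make the idea of such a DSL concrete, the sketch below shows what a registered "transfer function" bridging a spiking network and a robot controller might look like. The decorator name, rate inputs, and command format are purely illustrative assumptions, not the actual NRP API:

```python
# Hypothetical sketch of a DSL-style transfer function in the spirit of
# the approach described above. All names here are illustrative.
registry = []

def transfer_function(func):
    """Register a function invoked once per simulation step to map
    neural activity to robot commands."""
    registry.append(func)
    return func

@transfer_function
def braitenberg(left_rate, right_rate):
    # Map sensory neuron firing rates (Hz) to wheel velocities:
    # crossed excitation steers the vehicle towards a stimulus.
    return {"left_wheel": 0.25 * right_rate,
            "right_wheel": 0.25 * left_rate}

# One simulated control step: the rates would come from the neural
# simulator, the returned command would go to the robot simulator.
cmd = registry[0](left_rate=20.0, right_rate=5.0)
print(cmd)  # {'left_wheel': 1.25, 'right_wheel': 5.0}
```

The point of the DSL is that such mappings are declared at a high abstraction level, while the framework handles the synchronization and data exchange between the two simulators.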


IEEE Transactions on Systems, Man, and Cybernetics | 2012

Kinematic Bézier Maps

Stefan Ulbrich; Vicente Ruiz de Angulo; Tamim Asfour; Carme Torras; Rüdiger Dillmann

The kinematics of a robot with many degrees of freedom is a very complex function. Learning this function for a large workspace with good precision requires a huge number of training samples, i.e., robot movements. In this paper, we introduce the Kinematic Bézier Map (KB-Map), a parameterizable model without the generality of other systems, but whose structure readily incorporates some of the geometric constraints of a kinematic function. In this way, the number of training samples required is drastically reduced. Moreover, the simplicity of the model reduces learning to solving a linear least-squares problem. Systematic experiments have been carried out, showing the excellent interpolation and extrapolation capabilities of KB-Maps and their relatively low sensitivity to noise.


IEEE Transactions on Neural Networks | 2012

General Robot Kinematics Decomposition Without Intermediate Markers

Stefan Ulbrich; Vicente Ruiz de Angulo; Tamim Asfour; Carme Torras; Rüdiger Dillmann

The calibration of serial manipulators with high numbers of degrees of freedom by means of machine learning is a complex and time-consuming task. With the help of a simple strategy, this complexity can be drastically reduced and the speed of the learning procedure can be increased. When the robot is virtually divided into shorter kinematic chains, these subchains can be learned separately and hence much more efficiently than the complete kinematics. Such decompositions, however, require either the possibility to capture the poses of all end effectors of all subchains at the same time, or they are limited to robots that fulfill special constraints. In this paper, an alternative decomposition is presented that does not suffer from these limitations. An offline training algorithm is provided in which the composite subchains are learned sequentially with dedicated movements. A second training scheme is provided to train composite chains simultaneously and online. Both schemes can be used together with many machine learning algorithms. In the simulations, an algorithm using parameterized self-organizing maps modified for online learning and Gaussian mixture models (GMMs) were chosen to show the correctness of the approach. The experimental results show that, using a twofold decomposition, the number of samples required to reach a given precision is reduced to twice the square root of the original number.
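The quantitative claim at the end, that a twofold decomposition cuts the required samples from N to about 2·√N, is easy to sanity-check numerically; the sample counts below are illustrative, not taken from the paper:

```python
import math

# Illustrative only: if a monolithic kinematics model needed N samples
# for a given precision, the twofold decomposition reported above needs
# roughly 2 * sqrt(N), since each half-chain is learned on its own.
for n in (10_000, 1_000_000):
    print(n, "->", int(2 * math.sqrt(n)))
# 10000 -> 200, 1000000 -> 2000
```

The savings therefore grow with the size of the original training set, which is why the strategy pays off most for manipulators with many degrees of freedom.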


Frontiers in Neurorobotics | 2017

Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform

Egidio Falotico; Lorenzo Vannucci; Alessandro Ambrosano; Ugo Albanese; Stefan Ulbrich; Juan Camilo Vasquez Tieck; Georg Hinkel; Jacques Kaiser; Igor Peric; Oliver Denninger; Nino Cauli; Murat Kirtay; Arne Roennau; Gudrun Klinker; Axel von Arnim; Luc Guyot; Daniel Peppicelli; Pablo Martínez-Cañada; Eduardo Ros; Patrick Maier; Sandro Weber; Manuel J. Huber; David A. Plecher; Florian Röhrbein; Stefan Deser; Alina Roitberg; Patrick van der Smagt; Rüdiger Dillmann; Paul Levi; Cecilia Laschi

Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Due to the complexity of these brain models, which at the current stage cannot deal with real-time constraints, it is not possible to embed them into a real-world task. Rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, so far no tool has allowed one to easily establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure that allows them to connect brain models to detailed simulations of robot bodies and environments, and to use the resulting neurorobotic systems for in silico experimentation. In order to simplify the workflow and reduce the level of programming skills required, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain-body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform, developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). In its current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models.
We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking experiment embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments.


IEEE-RAS International Conference on Humanoid Robots | 2015

A visual tracking model implemented on the iCub robot as a use case for a novel neurorobotic toolkit integrating brain and physics simulation

Lorenzo Vannucci; Alessandro Ambrosano; Nino Cauli; Ugo Albanese; Egidio Falotico; Stefan Ulbrich; Lars Pfotzer; Georg Hinkel; Oliver Denninger; Daniel Peppicelli; Luc Guyot; Axel von Arnim; Stefan Deser; Patrick Maier; Rüdiger Dillmann; Gudrun Klinker; Paul Levi; Alois Knoll; Marc-Oliver Gewaltig; Cecilia Laschi

Developing neuro-inspired computing paradigms that mimic nervous system function is an emerging field of research that fosters our model-based understanding of the biological system and targets technical applications in artificial systems. The computational power of simulated brain circuits makes them a very promising tool for the development of brain-controlled robots. Early phases of robotic controller development make extensive use of simulators, as they are easy, fast, and cheap tools. However, for developing robotic controllers that encompass brain models, a tool that includes both neural simulation and physics simulation has been missing. Such a tool would require the capability of orchestrating and synchronizing both simulations as well as managing the exchange of data between them. The Neurorobotics Platform (NRP) aims to fill this gap through an integrated software toolkit that enables an experimenter to design and execute a virtual experiment with a simulated robot using customized brain models. As a use case for the NRP, the iCub robot has been integrated into the platform and connected to a spiking neural network. In particular, visual tracking experiments have been conducted in order to demonstrate the potential of such a platform.
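The orchestration requirement described above (synchronizing a brain simulator and a physics simulator while exchanging data between them) can be sketched as a lock-step loop. This is a minimal stand-in, not the NRP implementation, with toy "brain" and "body" functions assumed for illustration:

```python
# Minimal sketch of lock-stepping a neural simulator and a physics
# simulator: each iteration advances both by one step dt and swaps
# motor commands and sensor readings between them.
def run_closed_loop(neural_step, physics_step, steps, dt=0.02):
    sensor = 0.0
    trace = []
    for _ in range(steps):
        motor = neural_step(sensor, dt)   # brain: sensor -> motor command
        sensor = physics_step(motor, dt)  # body: motor -> new sensor reading
        trace.append((motor, sensor))
    return trace

# Stand-in simulators: a proportional "brain" tracking a target of 1.0
# and a first-order "body" integrating the motor command over dt.
state = {"x": 0.0}

def brain(sensor, dt):
    return 2.0 * (1.0 - sensor)

def body(motor, dt):
    state["x"] += dt * motor
    return state["x"]

trace = run_closed_loop(brain, body, steps=200)
print(round(trace[-1][1], 3))  # the tracked value converges towards 1.0
```

In the real platform each step would instead advance a spiking-network simulation and a rigid-body simulation, with transfer functions doing the conversion at the boundary, but the synchronization pattern is the same.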

Collaboration


Dive into Stefan Ulbrich's collaborations.

Top Co-Authors

Rüdiger Dillmann (Center for Information Technology)
Tamim Asfour (Karlsruhe Institute of Technology)
Arne Roennau (Center for Information Technology)
Igor Peric (Center for Information Technology)
Jacques Kaiser (Center for Information Technology)
Lorenzo Vannucci (Sant'Anna School of Advanced Studies)
Oliver Denninger (Center for Information Technology)
Cecilia Laschi (Sant'Anna School of Advanced Studies)
Egidio Falotico (Sant'Anna School of Advanced Studies)
Nino Cauli (Sant'Anna School of Advanced Studies)