Publications


Featured research published by Christian Daniel.


Intelligent Robots and Systems | 2012

Learning concurrent motor skills in versatile solution spaces

Christian Daniel; Gerhard Neumann; Jan Peters

Future robots need to autonomously acquire motor skills in order to reduce their reliance on human programming. Many motor skill learning methods concentrate on learning a single solution for a given task. However, discarding information about additional solutions during learning unnecessarily limits autonomy. Such favoring of single solutions often requires re-learning of motor skills when the task, the environment, or the robot's body changes in a way that renders the learned solution infeasible. Future robots need to be able to adapt to such changes and, ideally, have a large repertoire of movements to cope with such problems. In contrast to current methods, our approach simultaneously learns multiple distinct solutions for the same task, such that a partial degeneration of this solution space does not prevent the successful completion of the task. In this paper, we present a complete framework that is capable of learning different solution strategies for a real robot Tetherball task.
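As a rough illustration of the idea of keeping several solutions alive at once, the sketch below (not the authors' algorithm) maintains a small set of Gaussian search distributions over movement parameters and updates each one with reward-weighted samples, so that distinct optima of a toy reward are found in parallel; the reward function and all parameters are assumptions made for illustration.

```python
import numpy as np

# Toy reward with two distinct optima, standing in for a task that admits
# several qualitatively different solutions (all values are assumptions).
def reward(theta):
    return max(np.exp(-np.sum((theta - 2.0) ** 2)),
               np.exp(-np.sum((theta + 2.0) ** 2)))

rng = np.random.default_rng(0)
dim, n_options, n_samples, beta = 2, 3, 30, 5.0

# One Gaussian search distribution per candidate solution.
means = rng.normal(0.0, 3.0, size=(n_options, dim))
stds = np.full((n_options, dim), 1.0)

for it in range(50):
    for k in range(n_options):
        thetas = means[k] + stds[k] * rng.standard_normal((n_samples, dim))
        rewards = np.array([reward(t) for t in thetas])
        # Reward-weighted update: each candidate refines its own solution
        # instead of all candidates collapsing onto a single one.
        w = np.exp(beta * (rewards - rewards.max()))
        w /= w.sum()
        means[k] = w @ thetas
        stds[k] = np.sqrt(w @ (thetas - means[k]) ** 2) + 1e-3

print("learned solution means:\n", np.round(means, 2))
```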


International Conference on Robotics and Automation | 2013

Learning sequential motor tasks

Christian Daniel; Gerhard Neumann; Oliver Kroemer; Jan Peters

Many real robot applications require the sequential use of multiple distinct motor primitives. This requirement implies the need to learn the individual primitives as well as a strategy to select the primitives sequentially. Such hierarchical learning problems are commonly either treated as one complex monolithic problem, which is hard to learn, or as separate tasks learned in isolation. However, there exists a strong link between the robot's strategy and its motor primitives. Consequently, a consistent framework is needed that can learn jointly on the level of the individual primitives and the robot's strategy. We present a hierarchical learning method which improves individual motor primitives and, simultaneously, learns how to combine these motor primitives sequentially to solve complex motor tasks. We evaluate our method on the game of robot hockey, which is difficult to learn both in terms of the required motor primitives and in terms of its strategic elements.
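The hierarchical structure described above, an upper-level strategy that selects a primitive and lower-level primitives with their own parameters, can be sketched roughly as follows; this is a simplified reward-weighted update on a toy problem, not the paper's method, and every quantity in it is assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup (all values assumed): 3 discrete situations, 2 motor primitives,
# each primitive parameterized by a Gaussian over 2D movement parameters.
n_states, n_prims, dim, beta = 3, 2, 2, 5.0
gating = np.full((n_states, n_prims), 1.0 / n_prims)   # strategy pi(o|s)
prim_means = rng.normal(size=(n_prims, dim))           # primitive parameters
prim_std = 0.5

def reward(state, prim, theta):
    # Each situation prefers one primitive and one parameter setting.
    target = np.array([state - 1.0, 1.0 - state])
    bonus = 1.0 if prim == state % n_prims else 0.0
    return bonus + np.exp(-np.sum((theta - target) ** 2))

for it in range(30):
    S, O, TH, R = [], [], [], []
    for _ in range(100):                                # collect a batch of episodes
        s = rng.integers(n_states)
        o = rng.choice(n_prims, p=gating[s])            # strategy picks a primitive
        theta = prim_means[o] + prim_std * rng.standard_normal(dim)
        S.append(s); O.append(o); TH.append(theta); R.append(reward(s, o, theta))
    S, O, TH, R = np.array(S), np.array(O), np.array(TH), np.array(R)
    w = np.exp(beta * (R - R.max()))                    # reward weights
    # Joint reward-weighted update of the strategy and the primitives.
    for s in range(n_states):
        for o in range(n_prims):
            gating[s, o] = w[(S == s) & (O == o)].sum() + 1e-6
        gating[s] /= gating[s].sum()
    for o in range(n_prims):
        if w[O == o].sum() > 1e-9:
            prim_means[o] = (w[O == o] @ TH[O == o]) / w[O == o].sum()

print("learned strategy pi(o|s):\n", np.round(gating, 2))
```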


International Conference on Robotics and Automation | 2015

Towards learning hierarchical skills for multi-phase manipulation tasks

Oliver Kroemer; Christian Daniel; Gerhard Neumann; Herke van Hoof; Jan Peters

Most manipulation tasks can be decomposed into a sequence of phases, where the robot's actions have different effects in each phase. The robot can perform actions to transition between phases and, thus, alter the effects of its actions, e.g., grasping an object in order to then lift it. The robot can thus reach a phase that affords the desired manipulation. In this paper, we present an approach for exploiting the phase structure of tasks in order to learn manipulation skills more efficiently. Starting with human demonstrations, the robot learns a probabilistic model of the phases and the phase transitions. The robot then employs model-based reinforcement learning to create a library of motor primitives for transitioning between phases. The learned motor primitives generalize to new situations and tasks. Given this library, the robot uses a value function approach to learn a high-level policy for sequencing the motor primitives. The proposed method was successfully evaluated on a real robot performing a bimanual grasping task.
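A heavily simplified illustration of the phase idea: assuming discrete phases and primitives (the paper works with learned probabilistic models over continuous states), transition probabilities are estimated from made-up demonstration counts and a high-level sequencing policy is then obtained with value iteration.

```python
import numpy as np

# Hypothetical phases and primitives for a grasp-and-lift task.
phases = ["approach", "grasped", "lifted"]
prims = ["reach", "close_gripper", "lift"]
n_p, n_a = len(phases), len(prims)

# Made-up demonstration transitions: (phase, primitive, next_phase).
demos = [(0, 0, 0), (0, 1, 1), (1, 2, 2), (0, 1, 1), (1, 2, 2),
         (1, 0, 1), (0, 0, 0), (1, 2, 2), (0, 1, 0)]

# Maximum-likelihood transition model P(phase' | phase, primitive).
counts = np.ones((n_p, n_a, n_p)) * 0.1            # light smoothing
for s, a, s2 in demos:
    counts[s, a, s2] += 1.0
P = counts / counts.sum(axis=2, keepdims=True)

# Value iteration: reward 1 for reaching the "lifted" goal phase.
r = np.array([0.0, 0.0, 1.0])
V = np.zeros(n_p)
for _ in range(50):
    Q = r[None, None, :] + 0.9 * V[None, None, :]
    V = (P * Q).sum(axis=2).max(axis=1)

# High-level policy: which primitive to execute in each phase.
policy = (P * (r + 0.9 * V)[None, None, :]).sum(axis=2).argmax(axis=1)
for s in range(n_p):
    print(f"in phase '{phases[s]}' -> execute '{prims[policy[s]]}'")
```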


Robotics: Science and Systems | 2014

Active Reward Learning

Christian Daniel; Malte Viering; Jan Metz; Oliver Kroemer; Jan Peters

While reward functions are an essential component of many robot learning methods, defining such functions remains a hard problem in many practical applications. For tasks such as grasping, there are no reliable success measures available. Defining reward functions by hand requires extensive task knowledge and often leads to undesired emergent behavior. Instead, we propose to learn the reward function through active learning, querying human expert knowledge for a subset of the agent’s rollouts. We introduce a framework in which a traditional learning algorithm interplays with the reward learning component, so that the evolution of the action learner guides the queries of the reward learner. We demonstrate results of our method on a robot grasping task and show that the learned reward function generalizes to a similar task.
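A hedged sketch of this interplay between action learner and reward learner: the policy improves from rewards predicted by a learned model, while the model is refined from a handful of queried ratings. The "human" rating function, the novelty-based query rule, and all parameters below are assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(2)

def human_rating(outcome):
    # Stands in for the human expert (an assumption).
    return float(np.exp(-np.sum((outcome - np.array([1.0, -0.5])) ** 2)))

def predict_reward(outcome, labeled_x, labeled_y):
    # Simple kernel smoother over the few labeled rollout outcomes.
    if not labeled_x:
        return 0.0
    d = np.linalg.norm(np.array(labeled_x) - outcome, axis=1)
    w = np.exp(-d ** 2) + 1e-12
    return float((w / w.sum()) @ np.array(labeled_y))

mean, std = np.zeros(2), np.ones(2)                 # episodic search distribution
labeled_x, labeled_y = [], []

for it in range(40):
    outcomes = mean + std * rng.standard_normal((20, 2))
    preds = np.array([predict_reward(o, labeled_x, labeled_y) for o in outcomes])
    # Query the human only for the rollout farthest from labeled data
    # (a simple novelty-based query rule, not the paper's criterion).
    if labeled_x:
        novelty = np.min(np.linalg.norm(
            outcomes[:, None, :] - np.array(labeled_x)[None, :, :], axis=2), axis=1)
    else:
        novelty = np.ones(len(outcomes))
    q = int(novelty.argmax())
    labeled_x.append(outcomes[q]); labeled_y.append(human_rating(outcomes[q]))
    preds[q] = labeled_y[-1]
    # Reward-weighted policy update using mostly predicted (unqueried) rewards.
    w = np.exp(5.0 * (preds - preds.max())); w /= w.sum()
    mean = w @ outcomes
    std = np.sqrt(w @ (outcomes - mean) ** 2) + 1e-2

print("policy mean after active reward learning:", np.round(mean, 2),
      "| human queries used:", len(labeled_y))
```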


Machine Learning | 2016

Probabilistic inference for determining options in reinforcement learning

Christian Daniel; Herke van Hoof; Jan Peters; Gerhard Neumann

Tasks that require many sequential decisions or complex solutions are hard to solve using conventional reinforcement learning algorithms. Based on the semi-Markov decision process (SMDP) setting and the option framework, we propose a model which aims to alleviate these concerns. Instead of learning a single monolithic policy, the agent learns a set of simpler sub-policies as well as the initiation and termination probabilities for each of those sub-policies. While existing option learning algorithms frequently require manual specification of components such as the sub-policies, we present an algorithm which infers all relevant components of the option framework from data. Furthermore, the proposed approach is based on parametric option representations and works well in combination with current policy search methods, which are particularly well suited for continuous real-world tasks. We present results on SMDPs with discrete as well as continuous state-action spaces. The results show that the presented algorithm can combine simple sub-policies to solve complex tasks and can improve learning performance on simpler tasks.
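To make the option framework concrete, the toy rollout below uses an activation policy pi(o|s), per-option sub-policies, and termination probabilities on a small chain MDP. All components are hand-specified here, whereas the paper's contribution is to infer exactly these components from data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hand-specified option components (made up for illustration).
n_states, n_options, n_actions = 4, 2, 2
activation = np.full((n_states, n_options), 0.5)            # pi(o|s)
sub_policy = rng.dirichlet(np.ones(n_actions), size=(n_options, n_states))
termination = np.full((n_states, n_options), 0.3)            # beta(s,o)

def step(state, action):
    # Toy chain MDP: action 1 moves right; reward for the final transition.
    return min(n_states - 1, state + action), float(state == n_states - 2 and action == 1)

state, option, total = 0, None, 0.0
for t in range(20):
    # Terminate the current option stochastically, then (re)select one.
    if option is None or rng.random() < termination[state, option]:
        option = rng.choice(n_options, p=activation[state])
    action = rng.choice(n_actions, p=sub_policy[option, state])
    state, r = step(state, action)
    total += r

print("return of one option-based rollout:", total)
```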


Autonomous Robots | 2015

Active reward learning with a novel acquisition function

Christian Daniel; Oliver Kroemer; Malte Viering; Jan Metz; Jan Peters

Reward functions are an essential component of many robot learning methods. Defining such functions, however, remains hard in many practical applications. For tasks such as grasping, there are no reliable success measures available. Defining reward functions by hand requires extensive task knowledge and often leads to undesired emergent behavior. We introduce a framework in which the robot simultaneously learns an action policy and a model of the reward function by actively querying a human expert for ratings. We represent the reward model using a Gaussian process and evaluate several classical acquisition functions (AFs) from the Bayesian optimization literature in this context. Furthermore, we present a novel AF, expected policy divergence. We demonstrate results of our method for a robot grasping task and show that the learned reward function generalizes to a similar task. Additionally, we evaluate the proposed novel AF on a real robot pendulum swing-up task.
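A minimal sketch of the reward-model part, assuming rollout outcomes can be summarized by a single feature: a hand-rolled Gaussian process is fit to a few simulated ratings and new queries are chosen with a classical UCB acquisition function. The paper's novel expected-policy-divergence acquisition is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def true_rating(x):
    # Stands in for the human expert's rating (an assumption).
    return np.exp(-((x - 0.7) ** 2) / 0.05)

def rbf(a, b, ell=0.15):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

# Candidate rollout outcomes, summarized by a 1D feature (an assumption).
candidates = np.linspace(0.0, 1.0, 200)
X = np.array([0.1, 0.9])                                   # initial human queries
y = true_rating(X)

for query in range(6):
    # Gaussian-process posterior over the reward function.
    K = rbf(X, X) + 1e-4 * np.eye(len(X))
    Ks = rbf(candidates, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    std = np.sqrt(np.maximum(var, 1e-12))
    # Classical UCB acquisition: query where reward could plausibly be high.
    acq = mu + 2.0 * std
    x_new = candidates[acq.argmax()]
    X = np.append(X, x_new)
    y = np.append(y, true_rating(x_new))

print("queried outcomes:", np.round(X, 2))
print("best predicted outcome:", round(float(candidates[mu.argmax()]), 2))
```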


Frontiers in Computational Neuroscience | 2014

Learning modular policies for robotics

Gerhard Neumann; Christian Daniel; Alexandros Paraschos; Andras Gabor Kupcsik; Jan Peters

A promising idea for scaling robot learning to more complex tasks is to use elemental behaviors as building blocks to compose more complex behavior. Ideally, such building blocks are used in combination with a learning algorithm that is able to learn to select, adapt, sequence and co-activate the building blocks. While there has been a lot of work on approaches that support one of these requirements, no learning algorithm exists that unifies all these properties in one framework. In this paper, we present our work on a unified approach for learning such a modular control architecture. We introduce new policy search algorithms that are based on information-theoretic principles and are able to learn to select, adapt and sequence the building blocks. Furthermore, we develop a new representation for the individual building blocks that supports co-activation and principled ways of adapting the movement. Finally, we summarize our experiments on learning modular control architectures in simulation and with real robots.
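As a sketch of the information-theoretic principle behind these policy search algorithms, the episodic REPS-style update below bounds the KL divergence between successive search distributions by solving a one-dimensional dual problem; the toy task, the bound epsilon, and the crude grid search over the dual variable are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

def reps_weights(R, epsilon=0.5):
    # Relative entropy policy search: maximize reward subject to a KL bound
    # epsilon on how far the new search distribution may move from the old one.
    R = R - R.max()
    def dual(eta):                                      # REPS dual function
        return eta * epsilon + eta * np.log(np.mean(np.exp(R / eta)))
    etas = np.logspace(-3, 3, 400)                      # crude 1D minimization
    eta = etas[np.argmin([dual(e) for e in etas])]
    w = np.exp(R / eta)
    return w / w.sum()

def reward(theta):                                      # toy episodic task
    return -np.sum((theta - np.array([1.0, -2.0])) ** 2)

mean, cov = np.zeros(2), np.eye(2) * 4.0
for it in range(30):
    thetas = rng.multivariate_normal(mean, cov, size=50)
    w = reps_weights(np.array([reward(t) for t in thetas]))
    mean = w @ thetas                                   # weighted ML update
    diff = thetas - mean
    cov = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(2)

print("learned mean (target is [1, -2]):", np.round(mean, 2))
```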


International Symposium on Neural Networks | 2013

Autonomous reinforcement learning with hierarchical REPS

Christian Daniel; Gerhard Neumann; Jan Peters

Future intelligent robots will need to interact with uncertain and changing environments. One key aspect of allowing robotic agents to adapt to such situations is to enable them to learn multiple solution strategies for one problem, so that the agent can remain flexible and employ alternative solutions even if the preferred solution is no longer viable. We propose a unifying framework that allows the use of hierarchical policies and can thus learn multiple solutions at once. We build our method on relative entropy policy search, an information-theoretic policy search approach to reinforcement learning, and evaluate our method on a real robot system.


Intelligent Robots and Systems | 2015

Reinforcement learning vs human programming in tetherball robot games

Simone Parisi; Hany Abdulsamad; Alexandros Paraschos; Christian Daniel; Jan Peters

Reinforcement learning of motor skills is an important challenge on the way to endowing robots with the ability to learn a wide range of skills and solve complex tasks. However, comparing reinforcement learning against human programming is not straightforward. In this paper, we create a motor learning framework consisting of state-of-the-art components in motor skill learning and compare it to a manually designed program on the task of robot tetherball. We use dynamical motor primitives for representing the robot's trajectories and relative entropy policy search to train the motor framework and improve its behavior by trial and error. These algorithmic components allow for high-quality skill learning, while the experimental setup enables an accurate evaluation of our framework, as robot players can compete against each other. In the complex game of robot tetherball, we show that our learning approach outperforms and wins a match against a high-quality hand-crafted system.
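A minimal one-dimensional dynamic movement primitive rollout, sketching the kind of trajectory representation mentioned above; the gains, basis functions, and "learned" weights are invented for illustration and differ from the paper's setup.

```python
import numpy as np

# Illustrative DMP parameters (all values assumed).
alpha, beta, alpha_x, tau, dt = 25.0, 6.25, 3.0, 1.0, 0.002
y0, g = 0.0, 1.0                                       # start and goal position
centers = np.exp(-alpha_x * np.linspace(0, 1, 10))     # basis centers in phase x
widths = np.full(10, 50.0)
w = np.random.default_rng(6).normal(0.0, 50.0, size=10)  # "learned" weights (made up)

y, yd, x, traj = y0, 0.0, 1.0, []
for t in np.arange(0.0, 1.5, dt):
    psi = np.exp(-widths * (x - centers) ** 2)
    f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)   # nonlinear forcing term
    ydd = (alpha * (beta * (g - y) - yd) + f) / tau      # transformation system
    yd += ydd * dt
    y += yd * dt
    x += -alpha_x * x / tau * dt                         # canonical system (phase)
    traj.append(y)

print("final position (goal is 1.0):", round(traj[-1], 3))
```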


Neural Information Processing Systems | 2013

Probabilistic Movement Primitives

Alexandros Paraschos; Christian Daniel; Jan Peters; Gerhard Neumann

Collaboration


Christian Daniel's most frequent co-authors and their affiliations.

Top Co-Authors

Oliver Kroemer (Technische Universität Darmstadt)
Alexandros Paraschos (Technische Universität Darmstadt)
Herke van Hoof (Technische Universität Darmstadt)
Jan Metz (Technische Universität Darmstadt)
Malte Viering (Technische Universität Darmstadt)