Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Rudolf Lioutikov is active.

Publication


Featured research published by Rudolf Lioutikov.


international conference on robotics and automation | 2015

Learning multiple collaborative tasks with a mixture of Interaction Primitives

Marco Ewerton; Gerhard Neumann; Rudolf Lioutikov; Heni Ben Amor; Jan Peters; Guilherme Maeda

Robots that interact with humans must learn to not only adapt to different human partners but also to new interactions. Such a form of learning can be achieved by demonstrations and imitation. A recently introduced method to learn interactions from demonstrations is the framework of Interaction Primitives. While this framework is limited to representing and generalizing a single interaction pattern, in practice, interactions between a human and a robot can consist of many different patterns. To overcome this limitation, this paper proposes a Mixture of Interaction Primitives to learn multiple interaction patterns from unlabeled demonstrations. Specifically, the proposed method uses Gaussian Mixture Models of Interaction Primitives to model nonlinear correlations between the movements of the different agents. We validate our algorithm with two experiments involving interactive tasks between a human and a lightweight robotic arm. In the first, we compare our proposed method with conventional Interaction Primitives in a toy problem scenario where the robot and the human are not linearly correlated. In the second, we present a proof-of-concept experiment where the robot assists a human in assembling a box.
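The conditioning idea in this abstract can be illustrated with a small, self-contained sketch. All numbers, dimensions, and the two "patterns" below are hypothetical, and the model is deliberately simplified: each interaction pattern is a joint Gaussian over stacked [human; robot] parameters, responsibilities are computed from the human marginal, and the most responsible component is conditioned on the observed human part.

```python
import numpy as np

# Hypothetical mixture: two interaction patterns, each a joint Gaussian
# over stacked [human; robot] parameters (diagonal covariances for brevity).
means = [np.array([0.0, 0.0, 1.0, 1.0]),    # pattern 1
         np.array([3.0, 3.0, -1.0, -1.0])]  # pattern 2
covs = [np.eye(4) * 0.2 for _ in means]
priors = np.array([0.5, 0.5])
d_h = 2  # the first d_h dimensions describe the human

def gauss_pdf(x, mu, cov):
    diff = x - mu
    norm = np.sqrt((2 * np.pi) ** len(mu) * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / norm

def condition_on_human(obs_h):
    # Responsibilities come from the human marginal of each component.
    resp = np.array([p * gauss_pdf(obs_h, mu[:d_h], cov[:d_h, :d_h])
                     for p, mu, cov in zip(priors, means, covs)])
    resp /= resp.sum()
    # Condition the most responsible component's Gaussian on the human part.
    k = int(np.argmax(resp))
    mu, cov = means[k], covs[k]
    gain = cov[d_h:, :d_h] @ np.linalg.inv(cov[:d_h, :d_h])
    return mu[d_h:] + gain @ (obs_h - mu[:d_h]), resp

robot_mean, resp = condition_on_human(np.array([3.1, 2.9]))
```

An observation near [3, 3] makes pattern 2 responsible, so the conditioned robot parameters land near that pattern's robot mean; a full mixture-of-primitives system would apply the same conditioning to basis-function weights rather than raw positions.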


ieee-ras international conference on humanoid robots | 2014

Learning interaction for collaborative tasks with probabilistic movement primitives

Guilherme Maeda; Marco Ewerton; Rudolf Lioutikov; Heni Ben Amor; Jan Peters; Gerhard Neumann

This paper proposes a probabilistic framework based on movement primitives for robots that work in collaboration with a human coworker. Since the human coworker can execute a variety of unforeseen tasks, a requirement of our system is that the robot assistant must be able to adapt and learn new skills on demand, without the need for an expert programmer. Thus, this paper leverages the framework of imitation learning and its application to human-robot interaction using the concept of Interaction Primitives (IPs). We introduce the use of Probabilistic Movement Primitives (ProMPs) to devise an interaction method that both recognizes the action of a human and generates the appropriate movement primitive of the robot assistant. We evaluate our method in experiments using a lightweight arm interacting with a human partner and also using motion capture trajectories of two humans assembling a box. The advantages of ProMPs in relation to the original formulation for interaction are exposed and compared.
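A minimal sketch of the ProMP idea, under assumed simplifications (one degree of freedom, normalized radial basis functions, synthetic "demonstrations"): each trajectory is modeled as y(t) = Phi(t) w, a weight vector w is fit per demonstration, and a Gaussian over the weights yields a distribution over trajectories.

```python
import numpy as np

def rbf_features(ts, n_basis=8, width=0.02):
    """Normalized radial basis features over phase in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-(ts[:, None] - centers[None, :]) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)

ts = np.linspace(0.0, 1.0, 50)
Phi = rbf_features(ts)

# Synthetic "demonstrations": noisy sine-shaped trajectories.
rng = np.random.default_rng(1)
demos = [np.sin(np.pi * ts) + 0.05 * rng.standard_normal(len(ts))
         for _ in range(10)]

# One weight vector per demonstration by least squares, then a Gaussian
# over the weights: this pair (mu_w, Sigma_w) is the ProMP.
W = np.stack([np.linalg.lstsq(Phi, y, rcond=None)[0] for y in demos])
mu_w = W.mean(axis=0)
Sigma_w = np.cov(W.T) + 1e-6 * np.eye(W.shape[1])

mean_traj = Phi @ mu_w                                   # mean trajectory
traj_var = np.einsum('tb,bc,tc->t', Phi, Sigma_w, Phi)   # per-step variance
```

The interaction method in the paper extends this by stacking human and robot dimensions into one weight distribution, so that observing the human conditions the robot's part of the distribution.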


Autonomous Robots | 2017

Probabilistic movement primitives for coordination of multiple human–robot collaborative tasks

Guilherme Maeda; Gerhard Neumann; Marco Ewerton; Rudolf Lioutikov; Oliver Kroemer; Jan Peters

This paper proposes an interaction learning method for collaborative and assistive robots based on movement primitives. The method allows for both action recognition and human–robot movement coordination. It uses imitation learning to construct a mixture model of human–robot interaction primitives. This probabilistic model allows the assistive trajectory of the robot to be inferred from human observations. The method is scalable in relation to the number of tasks and can learn nonlinear correlations between the trajectories that describe the human–robot interaction. We evaluated the method experimentally with a lightweight robot arm in a variety of assistive scenarios, including the coordinated handover of a bottle to a human, and the collaborative assembly of a toolbox. Potential applications of the method are personal caregiver robots, control of intelligent prosthetic devices, and robot coworkers in factories.


ieee-ras international conference on humanoid robots | 2015

Probabilistic segmentation applied to an assembly task

Rudolf Lioutikov; Gerhard Neumann; Guilherme Maeda; Jan Peters

Movement primitives are a well established approach for encoding and executing robot movements. While the primitives themselves have been extensively researched, the concept of movement primitive libraries has not received as much attention. Libraries of movement primitives represent the skill set of an agent and can be queried and sequenced in order to solve specific tasks. The goal of this work is to segment unlabeled demonstrations into an optimal set of skills. Our novel approach segments the demonstrations while learning a probabilistic representation of movement primitives. The method differs from current approaches by taking advantage of the often neglected mutual dependencies between the segments contained in the demonstrations and the primitives to be encoded, thereby improving the combined quality of both segmentation and skill learning. Furthermore, our method allows incorporating domain-specific insights using heuristics, which are subsequently evaluated and assessed through probabilistic inference methods. We demonstrate our method on a real robot application, where the robot segments demonstrations of a chair assembly task into a skill library. The library is subsequently used to assemble the chair in an order not present in the demonstrations.
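The interplay between heuristic cut proposals and their probabilistic assessment can be caricatured in a few lines. This is not the paper's inference scheme: here a "primitive" is just a cubic fit, the data is synthetic, and a cut is kept whenever splitting reduces a crudely penalized squared error.

```python
import numpy as np

def fit_cost(y):
    """Cost of encoding a segment with one 'primitive' (here: a cubic fit)."""
    t = np.linspace(0.0, 1.0, len(y))
    coef = np.polyfit(t, y, deg=3)
    resid = y - np.polyval(coef, t)
    return np.sum(resid ** 2)

def segment(y, cut_candidates, penalty=0.5):
    """Keep a heuristic cut only if splitting there lowers the penalized
    cost; the penalty discourages adding primitives without benefit."""
    cuts = []
    for c in cut_candidates:
        if fit_cost(y[:c]) + fit_cost(y[c:]) + penalty < fit_cost(y):
            cuts.append(c)
    return cuts

# A synthetic demonstration: one full sine-wave movement, then rest.
t1 = np.linspace(0.0, 1.0, 100)
y = np.concatenate([np.sin(2 * np.pi * t1), np.zeros(100)])
cuts = segment(y, cut_candidates=[50, 100, 150])
```

The actual method jointly infers segments and primitive parameters, so segmentation quality and skill quality inform each other rather than being scored independently as above.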


ISRR (2) | 2018

A Probabilistic Framework for Semi-autonomous Robots Based on Interaction Primitives with Phase Estimation

Guilherme Maeda; Gerhard Neumann; Marco Ewerton; Rudolf Lioutikov; Jan Peters

This paper proposes an interaction learning method suited for semi-autonomous robots that work with or assist a human partner. The method aims at generating a collaborative trajectory of the robot as a function of the current action of the human. The trajectory generation is based on action recognition and prediction of the human movement given intermittent observations of his/her positions under unknown speeds of execution, a problem typically found when using motion capture systems in occluded scenarios. Of particular interest, the ability to predict the human movement while observing the initial part of the trajectory allows for faster robot reactions. The method is based on probabilistically modelling the coupling between human-robot movement primitives and eliminates the need for time alignment of the training data while being scalable in relation to the number of tasks. We evaluated the method using a 7-DoF lightweight robot arm equipped with a 5-finger hand in a multi-task collaborative assembly experiment, also comparing results with our previous method based on time-aligned trajectories.


The International Journal of Robotics Research | 2017

Phase estimation for fast action recognition and trajectory generation in human–robot collaboration

Guilherme Maeda; Marco Ewerton; Gerhard Neumann; Rudolf Lioutikov; Jan Peters

This paper proposes a method to achieve fast and fluid human–robot interaction by estimating the progress of the movement of the human. The method allows the progress, also referred to as the phase of the movement, to be estimated even when observations of the human are partial and occluded, a problem typically found when using motion capture systems in cluttered environments. By leveraging the framework of Interaction Probabilistic Movement Primitives, phase estimation makes it possible to classify the human action, and to generate a corresponding robot trajectory before the human finishes his/her movement. The method is therefore suited for semi-autonomous robots acting as assistants and coworkers. Since observations may be sparse, our method is based on computing the probability of different phase candidates to find the phase that best aligns the Interaction Probabilistic Movement Primitives with the current observations. The method is fundamentally different from approaches based on Dynamic Time Warping that must rely on a consistent stream of measurements at runtime. The resulting framework can achieve phase estimation, action recognition and robot trajectory coordination using a single probabilistic representation. We evaluated the method using a seven-degree-of-freedom lightweight robot arm equipped with a five-finger hand in single and multi-task collaborative experiments. We compare the accuracy achieved by phase estimation with our previous method based on dynamic time warping.
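The phase-candidate scoring described above can be sketched as follows. The reference trajectory, noise level, candidate grid, and observation times are all assumed placeholders; the core idea is scoring each candidate phase rate by how well a time-scaled reference explains sparse observations, which works even when measurements are intermittent.

```python
import numpy as np

ts = np.linspace(0.0, 1.0, 100)
reference = np.sin(np.pi * ts)   # placeholder for a learned mean trajectory

def log_likelihood(obs_t, obs_y, rate, noise=0.05):
    """Log-likelihood of sparse observations if the human moves at 'rate'
    times the demonstrated speed, i.e. phase z = rate * t (clipped to [0, 1])."""
    z = np.clip(rate * obs_t, 0.0, 1.0)
    pred = np.interp(z, ts, reference)
    return -0.5 * np.sum((obs_y - pred) ** 2) / noise ** 2

# Three sparse observations of a human moving at 1.5x the demonstrated speed.
obs_t = np.array([0.10, 0.25, 0.40])
obs_y = np.sin(np.pi * np.clip(1.5 * obs_t, 0.0, 1.0))

candidates = np.linspace(0.5, 2.0, 16)
scores = [log_likelihood(obs_t, obs_y, r) for r in candidates]
best_rate = candidates[int(np.argmax(scores))]
```

Unlike Dynamic Time Warping, nothing here requires a dense measurement stream: the likelihood is evaluated only at the times where observations actually arrived.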


international conference on robotics and automation | 2017

Guiding Trajectory Optimization by Demonstrated Distributions

Takayuki Osa; Amir Masoud Ghalamzan Esfahani; Rustam Stolkin; Rudolf Lioutikov; Jan Peters; Gerhard Neumann

Trajectory optimization is an essential tool for motion planning under multiple constraints of robotic manipulators. Optimization-based methods can explicitly optimize a trajectory by leveraging prior knowledge of the system and have been used in various applications such as collision avoidance. However, these methods often require a hand-coded cost function in order to achieve the desired behavior. Specifying such a cost function for a complex desired behavior, e.g., disentangling a rope, is a nontrivial task that is often even infeasible. Learning from demonstration (LfD) methods offer an alternative way to program robot motion. LfD methods are less dependent on analytical models and instead learn the behavior of experts implicitly from the demonstrated trajectories. However, the problem of adapting the demonstrations to new situations, e.g., avoiding newly introduced obstacles, has not been fully investigated in the literature. In this letter, we present a motion planning framework that combines the advantages of optimization-based and demonstration-based methods. We learn a distribution of trajectories demonstrated by human experts and use it to guide the trajectory optimization process. The resulting trajectory maintains the demonstrated behaviors, which are essential to performing the task successfully, while adapting the trajectory to avoid obstacles. In simulated experiments and with a real robotic system, we verify that our approach optimizes the trajectory to avoid obstacles and encodes the demonstrated behavior in the resulting trajectory.
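A one-dimensional toy version of this trade-off, under assumed simplifications (per-step independent Gaussians for the demonstrated distribution, a quadratic penetration penalty for the obstacle, plain gradient descent rather than the paper's optimizer): waypoints are pulled toward the demonstrations and pushed out of a forbidden interval.

```python
import numpy as np

T = 50
demo_mean = np.linspace(0.0, 1.0, T)   # demonstrated mean trajectory
demo_var = np.full(T, 0.05)            # demonstrated per-step variance
lo, hi = 0.4, 0.5                      # obstacle occupies positions [lo, hi]
w_obs = 1000.0                         # obstacle penalty weight

def cost_grad(x):
    # Pull each waypoint toward the demonstrated distribution ...
    g = (x - demo_mean) / demo_var
    # ... and push waypoints inside the obstacle toward its nearest edge,
    # with force proportional to penetration depth (quadratic penalty).
    mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
    depth = np.maximum(0.0, half - np.abs(x - mid))
    g += -2.0 * w_obs * depth * np.sign(x - mid)
    return g

x = demo_mean.copy()
for _ in range(500):
    x = x - 5e-4 * cost_grad(x)
```

The result keeps the demonstrated shape where the distribution is informative and deviates only where the obstacle forces it to, which is the qualitative behavior the framework aims for.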


ieee-ras international conference on humanoid robots | 2016

Demonstration based trajectory optimization for generalizable robot motions

Dorothea Koert; Guilherme Maeda; Rudolf Lioutikov; Gerhard Neumann; Jan Peters

Learning motions from human demonstrations provides an intuitive way for non-expert users to teach tasks to robots. In particular, intelligent robotic co-workers should not only mimic human demonstrations but should also be able to adapt them to varying application scenarios. As such, robots must have the ability to generalize motions to different workspaces, e.g. to avoid obstacles not present during original demonstrations. Towards this goal, our work proposes a unified method to (1) generalize robot motions to different workspaces, using a novel formulation of trajectory optimization that explicitly incorporates human demonstrations, and (2) locally adapt and reuse the optimized solution in the form of a distribution of trajectories. This optimized distribution can be used, online, to quickly satisfy via-points and goals of a specific task. We validate the method using a 7 degrees of freedom (DoF) lightweight arm that grasps and places a ball into different boxes while avoiding obstacles that were not present during the original human demonstrations.


IAS | 2016

Learning Manipulation by Sequencing Motor Primitives with a Two-Armed Robot

Rudolf Lioutikov; Oliver Kroemer; Guilherme Maeda; Jan Peters

Learning to perform complex tasks out of a sequence of simple small demonstrations is a key ability for more flexible robots. In this paper, we present a system that allows for the acquisition of such task executions based on dynamical movement primitives (DMPs). DMPs are a successful approach to encode and generalize robot movements. However, current applications involving DMPs mainly explore movements that, although challenging in terms of dexterity and dimensionality, usually comprise a single continuous movement. This article describes the implementation of a novel system that allows sequencing of simple demonstrations, each one encoded by its own DMP, to achieve a bimanual manipulation task that is too complex to be demonstrated with a single teaching action. As the experimental results show, the resulting system can successfully accomplish a sequenced task of grasping, placing and cutting a vegetable using a setup of a bimanual robot.
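The sequencing idea can be sketched with a bare-bones discrete DMP. This is a simplified standard formulation (the learned forcing term is omitted, leaving only the critically damped goal attractor), and the three goals are illustrative stand-ins for the grasp, place, and cut segments; chaining works by making each primitive's endpoint the next primitive's start.

```python
import numpy as np

def dmp_rollout(y0, goal, tau=1.0, dt=0.01, alpha=25.0, beta=6.25):
    """Integrate a 1-D DMP goal attractor (forcing term omitted)."""
    y, yd = y0, 0.0
    traj = [y]
    for _ in range(int(round(tau / dt))):
        ydd = alpha * (beta * (goal - y) - yd) / tau ** 2
        yd += ydd * dt
        y += yd * dt
        traj.append(y)
    return np.array(traj)

# Sequence three primitives; each goal is a hypothetical segment endpoint.
goals = [0.5, 1.0, 0.2]
start = 0.0
segments = []
for g in goals:
    seg = dmp_rollout(start, g)
    segments.append(seg)
    start = seg[-1]   # chain: the end of one primitive starts the next
full_traj = np.concatenate(segments)
```

Because each DMP converges to its goal, the chained trajectory is continuous across primitive boundaries, which is what makes naive sequencing of independently demonstrated movements viable.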


international conference on robotics and automation | 2017

Probabilistic prioritization of movement primitives

Alexandros Paraschos; Rudolf Lioutikov; Jan Peters; Gerhard Neumann

Movement prioritization is a common approach to combine controllers of different tasks for redundant robots, where each task is assigned a priority. The priorities of the tasks are often hand-tuned or the result of an optimization, but seldom learned from data. This letter combines Bayesian task prioritization with probabilistic movement primitives (ProMPs) to prioritize full motion sequences that are learned from demonstrations. ProMPs can encode distributions of movements over full motion sequences and provide control laws to exactly follow these distributions. The probabilistic formulation allows for a natural application of Bayesian task prioritization. We extend the ProMP controllers with an additional feedback component that accounts for inaccuracies in following the distribution and allows for a more robust prioritization of primitives. We demonstrate how the task priorities can be obtained from imitation learning and how different primitives can be combined to solve even unseen task combinations. Due to the prioritization, our approach can efficiently learn a combination of tasks without requiring individual models per task combination. Furthermore, our approach can adapt an existing primitive library by prioritizing additional controllers, for example, for implementing obstacle avoidance. Hence, the need of retraining the whole library is avoided in many cases. We evaluate our approach on reaching movements under constraints with redundant simulated planar robots and two physical robot platforms, the humanoid robot “iCub” and a KUKA LWR robot arm.
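The probabilistic core of Bayesian prioritization can be shown in miniature with a product of two Gaussians over a scalar command (the numbers below are illustrative): precision weighting means the more certain task dominates exactly where it is confident, without any hand-tuned priority gains.

```python
import numpy as np

def combine(mu_a, var_a, mu_b, var_b):
    """Precision-weighted fusion (product of two Gaussians): the combined
    mean is the variance-weighted average, and the combined variance shrinks."""
    prec = 1.0 / var_a + 1.0 / var_b
    mu = (mu_a / var_a + mu_b / var_b) / prec
    return mu, 1.0 / prec

# Task A is very certain (effectively high priority), task B is vague.
mu, var = combine(mu_a=1.0, var_a=0.01, mu_b=0.0, var_b=1.0)
```

Since ProMPs provide a variance at every time step, this weighting varies along the motion: a task is prioritized only during the phases of the movement where its demonstrations were consistent.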

Collaboration


Dive into Rudolf Lioutikov's collaborations.

Top Co-Authors

Marco Ewerton, Technische Universität Darmstadt
Alexandros Paraschos, Technische Universität Darmstadt
Oliver Kroemer, Technische Universität Darmstadt
Heni Ben Amor, Arizona State University
Daniel Wilbers, Technische Universität Darmstadt
Dorothea Koert, Technische Universität Darmstadt
Elmar Rueckert, Technische Universität Darmstadt