Publication


Featured research published by Leonel Dario Rozo.


Intelligent Service Robotics | 2013

A robot learning from demonstration framework to perform force-based manipulation tasks

Leonel Dario Rozo; Pablo Jiménez; Carme Torras

This paper proposes an end-to-end learning from demonstration framework for teaching force-based manipulation tasks to robots. The strengths of this work are manifold. First, we deal with the problem of learning through force perceptions exclusively. Second, we propose to exploit haptic feedback both as a means for improving teacher demonstrations and as a human–robot interaction tool, establishing a bidirectional communication channel between the teacher and the robot, in contrast to works using kinesthetic teaching. Third, we address the well-known "what to imitate?" problem from a different point of view, based on the mutual information between perceptions and actions. Lastly, the teacher's demonstrations are encoded using a Hidden Markov Model, and the robot execution phase is developed by implementing a modified version of Gaussian Mixture Regression that uses implicit temporal information from the probabilistic model, needed when tackling tasks with ambiguous perceptions. Experimental results show that the robot is able to learn and reproduce two different manipulation tasks, with a performance comparable to the teacher's.
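As an illustration of the regression step described above, the following is a minimal sketch of plain Gaussian mixture regression: a GMM fitted on joint [force, action] data is conditioned on a new force reading to retrieve an action. It is not the paper's modified, HMM-driven GMR; the dimensions, data, and use of scikit-learn are assumptions made for the example.

```python
# Minimal Gaussian mixture regression (GMR) sketch: condition a GMM fitted on
# joint [force, action] data on an observed force to retrieve an action.
# Hypothetical data and dimensions; not the paper's modified GMR.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import multivariate_normal

d_f, d_a = 6, 7                            # force-torque dims, action dims (assumed)
data = np.random.randn(500, d_f + d_a)     # placeholder for demonstration data

gmm = GaussianMixture(n_components=5, covariance_type='full').fit(data)

def gmr(force):
    """Return E[action | force] under the joint GMM."""
    means, covs, priors = gmm.means_, gmm.covariances_, gmm.weights_
    h = np.array([w * multivariate_normal.pdf(force, m[:d_f], S[:d_f, :d_f])
                  for w, m, S in zip(priors, means, covs)])
    h /= h.sum()                           # responsibility of each component
    out = np.zeros(d_a)
    for k, (m, S) in enumerate(zip(means, covs)):
        # Conditional mean of the action block given the force block
        out += h[k] * (m[d_f:] + S[d_f:, :d_f] @ np.linalg.solve(S[:d_f, :d_f],
                                                                 force - m[:d_f]))
    return out

action = gmr(np.zeros(d_f))                # example query with a zero force reading
```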


IEEE Transactions on Robotics | 2016

Learning Physical Collaborative Robot Behaviors From Human Demonstrations

Leonel Dario Rozo; Sylvain Calinon; Darwin G. Caldwell; Pablo Jiménez; Carme Torras

Robots are becoming safe and smart enough to work alongside people not only on manufacturing production lines, but also in spaces such as houses, museums, or hospitals. This can be significantly exploited in situations in which a human needs the help of another person to perform a task, because a robot may take the role of the helper. In this sense, a human and the robotic assistant may cooperatively carry out a variety of tasks, therefore requiring the robot to communicate with the person, understand his/her needs, and behave accordingly. To achieve this, we propose a framework for a user to teach a robot collaborative skills from demonstrations. We mainly focus on tasks involving physical contact with the user, in which not only position, but also force sensing and compliance become highly relevant. Specifically, we present an approach that combines probabilistic learning, dynamical systems, and stiffness estimation to encode the robot behavior along the task. Our method allows a robot to learn not only trajectory following skills, but also impedance behaviors. To show the functionality and flexibility of our approach, two different testbeds are used: a transportation task and a collaborative table assembly.
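As a rough illustration of the variable-impedance execution discussed in the abstract, the sketch below applies a Cartesian spring-damper law whose stiffness comes from a hypothetical demonstration-derived schedule. The gains and dimensions are placeholders, not the paper's estimated values.

```python
# Illustrative variable impedance control law (not the paper's estimator):
# track a learned reference with a stiffness that changes along the task.
import numpy as np

def impedance_force(x, dx, x_ref, dx_ref, K):
    """Cartesian spring-damper command; damping set for critical damping (unit mass)."""
    D = 2.0 * np.sqrt(K)
    return K @ (x_ref - x) + D @ (dx_ref - dx)

# Hypothetical stiffness schedule: stiff along z (contact), compliant in x-y.
K_soft = np.diag([50.0, 50.0, 400.0])
f_cmd = impedance_force(x=np.zeros(3), dx=np.zeros(3),
                        x_ref=np.array([0.1, 0.0, 0.2]), dx_ref=np.zeros(3),
                        K=K_soft)
```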


Intelligent Robots and Systems | 2015

Learning optimal controllers in human-robot cooperative transportation tasks with position and force constraints

Leonel Dario Rozo; Danilo Bruno; Sylvain Calinon; Darwin G. Caldwell

Human-robot collaboration seeks to have humans and robots closely interacting in everyday situations. For some tasks, physical contact between the user and the robot may occur, giving rise to significant challenges at the safety, cognition, perception, and control levels, among others. This paper focuses on robot motion adaptation to parameters of a collaborative task, extraction of the desired robot behavior, and variable impedance control for human-safe interaction. We propose to teach a robot cooperative behaviors from demonstrations, which are probabilistically encoded by a task-parametrized formulation of a Gaussian mixture model. Such encoding is later used for specifying both the desired state of the robot and an optimal feedback control law that exploits the variability in position, velocity and force spaces observed during the demonstrations. The whole framework allows the robot to modify its movements as a function of parameters of the task, while showing different impedance behaviors. Tests were successfully carried out in a scenario where a 7-DOF backdrivable manipulator learns to cooperate with a human to transport an object.
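A minimal sketch of how a feedback gain can follow from demonstration variability: the inverse of the variance observed in the demonstrations is used as the state-cost weight of an LQR problem, so that consistently demonstrated dimensions are tracked stiffly. The double-integrator dynamics and all numbers are assumptions; this is not the authors' exact task-parametrized formulation.

```python
# Sketch: derive a feedback gain from demonstration variability via LQR.
# Low variability in the demos -> high precision -> high tracking gain.
# Assumes 1-D double-integrator dynamics; placeholder values throughout.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])    # state: [position, velocity]
B = np.array([[0.0], [1.0]])

sigma_pos, sigma_vel = 0.01, 0.05         # demo standard deviations (made up)
Q = np.diag([1.0 / sigma_pos**2, 1.0 / sigma_vel**2])  # precision as state cost
R = np.array([[1.0]])                     # control effort weight

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)           # optimal feedback gain: u = -K (x - x_ref)
```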


International Workshop on Robot Motion and Control | 2013

Force-based robot learning of pouring skills using parametric hidden Markov models

Leonel Dario Rozo; Pablo Jiménez; Carme Torras

Robot learning from demonstration faces new challenges when applied to tasks in which forces play a key role. Pouring liquid from a bottle into a glass is one such task, where not just a motion with a certain force profile needs to be learned, but the motion is subtly conditioned by the amount of liquid in the bottle. In this paper, the pouring skill is taught to a robot as follows. In a training phase, the human teleoperates the robot using a haptic device, and data from the demonstrations are statistically encoded by a parametric hidden Markov model, which compactly encapsulates the relation between the task parameter (dependent on the bottle weight) and the force-torque traces. Gaussian mixture regression is then used at the reproduction stage for retrieving the suitable robot actions based on the force perceptions. Computational and experimental results show that the robot is able to learn to pour drinks using the proposed framework, outperforming other approaches such as the classical hidden Markov models in that it requires less training, yields more compact encodings and shows better generalization capabilities.
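A minimal sketch of the parametric ingredient described above: each state's output mean is modeled as an affine function of the task parameter (here a scalar standing in for the bottle weight), fitted by least squares from demonstrations collected at several parameter values. The variable names and the fitting shortcut are assumptions; a full parametric HMM also learns transitions and covariances via EM.

```python
# Sketch of the parametric-HMM idea: per-state Gaussian means that depend
# affinely on a task parameter (e.g., bottle weight). Hypothetical data.
import numpy as np

def fit_parametric_mean(thetas, state_means):
    """Fit mu(theta) = W^T [theta, 1] by least squares.

    thetas:       (N,) task parameter of each demonstration
    state_means:  (N, D) mean observation of one HMM state per demonstration
    """
    Phi = np.column_stack([thetas, np.ones_like(thetas)])   # (N, 2) regressors
    W, *_ = np.linalg.lstsq(Phi, state_means, rcond=None)   # (2, D)
    return lambda theta: np.array([theta, 1.0]) @ W         # mean for a new weight

# Example: three demos with different (made-up) bottle weights.
mu_of = fit_parametric_mean(np.array([0.2, 0.5, 0.8]),
                            np.random.randn(3, 6))
print(mu_of(0.35))
```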


Intelligent Robots and Systems | 2015

Learning bimanual end-effector poses from demonstrations using task-parameterized dynamical systems

João Silvério; Leonel Dario Rozo; Sylvain Calinon; Darwin G. Caldwell

Very often, when addressing the problem of human-robot skill transfer in task space, only the Cartesian position of the end-effector is encoded by the learning algorithms, instead of the full pose. However, orientation is just as important as position, if not more, when it comes to successfully performing a manipulation task. In this paper, we present a framework that allows robots to learn the full poses of their end-effectors in a task-parameterized manner. Our approach permits the encoding of complex skills, such as those found in bimanual manipulation scenarios, where the generalized coordination patterns between end-effectors (i.e. position and orientation patterns) need to be considered. The proposed framework combines a dynamical systems formulation of the demonstrated trajectories, both in ℝ³ and SO(3), and task-parameterized probabilistic models that build local task representations in both spaces, based on which it is possible to extract the relevant features of the demonstrated skill. We validate our approach with an experiment in which two 7-DoF WAM robots learn to perform a bimanual sweeping task.
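A minimal sketch of a second-order dynamical system on orientations, using the rotation-vector (log-map) error between the current and desired orientation. It uses SciPy's Rotation class with assumed gains and omits the task-parameterized probabilistic model that would supply the desired pose in the paper's framework.

```python
# Sketch of a spring-damper dynamical system on SO(3): drive the current
# orientation toward a target via the log-map (rotation-vector) error.
# Gains and target are placeholders.
import numpy as np
from scipy.spatial.transform import Rotation as R

def angular_acceleration(q_cur, q_des, omega, kp=100.0, kv=20.0):
    """Spring-damper law on the orientation error expressed as a rotation vector."""
    err = (R.from_quat(q_des) * R.from_quat(q_cur).inv()).as_rotvec()
    return kp * err - kv * omega

alpha = angular_acceleration(q_cur=np.array([0.0, 0.0, 0.0, 1.0]),
                             q_des=R.from_euler('z', 0.5).as_quat(),
                             omega=np.zeros(3))
```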


Robot and Human Interactive Communication | 2014

Learning force and position constraints in human-robot cooperative transportation

Leonel Dario Rozo; Sylvain Calinon; Darwin G. Caldwell

Physical interaction between humans and robots gives rise to a large set of challenging problems involving hardware, safety, control and cognitive aspects, among others. In this context, the cooperative (two or more people/robots) transportation of bulky loads in manufacturing plants is a practical example where these challenges are evident. In this paper, we address the problem of teaching a robot collaborative behaviors from human demonstrations. Specifically, we present an approach that combines probabilistic learning and dynamical systems to encode the robot's motion along the task. Our method allows us to learn not only a desired path to take the object through, but also the force the robot needs to apply to the load during the interaction. Moreover, the robot is able to learn and reproduce the task with varying initial and final locations of the object. The proposed approach can be used in scenarios where not only the path to be followed by the transported object matters, but also the force applied to it. Tests were successfully carried out in a scenario where a 7-DOF backdrivable manipulator learns to cooperate with a human to transport an object while satisfying the position and force constraints of the task.
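To illustrate how a demonstrated behavior can generalize to new initial and final object locations, the sketch below maps two frame-local Gaussians (attached to hypothetical start and goal frames) into the world frame and fuses them by a Gaussian product, the basic operation behind task-parameterized models. All frames and covariances are made up.

```python
# Sketch of generalizing to new start/goal locations: local Gaussians attached
# to each task frame are mapped to the world frame and fused by a product.
import numpy as np

def transform(mu, sigma, A, b):
    """Map a frame-local Gaussian into the world frame via x = A xi + b."""
    return A @ mu + b, A @ sigma @ A.T

def gaussian_product(params):
    """Fuse world-frame Gaussians (one per task frame) into a single estimate."""
    Lambda = sum(np.linalg.inv(S) for _, S in params)
    mu = np.linalg.solve(Lambda, sum(np.linalg.inv(S) @ m for m, S in params))
    return mu, np.linalg.inv(Lambda)

start = transform(np.array([0.1, 0.0]), np.diag([0.01, 0.04]),
                  np.eye(2), np.array([0.0, 0.0]))        # start-frame Gaussian
goal = transform(np.array([-0.1, 0.0]), np.diag([0.04, 0.01]),
                 np.eye(2), np.array([0.5, 0.3]))         # goal-frame Gaussian
mu_world, sigma_world = gaussian_product([start, goal])
```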


International Conference on Advanced Robotics | 2011

Robot learning from demonstration of force-based tasks with multiple solution trajectories

Leonel Dario Rozo; Pablo Jiménez; Carme Torras

A learning framework with a bidirectional communication channel is proposed, where a human performs several demonstrations of a task using a haptic device (providing him/her with force-torque feedback) while a robot captures these executions using only its force-based perceptive system. Our work departs from the usual approaches to learning by demonstration in that the robot has to execute the task blindly, relying only on force-torque perceptions, and, more importantly, we address goal-driven manipulation tasks with multiple solution trajectories, whereas most works tackle tasks that can be learned by just finding a generalization at the trajectory level. To cope with these multiple-solution tasks, in our framework demonstrations are represented by means of a Hidden Markov Model (HMM) and the robot reproduction of the task is performed using a modified version of Gaussian Mixture Regression that incorporates temporal information (GMRa) through the forward variable of the HMM. Also, we exploit the haptic device as a teaching and communication tool in a human-robot interaction context, as an alternative to kinesthetic-based teaching systems. Results show that the robot is able to learn a container-emptying task relying only on force-based perceptions and to achieve the goal from several non-trained initial conditions.
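A minimal sketch of the forward variable that injects temporal information into the regression: it recursively combines Gaussian observation likelihoods with the transition matrix. The toy one-dimensional model below is a stand-in, not the trained HMM from the paper.

```python
# Sketch of the HMM forward variable alpha_t(i) used by GMRa as temporal
# weighting. Toy 1-D Gaussian emissions and a made-up transition matrix.
import numpy as np
from scipy.stats import norm

def forward_variables(obs, pi, A, means, stds):
    """Return normalized alpha_t(i) for a Gaussian-emission HMM."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * norm.pdf(obs[0], means, stds)
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * norm.pdf(obs[t], means, stds)
        alpha[t] /= alpha[t].sum()           # normalize to use as mixing weights
    return alpha

alpha = forward_variables(obs=np.array([0.1, 0.3, 1.2, 1.1]),
                          pi=np.array([0.9, 0.1]),
                          A=np.array([[0.9, 0.1], [0.0, 1.0]]),
                          means=np.array([0.0, 1.0]),
                          stds=np.array([0.3, 0.3]))
```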


Frontiers in Robotics and AI | 2016

Learning Controllers for Reactive and Proactive Behaviors in Human–Robot Collaboration

Leonel Dario Rozo; João Silvério; Sylvain Calinon; Darwin G. Caldwell

Designed to safely share the same workspace as humans and assist them in a variety of tasks, the new collaborative robots are targeting manufacturing and service applications that once were considered unattainable. The large diversity of tasks to carry out, the unstructured environments and the close interaction with humans call for collaborative robots to seamlessly adapt their behaviors so as to cooperate with the users successfully under different and possibly new situations (characterized, for example, by positions of objects/landmarks in the environment, or by the user pose). This paper investigates how controllers capable of reactive and proactive behaviors in collaborative tasks can be learned from demonstrations. The proposed approach exploits the temporal coherence and dynamic characteristics of the task observed during the training phase to build a probabilistic model that enables the robot to both react to the user actions and lead the task when needed. The method is an extension of the Hidden Semi-Markov Model where the duration probability distribution is adapted according to the interaction with the user. This Adaptive Duration Hidden Semi-Markov Model (ADHSMM) is used to retrieve a sequence of states governing a trajectory optimization that provides the reference and gain matrices to the robot controller. A proof-of-concept evaluation is first carried out in a pouring task. The proposed framework is then tested in a collaborative task using a 7 DOF backdrivable manipulator.
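A rough sketch of the duration mechanism mentioned above: a hidden semi-Markov model steps through states by sampling an explicit duration for each visited state instead of self-transitioning at every time step. The adaptive part of the ADHSMM (conditioning the duration distribution on the interaction with the user) is not reproduced here, and all parameters are invented.

```python
# Sketch of the hidden semi-Markov mechanism: explicit state durations instead
# of per-step self-transitions. The ADHSMM additionally adapts the duration
# distribution online from the user's behavior, which is omitted here.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0.0, 1.0, 0.0],        # transitions between states
              [0.0, 0.0, 1.0],        # (no self-transitions: durations are explicit)
              [1.0, 0.0, 0.0]])
dur_mean = np.array([20, 35, 15])     # mean duration per state (time steps, made up)
dur_std = np.array([3.0, 5.0, 2.0])

def sample_state_sequence(T, start=0):
    """Generate a state index for each time step of a length-T execution."""
    seq, s = [], start
    while len(seq) < T:
        d = max(1, int(round(rng.normal(dur_mean[s], dur_std[s]))))
        seq.extend([s] * d)
        s = rng.choice(len(A), p=A[s])
    return np.array(seq[:T])

states = sample_state_sequence(100)
```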


Conference on Artificial Intelligence Research and Development | 2010

Learning Force-Based Robot Skills from Haptic Demonstration

Leonel Dario Rozo; Pablo Jiménez; Carme Torras

Locally weighted learning as well as Gaussian mixture learning algorithms are suitable strategies for trajectory learning and skill acquisition, in the context of programming by demonstration. Input streams other than visual information, as used in most applications to date, reveal themselves as quite useful in trajectory learning experiments where visual sources are not available. For the first time, force/torque feedback through a haptic device has been used for teaching a teleoperated robot to empty a rigid container. The memory-based LWPLS and the non-memory-based LWPR algorithms [1,2,3], as well as both the batch and the incremental versions of GMM/GMR [4,5], were implemented, their comparison leading to very similar results, with the same pattern as regards both the involved robot joints and the different initial experimental conditions. Tests where the teacher was instructed to follow a strategy, compared to others where he was not, led to useful conclusions that allow the new research stages to be devised, where the taught motion will be refined by autonomous robot rehearsal through reinforcement learning.
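For reference, a minimal locally weighted regression sketch in the spirit of the compared memory-based methods: predictions are kernel-weighted averages of stored demonstration points. It is not the LWPLS/LWPR implementation used in the paper, and the data and bandwidth are placeholders.

```python
# Minimal memory-based locally weighted regression, in the spirit of the
# compared approaches (not the actual LWPLS/LWPR implementations).
import numpy as np

def lwr_predict(x_query, X, Y, bandwidth=0.1):
    """Kernel-weighted average of stored outputs Y at stored inputs X."""
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * bandwidth ** 2))
    return (w[:, None] * Y).sum(axis=0) / w.sum()

# Hypothetical memory of (force reading -> joint velocity) pairs.
X = np.random.randn(200, 6)
Y = np.random.randn(200, 7)
y_hat = lwr_predict(np.zeros(6), X, Y)
```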


International Conference on Robotics and Automation | 2017

A Method for Derivation of Robot Task-Frame Control Authority from Repeated Sensory Observations

Luka Peternel; Leonel Dario Rozo; Darwin G. Caldwell; Arash Ajoudani

In this letter, we propose a novel method that enables the robot to autonomously devise an appropriate control strategy from human demonstrations without prior knowledge of the demonstrated task. The method is primarily based on observing the patterns and consistency in the observed dataset. This is obtained through a demonstration setting that uses a motion capture system, a force sensor, and muscle activity measurements. The variables (position and force) in the collected dataset are then segmented and analysed for each axis of the observed task frame separately. While checking several conditions based on the consistency, value range, and magnitude of repeated observations, the appropriate controller (i.e., position or force) is delegated to each axis of the task frame. In the final stage, the method also checks for a correlation between variables and muscle activity patterns to determine the desired stiffness behaviour. The robot then uses the derived control strategies in autonomous operation through a hybrid force/impedance controller. To validate the proposed method, we performed experiments on real-life tasks involving physical interaction with the environment, where we considered surface wiping, material sawing, and drilling.
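The sketch below illustrates the flavor of per-axis analysis described above: for each task-frame axis, a significant and repeatable force suggests force control, otherwise the demonstrated position is tracked. The thresholds and decision rule are invented placeholders, not the paper's actual conditions.

```python
# Illustrative per-axis controller delegation from repeated demonstrations.
# Thresholds and the decision rule are placeholders, not the paper's criteria.
import numpy as np

def delegate_controllers(pos_demos, force_demos, force_thr=2.0, force_cv_thr=0.5):
    """pos_demos, force_demos: (n_demos, T, 3) signals expressed in the task frame."""
    modes = []
    for axis in range(3):
        f = force_demos[..., axis]
        f_mean, f_std = np.abs(f.mean()), f.std()
        # Significant and repeatable force along this axis -> force control,
        # otherwise track the demonstrated position along it.
        if f_mean > force_thr and f_std / (f_mean + 1e-9) < force_cv_thr:
            modes.append('force')
        else:
            modes.append('position')
    return modes

modes = delegate_controllers(np.random.randn(5, 300, 3) * 0.02,
                             np.random.randn(5, 300, 3))
```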

Collaboration


Dive into Leonel Dario Rozo's collaborations.

Top Co-Authors


Darwin G. Caldwell

Istituto Italiano di Tecnologia


João Silvério

Istituto Italiano di Tecnologia


Yanlong Huang

Istituto Italiano di Tecnologia


Carme Torras

Spanish National Research Council


Pablo Jiménez

Spanish National Research Council


Arash Ajoudani

Istituto Italiano di Tecnologia


Brian Delhaisse

Istituto Italiano di Tecnologia


Danilo Bruno

Istituto Italiano di Tecnologia
