Publication


Featured research published by Luka Peternel.


Autonomous Robots | 2014

Teaching robots to cooperate with humans in dynamic manipulation tasks based on multi-modal human-in-the-loop approach

Luka Peternel; Tadej Petrič; Erhan Oztop; Jan Babič

We propose an approach to efficiently teach robots how to perform dynamic manipulation tasks in cooperation with a human partner. The approach utilises the human sensorimotor learning ability: the human tutor controls the robot through a multi-modal interface to make it perform the desired task. During the tutoring, the robot simultaneously learns the tutor’s action policy and over time gains full autonomy. We demonstrate our approach in an experiment where we taught a robot how to perform a wood sawing task with a human partner using a two-person cross-cut saw. The challenge of this experiment is that it requires precise coordination of the robot’s motion and compliance according to the partner’s actions. To transfer the sawing skill from the tutor to the robot, we used Locally Weighted Regression for trajectory generalisation and adaptive oscillators for adaptation of the robot to the partner’s motion.
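
The adaptive oscillators mentioned above synchronise the robot to the partner's periodic sawing motion. Below is a minimal Python sketch of such an adaptive frequency oscillator; the gains, the single-harmonic output model and the stand-in input signal are illustrative assumptions, not values from the paper.

```python
import numpy as np

def adaptive_oscillator(signal, dt, omega0, K=50.0, eta=2.0):
    """Synchronise a phase oscillator to a periodic input signal.

    Returns the per-sample phase and angular-frequency estimates.
    Gains are illustrative, not taken from the paper.
    """
    phi, omega = 0.0, omega0          # oscillator phase and frequency
    alpha, offset = 0.0, 0.0          # learned amplitude and DC offset
    phases, freqs = [], []
    for y in signal:
        c, s = np.cos(phi), np.sin(phi)
        e = y - (offset + alpha * c)      # error between input and reconstruction
        phi += (omega - K * e * s) * dt   # phase adaptation
        omega += (-K * e * s) * dt        # frequency adaptation
        alpha += eta * c * e * dt         # amplitude adaptation
        offset += eta * e * dt            # offset adaptation
        phases.append(phi % (2 * np.pi))
        freqs.append(omega)
    return np.array(phases), np.array(freqs)

# Example: lock onto a 1 Hz stand-in for the partner's sawing motion,
# starting from an initial frequency guess of 0.7 Hz.
dt = 0.002
t = np.arange(0.0, 30.0, dt)
partner_motion = 0.1 * np.sin(2 * np.pi * 1.0 * t)
phase, freq = adaptive_oscillator(partner_motion, dt, omega0=2 * np.pi * 0.7)
print("estimated frequency [Hz]:", round(freq[-1] / (2 * np.pi), 2))
```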


PLOS ONE | 2016

Adaptive Control of Exoskeleton Robots for Periodic Assistive Behaviours Based on EMG Feedback Minimisation

Luka Peternel; Tomoyuki Noda; Tadej Petrič; Ales Ude; Jun Morimoto; Jan Babič

In this paper we propose an exoskeleton control method for adaptive learning of assistive joint torque profiles in periodic tasks. We use human muscle activity as feedback to adapt the assistive joint torque behaviour so that the muscle activity is minimised. The user can then relax while the exoskeleton takes over the task execution. If the task is altered and the existing assistive behaviour becomes inadequate, the exoskeleton gradually adapts to the new task execution so that the increased muscle activity caused by the new desired task can be reduced. The advantage of the proposed method is that it does not require biomechanical or dynamical models. Our learning system uses Dynamical Movement Primitives (DMPs) as a trajectory generator, and the DMP parameters are modulated using Locally Weighted Regression. The learning system is combined with adaptive oscillators that determine the phase and frequency of motion according to measured electromyography (EMG) signals. We tested the method in real robot experiments where subjects wearing an elbow exoskeleton had to move an object of unknown mass according to a predefined reference motion. We further evaluated the proposed approach on a whole-arm exoskeleton to show that it can adaptively derive assistive torques even for multiple-joint motion.
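
The core idea, updating a phase-indexed assistive torque profile from measured muscle activity, can be sketched as follows. This is only an illustration of an EMG-driven weight update on a periodic kernel representation; the kernel count, gain and toy EMG signal are assumptions, not the parameters used in the paper.

```python
import numpy as np

class AssistiveTorqueLearner:
    """Phase-indexed assistive torque profile updated from EMG feedback.

    Minimal sketch: the profile is a weighted sum of periodic Gaussian-like
    kernels over the motion phase, and after every period the weights are
    nudged in proportion to the measured muscle activity, so assistance
    grows where the user still works hard.
    """

    def __init__(self, n_kernels=25, gain=0.5):
        self.centres = np.linspace(0, 2 * np.pi, n_kernels, endpoint=False)
        self.width = n_kernels / (2 * np.pi)   # kernel width heuristic
        self.w = np.zeros(n_kernels)
        self.gain = gain

    def _psi(self, phi):
        # periodic (von Mises-like) basis functions over the phase
        return np.exp(self.width * (np.cos(phi - self.centres) - 1.0))

    def torque(self, phi):
        psi = self._psi(phi)
        return float(psi @ self.w / (psi.sum() + 1e-9))

    def update(self, phis, emg):
        """One adaptation step from one period of phase samples and EMG."""
        for phi, activation in zip(phis, emg):
            psi = self._psi(phi)
            # push weights up where muscle activity was high at that phase
            self.w += self.gain * activation * psi / (psi.sum() + 1e-9)

# Toy usage: EMG peaks mid-cycle, so assistance concentrates there.
learner = AssistiveTorqueLearner()
phis = np.linspace(0, 2 * np.pi, 200)
emg = np.clip(np.sin(phis), 0, None)            # stand-in normalised EMG
for _ in range(5):
    learner.update(phis, emg)
print(round(learner.torque(np.pi / 2), 3), round(learner.torque(3 * np.pi / 2), 3))
```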


international conference on robotics and automation | 2015

Human-in-the-loop approach for teaching robot assembly tasks using impedance control interface

Luka Peternel; Tadej Petrič; Jan Babič

In this paper we propose a human-in-the-loop approach for teaching robots how to solve part assembly tasks. In the proposed setup the human tutor controls the robot through a haptic interface and a hand-held impedance control interface. The impedance control interface is based on a linear spring-return potentiometer that maps the button position to the robot arm stiffness. This setup allows the tutor to modulate the robot compliance based on the given task requirements. The demonstrated motion and stiffness trajectories are encoded using Dynamical Movement Primitives and learnt using Locally Weighted Regression. To validate the proposed approach we performed experiments using a KUKA Lightweight Robot and a HapticMaster robot. The task of the experiment was to teach the robot an assembly task involving sliding a bolt fitting inside a groove in order to mount two parts together. Different stiffness levels were required at different stages of the task execution to accommodate the interaction of the robot with the environment and possible human-robot cooperation.
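
The stiffness interface described above reduces to a mapping from the potentiometer reading to a commanded stiffness. A minimal sketch follows; the stiffness range and the assumption of a linear map over normalised button travel are illustrative, not the calibration used in the paper.

```python
def stiffness_from_button(position, k_min=200.0, k_max=2000.0):
    """Map a spring-return potentiometer reading to robot arm stiffness.

    `position` is the normalised button travel in [0, 1]; released (0)
    gives the most compliant behaviour, fully pressed (1) the stiffest.
    The stiffness range in N/m is an assumption for illustration.
    """
    position = min(max(position, 0.0), 1.0)
    return k_min + position * (k_max - k_min)

# Example: half-pressed button gives a mid-range Cartesian stiffness.
print(stiffness_from_button(0.5))   # 1100.0
```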


Advanced Robotics | 2013

Learning of compliant human–robot interaction using full-body haptic interface

Luka Peternel; Jan Babič

We present a novel approach where a human demonstrator can intuitively teach a robot full-body skills. The aim of this approach is to exploit the human sensorimotor ability to learn how to operate a humanoid robot in real time to perform tasks involving interaction with the environment. The human skill is then used to design a controller to autonomously control the robot. To provide the demonstrator with the robot’s state in a form suitable for full-body motion control, we developed a novel method that transforms the robot’s sensory readings into feedback appropriate for the human. This method was implemented through a haptic interface designed to exert forces on the demonstrator’s centre of mass corresponding to the state of the robot’s centre of mass. To evaluate the feasibility of this approach, we performed an experiment where the human demonstrator taught the robot how to compliantly interact with another human. The results of the experiment showed that the proposed approach allowed the human to intuitively teach the robot this compliant interaction.
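
The centre-of-mass feedback described above can be sketched as a simple mapping from the robot's CoM state to a force rendered on the demonstrator. The sketch below assumes a proportional-derivative mapping and an arbitrary robot-to-human scaling; none of these gains come from the paper.

```python
import numpy as np

def com_feedback_force(robot_com, robot_com_ref, robot_com_vel,
                       kp=600.0, kd=40.0, scale=0.05):
    """Force applied to the demonstrator by the full-body haptic interface.

    Minimal sketch of the idea described above: the deviation of the
    robot's centre of mass from a reference point (e.g. the centre of the
    support polygon) is rendered as a proportional-derivative force on the
    demonstrator's own centre of mass, scaled down to human dimensions.
    """
    error = np.asarray(robot_com_ref) - np.asarray(robot_com)
    restoring_force = kp * error - kd * np.asarray(robot_com_vel)
    return scale * restoring_force      # scaled force felt by the human

# Example: robot CoM has drifted 3 cm forward and is still moving forward.
print(com_feedback_force([0.03, 0.0], [0.0, 0.0], [0.1, 0.0]))
```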


international conference on robotics and automation | 2013

Humanoid robot posture-control learning in real-time based on human sensorimotor learning ability

Luka Peternel; Jan Babič

In this paper we propose a system capable of teaching humanoid robots new skills in real time. The system aims to simplify the robot control and to provide a natural and intuitive interaction between the human and the robot. The key element of the system is the exploitation of the human sensorimotor learning ability, where a human demonstrator learns how to operate a robot in the same fashion as humans adapt to various everyday tasks. Another key aspect of the proposed system is that the robot learns the task while the human is operating it. This enables the control of the robot to be gradually transferred from the human to the robot during the demonstration. The control is transferred based on the accuracy of the imitated task. We demonstrated our approach in an experiment where a human demonstrator taught a humanoid robot how to maintain postural stability in the presence of perturbations. To provide appropriate feedback about the robot's postural stability to the human sensorimotor system, we utilized a custom-built haptic interface. For the robot to absorb the demonstrated skill, we used the Locally Weighted Projection Regression machine learning method. A novel approach was implemented to gradually transfer the control responsibility from the human to the incrementally built autonomous robot controller.
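
The gradual transfer of control responsibility can be illustrated as a blending of the human's command and the learned controller's command, weighted by how accurately the controller currently imitates the demonstration. The sketch below is only an illustration of that idea; the error-to-authority mapping and its thresholds are assumptions.

```python
import numpy as np

def blended_command(human_cmd, robot_cmd, imitation_error,
                    err_lo=0.02, err_hi=0.2):
    """Blend human and learned robot commands during a demonstration.

    Minimal sketch of gradual control transfer: when the incrementally
    learned controller imitates the demonstrated command accurately (small
    error), its share of the command grows; when it is inaccurate, the
    human keeps authority.  The error thresholds are illustrative.
    """
    # map imitation error to robot authority alpha in [0, 1]
    alpha = np.clip((err_hi - imitation_error) / (err_hi - err_lo), 0.0, 1.0)
    return alpha * robot_cmd + (1.0 - alpha) * human_cmd, alpha

cmd, alpha = blended_command(human_cmd=0.4, robot_cmd=0.35, imitation_error=0.05)
print(round(float(cmd), 3), round(float(alpha), 2))
```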


Gait & Posture | 2014

Effects of supportive hand contact on reactive postural control during support perturbations

Jan Babič; Tadej Petrič; Luka Peternel; Nejc Sarabon

There are many everyday situations in which a supportive hand contact is required for an individual to counteract various postural perturbations. By emulating situations in which the balance of an individual is challenged, we examined the functional role of supportive hand contact at different locations while balance was perturbed by translational perturbations of the support surface. We examined the effects of handle location, perturbation direction and perturbation intensity on postural control and on the forces generated in the handle. There were significantly larger centre-of-pressure (CoP) displacements for perturbations in the posterior direction than for perturbations in the anterior direction. In addition, the perturbation intensity significantly affected the peak CoP displacement in both perturbation directions. However, the position of the handle had no effect on the peak CoP displacement. In contrast, there were significant effects of perturbation direction, perturbation intensity and handle position on the maximal force in the handle. The effect of the handle position was significant for perturbations in the posterior direction, where the lowest maximal forces were recorded in the handle located at shoulder height. These were comparable to the forces in the handle at eye height and significantly lower than the forces in the handle located either lower or further away from the shoulder. In summary, our results indicate that although the location of a supportive hand contact has no effect on the peak CoP displacement of healthy individuals, it affects the forces that an individual needs to exert on the handle in order to counteract support perturbations.


intelligent robots and systems | 2016

Towards multi-modal intention interfaces for human-robot co-manipulation

Luka Peternel; Nikos G. Tsagarakis; Arash Ajoudani

This paper presents a novel approach for human-robot cooperation in tasks with dynamic uncertainties. The essential element of the proposed method is a multi-modal interface that provides the robot with feedback about the human motor behaviour in real time. Measurements of the human muscle activity encode information about the motion and impedance, while the arm force manipulability properties encode the intended configuration of the task frame. Through this human-in-the-loop framework, the developed hybrid controller of the robot can adapt its actions to provide the desired motion and impedance regulation in different phases of the cooperative task. We experimentally evaluate the proposed approach in a two-person sawing task that requires an appropriate complementary behaviour from the two agents.
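
One ingredient named above, the arm force manipulability, can be computed from the arm Jacobian and used as a cue for the intended task frame. The sketch below shows the force transmission ratio along a chosen direction; the toy Jacobian and the interpretation of the most capable direction as the intended task axis are illustrative assumptions.

```python
import numpy as np

def force_manipulability(jacobian, direction):
    """Force transmission ratio of the arm along a unit task direction.

    The force manipulability ellipsoid is the dual of the velocity
    ellipsoid: the achievable force along a direction u is inversely
    related to u' (J J') u.  Larger values mean the arm can exert force
    more effectively along that axis.
    """
    u = np.asarray(direction, dtype=float)
    u = u / np.linalg.norm(u)
    JJt = jacobian @ jacobian.T
    return 1.0 / np.sqrt(u @ JJt @ u)

# Toy 2-DoF planar arm Jacobian, made up for illustration.
J = np.array([[0.30, 0.10],
              [0.05, 0.25]])
for axis, u in (("x", [1, 0]), ("y", [0, 1])):
    print(axis, round(force_manipulability(J, u), 2))
```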


ieee-ras international conference on humanoid robots | 2016

Adaptation of robot physical behaviour to human fatigue in human-robot co-manipulation

Luka Peternel; Nikos G. Tsagarakis; Darwin G. Caldwell; Arash Ajoudani

In this paper, we propose a method that allows the robot to adapt its physical behaviour to human fatigue in human-robot co-manipulation tasks. The robot initially imitates the human to perform the collaborative task in a leader-follower setting, using feedback about the human motor behaviour. Simultaneously, the robot learns the skill in an online manner. When a predetermined level of human fatigue is detected, the robot uses the learnt skill to take over the physically demanding aspect of the task and contributes to a significant reduction of the human effort. The human, on the other hand, controls and supervises the high-level interaction behaviour and performs the aspects that require the contribution of both agents in such a dynamic co-manipulation setup. The robot adaptation system is based on Dynamical Movement Primitives, Locally Weighted Regression and Adaptive Frequency Oscillators. The estimation of the human motor fatigue is carried out using a proposed online model based on the human muscle activity measured by electromyography.
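
The fatigue detection that triggers the robot's take-over can be illustrated with a simple online estimator driven by a normalised EMG envelope. This is not the model from the paper; the accumulation and recovery rates, the threshold and the toy activation profile are all assumptions.

```python
import numpy as np

class FatigueEstimator:
    """Online estimate of muscle fatigue from normalised EMG.

    Minimal sketch: fatigue grows in proportion to the current activation
    and slowly recovers when the muscle is nearly relaxed.  When the
    estimate crosses a threshold, the robot would take over the physically
    demanding part of the task.
    """

    def __init__(self, fatigue_rate=0.05, recovery_rate=0.01, threshold=0.6):
        self.level = 0.0
        self.kf = fatigue_rate
        self.kr = recovery_rate
        self.threshold = threshold

    def update(self, activation, dt):
        """activation: normalised EMG envelope in [0, 1]."""
        self.level += (self.kf * activation - self.kr * (1.0 - activation)) * dt
        self.level = float(np.clip(self.level, 0.0, 1.0))
        return self.level

    def fatigued(self):
        return self.level >= self.threshold

# Example: 20 s of sustained heavy effort sampled at 100 Hz.
est = FatigueEstimator()
for _ in range(2000):
    est.update(activation=0.8, dt=0.01)
print(round(est.level, 2), est.fatigued())
```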


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2017

A Human–Robot Co-Manipulation Approach Based on Human Sensorimotor Information

Luka Peternel; Nikos G. Tsagarakis; Arash Ajoudani

This paper aims to improve the interaction and coordination between the human and the robot in the cooperative execution of complex, powerful, and dynamic tasks. We propose a novel approach that integrates online information about the human motor function and manipulability properties into the hybrid controller of the assistive robot. Through this human-in-the-loop framework, the robot can adapt to the human motor behavior and provide the appropriate assistive response in different phases of the cooperative task. We experimentally evaluate the proposed approach in two human–robot co-manipulation tasks that require specific complementary behavior from the two agents. Results suggest that the proposed technique, which relies on a minimum degree of task-level pre-programming, can achieve enhanced physical human–robot interaction performance and deliver an appropriate level of assistance to the human operator.
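
As one example of how online human motor information could drive the impedance part of such a hybrid controller, muscle co-contraction can be mapped to the robot's commanded stiffness. The sketch below assumes a normalised antagonistic EMG pair and a linear mapping; it is an illustration, not the controller from the paper.

```python
import numpy as np

def endpoint_stiffness(emg_flexor, emg_extensor, k_base=150.0, k_gain=1200.0):
    """Command the robot's Cartesian stiffness from human co-contraction.

    Minimal sketch: the co-contraction level of an antagonistic muscle pair
    (normalised EMG envelopes) scales the stiffness between a soft baseline
    and a stiff bound.  The mapping and gains are assumptions.
    """
    cocontraction = min(emg_flexor, emg_extensor)   # shared activation level
    return k_base + k_gain * float(np.clip(cocontraction, 0.0, 1.0))

print(endpoint_stiffness(0.1, 0.6))   # relaxed antagonist: low stiffness
print(endpoint_stiffness(0.7, 0.6))   # strong co-contraction: high stiffness
```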


Autonomous Robots | 2018

Robotic assembly solution by human-in-the-loop teaching method based on real-time stiffness modulation

Luka Peternel; Tadej Petrič; Jan Babič

We propose a novel human-in-the-loop approach for teaching robots how to solve assembly tasks in unpredictable and unstructured environments. In the proposed method the human sensorimotor system is integrated into the robot control loop through a teleoperation setup. The approach combines 3-DoF end-effector force feedback with an interface for modulation of the robot end-effector stiffness. When operating in unpredictable and unstructured environments, modulation of limb impedance is essential for successful task execution, stability and safety. We developed a novel hand-held stiffness control interface that is controlled by the motion of the human finger. A teaching approach was then used to achieve autonomous robot operation. In the experiments, we analysed and solved two part-assembly tasks: sliding a bolt fitting inside a groove and driving a self-tapping screw into a material of unknown properties. We experimentally compared the proposed method to complementary robot learning methods and analysed the potential benefits of direct stiffness modulation in force-feedback teleoperation.
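
Dynamical Movement Primitives, which several of the works above use to encode the demonstrated motion and stiffness trajectories, can be sketched compactly. The one-dimensional discrete DMP below learns its forcing term from a single demonstration with a locally weighted fit; the gains, kernel count, width heuristic and the minimum-jerk demonstration are illustrative assumptions.

```python
import numpy as np

class DiscreteDMP:
    """Minimal one-dimensional discrete Dynamical Movement Primitive.

    Sketch only: the same structure can encode a demonstrated position
    profile or a demonstrated stiffness profile.
    """

    def __init__(self, n_kernels=20, az=25.0, bz=6.25, ax=4.0):
        self.az, self.bz, self.ax = az, bz, ax
        self.c = np.exp(-ax * np.linspace(0.0, 1.0, n_kernels))  # kernel centres in phase
        self.h = n_kernels ** 1.5 / self.c                        # kernel widths (heuristic)
        self.w = np.zeros(n_kernels)

    def fit(self, y, dt):
        """Fit the forcing term to one demonstration with a locally weighted fit."""
        self.y0, self.g = float(y[0]), float(y[-1])
        self.tau = len(y) * dt
        yd = np.gradient(y, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.ax * np.arange(len(y)) * dt / self.tau)  # canonical phase
        f_target = self.tau ** 2 * ydd - self.az * (self.bz * (self.g - y) - self.tau * yd)
        psi = np.exp(-self.h * (x[:, None] - self.c) ** 2)        # kernel activations
        s = x * (self.g - self.y0)                                 # forcing-term scale
        self.w = (psi * (s * f_target)[:, None]).sum(0) / ((psi * (s ** 2)[:, None]).sum(0) + 1e-9)

    def rollout(self, dt):
        """Integrate the DMP to reproduce the demonstrated motion."""
        y, z, x, traj = self.y0, 0.0, 1.0, []
        for _ in range(int(round(self.tau / dt))):
            psi = np.exp(-self.h * (x - self.c) ** 2)
            f = (psi @ self.w) / (psi.sum() + 1e-9) * x * (self.g - self.y0)
            z += (self.az * (self.bz * (self.g - y) - z) + f) / self.tau * dt
            y += z / self.tau * dt
            x += -self.ax * x / self.tau * dt
            traj.append(y)
        return np.array(traj)

# Usage: learn a minimum-jerk reach from 0 to 0.3 m in 2 s and reproduce it.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
s = t / t[-1]
demo = 0.3 * (10 * s ** 3 - 15 * s ** 4 + 6 * s ** 5)
dmp = DiscreteDMP()
dmp.fit(demo, dt)
print("reproduced end point:", round(float(dmp.rollout(dt)[-1]), 3))
```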

Collaboration


Dive into Luka Peternel's collaborations.

Top Co-Authors

Jan Babič (University of Ljubljana)
Arash Ajoudani (Istituto Italiano di Tecnologia)
Tadej Petrič (École Polytechnique Fédérale de Lausanne)
Nikos G. Tsagarakis (Istituto Italiano di Tecnologia)
Jun Morimoto (Nara Institute of Science and Technology)
Darwin G. Caldwell (Istituto Italiano di Tecnologia)