Thiusius Rajeeth Savarimuthu
University of Southern Denmark
Publications
Featured research published by Thiusius Rajeeth Savarimuthu.
Autonomous Robots | 2015
Fares J. Abu-Dakka; Bojan Nemec; Jimmy Alison Jørgensen; Thiusius Rajeeth Savarimuthu; Norbert Krüger; Ales Ude
We propose a new methodology for learning and adaptation of manipulation skills that involve physical contact with the environment. Pure position control is unsuitable for such tasks because even small errors in the desired trajectory can cause significant deviations from the desired forces and torques. The proposed algorithm takes a reference Cartesian trajectory and force/torque profile as input and adapts the movement so that the resulting forces and torques match the reference profiles. The learning algorithm is based on dynamic movement primitives and a quaternion representation of orientation, which provide the mathematical machinery for efficient and stable adaptation. Experimentally we show that the robot's performance can be significantly improved within a few iteration steps, compensating for vision and other errors that might arise during the execution of the task. We also show that our methodology is suitable both for robots with admittance control and for robots with impedance control.
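The stability of the orientation adaptation hinges on representing rotations as unit quaternions and measuring orientation error through the quaternion logarithm rather than by subtracting angles. Below is a minimal sketch of that machinery in Python; the function names, the example goal/current quaternions and the factor of two are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def quat_conjugate(q):
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_mul(q1, q2):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_log(q):
    """Logarithmic map: unit rotation quaternion -> 3D rotation vector (half angle-axis)."""
    w, v = q[0], q[1:]
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return np.zeros(3)
    return np.arccos(np.clip(w, -1.0, 1.0)) * v / norm_v

# Orientation "error" that drives the rotational part of a movement primitive.
q_goal    = np.array([1.0, 0.0, 0.0, 0.0])                    # identity orientation
q_current = np.array([np.cos(0.1), np.sin(0.1), 0.0, 0.0])    # small rotation about x
error = 2.0 * quat_log(quat_mul(q_goal, quat_conjugate(q_current)))
```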
International Workshop on Robot Motion and Control | 2013
Thiusius Rajeeth Savarimuthu; Danny Liljekrans; Lars-Peter Ellekilde; Ales Ude; Bojan Nemec; Norbert Krüger
In this paper, we perform a quantitative and qualitative analysis of human peg-in-hole operations in a tele-operation setting with a moderate degree of dexterity. Peg-in-hole operations with different starting configurations are performed with the aim of deriving a strategy for performing such actions with a robot. The robot is a 6-DoF robot arm with the dexterous 3-finger SDH-2 gripper. From the extracted data, we can distill important insights about (1) feasible grasps depending on the peg's pose, (2) the object trajectory, (3) the occurrence of a particular force-torque pattern during the monitoring of the action and (4) an appropriate insertion strategy. At the end of the paper, we discuss consequences of using these insights for deriving algorithms for robot execution of peg-in-hole actions with dexterous manipulators.
International Conference on Ultra Modern Telecommunications | 2014
Kamil Kukliński; Kerstin Fischer; Ilka Marhenke; Franziska Kirstein; Maria Vanessa aus der Wieschen; Dorthe Sølvason; Norbert Krüger; Thiusius Rajeeth Savarimuthu
Learning by demonstration is a useful technique to augment a robot's behavioral inventory, and teleoperation allows lay users to demonstrate novel behaviors intuitively to the robot. In this paper, we compare two modes of teleoperation of an industrial robot: demonstration by means of a data glove and by means of a control object (peg). Experiments with 16 lay users, performing an assembly task with the Cranfield benchmark objects, show that the control peg leads to more success, more efficient demonstrations and fewer errors.
Journal of Real-time Image Processing | 2011
Thiusius Rajeeth Savarimuthu; Anders Kjær-Nielsen; Anders Stengaard Sørensen
Image processing involving correlation-based filter algorithms has proved extremely useful for image enhancement, feature extraction and recognition in a wide range of medical applications, but is almost exclusively applied to still images due to the amount of computation required by the correlations. In this paper, we present two different practical methods for applying correlation-based algorithms to real-time video images using hardware-accelerated correlation, as well as our results in applying the method to optical venography. The first method employs a GPU-accelerated personal computer, while the second method employs an embedded FPGA. We discuss the major differences between the two approaches and their suitability for clinical use. The presented system detects blood vessels in human forearms in images from a NIR camera setup for use in a clinical environment.
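To give a concrete idea of the per-frame workload that the GPU and FPGA implementations accelerate, here is a minimal sketch of correlating one grey-scale frame with a filter kernel via the FFT. The frame size, the kernel shape and the FFT-based formulation are assumptions chosen for illustration, not details taken from the paper.

```python
import numpy as np

def correlate_frame(frame, kernel):
    """Cross-correlate one grey-scale frame with a filter kernel via the FFT,
    the usual trick for making large correlations fast enough for video rates."""
    kh, kw = kernel.shape
    padded = np.zeros_like(frame)
    padded[:kh, :kw] = kernel
    # Correlation in Fourier space: multiply by the conjugate of the kernel spectrum.
    return np.real(np.fft.ifft2(np.fft.fft2(frame) * np.conj(np.fft.fft2(padded))))

frame = np.random.rand(480, 640)     # one NIR camera frame (placeholder data)
kernel = np.ones((15, 3)) / 45.0     # crude elongated kernel favouring vessel-like ridges
response = correlate_frame(frame, kernel)
```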
International Conference on Advanced Robotics | 2013
Bojan Nemec; Fares J. Abu-Dakka; Barry Ridge; Ales Ude; Jimmy Alison Jørgensen; Thiusius Rajeeth Savarimuthu; Jerome Jouffroy; Henrik Gordon Petersen; Norbert Krüger
In this paper we propose a new algorithm that can be used for the adaptation of robot trajectories in automated assembly tasks. Initial trajectories and forces are obtained by demonstration and iteratively adapted to specific environment configurations. The algorithm adapts Cartesian-space trajectories to match the forces recorded during the human demonstration. Experimentally we show the effectiveness of our approach on learning of the Peg-in-Hole (PiH) task. We performed our experiments on two different robotic platforms with workpieces of different shapes.
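The adaptation loop can be pictured as repeatedly shifting the Cartesian set-points along the force error until the executed forces approach the demonstrated ones. The sketch below illustrates only this idea; the gain, data shapes and the simple proportional update rule are assumptions, not the published algorithm.

```python
import numpy as np

def adapt_trajectory(positions, forces_ref, forces_meas, gain=0.002):
    """Shift each Cartesian set-point along the force error so that the
    executed forces move toward the demonstrated reference profile."""
    force_error = forces_ref - forces_meas       # (T, 3): desired minus measured forces
    return positions + gain * force_error        # simple proportional offset per set-point

# One learning iteration: execute the trajectory, record forces, then update it.
T = 200
positions  = np.zeros((T, 3))                                    # reference Cartesian path
forces_ref = np.tile(np.array([0.0, 0.0, -5.0]), (T, 1))         # e.g. press down with 5 N
forces_meas = np.zeros((T, 3))                                   # forces recorded on the robot
positions = adapt_trajectory(positions, forces_ref, forces_meas)
```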
Künstliche Intelligenz | 2014
Norbert Krüger; Ales Ude; Henrik Gordon Petersen; Bojan Nemec; Lars-Peter Ellekilde; Thiusius Rajeeth Savarimuthu; Jimmy Alison Rytz; Kerstin Fischer; Anders Buch; Dirk Kraft; Wail Mustafa; Eren Erdal Aksoy; Jeremie Papon; Aljaž Kramberger; Florentin Wörgötter
In this article, we describe technologies facilitating the set-up of automated assembly solutions which have been developed in the context of the IntellAct project (2011–2014). Tedious procedures are currently still required to establish such robot solutions. This especially hinders the automation of so-called few-of-a-kind production. Therefore, most production of this kind is done manually and thus often performed in low-wage countries. In the IntellAct project, we have developed a set of methods which facilitate the set-up of a complex automatic assembly process, and here we present our work on tele-operation, dexterous grasping, pose estimation and learning of control strategies. The prototype developed in IntellAct is at TRL 4 (corresponding to ‘demonstration in lab environment’).
Human-Robot Interaction | 2014
Ilka Marhenke; Kerstin Fischer; Thiusius Rajeeth Savarimuthu
In this paper, the causes for singularity of a robot arm in teleoperation for robot learning from demonstration are analyzed. Singularity is an alignment of robot joints that prevents the inverse kinematics from being solved. Inspired by users' own hypotheses, we investigated speed and delay as possible causes. The results show that delay causes problems during teleoperation, though not in direct control with a control panel, because users expect a different, more intuitive control in teleoperation. Speed, on the other hand, was not found to have an effect on the occurrence of singularity.
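As a concrete illustration of the singularity notion used above, the sketch below checks how close a hypothetical planar two-link arm is to a singular configuration via its manipulability; the arm model, link lengths and threshold are assumptions for illustration only and are not taken from the study.

```python
import numpy as np

def jacobian_2link(q1, q2, l1=0.4, l2=0.3):
    """Geometric Jacobian of a planar two-link arm (illustrative model)."""
    return np.array([
        [-l1*np.sin(q1) - l2*np.sin(q1+q2), -l2*np.sin(q1+q2)],
        [ l1*np.cos(q1) + l2*np.cos(q1+q2),  l2*np.cos(q1+q2)],
    ])

def near_singularity(J, threshold=1e-3):
    """Yoshikawa manipulability; values near zero mean the arm is (close to) singular."""
    return np.sqrt(np.linalg.det(J @ J.T)) < threshold

print(near_singularity(jacobian_2link(0.3, 0.0)))   # fully stretched arm -> True (singular)
print(near_singularity(jacobian_2link(0.3, 1.2)))   # bent elbow -> False
```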
Production Engineering | 2014
Christian Schlette; Anders Buch; Eren Erdal Aksoy; Thomas Steil; Jeremie Papon; Thiusius Rajeeth Savarimuthu; Florentin Wörgötter; Norbert Krüger; Jürgen Roßmann
The development of programming paradigms for industrial assembly currently gets fresh impetus from approaches in human demonstration and programming-by-demonstration. Major low- and mid-level prerequisites for machine vision and learning in these intelligent robotic applications are pose estimation, stereo reconstruction and action recognition. As a basis for the machine vision and learning involved, pose estimation is used for deriving object positions and orientations and thus target frames for robot execution. Our contribution introduces and applies a novel benchmark for typical multi-sensor setups and algorithms in the field of demonstration-based automated assembly. The benchmark platform is equipped with a multi-sensor setup consisting of stereo cameras and depth scanning devices. The dimensions and abilities of the platform have been chosen to reflect typical manual assembly tasks. Following the eRobotics methodology, a simulatable 3D representation of this platform was modelled in virtual reality. Based on a detailed camera and sensor simulation, we generated a set of benchmark images and point clouds with controlled levels of noise as well as ground-truth data such as object positions and time stamps. We demonstrate the application of the benchmark to evaluate our latest developments in pose estimation, stereo reconstruction and action recognition, and publish the benchmark data for objective comparison of sensor setups and algorithms in industry.
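For readers unfamiliar with how such ground-truth poses are typically used, the sketch below scores a pose estimate against a ground-truth pose with a translation error and a rotation-angle error. This is a common convention assumed here for illustration, not necessarily the metric used in the benchmark.

```python
import numpy as np

def pose_error(R_est, t_est, R_gt, t_gt):
    """Return (translation error in metres, rotation error in radians)."""
    t_err = np.linalg.norm(t_est - t_gt)
    R_delta = R_est.T @ R_gt                               # residual rotation
    cos_angle = np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0)
    return t_err, np.arccos(cos_angle)

# Toy example with placeholder poses.
R_gt,  t_gt  = np.eye(3), np.array([0.50, 0.00, 0.20])
R_est, t_est = np.eye(3), np.array([0.51, 0.00, 0.21])
print(pose_error(R_est, t_est, R_gt, t_gt))
```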
Scandinavian Conference on Image Analysis | 2013
Anders Buch; Jeppe Barsøe Jessen; Dirk Kraft; Thiusius Rajeeth Savarimuthu; Norbert Krüger
We propose a method for the extraction of complete and rich symbolic line segments in 3D based on RGB-D data. Edges are detected by combining cues from the RGB image and the aligned depth map. 3D line segments are then reconstructed by back-projecting the 2D line segments and intersecting them with local surface patches computed from the 3D point cloud. Different edge types are classified using the new enriched representation, and the potential of this representation for the task of pose estimation is demonstrated.
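The back-projection step mentioned above lifts 2D edge pixels to 3D using the aligned depth map and the camera intrinsics. A minimal sketch follows; the intrinsic parameters and example pixel coordinates are placeholders, not values from the paper.

```python
import numpy as np

def back_project(u, v, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Pinhole back-projection of pixel (u, v) with metric depth to a 3D point."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Back-project the two endpoints of a detected 2D line segment and get its 3D direction.
p0 = back_project(120, 200, 0.85)
p1 = back_project(180, 210, 0.83)
direction = (p1 - p0) / np.linalg.norm(p1 - p0)
```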
Human-Robot Interaction | 2016
Kerstin Fischer; Franziska Kirstein; Lars Christian Jensen; Norbert Krüger; Kamil Kukliński; Maria Vanessa aus der Wieschen; Thiusius Rajeeth Savarimuthu
Programming by Demonstration (PbD) is an efficient way for non-experts to teach new skills to a robot. PbD can be carried out in different ways, for instance by kinesthetic guidance, teleoperation or by using external controls. In this paper, we compare these three ways of controlling a robot in terms of efficiency, effectiveness (success and error rate) and usability. In an industrial assembly scenario, 51 participants carried out peg-in-hole tasks using one of the three control modalities. The results show that kinesthetic guidance produces the best results. To test whether the problems during teleoperation arise because traditional teleoperation devices, unlike kinesthetic guidance, do not let users switch between control points, we designed a new device that allows users to switch between controls for large and small movements. A user study with 15 participants shows that the novel teleoperation device yields almost as good results as kinesthetic guidance.