Network


Jimmy Alison Jørgensen's latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jimmy Alison Jørgensen is active.

Publications


Featured research published by Jimmy Alison Jørgensen.


IEEE Transactions on Robotics | 2011

Assessing Grasp Stability Based on Learning and Haptic Data

Yasemin Bekiroglu; Janne Laaksonen; Jimmy Alison Jørgensen; Ville Kyrki; Danica Kragic

An important ability of a robot that interacts with the environment and manipulates objects is to deal with uncertainty in sensory data. Sensory information is necessary, for example, to perform online assessment of grasp stability. We present methods that assess grasp stability from haptic data using machine-learning techniques, including AdaBoost, support vector machines (SVMs), and hidden Markov models (HMMs). In particular, we study the effect of different sensory streams on grasp stability. These include object information such as shape; grasp information such as the approach vector; tactile measurements from the fingertips; and the joint configuration of the hand. Sensory knowledge affects the success of the grasping process both in the planning stage (before a grasp is executed) and during the execution of the grasp (closed-loop online control). In this paper, we study both of these aspects. We propose a probabilistic learning framework to assess grasp stability and demonstrate that knowledge about grasp stability can be inferred from tactile-sensor information. Experiments on both simulated and real data are presented. The results indicate that the learning approach is applicable in realistic scenarios, which opens a number of interesting avenues for future research.
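The paper treats stability assessment as supervised classification over haptic features. Below is a minimal sketch of that ingredient using an SVM, one of the learners named above; the feature layout, dimensions, and hyperparameters are illustrative assumptions, not the authors' setup.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical feature vector per grasp: flattened fingertip tactile
# readings plus hand joint angles (all dimensions are made up).
n_grasps, n_taxels, n_joints = 500, 3 * 16, 7
X = rng.normal(size=(n_grasps, n_taxels + n_joints))
y = rng.integers(0, 2, size=n_grasps)  # 1 = stable, 0 = unstable (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, probability=True))
clf.fit(X_train, y_train)

# The predicted stability probability can gate execution: regrasp when it
# falls below a threshold.
p_stable = clf.predict_proba(X_test)[:, 1]
print("held-out accuracy:", clf.score(X_test, y_test))

An HMM would play the analogous role for time series of tactile readings during grasp execution, rather than a single feature snapshot.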


Intelligent Robots and Systems | 2011

Grasping unknown objects using an Early Cognitive Vision system for general scene understanding

Mila Popovic; Gert Kootstra; Jimmy Alison Jørgensen; Danica Kragic; Norbert Krüger

Grasping unknown objects based on real-world visual input is a challenging problem. In this paper, we present an Early Cognitive Vision system that builds a hierarchical representation based on edge and texture information, which is a sparse but powerful description of the scene. Based on this representation, we generate edge-based and surface-based grasps. The results show that the method generates successful grasps, that the edge and surface information are complementary, and that the method can deal with more complex scenes. We furthermore present a benchmark for vision-based grasping.
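As a hedged illustration of how edge information can yield grasp hypotheses (a generic construction, not the ECV system itself): search contour points for roughly antipodal pairs, i.e. points whose outward normals oppose each other along the line connecting them, within the gripper's opening width. All thresholds below are assumptions.

import numpy as np

def antipodal_pairs(points, normals, max_width=0.08, cos_tol=0.95):
    """Return index pairs (i, j) forming two-finger grasp candidates.

    points:  (N, 3) contact positions sampled along contours
    normals: (N, 3) unit outward normals at those points
    """
    candidates = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = points[j] - points[i]
            width = np.linalg.norm(d)
            if width < 1e-6 or width > max_width:  # respect gripper opening
                continue
            d = d / width
            # Outward normals must oppose each other along the grasp axis.
            if np.dot(normals[i], -d) > cos_tol and np.dot(normals[j], d) > cos_tol:
                candidates.append((i, j))
    return candidates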


The International Journal of Robotics Research | 2012

Enabling grasping of unknown objects through a synergistic use of edge and surface information

Gert Kootstra; Mila Popovic; Jimmy Alison Jørgensen; Kamil Kukliński; Konstantsin Miatliuk; Danica Kragic; Norbert Krüger

Grasping unknown objects based on visual input, where no a priori knowledge about the objects is used, is a challenging problem. In this paper, we present an Early Cognitive Vision system that builds a hierarchical representation based on edge and texture information, which provides a sparse but powerful description of the scene. Based on this representation, we generate contour-based and surface-based grasps. We test our method in two real-world scenarios, as well as on a vision-based grasping benchmark that provides a hybrid scenario, using real-world stereo images as input and a simulator for extensive and repetitive evaluation of the grasps. The results show that the proposed method generates successful grasps and, in particular, that the contour and surface information are complementary for the task of grasping unknown objects, which allows the method to deal with rather complex scenes.
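The surface-based counterpart can be sketched just as schematically (again an assumption-laden illustration, not the paper's algorithm): fit a plane to a local surface patch and use the patch centre and plane normal as grasp position and approach direction.

import numpy as np

def surface_grasp(patch_points):
    """patch_points: (N, 3) array of 3D points on one textured surface patch."""
    centre = patch_points.mean(axis=0)
    # The singular vector with the smallest singular value of the centred
    # patch is the least-varying direction, i.e. the surface normal.
    _, _, vt = np.linalg.svd(patch_points - centre)
    normal = vt[-1]
    return centre, normal  # grasp position and approach axis

Contours and surface patches tend to fail in different situations (thin objects with little texture versus smooth textured ones), which is one way to read the complementarity reported above.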


Autonomous Robots | 2015

Adaptation of manipulation skills in physical contact with the environment to reference force profiles

Fares J. Abu-Dakka; Bojan Nemec; Jimmy Alison Jørgensen; Thiusius Rajeeth Savarimuthu; Norbert Krüger; Ales Ude

We propose a new methodology for learning and adaptation of manipulation skills that involve physical contact with the environment. Pure position control is unsuitable for such tasks because even small errors in the desired trajectory can cause significant deviations from the desired forces and torques. The proposed algorithm takes a reference Cartesian trajectory and force/torque profile as input and adapts the movement so that the resulting forces and torques match the reference profiles. The learning algorithm is based on dynamic movement primitives and a quaternion representation of orientation, which provide the mathematical machinery for efficient and stable adaptation. Experimentally, we show that the robot’s performance can be significantly improved within a few iteration steps, compensating for vision and other errors that might arise during the execution of the task. We also show that our methodology is suitable for robots with both admittance and impedance control.
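Stripped of the dynamic-movement-primitive and quaternion machinery, the core adaptation loop can be sketched in one dimension: push the trajectory in the direction of the force error until the measured profile matches the reference. The gains and the toy contact model below are invented for illustration.

import numpy as np

T = 200
x = np.linspace(0.0, 0.1, T)        # reference Cartesian trajectory (1-D here) [m]
f_ref = np.full(T, 5.0)             # desired contact force profile [N]
stiffness = 1000.0                  # stand-in for the real contact dynamics [N/m]
alpha = 0.0005                      # learning gain on the force error [m/N]

for iteration in range(10):
    f_meas = stiffness * np.clip(x, 0.0, None)  # toy contact model
    err = f_ref - f_meas
    x = x + alpha * err             # push deeper where the force is too low
    print(iteration, round(float(np.abs(err).max()), 3))

In the paper the trajectory is encoded as a DMP and orientation is handled with quaternions, but the same measure-compare-update cycle drives the improvement over a few iterations.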


International Conference on Advanced Robotics | 2013

Transfer of assembly operations to new workpiece poses by adaptation to the desired force profile

Bojan Nemec; Fares J. Abu-Dakka; Barry Ridge; Ales Ude; Jimmy Alison Jørgensen; Thiusius Rajeeth Savarimuthu; Jerome Jouffroy; Henrik Gordon Petersen; Norbert Krüger

In this paper, we propose a new algorithm for the adaptation of robot trajectories in automated assembly tasks. Initial trajectories and forces are obtained by demonstration and iteratively adapted to specific environment configurations. The algorithm adapts Cartesian-space trajectories to match the forces recorded during the human demonstration. Experimentally, we show the effectiveness of our approach on learning a peg-in-hole (PiH) task. We performed our experiments on two different robotic platforms with workpieces of different shapes.
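Schematically, this family of adaptation rules updates the demonstrated Cartesian trajectory with the force error from the previous execution (the notation here is generic, not lifted from the paper):

\[
\mathbf{x}_{k+1}(t) = \mathbf{x}_k(t) + \alpha \left( \mathbf{F}_{\mathrm{ref}}(t) - \mathbf{F}_k(t) \right),
\]

where \(\mathbf{x}_k\) is the trajectory at iteration \(k\), \(\mathbf{F}_k\) the wrench measured while executing it, \(\mathbf{F}_{\mathrm{ref}}\) the profile recorded during demonstration, and \(\alpha\) an admittance-like gain. Repeating the update transfers the demonstrated assembly to a new workpiece pose once the forces match.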


Paladyn | 2012

VisGraB: A benchmark for vision-based grasping

Gert Kootstra; Mila Popovic; Jimmy Alison Jørgensen; Danica Kragic; Henrik Gordon Petersen; Norbert Krüger

We present a database and a software tool, VisGraB, for benchmarking methods for vision-based grasping of unknown objects with no prior object knowledge. The benchmark is a combined real-world and simulated experimental setup. Stereo images of real scenes containing several objects in different configurations are included in the database. The user needs to provide a method for grasp generation based on the real visual input. The grasps are then planned, executed, and evaluated by the provided grasp simulator, where several grasp-quality measures are used for evaluation. This setup has the advantage that a large number of grasps can be executed and evaluated while dealing with the dynamics, noise, and uncertainty present in real-world images. VisGraB enables a fair comparison among different grasping methods. The user furthermore does not need to deal with robot hardware and can focus on the vision methods instead. As a baseline, benchmark results of our grasp strategy are included.
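The workflow the benchmark imposes can be summarized as hypothetical pseudocode; the names below are placeholders, not VisGraB's actual interface:

def evaluate_method(database, grasp_generator, simulator):
    """Score a user-supplied grasp generator against the benchmark."""
    qualities = []
    for scene in database:                # stereo images of a real scene
        # The only user-provided part: grasps from real visual input.
        grasps = grasp_generator(scene.left_image, scene.right_image)
        for g in grasps:
            # Planning, execution, and evaluation happen in the simulator.
            result = simulator.execute(scene.id, g)
            qualities.append(result.quality)
    return sum(qualities) / len(qualities)

Keeping the vision side real and the execution side simulated is what makes large-scale, repeatable evaluation possible without robot hardware.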


International Conference on Robotics and Automation | 2012

Simulating robot handling of large scale deformable objects: Manufacturing of unique concrete reinforcement structures

Jens Cortsen; Jimmy Alison Jørgensen; Dorthe Sølvason; Henrik Gordon Petersen

Automatic offline programming of industrial robotic systems is becoming increasingly important as automation is extended to a larger share of low-volume tasks. Often, such tasks involve handling items that can exhibit rather large deflections, which must be taken into account when doing offline programming. In this paper, we present such a problem: robotic assembly of unique concrete reinforcement structures. Reinforcement bars of 3 meters may deflect by up to around 50 cm. We show experimentally how a reinforcement bar can be precisely modelled as a structure consisting of rigid parts connected by “deflection joints”. Such a model can be integrated directly into existing physics simulation engines such as the Open Dynamics Engine (ODE). Finally, we discuss how the simulation will be used for automatic offline programming and present a video with a dynamic simulation of the reinforcement assembly process.
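The deflection-joint idea can be checked numerically with a lumped-parameter model (an illustration, not the authors' code): treat the bar as rigid segments joined by torsional springs and estimate the static tip deflection under gravity. The density and stiffness values are assumptions; the stiffness is chosen so the tip drop lands near the 50 cm quoted above.

import numpy as np

L, n = 3.0, 10                 # 3 m bar split into 10 rigid segments
seg = L / n                    # segment length [m]
mass_per_m, g = 0.9, 9.81      # assumed linear density [kg/m], gravity [m/s^2]
k = 700.0                      # assumed torsional stiffness per joint [Nm/rad]

# Small-angle estimate: each joint rotates by (gravity moment of the bar
# outboard of that joint) / k.
angles = []
for i in range(n):
    outboard = L - i * seg                            # bar length beyond joint i
    moment = mass_per_m * outboard * g * (outboard / 2.0)
    angles.append(moment / k)

# Accumulate joint rotations along the chain to get the tip drop.
tip_drop, slope = 0.0, 0.0
for a in angles:
    slope += a
    tip_drop += seg * np.sin(slope)
print(f"estimated tip deflection: {tip_drop:.2f} m")

In a physics engine such as ODE the same structure becomes a chain of rigid bodies and hinge joints, with the spring torque applied at each step from the measured joint angle.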


Technology Transfer Experiments from the ECHORD Project | 2014

Automatic Grasp Generation and Improvement for Industrial Bin-Picking

Dirk Kraft; Lars-Peter Ellekilde; Jimmy Alison Jørgensen

This paper presents work on automatic grasp generation and grasp learning for reducing manual setup time and increasing grasp success rates in bin-picking applications. We propose an approach that generates good grasps automatically using a dynamic grasp simulator, a newly developed robust grasp-quality measure, and post-processing methods. In addition, we present an offline learning approach that adjusts grasp priorities based on prior performance. We show, on two real-world platforms, that our automatic grasp selection process can replace manual grasp selection while achieving comparable results, and that our learning approach can improve system performance significantly. Automatic bin picking is an important industrial process that can lead to significant savings and can potentially keep production in countries with high labour costs rather than outsourcing it. The presented work minimizes cycle time as well as setup cost, which are essential factors in automatic bin picking, and therefore widens the applicability of bin picking in industry.
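The offline priority adjustment can be sketched as simple count-based reordering (a hedged stand-in for the learning approach, with an invented Beta(1, 1) prior): grasps that fail in production sink down the priority list.

from collections import defaultdict

# grasp_id -> [successes + 1, failures + 1] (Beta(1, 1) prior as pseudo-counts)
counts = defaultdict(lambda: [1, 1])

def record(grasp_id, success):
    counts[grasp_id][0 if success else 1] += 1

def prioritized(grasp_ids):
    # Reorder by the posterior mean of the success probability.
    return sorted(grasp_ids,
                  key=lambda g: counts[g][0] / sum(counts[g]),
                  reverse=True)

record("g1", True); record("g1", True); record("g2", False)
print(prioritized(["g1", "g2"]))   # g1 is promoted ahead of g2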


Intelligent Robots and Systems | 2012

Applying a learning framework for improving success rates in industrial bin picking

Lars-Peter Ellekilde; Jimmy Alison Jørgensen; Dirk Kraft; Norbert Krüger; Justus H. Piater; Henrik Gordon Petersen

In this paper, we present what appears to be the first study of how to apply learning methods for improving the grasp success probability in industrial bin picking. Our study comprises experiments with both a pneumatic parallel gripper and a suction cup. The baseline is a prioritized list of grasps that have been chosen manually by an experienced engineer. We discuss the probability space for grasp success in bin picking in general and provide suggestions for robust success-probability estimates for different sizes of experimental sets. By performing a number of grasps equivalent to one or two days in production, we show that the success probabilities can be significantly improved by the proposed learning procedure.
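For context on why the size of the experimental set matters: the raw success rate from a handful of grasps is a poor estimate. One standard robust alternative (an illustration, not necessarily the estimator used in the paper) is the Wilson score interval for a binomial proportion:

import math

def wilson_interval(successes, trials, z=1.96):
    """Approximate 95% confidence interval for a grasp success probability."""
    if trials == 0:
        return 0.0, 1.0
    p = successes / trials
    denom = 1.0 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return centre - half, centre + half

# Same point estimate (90%), very different certainty:
print(wilson_interval(90, 100))   # roughly (0.83, 0.94)
print(wilson_interval(9, 10))     # roughly (0.60, 0.98)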


International Conference on Methods and Models in Automation and Robotics | 2015

Task and context sensitive optimization of gripper design using dynamic grasp simulation

Adam Wolniakowski; Jimmy Alison Jørgensen; Konstantsin Miatliuk; Henrik Gordon Petersen; Norbert Krüger

In this work, we present a generic approach to optimizing the design of a parametrized robot gripper, including both gripper parameters and parameters of the finger geometry. We demonstrate our gripper optimization on a parallel-jaw gripper, which we have parametrized in an 11-dimensional space. We furthermore present a parametrization of the grasping task and context, which is essential as input to the computation of gripper performance. We exemplify the feasibility of our approach by computing several optimized grippers for a real-world industrial object in three different scenarios.
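The outer loop of such a design optimization can be sketched with a stubbed simulator (plain random search as a baseline; the scoring function below is a made-up stand-in for the dynamic grasp simulation):

import numpy as np

rng = np.random.default_rng(1)
lower = np.zeros(11)   # 11 normalized gripper and finger-geometry parameters
upper = np.ones(11)

def simulate_quality(params):
    # Stand-in for scoring a gripper design by dynamic grasp simulation on
    # the task-and-context description; here just a smooth toy function.
    return -float(np.sum((params - 0.6) ** 2))

best_params, best_quality = None, -np.inf
for _ in range(2000):
    p = rng.uniform(lower, upper)
    q = simulate_quality(p)
    if q > best_quality:
        best_params, best_quality = p, q
print("best score found:", best_quality)

Because each simulation is expensive in practice, a sample-efficient optimizer would normally replace the random search, but the interface stays the same: parameters in, simulated grasp quality out.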

Collaboration


Dive into Jimmy Alison Jørgensen's collaborations.

Top Co-Authors

Norbert Krüger (University of Southern Denmark)
Henrik Gordon Petersen (University of Southern Denmark)
Lars-Peter Ellekilde (University of Southern Denmark)
Danica Kragic (Royal Institute of Technology)
Dirk Kraft (University of Southern Denmark)
Mila Popovic (University of Southern Denmark)
Adam Wolniakowski (Bialystok University of Technology)
Konstantsin Miatliuk (Bialystok University of Technology)
Gert Kootstra (Royal Institute of Technology)
Yasemin Bekiroglu (Royal Institute of Technology)