
Publication


Featured research published by Alexander Skoglund.


Computational Intelligence in Robotics and Automation | 2007

Programming by Demonstration of Pick-and-Place Tasks for Industrial Manipulators using Task Primitives

Alexander Skoglund; Boyko Iliev; Bourhane Kadmiry; Rainer Palm

This article presents an approach to Programming by Demonstration (PbD) that simplifies the programming of industrial manipulators. Using a set of task primitives for a known task type, the demonstration is interpreted and a manipulator program is generated automatically. A pick-and-place task is analyzed on the basis of its velocity profile and decomposed into task primitives. Task primitives are basic actions of the robot/gripper which can be executed in sequence to form a complete task. For modeling and generation of the demonstrated trajectory, fuzzy time clustering is used, resulting in smooth and accurate motions. To illustrate the approach, we carried out experiments on a real industrial manipulator.
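The abstract mentions fuzzy time clustering for trajectory modeling. As a rough sketch of the idea (not the authors' implementation): fuzzy c-means clustering on the time axis, with one local linear model per time cluster, blended by the fuzzy memberships into a smooth trajectory model. All function names and parameters below are illustrative assumptions.

```python
import numpy as np

def fuzzy_time_clustering(t, x, n_clusters=5, m=2.0, n_iter=50):
    """Simplified fuzzy time clustering: fuzzy c-means over the time
    axis, then one weighted linear model x ~ a*t + b per cluster,
    blended by the memberships to reconstruct a smooth trajectory."""
    t = np.asarray(t, float)
    x = np.asarray(x, float)
    # initialize cluster centers evenly over the demonstration's duration
    centers = np.linspace(t.min(), t.max(), n_clusters)
    for _ in range(n_iter):
        d = np.abs(t[:, None] - centers[None, :]) + 1e-9
        # standard fuzzy c-means membership update
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        centers = (u ** m).T @ t / (u ** m).sum(axis=0)
    # fit one weighted least-squares line per time cluster
    models = []
    A = np.stack([t, np.ones_like(t)], axis=1)
    for j in range(n_clusters):
        W = np.diag(u[:, j] ** m)
        coef = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ x, rcond=None)[0]
        models.append(coef)

    def reconstruct(tq):
        """Evaluate the blended trajectory model at query times tq."""
        tq = np.asarray(tq, float)
        d = np.abs(tq[:, None] - centers[None, :]) + 1e-9
        uq = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        local = np.stack([a * tq + b for a, b in models], axis=1)
        return (uq * local).sum(axis=1)

    return reconstruct
```

Because the memberships sum to one at every time instant, the blend interpolates smoothly between the local models, which is the property the abstract attributes to the fuzzy time clustering approach.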


Robotics and Autonomous Systems | 2010

Programming-by-Demonstration of reaching motions - A next-state-planner approach

Alexander Skoglund; Boyko Iliev; Rainer Palm

This paper presents a novel approach to skill acquisition from human demonstration. A robot manipulator whose morphology is very different from the human arm cannot simply copy a human motion; it has to execute its own version of the skill. Once a skill has been acquired, the robot must also be able to generalize to other similar skills without a new learning process. By using a motion planner that operates in an object-related world frame called hand-state, we show that this representation simplifies skill reconstruction and preserves the essential parts of the skill.
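A next-state planner generates motion online, one step at a time, rather than replaying a stored path. A minimal first-order sketch of that idea (the gain and step size are illustrative assumptions; the paper's planner additionally shapes the motion with the demonstrated trajectory model):

```python
import numpy as np

def next_state_planner(x, goal, dt=0.01, k=4.0):
    """One planning step: compute the next state as a small motion
    toward the goal (a simple first-order attractor). Because each
    step depends only on the current state, the motion adapts if the
    start pose or the goal differs from the demonstration."""
    return x + dt * k * (goal - x)

# roll out a reaching motion from rest toward a target position
x = np.zeros(3)
goal = np.array([0.4, 0.1, 0.3])
for _ in range(2000):
    x = next_state_planner(x, goal)
```

The rollout converges to the goal from any initial state, which is what lets such a planner execute "its own version" of a demonstrated reach instead of copying the human trajectory point by point.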


Archive | 2010

Programming-by-Demonstration of Reaching Motions using a Next-State-Planner

Alexander Skoglund; Boyko Iliev; Rainer Palm

Programming-by-Demonstration (PbD) is a central research topic in robotics, since it is an important part of human-robot interaction. A key scientific challenge in PbD is to make robots capable of imitating a human: PbD means instructing a robot to perform a novel task by having it observe a human demonstrator performing that task. Current research has demonstrated that PbD is a promising approach to effective task learning which greatly simplifies the programming process (Calinon et al., 2007), (Pardowitz et al., 2007), (Skoglund et al., 2007) and (Takamatsu et al., 2007). In this chapter a method for imitation learning is presented, based on fuzzy modeling and a next-state-planner in a PbD framework. For recent and comprehensive overviews of PbD (also called "Imitation Learning" or "Learning from Demonstration"), see (Argall et al., 2009), (Billard et al., 2008) or (Bandera et al., 2007). What might appear to be a straightforward idea, copying human motion trajectories with a simple teaching-playback method, turns out to be unrealistic for several reasons. As pointed out by Nehaniv & Dautenhahn (2002), there is a significant difference in morphology between the body of the human and that of the robot, known in imitation learning as the correspondence problem. Further complicating the picture, the initial locations of the human demonstrator and the robot in relation to the task (i.e., the object) might force the robot into unreachable sections of the workspace or singular arm configurations. Moreover, in a grasping scenario it is not possible to reproduce the motions of the human hand, since no robotic hand yet exists that matches the human hand in terms of functionality and sensing. In this chapter we demonstrate that the robot can generate an appropriate reaching motion towards the target object, provided that a robotic hand with autonomous grasping capabilities is used to execute the grasp.
In the approach we present here, the robot observes a human first demonstrating the environment of the task (i.e., the objects of interest) and then the actual task. This knowledge, i.e., grasp-related object properties, hand-object relational trajectories, and the coordination of reach-and-grasp motions, is encoded and generalized in terms of hand-state space trajectories. The hand-state components are defined such that they are invariant with respect to perception, and they include the mapping between the human and robot hand, i.e., the correspondence.
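As a toy illustration of the hand-state idea (positional components only; the actual hand-state also covers orientation and finger configuration, and the helper names here are assumptions): expressing the hand trajectory in the object's frame makes one demonstration reusable for new object poses.

```python
import numpy as np

def to_hand_state(hand_pos_world, obj_pos, obj_rot):
    """Express the hand position in the object's frame (a simplified,
    position-only 'hand-state')."""
    return obj_rot.T @ (hand_pos_world - obj_pos)

def from_hand_state(hand_state, obj_pos, obj_rot):
    """Map a hand-state back to the world frame for a (possibly
    different) object pose; this is what lets one demonstration
    generalize to new object locations."""
    return obj_rot @ hand_state + obj_pos

# demonstrated hand position relative to an object at p1
R = np.eye(3)
p1 = np.array([1.0, 2.0, 0.0])
hand = np.array([1.5, 2.0, 0.3])
hs = to_hand_state(hand, p1, R)

# the same hand-state replayed against a new object pose p2 keeps
# the relative approach intact
p2 = np.array([0.0, 0.0, 1.0])
transferred = from_hand_state(hs, p2, R)
```

In this sketch the transferred point sits at the same offset from the new object as the demonstrated hand did from the original one, mirroring how hand-state trajectories decouple the skill from absolute workspace coordinates.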


Archive | 2010

Programming-by-Demonstration of Robot Motions

Alexander Skoglund; Boyko Iliev; Rainer Palm

In this chapter a novel approach to skill acquisition from human demonstration is presented. The morphology of a robot manipulator is usually very different from that of the human arm, so the robot cannot simply copy a human motion; instead it has to execute its own version of the skill demonstrated by the operator. Once a skill has been acquired, the robot must also be able to generalize to other similar skills without starting a new learning process. By using a motion planner that operates in an object-related world frame called hand-state, we show that this representation simplifies skill reconstruction and preserves the essential parts of the skill.


Archive | 2004

Position teaching of a robot arm by demonstration with a wearable input device

Jacopo Aleotti; Alexander Skoglund; Tom Duckett


Archive | 2009

Programming by demonstration of robot manipulators

Alexander Skoglund


Archive | 2008

A Hand State Approach to Imitation with a Next-State-Planner for Industrial Manipulators

Alexander Skoglund; Boyko Iliev; Rainer Palm


International Conference on Advanced Robotics | 2009

Real life grasping using an under-actuated robot hand - Simulation and experiments

Johan Tegin; Boyko Iliev; Alexander Skoglund; Danica Kragic; Jan Wikander


SAIS-SSLS 2005, 3rd Joint Workshop of the Swedish AI and Learning Systems Societies | 2005

Towards a supervised Dyna-Q application on a robotic manipulator

Alexander Skoglund; Rainer Palm; Tom Duckett


Archive | 2006

Towards Manipulator Learning by Demonstration and Reinforcement Learning

Alexander Skoglund

Collaboration


Dive into Alexander Skoglund's collaboration.

Top Co-Authors

Johan Tegin (Royal Institute of Technology)
Danica Kragic (Royal Institute of Technology)
Jan Wikander (Royal Institute of Technology)