Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Dirk Kraft is active.

Publication


Featured research published by Dirk Kraft.


Robotics and Autonomous Systems | 2011

Object-action complexes: grounded abstractions of sensory-motor processes

Norbert Krüger; Christopher W. Geib; Justus H. Piater; Ronald P. A. Petrick; Mark Steedman; Florentin Wörgötter; Ales Ude; Tamim Asfour; Dirk Kraft; Damir Omrcen; Alejandro Agostini; Rüdiger Dillmann

This paper formalises Object–Action Complexes (OACs) as a basis for symbolic representations of sensory–motor experience and behaviours. OACs are designed to capture the interaction between objects and associated actions in artificial cognitive systems. This paper gives a formal definition of OACs, provides examples of their use for autonomous cognitive robots, and enumerates a number of critical learning problems in terms of OACs.
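
The formal definition itself is in the paper; purely as a hedged illustration of the idea, an OAC can be read as coupling a prediction function over an attribute space with a statistical measure of how reliable that prediction is in practice. The Python sketch below renders that reading; the names `OAC`, `AttributeState`, `predict`, and the simple success counter are illustrative assumptions, not the paper's formalism.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical attribute space: a flat mapping from attribute names to values.
AttributeState = Dict[str, float]

@dataclass
class OAC:
    """Illustrative Object-Action Complex: an identifier, a prediction
    function over an attribute space, and an empirical reliability measure."""
    oac_id: str
    predict: Callable[[AttributeState], AttributeState]  # expected effect of the action
    successes: int = 0
    trials: int = 0

    def reliability(self) -> float:
        # Empirical success rate; the paper's statistical measure is richer.
        return self.successes / self.trials if self.trials else 0.0

    def update(self, predicted: AttributeState, observed: AttributeState,
               tol: float = 1e-2) -> None:
        # Count a trial as successful when the prediction matched the outcome.
        self.trials += 1
        if all(abs(observed.get(k, 0.0) - v) <= tol for k, v in predicted.items()):
            self.successes += 1
```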


Robotics and Autonomous Systems | 2010

A strategy for grasping unknown objects based on co-planarity and colour information

Mila Popovic; Dirk Kraft; Leon Bodenhagen; Emre Baseski; Nicolas Pugeault; Danica Kragic; Tamim Asfour; Norbert Krüger

In this work, we describe and evaluate a grasping mechanism that does not make use of any specific prior object knowledge. The mechanism relies on second-order relations between visually extracted multi-modal 3D features provided by an early cognitive vision system. More specifically, the algorithm is based on two relations covering geometric information, in terms of a co-planarity constraint, as well as appearance-based information, in terms of co-occurrence of colour properties. We show that our algorithm, although making use of such rather simple constraints, is able to grasp objects with a reasonable success rate in rather complex environments (i.e., cluttered scenes with multiple objects). Moreover, we have embedded the algorithm within a cognitive system that allows for autonomous exploration and learning in different contexts. First, the system is able to perform long action sequences in which, although the grasping attempts are not always successful, it can recover from mistakes and, more importantly, evaluate the success of its grasps autonomously by haptic feedback (i.e., by a force-torque sensor at the wrist and proprioceptive information about the gripper's finger distance after a grasping attempt). Such labelled data is then used to improve the initially hard-wired algorithm by learning. Moreover, the grasping behaviour has been used within the cognitive system to trigger higher-level processes such as object learning and learning of object-specific grasping.
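
As a rough illustration of how such second-order relations can drive grasp generation, the sketch below tests co-planarity and colour co-occurrence for a pair of oriented 3D features. It is a minimal sketch under assumed conventions (unit direction vectors, RGB colour descriptors, hand-picked thresholds), not the paper's implementation.

```python
import numpy as np

def coplanar(p1, d1, p2, d2, tol=0.05):
    """Rough co-planarity test for two oriented 3D edge features.
    p*: 3D positions; d*: unit direction vectors. The features are treated
    as co-planar when the scalar triple product of (d1, d2, p2 - p1) is
    near zero, i.e. both directions and the connecting vector lie in a
    common plane. The threshold `tol` is an assumption."""
    v = p2 - p1
    n = np.linalg.norm(v)
    if n < 1e-9:
        return True
    return abs(np.dot(np.cross(d1, d2), v / n)) < tol

def colour_cooccurrence(c1, c2, tol=0.1):
    """Simplified colour relation: features whose RGB descriptors are
    close are taken to belong to the same surface."""
    return np.linalg.norm(np.asarray(c1) - np.asarray(c2)) < tol

# A grasp hypothesis could then be proposed for any feature pair that
# satisfies both relations, e.g. a pinch grasp between the two contours.
```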


International Conference on Robotics and Automation | 2013

Pose estimation using local structure-specific shape and appearance context

Anders Buch; Dirk Kraft; Joni-Kristian Kamarainen; Henrik Gordon Petersen; Norbert Krüger

We address the problem of estimating the alignment pose between two models using structure-specific local descriptors. Our descriptors are generated using a combination of 2D image data and 3D contextual shape data, resulting in a set of semi-local descriptors containing rich appearance and shape information for both edge and texture structures. This is achieved by defining feature space relations which describe the neighborhood of a descriptor. By quantitative evaluations, we show that our descriptors provide high discriminative power compared to state-of-the-art approaches. In addition, we show how to utilize this for the estimation of the alignment pose between two point sets. We present experiments both in controlled and real-life scenarios to validate our approach.
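
The descriptors and matching pipeline are specific to the paper; the sketch below only illustrates the generic final stage, estimating a rigid alignment from descriptor correspondences with a RANSAC loop around three-point Kabsch fits. All names and thresholds here are assumptions, not the paper's method.

```python
import numpy as np

def estimate_pose_ransac(src_pts, dst_pts, src_desc, dst_desc,
                         iters=500, inlier_thresh=0.01):
    """Hypothetical alignment sketch: match descriptors by nearest
    neighbour, then run RANSAC with 3-point rigid (Kabsch) fits to find
    the rotation R and translation t that best align src to dst."""
    rng = np.random.default_rng(0)
    # Nearest-neighbour matching in descriptor space.
    dists = np.linalg.norm(src_desc[:, None, :] - dst_desc[None, :, :], axis=2)
    matches = np.stack([np.arange(len(src_pts)), dists.argmin(axis=1)], axis=1)

    def kabsch(a, b):
        # Least-squares rigid transform from point set a to point set b.
        ca, cb = a.mean(0), b.mean(0)
        u, _, vt = np.linalg.svd((a - ca).T @ (b - cb))
        s = np.diag([1.0, 1.0, np.sign(np.linalg.det(vt.T @ u.T))])
        r = vt.T @ s @ u.T
        return r, cb - r @ ca

    best_r, best_t, best_inliers = np.eye(3), np.zeros(3), -1
    for _ in range(iters):
        sel = matches[rng.choice(len(matches), 3, replace=False)]
        r, t = kabsch(src_pts[sel[:, 0]], dst_pts[sel[:, 1]])
        resid = np.linalg.norm(src_pts[matches[:, 0]] @ r.T + t
                               - dst_pts[matches[:, 1]], axis=1)
        inliers = int((resid < inlier_thresh).sum())
        if inliers > best_inliers:
            best_r, best_t, best_inliers = r, t, inliers
    return best_r, best_t
```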


Paladyn: Journal of Behavioral Robotics | 2011

Learning grasp affordance densities

Renaud Detry; Dirk Kraft; Oliver Kroemer; Leon Bodenhagen; Jan Peters; Norbert Krüger; Justus H. Piater

We address the issue of learning and representing object grasp affordance models. We model grasp affordances with continuous probability density functions (grasp densities) which link object-relative grasp poses to their success probability. The underlying function representation is nonparametric and relies on kernel density estimation to provide a continuous model. Grasp densities are learned and refined from exploration, by letting a robot “play” with an object in a sequence of grasp-and-drop actions: the robot uses visual cues to generate a set of grasp hypotheses, which it then executes, recording their outcomes. When a satisfactory amount of grasp data is available, an importance-sampling algorithm turns it into a grasp density. We evaluate our method in a largely autonomous learning experiment, run on three objects with distinct shapes. The experiment shows how learning increases success rates. It also measures the success rate of grasps chosen to maximize the probability of success, given reaching constraints.
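
A minimal sketch of the core idea, assuming position-only grasps (the paper works on full 6D gripper poses) and SciPy's stock Gaussian KDE rather than the paper's kernels; all data here are placeholders.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Placeholder outcomes of executed grasp hypotheses: object-relative 3D
# gripper positions plus a success flag (random stand-ins, not real data).
positions = np.random.default_rng(1).uniform(-0.1, 0.1, size=(200, 3))
success = np.random.default_rng(2).random(200) < 0.3

# Kernel density estimate over the successful grasps: a continuous,
# nonparametric "grasp density" in the spirit of the paper.
density = gaussian_kde(positions[success].T)

# Evaluate the density at candidate grasps and pick the most promising
# one that also satisfies an external reaching constraint.
candidates = np.random.default_rng(3).uniform(-0.1, 0.1, size=(50, 3))
reachable = candidates[:, 2] > 0.0            # hypothetical constraint
scores = density(candidates[reachable].T)
best = candidates[reachable][scores.argmax()]
```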


IEEE Transactions on Autonomous Mental Development | 2013

A Survey of the Ontogeny of Tool Use: From Sensorimotor Experience to Planning

Frank Guerin; Norbert Krüger; Dirk Kraft

In this paper, we review current knowledge on tool use development in infants in order to provide relevant information to cognitive developmental roboticists seeking to design artificial systems that develop tool use abilities. This information covers: 1) sketching developmental pathways leading to tool use competences; 2) the characterization of learning and test situations; 3) the crystallization of seven mechanisms underlying the developmental process; and 4) the formulation of a number of challenges and recommendations for designing artificial systems that exhibit tool use abilities in complex contexts.


International Conference on Advanced Robotics | 2007

Early Reactive Grasping with Second Order 3D Feature Relations

Daniel Aarno; Johan Sommerfeld; Danica Kragic; Nicolas Pugeault; Sinan Kalkan; Florentin Wörgötter; Dirk Kraft; Norbert Krüger

One of the main challenges in the field of robotics is to make robots ubiquitous. To intelligently interact with the world, such robots need to understand the environment and the situations around them and react appropriately; they need context-awareness. But how can robots be equipped with the capability to gather and interpret the information necessary for novel tasks through interaction with the environment, given only minimal knowledge in advance? This has been a long-term question and one of the main drives in the field of cognitive system development.


IEEE Transactions on Autonomous Mental Development | 2010

Development of Object and Grasping Knowledge by Robot Exploration

Dirk Kraft; Renaud Detry; Nicolas Pugeault; Emre Başeski; Frank Guerin; Justus H. Piater; Norbert Krüger

We describe a bootstrapping cognitive robot system that, mainly based on pure exploration, acquires rich object representations and associated object-specific grasp affordances. Such bootstrapping becomes possible by combining innate competences with behaviours by which the system gradually enriches its internal representations, and thereby develops an increasingly mature interpretation of the world and of its ability to act within it. We compare the system's prior competences and developmental progress with human innate competences and the developmental stages of infants.


Journal of Real-Time Image Processing | 2015

Real-time extraction of surface patches with associated uncertainties by means of Kinect cameras

Søren Maagaard Olesen; Simon Lyder; Dirk Kraft; Norbert Krüger; Jeppe Barsøe Jessen

In this paper, we present our work on GPU-based real-time extraction of surface patches by means of Kinect cameras. This paper makes four contributions: (1) we derive an uncertainty model for pixel-wise depth reconstruction with Kinect cameras; (2) we implement a real-time GPU algorithm for the extraction of surface patches (here called ‘texlets’) from Kinect depth data, for which we compare and evaluate different implementation alternatives; (3) based on (1), we derive and implement an appropriate uncertainty model for texlets, which is also computed in real time; and (4) we investigate and quantify the effect of interference on the depth extraction process when multiple Kinect cameras are used. With these contributions, we provide insights into the processing of depth data and show how to achieve higher-precision reconstructions with Kinect cameras, as well as how to extend their use to higher-level visual processing. The introduced algorithms are available in the C++ vision library CoViS.
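
The paper derives its own uncertainty model; purely for orientation, the sketch below shows a commonly used first-order model for structured-light sensors, in which constant disparity noise propagated through the triangulation equation z = f·b/d yields a depth standard deviation that grows quadratically with depth. The constants are assumed, roughly Kinect-v1-like values, not the paper's calibration.

```python
# First-order propagation of disparity noise to depth for a
# structured-light sensor: z = f*b/d implies
# sigma_z ~= (z**2 / (f*b)) * sigma_d.
F = 580.0      # focal length in pixels (typical Kinect v1 value, assumed)
B = 0.075      # baseline in metres (approximate, assumed)
SIGMA_D = 0.1  # disparity noise in pixels (assumed)

def depth_sigma(z: float) -> float:
    """Standard deviation of the reconstructed depth at depth z (metres)."""
    return z ** 2 / (F * B) * SIGMA_D

for z in (1.0, 2.0, 4.0):
    print(f"z = {z:.1f} m -> sigma_z ~= {depth_sigma(z) * 1000:.1f} mm")
```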


The International Journal of Robotics Research | 2011

Learning visual representations for perception-action systems

Justus H. Piater; Sébastien Jodogne; Renaud Detry; Dirk Kraft; Norbert Krüger; Oliver Kroemer; Jan Peters

We discuss vision as a sensory modality for systems that interact flexibly with uncontrolled environments. Instead of trying to build a generic vision system that produces task-independent representations, we argue in favor of task-specific, learnable representations. This concept is illustrated by two examples of our own work. First, our RLVC algorithm performs reinforcement learning directly on the visual input space. To make this very large space manageable, RLVC interleaves the reinforcement learner with a supervised classification algorithm that seeks to split perceptual states so as to reduce perceptual aliasing. This results in an adaptive discretization of the perceptual space based on the presence or absence of visual features. Its extension, RLJC, additionally handles continuous action spaces. In contrast to the minimalistic visual representations produced by RLVC and RLJC, our second method learns structural object models for robust object detection and pose estimation by probabilistic inference. To these models, the method associates grasp experiences autonomously learned by trial and error. These experiences form a non-parametric representation of grasp success likelihoods over gripper poses, which we call a grasp density. Thus, object detection in a novel scene simultaneously produces suitable grasping options.
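
To make the RLVC idea concrete, here is a hedged sketch of adaptive discretization as a binary tree over the presence or absence of visual features: leaves are the discrete perceptual states seen by the reinforcement learner, and a leaf is split when perceptual aliasing is detected there. Class and method names are mine, not the paper's.

```python
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class PerceptualNode:
    """Illustrative RLVC-style discretization: each internal node tests
    the presence of one visual feature; each leaf is a discrete state."""
    feature: Optional[str] = None                 # None => leaf
    present: Optional["PerceptualNode"] = None
    absent: Optional["PerceptualNode"] = None
    state_id: int = -1

    def classify(self, features: Set[str]) -> int:
        # Route an observation (its set of detected features) to a state.
        if self.feature is None:
            return self.state_id
        branch = self.present if self.feature in features else self.absent
        return branch.classify(features)

    def split(self, feature: str, next_id: int) -> None:
        """Refine this leaf when the learner detects perceptual aliasing:
        observations mapped here disagree about value, so a supervised
        classifier proposes `feature` to tell them apart."""
        self.feature = feature
        self.present = PerceptualNode(state_id=self.state_id)
        self.absent = PerceptualNode(state_id=next_id)
        self.state_id = -1
```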


International Conference on Robotics and Automation | 2010

Refining grasp affordance models by experience

Renaud Detry; Dirk Kraft; Anders Buch; Norbert Krüger; Justus H. Piater

We present a method for learning object grasp affordance models in 3D from experience, and demonstrate its applicability through extensive testing and evaluation on a realistic and largely autonomous platform. Grasp affordance refers here to relative object-gripper configurations that yield stable grasps. These affordances are represented probabilistically with grasp densities, which correspond to continuous density functions defined on the space of 6D gripper poses. A grasp density characterizes an object's grasp affordance; densities are linked to visual stimuli through registration with a visual model of the object they characterize. We explore a batch-oriented, experience-based learning paradigm where grasps sampled randomly from a density are performed, and an importance-sampling algorithm learns a refined density from the outcomes of these experiences. The first such learning cycle is bootstrapped with a grasp density formed from visual cues. We show that the robot effectively applies its experience by downweighting poor grasp solutions, which results in increased success rates in subsequent learning cycles. We also present success rates in a practical scenario where a robot needs to repeatedly grasp an object lying in an arbitrary pose, where each pose imposes a specific reaching constraint, and thus forces the robot to make use of the entire grasp density to select the most promising achievable grasp.
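
As a hedged sketch of one such batch learning cycle, again with position-only grasps and SciPy's Gaussian KDE standing in for the paper's full importance-sampling machinery: grasps are drawn from the current density, executed, and the successful ones are refit with weights inverse to their proposal probability, so the refit approximates the true success distribution rather than the proposal. The refit assumes at least a few successes; in the paper, the first cycle is bootstrapped from visual cues.

```python
import numpy as np
from scipy.stats import gaussian_kde

def refine_density(density, execute, n_grasps=100):
    """One illustrative learning cycle. `density` is a gaussian_kde over
    grasp positions; `execute` is a callable that performs a grasp and
    returns True on success (both are assumptions for this sketch)."""
    samples = density.resample(n_grasps)                  # shape (d, n)
    outcomes = np.array([execute(g) for g in samples.T])  # True on success
    ok = samples[:, outcomes]                             # successful grasps
    w = 1.0 / np.maximum(density(ok), 1e-12)              # importance weights
    return gaussian_kde(ok, weights=w / w.sum())
```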

Collaboration


Dive into Dirk Kraft's collaboration.

Top Co-Authors

Norbert Krüger
University of Southern Denmark

Anders Buch
University of Southern Denmark

Henrik Gordon Petersen
University of Southern Denmark

Lars-Peter Ellekilde
University of Southern Denmark

Tamim Asfour
Karlsruhe Institute of Technology

Emre Baseski
University of Southern Denmark

Lars Carøe Sørensen
University of Southern Denmark