Network


Latest external collaborations at the country level.

Hotspot


Research topics where Danica Kragic is active.

Publication


Featured research published by Danica Kragic.


IEEE Transactions on Robotics | 2014

Data-Driven Grasp Synthesis—A Survey

Jeannette Bohg; Antonio Morales; Tamim Asfour; Danica Kragic

We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.
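
As a concrete illustration of the sample-and-rank pattern that underlies most data-driven grasp synthesis, here is a minimal Python sketch; the names (Grasp, sample_grasps, score_grasp) and the toy scoring rule are hypothetical stand-ins, not the authors' implementation.

```python
# Minimal sample-and-rank sketch for data-driven grasp synthesis.
# All names and the scoring heuristic are illustrative only.
from dataclasses import dataclass
from typing import List
import math
import random

@dataclass
class Grasp:
    position: tuple      # approach point on the object surface
    approach: tuple      # approach direction (unit vector)
    score: float = 0.0   # filled in by the ranking step

def sample_grasps(object_points: List[tuple], n: int = 100) -> List[Grasp]:
    """Sample candidate grasps, here naively from surface points."""
    candidates = []
    for _ in range(n):
        p = random.choice(object_points)
        # Toy approach direction: straight down; a real sampler would
        # use surface normals or a learned proposal distribution.
        candidates.append(Grasp(position=p, approach=(0.0, 0.0, -1.0)))
    return candidates

def score_grasp(g: Grasp) -> float:
    """Placeholder quality measure; real rankers use analytic metrics
    (e.g. force closure) or a learned classifier/regressor."""
    x, y, z = g.position
    return -math.hypot(x, y)  # prefer grasps near the object's center axis

def synthesize(object_points: List[tuple]) -> Grasp:
    candidates = sample_grasps(object_points)
    for g in candidates:
        g.score = score_grasp(g)
    return max(candidates, key=lambda g: g.score)

# Toy usage: a grid of surface points around the origin.
corners = [(x * 0.1, y * 0.1, 0.0) for x in range(-5, 6) for y in range(-5, 6)]
print(synthesize(corners).position)
```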


Robotics and Autonomous Systems | 2012

Dual arm manipulation: A survey

Christian Smith; Yiannis Karayiannidis; Lazaros Nalpantidis; Xavi Gratal; Peng Qi; Dimos V. Dimarogonas; Danica Kragic

Recent advances in both anthropomorphic robots and bimanual industrial manipulators have led to an increased interest in the specific problems pertaining to dual arm manipulation. For the future, we foresee robots performing human-like tasks in both domestic and industrial settings. It is therefore natural to study the specifics of dual arm manipulation in humans and methods for using the resulting knowledge in robot control. The related scientific problems range from low-level control to high-level task planning and execution. This review aims to summarize the current state of the art across the heterogeneous range of fields that study the different aspects of these problems specifically in dual arm manipulation.


Computer Vision and Image Understanding | 2011

Visual object-action recognition: Inferring object affordances from human demonstration

Hedvig Kjellström; Javier Romero; Danica Kragic

This paper investigates object categorization according to function, i.e., learning the affordances of objects from human demonstration. Object affordances (functionality) are inferred from observations of humans using the objects in different types of actions. The intended application is learning from demonstration, in which a robot learns to employ objects in household tasks, from observing a human performing the same tasks with the objects. We present a method for categorizing manipulated objects and human manipulation actions in the context of each other. The method is able to simultaneously segment and classify human hand actions, and detect and classify the objects involved in the action. This can serve as an initial step in a learning from demonstration method. Experiments show that the contextual information improves the classification of both objects and actions.
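
A minimal sketch of how such contextual coupling can work, assuming independent object and action classifiers whose posteriors are fused through an affordance compatibility table; all categories and numbers below are illustrative, not the paper's model.

```python
import numpy as np

# Hypothetical per-modality classifier outputs: independent posteriors
# over objects and actions for one observed manipulation sequence.
objects = ["cup", "knife", "phone"]
actions = ["drink", "cut", "answer"]
p_obj = np.array([0.5, 0.3, 0.2])    # from an object classifier
p_act = np.array([0.35, 0.4, 0.25])  # from an action classifier

# Compatibility (affordance) prior: how likely each action is for each
# object. Values here are illustrative only.
compat = np.array([
    [0.8, 0.1, 0.1],    # cup: mostly drunk from
    [0.05, 0.9, 0.05],  # knife: mostly used to cut
    [0.1, 0.05, 0.85],  # phone: mostly answered
])

# Joint posterior over (object, action) pairs; context couples the two.
joint = p_obj[:, None] * p_act[None, :] * compat
joint /= joint.sum()

obj_idx, act_idx = np.unravel_index(joint.argmax(), joint.shape)
print("best pair:", objects[obj_idx], actions[act_idx])

# Marginals: each classification is sharpened by the other modality.
print("object posterior with context:", joint.sum(axis=1))
print("action posterior with context:", joint.sum(axis=0))
```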


international conference on robotics and automation | 2008

Minimum volume bounding box decomposition for shape approximation in robot grasping

Kai Huebner; Steffen Ruthotto; Danica Kragic

Thinking about intelligent robots involves consideration of how such systems can be enabled to perceive, interpret and act in arbitrary and dynamic environments. While sensor perception and model interpretation focus rather passively on the robot's internal representation of the world, robot grasping capabilities are needed to actively execute tasks, modify scenarios and thereby reach versatile goals. These capabilities should also include the generation of stable grasps to safely handle even objects unknown to the robot. We believe that the key to this ability is not to select a good grasp depending on the identification of an object (e.g. as a cup), but on its shape (e.g. as a composition of shape primitives). In this paper, we envelop given 3D data points into primitive box shapes by a fit-and-split algorithm that is based on an efficient Minimum Volume Bounding Box implementation. Though box shapes are not able to approximate arbitrary data in a precise manner, they give efficient clues for planning grasps on arbitrary objects. We present the algorithm and experiments using the 3D grasping simulator GraspIt!.
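
A minimal sketch of the fit-and-split control flow, using axis-aligned boxes and a median split for brevity where the paper fits minimum volume oriented boxes; the gain and point-count thresholds are illustrative, not the paper's tuned values.

```python
import numpy as np

def aabb_volume(points: np.ndarray) -> float:
    """Volume of the axis-aligned bounding box of a point set.
    The paper fits minimum-volume *oriented* boxes; AABBs keep this
    sketch short while preserving the fit-and-split control flow."""
    extent = points.max(axis=0) - points.min(axis=0)
    return float(np.prod(extent))

def fit_and_split(points, gain=0.1, min_points=30):
    """Recursively decompose a point cloud into boxes.

    Split along the longest axis at the median; keep the split only if
    the children's total volume undercuts the parent's by `gain`.
    """
    parent_vol = aabb_volume(points)
    if len(points) < 2 * min_points or parent_vol == 0.0:
        return [points]
    axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))
    cut = np.median(points[:, axis])
    left = points[points[:, axis] <= cut]
    right = points[points[:, axis] > cut]
    if len(left) < min_points or len(right) < min_points:
        return [points]
    if aabb_volume(left) + aabb_volume(right) > (1.0 - gain) * parent_vol:
        return [points]  # splitting barely helps; keep a single box
    return (fit_and_split(left, gain, min_points)
            + fit_and_split(right, gain, min_points))

# Toy L-shaped cloud: decomposes into roughly two elongated boxes.
rng = np.random.default_rng(0)
arm1 = rng.uniform([0, 0, 0], [4, 1, 1], size=(300, 3))
arm2 = rng.uniform([0, 0, 0], [1, 4, 1], size=(300, 3))
parts = fit_and_split(np.vstack([arm1, arm2]))
print(len(parts), "boxes")
```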


international conference on robotics and automation | 2005

Grasp Recognition for Programming by Demonstration

Staffan Ekvall; Danica Kragic

The demand for flexible and re-programmable robots has increased the need for programming by demonstration systems. In this paper, grasp recognition is considered in a programming by demonstration framework. Three methods for grasp recognition are presented and evaluated. The first method uses Hidden Markov Models to model the hand posture sequence during the grasp sequence, while the second method relies on the hand trajectory and hand rotation. The third method is a hybrid in which the first two methods are active in parallel. The particular contribution is that all methods rely on the grasp sequence and not just the final posture of the hand. This facilitates grasp recognition before the grasp is completed. Also, by analyzing the entire sequence and not just the final grasp, the decision is based on more information and the robustness of the overall system is increased. The experimental results show that both arm trajectory and final hand posture provide important information for grasp classification. By combining them, the recognition rate of the overall system is increased.
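
A minimal sketch of the first method's classification scheme, assuming one Gaussian HMM per grasp class (here via the hmmlearn library) trained on hand posture sequences and classification by maximum log-likelihood; the sequences below are synthetic stand-ins for recorded data.

```python
# One HMM per grasp class; classify a sequence by the best-scoring model.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(1)

def make_sequences(offset, n_seq=20, length=30, dim=3):
    """Synthetic posture trajectories drifting toward a class-specific
    final posture (stand-in for recorded data-glove sequences)."""
    return [np.cumsum(rng.normal(offset, 0.1, size=(length, dim)), axis=0)
            for _ in range(n_seq)]

classes = {"power": 0.05, "precision": -0.05}
models = {}
for name, offset in classes.items():
    seqs = make_sequences(offset)
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    m.fit(X, lengths)
    models[name] = m

# Classify a new sequence by the highest-likelihood model. Because the
# HMM scores the whole sequence, a (less reliable) decision is already
# available from a prefix, before the grasp completes.
test = make_sequences(0.05, n_seq=1)[0]
pred = max(models, key=lambda name: models[name].score(test))
print("predicted grasp:", pred)
```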


Advanced Robotics | 2007

The Meaning of Action: a review on action recognition and mapping

Volker Krüger; Danica Kragic; Ales Ude; Christopher W. Geib

In this paper, we analyze the different approaches taken to date within the computer vision, robotics and artificial intelligence communities for the representation, recognition, synthesis and understanding of action. We deal with action at different levels of complexity and provide the reader with the necessary related literature references. We put the literature references further into context and outline a possible interpretation of action by taking into account the different aspects of action recognition, action synthesis and task-level planning.


IEEE Transactions on Robotics | 2011

Assessing Grasp Stability Based on Learning and Haptic Data

Yasemin Bekiroglu; Janne Laaksonen; Jimmy Alison Jørgensen; Ville Kyrki; Danica Kragic

An important ability of a robot that interacts with the environment and manipulates objects is to deal with the uncertainty in sensory data. Sensory information is necessary to, for example, perform online assessment of grasp stability. We present methods to assess grasp stability based on haptic data and machine-learning methods, including AdaBoost, support vector machines (SVMs), and hidden Markov models (HMMs). In particular, we study the effect of different sensory streams on grasp stability. This includes object information such as shape; grasp information such as approach vector; tactile measurements from fingertips; and joint configuration of the hand. Sensory knowledge affects the success of the grasping process both in the planning stage (before a grasp is executed) and during the execution of the grasp (closed-loop online control). In this paper, we study both of these aspects. We propose a probabilistic learning framework to assess grasp stability and demonstrate that knowledge about grasp stability can be inferred using information from tactile sensors. Experiments on both simulated and real data are shown. The results indicate that the idea to exploit the learning approach is applicable in realistic scenarios, which opens a number of interesting avenues for future research.
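
A minimal sketch of the SVM variant of such a learning framework, with synthetic tactile and joint features and a toy stability label standing in for the paper's real and simulated grasp data.

```python
# SVM-based grasp stability assessment on synthetic haptic features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 400
tactile = rng.uniform(0, 1, size=(n, 8))   # e.g. fingertip pressures
joints = rng.uniform(-1, 1, size=(n, 7))   # hand joint angles
X = np.hstack([tactile, joints])

# Toy label rule: grasps with firm, balanced fingertip contact are
# stable. Real labels come from physical lifting/shaking trials.
stable = ((tactile.mean(axis=1) > 0.45) &
          (tactile.std(axis=1) < 0.3)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, stable, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```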


international conference on robotics and automation | 2006

A framework for vision based bearing only 3D SLAM

Patric Jensfelt; Danica Kragic; John Folkesson; Mårten Björkman

This paper presents a framework for 3D vision-based bearing-only SLAM using a single camera, an interesting setup for many real applications due to its low cost. The focus is on the management of the features to achieve real-time performance in extraction, matching, and loop detection. For matching image features to map landmarks, a modified, rotationally variant SIFT descriptor is used in combination with a Harris-Laplace detector. To reduce the complexity of the map estimation while maintaining matching performance, only a few high-quality image features are used for map landmarks; the rest of the features are used for matching. The framework has been combined with an EKF implementation for SLAM. Experiments performed in indoor environments are presented. These experiments demonstrate the validity and effectiveness of the approach; in particular, they show how the robot is able to successfully match current image features to the map when revisiting an area.
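
A minimal sketch of the EKF measurement update for a single bearing observation, the core step of bearing-only SLAM; a full system adds motion prediction, many landmarks, and the feature management described above.

```python
# EKF update for one bearing measurement of one landmark.
# State: robot pose (x, y, theta) plus landmark position (lx, ly).
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def bearing_update(mu, P, z, R=0.01):
    x, y, th, lx, ly = mu
    dx, dy = lx - x, ly - y
    q = dx * dx + dy * dy
    z_hat = wrap(np.arctan2(dy, dx) - th)      # predicted bearing
    # Jacobian of the bearing measurement w.r.t. the state.
    H = np.array([[dy / q, -dx / q, -1.0, -dy / q, dx / q]])
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T / S                            # Kalman gain (S is scalar)
    mu = mu + (K * wrap(z - z_hat)).ravel()
    P = (np.eye(5) - K @ H) @ P
    return mu, P

mu = np.array([0.0, 0.0, 0.0, 2.0, 1.0])       # pose + landmark guess
P = np.diag([0.01, 0.01, 0.01, 1.0, 1.0])      # landmark is uncertain
mu, P = bearing_update(mu, P, z=np.arctan2(1.2, 2.0))
print(mu)  # landmark estimate nudged toward the observed bearing
```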


international conference on robotics and automation | 2001

Real-time tracking meets online grasp planning

Danica Kragic; Andrew T. Miller; Peter K. Allen

We describe a synergistic integration of a grasping simulator and a real-time visual tracking system that work in concert to (1) find an object's pose, (2) plan grasps and movement trajectories, and (3) visually monitor task execution. Starting with a CAD model of an object to be grasped, the system can find the object's pose through vision, which then synchronizes the state of the robot workcell with an online, model-based grasp planning and visualization system we have developed called GraspIt. GraspIt can then plan a stable grasp for the object and direct the robotic hand system to perform the grasp. It can also generate trajectories for the movement of the grasped object, which are used by the visual control system to monitor the task and compare the actual grasp and trajectory with the planned ones. We present experimental results using typical grasping tasks.
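
A minimal sketch of the monitoring step, reduced to comparing a planned object trajectory against visually tracked poses; the tolerance and data are illustrative, and the system's actual planning and tracking components (GraspIt, the vision system) are abstracted away.

```python
# Compare planned vs. visually tracked trajectories; flag deviations.
import numpy as np

def monitor(planned, tracked, pos_tol=0.02):
    """Return per-step deviation (meters) and whether the execution
    stayed within tolerance of the plan at every step."""
    planned, tracked = np.asarray(planned), np.asarray(tracked)
    err = np.linalg.norm(planned - tracked, axis=1)
    return err, bool((err < pos_tol).all())

# Toy lift trajectory (x, y, z) from planner vs. tracker.
planned = [(0.0, 0.0, 0.10), (0.0, 0.0, 0.15), (0.0, 0.0, 0.20)]
tracked = [(0.0, 0.005, 0.10), (0.0, 0.004, 0.16), (0.0, 0.003, 0.21)]
err, ok = monitor(planned, tracked)
print("max deviation %.3f m, on plan: %s" % (err.max(), ok))
```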


international conference on robotics and automation | 2001

Cue integration for visual servoing

Danica Kragic; Henrik I. Christensen

The robustness and reliability of vision algorithms is nowadays the key issue in robotics research and industrial applications. To control a robot in a closed-loop fashion, different tracking systems have been reported in the literature. A common approach to increasing the robustness of a tracking system is the use of different models (a CAD model of the object, a motion model) known a priori. Our hypothesis is that fusion of multiple features facilitates robust detection and tracking of objects in scenes of realistic complexity. A particular application is the estimation of a robot's end-effector position in a sequence of images. The research investigates two different approaches to cue integration: 1) voting and 2) fuzzy logic-based fusion. The two approaches have been tested in association with scenes of varying complexity. Experimental results clearly demonstrate that fusion of cues results in a tracking system with robust performance. The robustness is particularly evident for scenes with multiple moving objects and partial occlusion of the tracked object.
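
A minimal sketch of the voting approach, assuming each cue (color, motion, edges, ...) produces a response map over image positions that is fused by weighted voting; the maps and weights below are illustrative only.

```python
# Weighted voting over per-cue response maps for target localization.
import numpy as np

h, w = 48, 64
rng = np.random.default_rng(3)

def cue_map(cx, cy, noise):
    """Toy response map peaking near column cx, row cy."""
    ys, xs = np.mgrid[0:h, 0:w]
    m = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / 50.0)
    return m + noise * rng.random((h, w))

cues = {
    "color":  (cue_map(30, 20, 0.3), 0.5),
    "motion": (cue_map(31, 21, 0.5), 0.3),
    "edges":  (cue_map(28, 22, 0.8), 0.2),
}

# Weighted plurality voting: sum weighted responses, take the peak.
# A single distracted cue (e.g. edges near clutter) is outvoted.
votes = sum(wgt * m for m, wgt in cues.values())
peak = np.unravel_index(votes.argmax(), votes.shape)
print("fused target estimate (row, col):", peak)
```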

Collaboration


An overview of Danica Kragic's collaborations.

Top Co-Authors

Carl Henrik Ek | Royal Institute of Technology
Henrik I. Christensen | Georgia Institute of Technology
Mårten Björkman | Royal Institute of Technology
Hedvig Kjellström | Royal Institute of Technology
Yasemin Bekiroglu | Royal Institute of Technology
Staffan Ekvall | Royal Institute of Technology
Christian Smith | Royal Institute of Technology
Johannes A. Stork | Royal Institute of Technology