Mila Popovic
University of Southern Denmark
Publications
Featured research published by Mila Popovic.
Robotics and Autonomous Systems | 2010
Mila Popovic; Dirk Kraft; Leon Bodenhagen; Emre Baseski; Nicolas Pugeault; Danica Kragic; Tamim Asfour; Norbert Krüger
In this work, we describe and evaluate a grasping mechanism that does not make use of any specific prior object knowledge. The mechanism exploits second-order relations between visually extracted multi-modal 3D features provided by an early cognitive vision system. More specifically, the algorithm is based on two relations covering geometric information in terms of a co-planarity constraint as well as appearance-based information in terms of co-occurrence of colour properties. We show that our algorithm, although making use of such rather simple constraints, is able to grasp objects with a reasonable success rate in rather complex environments (i.e., cluttered scenes with multiple objects). Moreover, we have embedded the algorithm within a cognitive system that allows for autonomous exploration and learning in different contexts. First, the system is able to perform long action sequences; although the grasping attempts are not always successful, it can recover from mistakes and, more importantly, evaluate the success of the grasps autonomously by haptic feedback (i.e., by a force-torque sensor at the wrist and proprioceptive information about the distance of the gripper after a grasping attempt). Such labelled data is then used to improve the initially hard-wired algorithm by learning. Moreover, the grasping behaviour has been used in a cognitive system to trigger higher-level processes such as object learning and learning of object-specific grasping.
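The second-order relations described in this abstract can be illustrated with a minimal sketch: a geometric co-planarity test between two oriented 3D features, paired with a colour co-occurrence check. All function names, thresholds, and feature encodings below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def coplanar(p1, n1, p2, n2, angle_tol=0.1, dist_tol=0.01):
    """Geometric relation: two oriented 3D features are co-planar if their
    normals are nearly parallel and each point lies close to the plane
    defined by the other feature."""
    if abs(np.dot(n1, n2)) < np.cos(angle_tol):
        return False
    d = p2 - p1
    return abs(np.dot(d, n1)) < dist_tol and abs(np.dot(d, n2)) < dist_tol

def colors_cooccur(c1, c2, tol=0.2):
    """Appearance relation: features with similar colours are likely to
    belong to the same surface."""
    return np.linalg.norm(np.asarray(c1) - np.asarray(c2)) < tol

# Two features on the plane z = 0 with matching colours
p1, n1, c1 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), (0.80, 0.10, 0.10)
p2, n2, c2 = np.array([0.1, 0.05, 0.0]), np.array([0.0, 0.0, 1.0]), (0.75, 0.12, 0.10)

if coplanar(p1, n1, p2, n2) and colors_cooccur(c1, c2):
    print("feature pair generates a grasp hypothesis")
```

A pair of features passing both tests would trigger one grasp hypothesis; the actual system derives the full gripper pose from the feature geometry.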
international conference on development and learning | 2009
Renaud Detry; Emre Baseski; Mila Popovic; Younes Touati; Norbert Krüger; Oliver Kroemer; Jan Peters; Justus H. Piater
This paper addresses the issue of learning and representing object grasp affordances, i.e. object-gripper relative configurations that lead to successful grasps. The purpose of grasp affordances is to organize and store the whole knowledge that an agent has about the grasping of an object, in order to facilitate reasoning about grasping solutions and their achievability. The affordance representation consists of a continuous probability density function defined on the 6D gripper pose space - 3D position and orientation - within an object-relative reference frame. Grasp affordances are initially learned from various sources, e.g. from imitation or from visual cues, leading to grasp hypothesis densities. Grasp densities are attached to a learned 3D visual object model, and pose estimation of the visual model allows a robotic agent to execute samples from a grasp hypothesis density under various object poses. Grasp outcomes are used to learn grasp empirical densities, i.e. grasps that have been confirmed through experience. We show the results of learning grasp hypothesis densities from both imitation and visual cues, and present grasp empirical densities learned from physical experience by a robot.
intelligent robots and systems | 2011
Mila Popovic; Gert Kootstra; Jimmy Alison Jørgensen; Danica Kragic; Norbert Krüger
Grasping unknown objects based on real-world visual input is a challenging problem. In this paper, we present an Early Cognitive Vision system that builds a hierarchical representation based on edge and texture information, which is a sparse but powerful description of the scene. Based on this representation, we generate edge-based and surface-based grasps. The results show that the method generates successful grasps, that the edge and surface information are complementary, and that the method can deal with complex scenes. We furthermore present a benchmark for vision-based grasping.
The International Journal of Robotics Research | 2012
Gert Kootstra; Mila Popovic; Jimmy Alison Jørgensen; Kamil Kukliński; Konstantsin Miatliuk; Danica Kragic; Norbert Krüger
Grasping unknown objects based on visual input, where no a priori knowledge about the objects is used, is a challenging problem. In this paper, we present an Early Cognitive Vision system that builds a hierarchical representation based on edge and texture information which provides a sparse but powerful description of the scene. Based on this representation, we generate contour-based and surface-based grasps. We test our method in two real-world scenarios, as well as on a vision-based grasping benchmark providing a hybrid scenario using real-world stereo images as input and a simulator for extensive and repetitive evaluation of the grasps. The results show that the proposed method is able to generate successful grasps, and in particular that the contour and surface information are complementary for the task of grasping unknown objects. This allows for dealing with rather complex scenes.
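The complementary contour-based and surface-based grasp generation described above can be sketched in a simplified form: a pinch grasp closing across an edge contour, and a top grasp approaching along a surface patch normal. The data structures and heuristics here are illustrative assumptions, not the published method.

```python
import numpy as np

def contour_grasp(points):
    """Contour-based grasp: pinch at the contour midpoint, closing
    perpendicular to the overall edge direction (here in the xy-plane)."""
    points = np.asarray(points, dtype=float)
    mid = points[len(points) // 2]
    tangent = points[-1] - points[0]
    tangent /= np.linalg.norm(tangent)
    closing = np.array([-tangent[1], tangent[0], 0.0])
    return {"position": mid, "closing_dir": closing}

def surface_grasp(center, normal):
    """Surface-based grasp: approach the patch centre against its normal."""
    normal = np.asarray(normal, dtype=float)
    return {"position": np.asarray(center, dtype=float),
            "approach": -normal / np.linalg.norm(normal)}

# An edge contour along the x-axis and a horizontal surface patch
g1 = contour_grasp([[0.0, 0.0, 0.1], [0.05, 0.0, 0.1], [0.1, 0.0, 0.1]])
g2 = surface_grasp([0.05, 0.0, 0.12], [0.0, 0.0, 1.0])
print(g1["closing_dir"], g2["approach"])
```

The two generators produce different candidate sets from the same scene representation, which is why the paper finds them complementary on objects where one cue is weak.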
From Motor Learning to Interaction Learning in Robots | 2010
Renaud Detry; Emre Baseski; Mila Popovic; Younes Touati; Norbert Krüger; Oliver Kroemer; Jan Peters; Justus H. Piater
We develop means of learning and representing object grasp affordances probabilistically. By grasp affordance, we refer to an entity that is able to assess whether a given relative object-gripper configuration will yield a stable grasp. These affordances are represented with grasp densities, continuous probability density functions defined on the space of 3D positions and orientations. Grasp densities are registered with a visual model of the object they characterize. They are exploited by aligning them to a target object using visual pose estimation. Grasp densities are refined through experience: A robot “plays” with an object by executing grasps drawn randomly from the object’s grasp density. The robot then uses the outcomes of these grasps to build a richer density through an importance sampling mechanism. Initial grasp densities, called hypothesis densities, are bootstrapped from grasps collected using a motion capture system, or from grasps generated from the visual model of the object. Refined densities, called empirical densities, represent affordances that have been confirmed through physical experience. The applicability of our method is demonstrated by producing empirical densities for two objects with a real robot and its three-finger hand. Hypothesis densities are created from visual cues and human demonstration.
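The refinement loop above can be sketched with a particle-based stand-in for a grasp density: sample grasps from the hypothesis density, record outcomes, and reweight so that successful grasps carry the mass of the empirical density. To keep the sketch small it uses a 1D slice of the 6D pose space, and the success model is a made-up stand-in for physical execution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothesis density: particles over a 1D slice of the gripper pose space,
# with uniform weights.
particles = rng.normal(0.0, 0.05, size=200)
weights = np.full(200, 1.0 / 200)

def grasp_succeeds(x):
    # Stand-in for executing the grasp on the robot: poses near
    # x = 0.02 succeed (purely illustrative).
    return abs(x - 0.02) < 0.03

# Execute samples, then reweight by outcome: the surviving, renormalized
# particles approximate the empirical density.
outcomes = np.array([grasp_succeeds(x) for x in particles])
weights = weights * outcomes
weights /= weights.sum()

empirical_mean = float(np.sum(weights * particles))
print(f"empirical density mean: {empirical_mean:.3f}")
```

In the actual system the density is a kernel density estimate over full 6D poses and the reweighting is an importance-sampling step, but the shape of the loop is the same: sample, execute, reweight.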
computational intelligence in robotics and automation | 2009
Leon Bodenhagen; Dirk Kraft; Mila Popovic; Emre Baseski; Peter Eggenberger Hotz; Norbert Krüger
In this work we refine an initial grasping behavior based on 3D edge information by learning. Based on a set of autonomously generated evaluated grasps and relations between the semi-global 3D edges, a prediction function is learned that computes a likelihood for the success of a grasp using either an offline or an online learning scheme. Both methods are implemented using a hybrid artificial neural network containing standard nodes with a sigmoid activation function and nodes with a radial basis function. We show that a significant performance improvement can be achieved.
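A hybrid network of the kind described above can be sketched as a single hidden layer mixing sigmoid units with radial basis units, followed by a sigmoid output giving the grasp-success likelihood. The sizes, weights, and feature encoding are illustrative assumptions, not the trained model from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbf(x, center, width=1.0):
    # Radial basis unit: Gaussian response around a prototype vector.
    return np.exp(-np.sum((x - center) ** 2) / (2.0 * width ** 2))

def predict_success(features, W_sig, b_sig, rbf_centers, w_out, b_out):
    """Hybrid hidden layer: standard sigmoid units alongside radial basis
    units; a single sigmoid output yields a success likelihood in [0, 1]."""
    h_sig = sigmoid(W_sig @ features + b_sig)
    h_rbf = np.array([rbf(features, c) for c in rbf_centers])
    h = np.concatenate([h_sig, h_rbf])
    return float(sigmoid(w_out @ h + b_out))

rng = np.random.default_rng(1)
x = rng.normal(size=4)                      # edge-relation feature vector
W_sig, b_sig = rng.normal(size=(3, 4)), rng.normal(size=3)
centers = rng.normal(size=(2, 4))           # RBF prototypes
w_out, b_out = rng.normal(size=5), 0.0

p = predict_success(x, W_sig, b_sig, centers, w_out, b_out)
print(f"predicted grasp success likelihood: {p:.2f}")
```

Training such a network offline or online then amounts to fitting these weights against the autonomously labelled grasp outcomes.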
Paladyn | 2012
Gert Kootstra; Mila Popovic; Jimmy Alison Jørgensen; Danica Kragic; Henrik Gordon Petersen; Norbert Krüger
We present a database and a software tool, VisGraB, for benchmarking methods for vision-based grasping of unknown objects with no prior object knowledge. The benchmark is a combined real-world and simulated experimental setup. Stereo images of real scenes containing several objects in different configurations are included in the database. The user provides a method for grasp generation based on the real visual input. The grasps are then planned, executed, and evaluated by the provided grasp simulator, where several grasp-quality measures are used for evaluation. This setup has the advantage that a large number of grasps can be executed and evaluated while dealing with dynamics and with the noise and uncertainty present in the real-world images. VisGraB enables a fair comparison among different grasping methods. The user furthermore does not need to deal with robot hardware and can focus on the vision methods instead. As a baseline, benchmark results of our grasp strategy are included.
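The protocol the abstract describes, where the user plugs in a grasp generator and a simulator executes and scores each grasp, can be sketched as follows. All function names and the scoring stand-in are illustrative assumptions, not the actual VisGraB API.

```python
# Sketch of the benchmark protocol: a user-supplied grasp generator is run
# on each (stereo) scene, and a simulator stand-in scores every grasp.

def my_grasp_generator(stereo_scene):
    # User-supplied vision method: return candidate grasps for the scene.
    return [{"position": (0.10, 0.00, 0.05), "approach": (0, 0, -1)}]

def simulate_grasp(scene, grasp):
    # Stand-in for the dynamic grasp simulator; returns a quality score.
    return 0.8 if grasp["approach"] == (0, 0, -1) else 0.0

def run_benchmark(scenes, generator, success_threshold=0.5):
    scores = []
    for scene in scenes:
        for grasp in generator(scene):
            scores.append(simulate_grasp(scene, grasp))
    successes = sum(s >= success_threshold for s in scores)
    return successes / len(scores)

rate = run_benchmark(["scene_01", "scene_02"], my_grasp_generator)
print(f"grasp success rate: {rate:.2f}")
```

Because only the generator touches the real images while execution happens in simulation, the same grasp candidates can be evaluated repeatedly and compared fairly across methods.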
machine vision applications | 2015
Dirk Kraft; Wail Mustafa; Mila Popovic; Jeppe Barsøe Jessen; Anders Buch; Thiusius Rajeeth Savarimuthu; Nicolas Pugeault; Norbert Krüger
We present a deep hierarchical visual system with two parallel hierarchies for edge and surface information. In the two hierarchies, complementary visual information is represented at different levels of granularity together with the associated uncertainties and confidences. At all levels, geometric and appearance information is coded explicitly in 2D and 3D, allowing access to this information separately and linking between the different levels. We demonstrate the advantages of such hierarchies in three applications covering grasping, viewpoint-independent object representation, and pose estimation.
International Journal of Humanoid Robotics | 2008
Dirk Kraft; Nicolas Pugeault; Emre Baseski; Mila Popovic; Danica Kragic; Sinan Kalkan; Florentin Wörgötter; Norbert Krüger
International Conference on Cognitive Systems (CogSys 2008) | 2008
Dirk Kraft; Emre Baseski; Mila Popovic; Anna Marta Batog; Anders Kjær-Nielsen; Norbert Krüger; Ronald P. A. Petrick; Christopher W. Geib; Nicolas Pugeault; Mark Steedman; Tamim Asfour; Rüdiger Dillmann; Sinan Kalkan; Florentin Wörgötter; Bernhard Hommel; Renaud Detry; Justus H. Piater