Publication


Featured research published by Kai Huebner.


International Conference on Robotics and Automation | 2008

Minimum volume bounding box decomposition for shape approximation in robot grasping

Kai Huebner; Steffen Ruthotto; Danica Kragic

Thinking about intelligent robots involves consideration of how such systems can be enabled to perceive, interpret and act in arbitrary and dynamic environments. While sensor perception and model interpretation address the robot's internal representation of the world rather passively, robot grasping capabilities are needed to actively execute tasks, modify scenarios and thereby reach versatile goals. These capabilities should also include the generation of stable grasps to safely handle even objects unknown to the robot. We believe that the key to this ability is not to select a good grasp depending on the identification of an object (e.g. as a cup), but on its shape (e.g. as a composition of shape primitives). In this paper, we envelop given 3D data points into primitive box shapes by a fit-and-split algorithm that is based on an efficient Minimum Volume Bounding Box implementation. Though box shapes are not able to approximate arbitrary data precisely, they give efficient clues for planning grasps on arbitrary objects. We present the algorithm and experiments using the 3D grasping simulator GraspIt!.
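
The fit-and-split idea can be sketched in a few lines: fit a box to the point cloud, try candidate splits, and recurse only while the summed child-box volume drops enough. The sketch below is a simplification under stated assumptions: it uses PCA-oriented boxes as a cheap stand-in for the paper's exact Minimum Volume Bounding Box computation and median splits along coordinate axes; thresholds are illustrative.

```python
import numpy as np

def pca_box_volume(points):
    """Volume of a PCA-oriented bounding box. This is a cheap stand-in
    for the exact minimum volume bounding box the paper computes."""
    centered = points - points.mean(axis=0)
    _, _, axes = np.linalg.svd(centered, full_matrices=False)
    extents = centered @ axes.T  # point coordinates in the box frame
    side = extents.max(axis=0) - extents.min(axis=0)
    return float(np.prod(side))

def fit_and_split(points, gain=0.1, min_points=50):
    """Recursively split a point cloud into box-shaped parts as long as
    splitting shrinks the summed box volume by at least `gain`."""
    if len(points) < 2 * min_points:
        return [points]
    parent_volume = pca_box_volume(points)
    best = None
    # Try a median split along each coordinate axis; keep the best one.
    for axis in range(points.shape[1]):
        median = np.median(points[:, axis])
        left = points[points[:, axis] <= median]
        right = points[points[:, axis] > median]
        if len(left) < min_points or len(right) < min_points:
            continue
        child_volume = pca_box_volume(left) + pca_box_volume(right)
        if best is None or child_volume < best[0]:
            best = (child_volume, left, right)
    # Stop when no split reduces the volume enough: the part stays one box.
    if best is None or best[0] > (1.0 - gain) * parent_volume:
        return [points]
    return (fit_and_split(best[1], gain, min_points)
            + fit_and_split(best[2], gain, min_points))
```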


The International Journal of Robotics Research | 2010

An Active Vision System for Detecting, Fixating and Manipulating Objects in the Real World

Babak Rasolzadeh; Mårten Björkman; Kai Huebner; Danica Kragic

The ability to autonomously acquire new knowledge through interaction with the environment is an important research topic in the field of robotics. The knowledge can only be acquired if suitable perception-action capabilities are present: a robotic system has to be able to detect, attend to and manipulate objects in its surroundings. In this paper, we present the results of our long-term work in the area of vision-based sensing and control. We study the problems of finding, attending to, recognizing and manipulating objects in domestic environments, and present a stereo-based vision system framework in which aspects of top-down, bottom-up and foveated attention are put into focus. We demonstrate how the system can be utilized for robotic object grasping.
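
As a rough illustration of the attention mechanism described above, a task-driven (top-down) saliency map can be blended with a stimulus-driven (bottom-up) one before selecting the next fixation. This is a minimal sketch; the weighting scheme and names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def next_fixation(bottom_up, top_down, k=0.5):
    """Blend a stimulus-driven saliency map with a task-driven one and
    return the most salient pixel as the next fixation point.
    `k` trades off task relevance against pure image saliency."""
    combined = (1.0 - k) * bottom_up + k * top_down
    y, x = np.unravel_index(np.argmax(combined), combined.shape)
    return (x, y), combined
```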


Intelligent Robots and Systems | 2008

Selection of robot pre-grasps using box-based shape approximation

Kai Huebner; Danica Kragic

Grasping is a central issue in various robot applications, especially when unknown objects have to be manipulated by the system. In earlier work, we have shown the efficiency of 3D object shape approximation by box primitives for the purpose of grasping: a point cloud is approximated by box primitives [1]. In this paper, we present a continuation of these ideas and focus on the box representation itself. To the set of grasp hypotheses derived from box face normals, we apply a heuristic selection that integrates task, orientation and shape constraints. Finally, an off-line trained neural network chooses the best hypothesis as the final grasp. We motivate how boxes, as one of the simplest representations, can be applied in a more sophisticated manner to generate task-dependent grasps.
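
A minimal sketch of this pipeline, under assumed interfaces: each box contributes one grasp hypothesis per face normal, a heuristic filter discards infeasible ones, and a trained scorer (standing in for the paper's off-line trained neural network) picks the final grasp. The approach offset and the callback signatures are hypothetical.

```python
import numpy as np

# The six outward face normals of a box in its local frame, ordered
# (+x, -x, +y, -y, +z, -z) to pair with the repeated extents below.
FACE_NORMALS = np.array([[1, 0, 0], [-1, 0, 0],
                         [0, 1, 0], [0, -1, 0],
                         [0, 0, 1], [0, 0, -1]], dtype=float)

def grasp_hypotheses(rotation, center, extents, offset=0.1):
    """One approach direction per box face, in the world frame.
    `offset` (hypothetical, in meters) places the hand outside the face."""
    hypotheses = []
    for n_local, side in zip(FACE_NORMALS, np.repeat(extents, 2)):
        normal = rotation @ n_local
        position = center + normal * (side / 2 + offset)
        hypotheses.append({"position": position, "approach": -normal})
    return hypotheses

def select_grasp(hypotheses, reachable_fn, score_fn):
    """Heuristic filter first, then let the trained scorer pick the
    final grasp among the remaining hypotheses."""
    feasible = [h for h in hypotheses if reachable_fn(h)]
    return max(feasible, key=score_fn) if feasible else None
```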


Intelligent Robots and Systems | 2010

Learning task constraints for robot grasping using graphical models

Dan Song; Kai Huebner; Ville Kyrki; Danica Kragic

This paper studies the learning of task constraints that allow grasp generation in a goal-directed manner. We show how an object representation and a grasp generated on it can be integrated with the task requirements. The scientific problems tackled are (i) the identification and modeling of such task constraints, and (ii) the integration between a semantically expressed goal of a task and quantitative constraint functions defined in the continuous object-action domains. We first define constraint functions given a set of object and action attributes, and then model the relationships between object, action, constraint features and the task using Bayesian networks. The probabilistic framework deals with uncertainty, combines a priori knowledge with observed data, and allows inference on target attributes given only partial observations. We present a system designed to structure the data generation and constraint learning processes that is applicable to new tasks, embodiments and sensory data. The application of the task constraint model is demonstrated in a goal-directed imitation experiment.
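
To make the modeling step concrete, here is a toy discrete Bayesian network in the spirit of the paper's task-constraint model, assuming pgmpy's discrete BN API. The variables, their cardinalities and all probability tables are illustrative, not taken from the paper.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Task influences both which object sizes are suitable and which grasp
# positions satisfy the task constraint. All numbers are made up.
model = BayesianNetwork([("task", "obj_size"), ("task", "grasp_pos")])
model.add_cpds(
    TabularCPD("task", 2, [[0.5], [0.5]]),  # e.g. hand-over vs. pouring
    TabularCPD("obj_size", 2, [[0.7, 0.2], [0.3, 0.8]],
               evidence=["task"], evidence_card=[2]),
    TabularCPD("grasp_pos", 2, [[0.6, 0.1], [0.4, 0.9]],
               evidence=["task"], evidence_card=[2]),
)
assert model.check_model()

# Inference on a target attribute from a partial observation:
# observe only the grasp position, infer the task.
posterior = VariableElimination(model).query(variables=["task"],
                                             evidence={"grasp_pos": 1})
print(posterior)
```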


International Conference on Robotics and Automation | 2011

Multivariate discretization for Bayesian Network structure learning in robot grasping

Dan Song; Carl Henrik Ek; Kai Huebner; Danica Kragic

A major challenge in modeling with Bayesian networks (BNs) is learning the structure from both discrete and multivariate continuous data. A common approach in such situations is to discretize continuous data before structure learning. However, efficient methods to discretize high-dimensional variables are largely lacking. This paper presents a novel method specifically aimed at the discretization of high-dimensional, highly correlated data. The method consists of two integrated steps: non-linear dimensionality reduction using sparse Gaussian process latent variable models, and discretization by application of a mixture model. The model is fully probabilistic and facilitates structure learning from discretized data while retaining the continuous representation. We evaluate the effectiveness of the method in the domain of robot grasping. Compared with traditional discretization schemes, our model excels both in task classification and in the prediction of hand grasp configurations. Furthermore, being a fully probabilistic model, it handles uncertainty in the data and can easily be integrated into other frameworks in a principled manner.
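
The two integrated steps can be sketched as follows. KernelPCA stands in for the paper's sparse GP-LVM (both perform non-linear dimensionality reduction), and scikit-learn's GaussianMixture provides the mixture-model discretization; dimensions and state counts are illustrative.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.mixture import GaussianMixture

def discretize(features, latent_dim=2, n_states=5, seed=0):
    """Step 1: non-linear dimensionality reduction (KernelPCA here,
    a sparse GP-LVM in the paper). Step 2: a Gaussian mixture whose
    components become the discrete states, while its responsibilities
    keep a probabilistic, continuous view of the data."""
    latent = KernelPCA(n_components=latent_dim,
                       kernel="rbf").fit_transform(features)
    gmm = GaussianMixture(n_components=n_states, random_state=seed).fit(latent)
    return gmm.predict(latent), gmm.predict_proba(latent), latent

# Illustrative input: 200 samples of a 19-D hand configuration.
states, responsibilities, latent = discretize(np.random.rand(200, 19))
```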


Robotics: Science and Systems | 2009

Learning of 2D grasping strategies from box-based 3D object approximations

Sebastian Geidenstam; Kai Huebner; Daniel Banksell; Danica Kragic

In this paper, we bridge and extend the approaches of 3D shape approximation and 2D grasping strategies. We begin by applying a shape decomposition to an object, i.e. its extracted 3D point data, u ...


International Conference on Robotics and Automation | 2011

Integrating grasp planning with online stability assessment using tactile sensing

Yasemin Bekiroglu; Kai Huebner; Danica Kragic

This paper presents an integration of grasp planning and online grasp stability assessment based on tactile data. We show how the uncertainty in grasp execution subsequent to grasp planning can be dealt with using tactile sensing and machine learning techniques. The majority of state-of-the-art grasp planners demonstrate impressive results in simulation. However, these results are mostly based on perfect scene/object knowledge, allowing for analytical measures to be employed. It is questionable how well these measures can be used in realistic scenarios where the information about the object and robot hand may be incomplete and/or uncertain. Thus, tactile and force-torque sensory information is necessary for successful online grasp stability assessment. We show how a grasp planner can be integrated with a probabilistic technique for grasp stability assessment in order to improve the hypotheses about suitable grasps on different types of objects. Experimental evaluation with a three-fingered robot hand equipped with tactile array sensors shows the feasibility and strength of the integrated approach.
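
A minimal sketch of the learned stability assessment and its feedback to the planner, under assumed interfaces: simple per-finger statistics from the tactile arrays feed a probabilistic classifier (an SVM here; the paper's assessor is more involved), and a grasp is kept only if the predicted stability is high enough. The features, labels and threshold are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def tactile_features(pressure_arrays):
    """Simple per-finger statistics from the tactile arrays:
    mean pressure, peak pressure and contact area."""
    return np.array([[a.mean(), a.max(), (a > 0).sum()]
                     for a in pressure_arrays]).ravel()

# Placeholder training data: 3 fingers x 3 statistics per grasp,
# labeled stable (1) or unstable (0) from recorded executions.
X_train = np.random.rand(100, 9)
y_train = np.random.randint(0, 2, 100)
assessor = SVC(probability=True).fit(X_train, y_train)

def keep_or_replan(grasp, pressure_arrays, threshold=0.8):
    """Feedback to the planner: keep the executed grasp only if the
    learned model is confident it will hold, otherwise replan."""
    p_stable = assessor.predict_proba([tactile_features(pressure_arrays)])[0, 1]
    return grasp if p_stable >= threshold else None
```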


Eurographics | 2012

SHREC'12 track: 3D mesh segmentation

Guillaume Lavoué; Jean-Philippe Vandeborre; Halim Benhabiles; Mohamed Daoudi; Kai Huebner; Michela Mortara; Michela Spagnuolo

3D mesh segmentation is a fundamental process in many applications such as shape retrieval, compression and deformation. The objective of this track is to evaluate the performance of recent segmentation methods using a ground-truth corpus and an accurate similarity metric. The ground-truth corpus is composed of 28 watertight models, grouped in five classes (animal, furniture, hand, human and bust), each associated with four ground-truth segmentations produced by human subjects. Three research groups participated in this track; the accuracy of their segmentation algorithms was evaluated and compared with four other state-of-the-art methods.
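
Evaluation against human ground truth amounts to comparing two labelings of the same mesh faces. The adjusted Rand index below is a common stand-in, not the track's own similarity metric; sizes and labels are placeholders.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def score_segmentation(predicted, ground_truths):
    """Mean agreement between one predicted labeling and each human
    ground-truth labeling (one integer label per mesh face)."""
    return float(np.mean([adjusted_rand_score(gt, predicted)
                          for gt in ground_truths]))

# Placeholder: a 1000-face model with 4 human segmentations.
predicted = np.random.randint(0, 5, 1000)
ground_truths = [np.random.randint(0, 5, 1000) for _ in range(4)]
print(score_segmentation(predicted, ground_truths))
```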


IEEE Transactions on Robotics | 2015

Task-Based Robot Grasp Planning Using Probabilistic Inference

Dan Song; Carl Henrik Ek; Kai Huebner; Danica Kragic

Grasping and manipulating everyday objects in a goal-directed manner is an important ability of a service robot. The robot needs to reason about task requirements and ground these in the sensorimotor information. Grasping and interaction with objects are challenging in real-world scenarios, where sensorimotor uncertainty is prevalent. This paper presents a probabilistic framework for the representation and modeling of robot-grasping tasks. The framework consists of Gaussian mixture models for generic data discretization, and discrete Bayesian networks for encoding the probabilistic relations among various task-relevant variables, including object and action features as well as task constraints. We evaluate the framework using a grasp database generated in a simulated environment including a human and two robot hand models. The generative modeling approach allows the prediction of grasping tasks given uncertain sensory data, as well as object and grasp selection in a task-oriented manner. Furthermore, the graphical model framework provides insights into dependencies between variables and features relevant for object grasping.
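
The generative use of such a model can be sketched with plain arrays: soft GMM responsibilities for an uncertain observation are combined with a conditional table from the network to yield a posterior over tasks. Treating responsibilities as a likelihood is an approximation, and all numbers here are illustrative.

```python
import numpy as np

def predict_task(p_task, p_state_given_task, responsibilities):
    """p_task: (T,) prior over tasks; p_state_given_task: (T, S)
    conditional table from the network; responsibilities: (S,) soft
    GMM assignment of one uncertain observation. Returns P(task | obs),
    approximating the likelihood by sum_s P(s | task) * r_s."""
    likelihood = p_state_given_task @ responsibilities
    posterior = p_task * likelihood
    return posterior / posterior.sum()

# Two tasks, three discrete grasp states; numbers are illustrative.
print(predict_task(np.array([0.5, 0.5]),
                   np.array([[0.7, 0.2, 0.1],
                             [0.1, 0.3, 0.6]]),
                   np.array([0.1, 0.2, 0.7])))
```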


International Conference on Computer Vision Systems | 2008

Integration of visual and shape attributes for object action complexes

Kai Huebner; Mårten Björkman; Babak Rasolzadeh; Martina Schmidt; Danica Kragic

Our work is oriented towards the idea of developing cognitive capabilities in artificial systems through Object Action Complexes (OACs) [7]. The theory claims that objects and actions are inseparably intertwined: categories of objects are built not by visual appearance alone, as is common in computer vision, but by the actions an agent can perform on them and by the attributes that are perceivable. The core of the OAC concept is linking objects, constituted from a set of attributes that can be manifold in type (e.g. color, shape, mass, material), to actions. This pairing of attributes and actions provides the basis for categories. The work presented here is embedded in the development of an extensible system for providing and evolving attributes, beginning with attributes extractable from visual data.
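
A minimal data-structure sketch of the OAC idea, with hypothetical field names and an illustrative affordance rule: an object record couples perceivable attributes with the actions they afford, and visual attributes arriving first can extend the set of applicable actions.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectActionComplex:
    # Attributes can be manifold in type: color, shape, mass, material...
    attributes: dict = field(default_factory=dict)
    # ...and the actions an agent can perform complete the category.
    actions: set = field(default_factory=set)

    def update_from_vision(self, color, shape):
        """Visual attributes arrive first; an illustrative affordance
        rule then extends the set of applicable actions."""
        self.attributes.update(color=color, shape=shape)
        if shape == "cylinder":
            self.actions.add("side-grasp")

cup = ObjectActionComplex()
cup.update_from_vision(color="blue", shape="cylinder")
```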

Collaboration


Dive into Kai Huebner's collaborations.

Top Co-Authors

Danica Kragic, Royal Institute of Technology
Dan Song, Royal Institute of Technology
Carl Henrik Ek, Royal Institute of Technology
Babak Rasolzadeh, Royal Institute of Technology
Mårten Björkman, Royal Institute of Technology
Carl Barck-Holst, Royal Institute of Technology
Daniel Banksell, Royal Institute of Technology
Maria Ralph, Royal Institute of Technology