
Publications


Featured research published by Daniel Kappler.


International Conference on Robotics and Automation | 2015

Leveraging big data for grasp planning

Daniel Kappler; Jeannette Bohg; Stefan Schaal

We propose a new large-scale database containing grasps that are applied to a large set of objects from numerous categories. These grasps are generated in simulation and are annotated with different grasp stability metrics. We use a descriptive and efficient representation of the local object shape at which each grasp is applied. Given this data, we present a two-fold analysis: (i) We use crowdsourcing to analyze the correlation of the metrics with grasp success as predicted by humans. The results show that the metric based on physics simulation is a more consistent predictor of grasp success than the standard ε-metric. The results also support the hypothesis that human labels are not required for good ground-truth grasp data. Instead, the physics metric can be used to generate datasets in simulation that may then be used to bootstrap learning in the real world. (ii) We apply a deep learning method and show that it can better leverage the large-scale database for prediction of grasp success compared to logistic regression. Furthermore, the results suggest that labels based on the physics metric are less noisy than those from the ε-metric and therefore lead to better classification performance.
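
As an illustration of the baseline compared against above, here is a minimal logistic-regression classifier trained with plain gradient descent; the grasp "features" and labels are synthetic stand-ins, not data from the actual database:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic stand-in for local-shape features and stability labels.
data = []
for _ in range(200):
    x = [random.gauss(0, 1) for _ in range(3)]
    label = 1 if x[0] + 0.5 * x[1] > 0 else 0  # hidden labeling rule
    data.append((x, label))

# Logistic regression trained with batch gradient descent.
w = [0.0, 0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(300):
    gw, gb = [0.0] * 3, 0.0
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - y
        for i in range(3):
            gw[i] += err * x[i]
        gb += err
    for i in range(3):
        w[i] -= lr * gw[i] / len(data)
    b -= lr * gb / len(data)

accuracy = sum(
    (sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5) == (y == 1)
    for x, y in data
) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

A deep model can exploit far more structure in the shape representation than this linear decision boundary, which is the gap the paper quantifies.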


European Conference on Computer Vision | 2016

Superpixel Convolutional Networks using Bilateral Inceptions

Raghudeep Gadde; Varun Jampani; Martin Kiefel; Daniel Kappler; Peter V. Gehler

In this paper we propose a CNN architecture for semantic image segmentation. We introduce a new “bilateral inception” module that can be inserted in existing CNN architectures and performs bilateral filtering, at multiple feature-scales, between superpixels in an image. The feature spaces for bilateral filtering and other parameters of the module are learned end-to-end using standard backpropagation techniques. The bilateral inception module addresses two issues that arise with general CNN segmentation architectures. First, this module propagates information between (super) pixels while respecting image edges, thus using the structured information of the problem for improved results. Second, the layer recovers a full resolution segmentation result from the lower resolution solution of a CNN. In the experiments, we modify several existing CNN architectures by inserting our inception module between the last CNN (\(1\times 1\) convolution) layers. Empirical results on three different datasets show reliable improvements not only in comparison to the baseline networks, but also in comparison to several dense-pixel prediction techniques such as CRFs, while being competitive in time.
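
A minimal sketch of the bilateral-filtering step at the core of the module, assuming a fixed 1-D feature space (the real module learns the feature space end-to-end) and a handful of superpixels; values are averaged with Gaussian weights on pairwise feature distance, row-normalized:

```python
import math

def bilateral_filter(features, values, sigma=1.0):
    """Average `values` across superpixels, weighted by a Gaussian
    on pairwise distance in `features` space (row-normalized)."""
    out = []
    for fi in features:
        weights = [
            math.exp(-sum((a - b) ** 2 for a, b in zip(fi, fj)) / (2 * sigma**2))
            for fj in features
        ]
        total = sum(weights)
        out.append(sum(w * v for w, v in zip(weights, values)) / total)
    return out

# Three superpixels: two with similar features, one far away.
features = [[0.0], [0.1], [5.0]]
values = [1.0, 0.0, 10.0]
smoothed = bilateral_filter(features, values)
print(smoothed)  # nearby superpixels mix; the distant one stays ~10
```

Filtering in feature space rather than pixel space is what lets information propagate between superpixels while respecting image edges.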


Robotics: Science and Systems | 2015

Data-Driven Online Decision Making for Autonomous Manipulation

Daniel Kappler; Peter Pastor; Mrinal Kalakrishnan; Manuel Wüthrich; Stefan Schaal

One of the main challenges in autonomous manipulation is to generate appropriate multi-modal reference trajectories that enable feedback controllers to compute control commands that compensate for unmodeled perturbations and thus achieve the task at hand. We propose a data-driven approach to incrementally acquire reference signals from experience and to decide online when and to which successive behavior to switch, ensuring successful task execution. We reformulate this online decision-making problem as a pair of related classification problems. Both process the current sensor readings, composed of multiple sensor modalities, in real time (at 30 Hz). Our approach exploits the fact that movement generation can dictate sensor feedback. Thus, enforcing stereotypical behavior will yield stereotypical sensory events which can be accumulated and stored along with the movement plan. Such movement primitives, augmented with sensor experience, are called Associative Skill Memories (ASMs). Sensor experience consists of (real) sensors, including haptic, auditory, and visual information, as well as additional (virtual) features. We show that our approach can be used to teach dexterous tasks, e.g., a bimanual manipulation task on a real platform that requires precise manipulation of relatively small objects. Task execution is robust against perturbation and sensor noise, because our method decides online whether or not to switch to alternative ASMs due to unexpected sensory signals.
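
The online switching decision can be sketched as a comparison of live sensor readings against the sensory trace stored in an ASM; the traces and tolerances below are invented for illustration:

```python
# Hypothetical sketch: an Associative Skill Memory stores the expected
# sensor trace of a stereotypical movement with per-step tolerances;
# at run time the live reading is checked against the stored trace.

expected = [1.0, 1.2, 1.5, 1.4, 1.0]   # stored mean sensor trace
tolerance = [0.2, 0.2, 0.2, 0.2, 0.2]  # stored per-step deviations

def decide(step, reading, n_sigma=3.0):
    """'continue' while the reading matches the stored trace,
    'switch' once it deviates by more than n_sigma tolerances."""
    if abs(reading - expected[step]) > n_sigma * tolerance[step]:
        return "switch"
    return "continue"

live_trace = [1.05, 1.25, 2.6, 1.4, 1.0]  # large deviation at step 2
decisions = [decide(t, r) for t, r in enumerate(live_trace)]
print(decisions)
```

In the actual system this check is a learned classifier over multiple sensor modalities, but the principle is the same: stereotypical movements yield stereotypical sensory events, and deviations trigger a switch to an alternative ASM.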


Intelligent Robots and Systems | 2016

Towards robust online inverse dynamics learning

Franziska Meier; Daniel Kappler; Nathan D. Ratliff; Stefan Schaal

Learning inverse dynamics modeling errors is key for compliant or force control when analytical models are only rough approximations. Designing real-time-capable function approximation algorithms has therefore been a necessary focus on the way to online model learning. However, because these approaches learn a mapping from actual state and acceleration to torque, good tracking is required to observe data points on the desired path. Recently it has been shown how online gradient descent on a simple modeling-error offset term, used to minimize tracking error at the acceleration level, can address this issue. However, adapting to larger errors requires a high learning rate of the online learner, resulting in reduced compliance. We therefore propose to combine both approaches: the online-adapted offset term ensures good tracking, so that a nonlinear function approximator is able to learn an error model on the desired trajectory. This, in turn, reduces the load on the adaptive feedback, enabling it to use a lower learning rate. Combined, this creates a controller with variable feedback and low gains, and a feedforward model that can account for larger modeling errors. We demonstrate the effectiveness of this framework in simulation and on a real system.
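
A toy 1-D sketch of the combination described above, assuming a constant unmodeled bias: a fast offset term cancels the error quickly, and a slowly adapted "error model" (a scalar here, standing in for the nonlinear function approximator) absorbs it so the offset, and hence the feedback load, shrinks again. All values are invented:

```python
# Toy 1-D sketch of the combined controller (all values invented).

true_bias = 2.0          # unmodeled torque the analytical model misses
offset = 0.0             # fast online-adapted offset term
learned = 0.0            # slow "error model" (a scalar stand-in)
fast_lr, slow_lr = 0.5, 0.05

for _ in range(100):
    residual = true_bias - (offset + learned)  # tracking-error proxy
    offset += fast_lr * residual               # fast adaptation
    learned += slow_lr * offset                # model absorbs the offset

print(f"offset={offset:.3f}, learned={learned:.3f}")
```

At convergence the learned model carries the full correction and the offset returns to zero, which is what allows the adaptive feedback to run at a low learning rate and the controller to stay compliant.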


International Conference on Robotics and Automation | 2016

Robot arm pose estimation by pixel-wise regression of joint angles

Felix Widmaier; Daniel Kappler; Stefan Schaal; Jeannette Bohg

To achieve accurate vision-based control with a robotic arm, good hand-eye coordination is required. However, knowing the current configuration of the arm can be difficult due to noisy readings from joint encoders or an inaccurate hand-eye calibration. We propose an approach for robot arm pose estimation that uses depth images of the arm as input to directly estimate angular joint positions. This is a frame-by-frame method which relies neither on a good initialisation of the solution from previous frames nor on knowledge from the joint encoders. For estimation, we employ a random regression forest which is trained on synthetically generated data. We compare different training objectives of the forest and also analyse the influence of prior segmentation of the arm on accuracy. We show that this approach improves on previous work both in terms of computational complexity and accuracy. Despite being trained on synthetic data only, we demonstrate that the estimation also works on real depth images.
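
A toy sketch of the regression-forest idea, using one-split regression trees ("stumps") on a synthetic 1-D depth feature instead of real depth pixels; trees are trained on bootstrap resamples and their predictions averaged:

```python
import random

random.seed(3)

# Toy regression forest: an ensemble of one-split regression trees
# ("stumps") on synthetic (depth feature, joint angle) pairs.

def fit_stump(data):
    """Pick the threshold minimizing the squared error of two leaf means."""
    best = None
    xs = sorted(x for x, _ in data)
    for t in xs[1:]:
        left = [y for x, y in data if x < t]
        right = [y for x, y in data if x >= t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x < t else mr

def sample():  # joint angle is a step-like function of the depth feature
    x = random.uniform(0, 1)
    return (x, (0.2 if x < 0.5 else 1.2) + random.gauss(0, 0.05))

train = [sample() for _ in range(200)]
# Each stump sees a bootstrap resample; predictions are averaged.
forest = [fit_stump([random.choice(train) for _ in train]) for _ in range(10)]

def predict(x):
    return sum(tree(x) for tree in forest) / len(forest)

print(predict(0.2), predict(0.8))
```

Real regression forests use deeper trees and many pixel-wise depth features, but bootstrap resampling plus averaged leaf predictions is the core mechanism.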


International Conference on Robotics and Automation | 2016

Optimizing for what matters: The top grasp hypothesis

Daniel Kappler; Stefan Schaal; Jeannette Bohg

In this paper, we consider the problem of robotic grasping of objects when only partial and noisy sensor data of the environment is available. We are specifically interested in reliably selecting the best hypothesis from a whole set. This is commonly the case when trying to grasp an object for which we can only observe a partial point cloud from one viewpoint through noisy sensors. There will be many possible ways to successfully grasp such an object, and even more that will fail. We propose a supervised learning method that is trained with a ranking loss, which explicitly encourages the top-ranked grasp in a hypothesis set to be positively labeled. We show how we adapt the standard ranking loss to work with data that has binary labels and explain the benefits of this formulation. Additionally, we show how this loss can be efficiently optimized with stochastic gradient descent. In quantitative experiments, we show that we can outperform previous models by a large margin.
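
A hypothetical sketch of a top-1 ranking loss trained with SGD on a linear scorer: within each hypothesis set, a hinge loss pushes the best-scoring negative grasp below the best positive, so the top-ranked grasp tends to succeed. All features and labels are synthetic:

```python
import random

random.seed(1)

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def make_set():
    """A synthetic hypothesis set of (features, label) grasp candidates."""
    grasps = []
    for _ in range(5):
        positive = random.random() < 0.4
        x = [random.gauss(1.0 if positive else -1.0, 0.5), random.gauss(0, 1)]
        grasps.append((x, 1 if positive else 0))
    if not any(label for _, label in grasps):  # ensure one positive exists
        grasps[0] = ([1.0, 0.0], 1)
    return grasps

sets = [make_set() for _ in range(50)]
w, lr, margin = [0.0, 0.0], 0.1, 1.0

for _ in range(30):  # SGD epochs over hypothesis sets
    for grasps in sets:
        best_pos = max((g for g in grasps if g[1] == 1), key=lambda g: score(w, g[0]))
        best_neg = max((g for g in grasps if g[1] == 0), key=lambda g: score(w, g[0]), default=None)
        if best_neg is not None and margin + score(w, best_neg[0]) - score(w, best_pos[0]) > 0:
            for i in range(len(w)):  # hinge-loss gradient step
                w[i] -= lr * (best_neg[0][i] - best_pos[0][i])

# Fraction of sets whose top-ranked grasp is a positive.
top1 = sum(max(grasps, key=lambda g: score(w, g[0]))[1] for grasps in sets) / len(sets)
print(f"top-1 precision: {top1:.2f}")
```

Unlike a per-grasp classification loss, this objective only penalizes errors that affect the top of the ranking, which is exactly the grasp the robot will execute.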


International Conference on Robotics and Automation | 2015

The Coordinate Particle Filter - a novel Particle Filter for high dimensional systems

Manuel Wüthrich; Jeannette Bohg; Daniel Kappler; Claudia Pfreundt; Stefan Schaal

Parametric filters, such as the Extended Kalman Filter and the Unscented Kalman Filter, typically scale well with the dimensionality of the problem, but they are known to fail if the posterior state distribution cannot be closely approximated by a density of the assumed parametric form.


Intelligent Robots and Systems | 2017

On the relevance of grasp metrics for predicting grasp success

Carlos Rubert; Daniel Kappler; Antonio Morales; Stefan Schaal; Jeannette Bohg

We aim to reliably predict whether a grasp on a known object will succeed before it is executed in the real world. An entire suite of grasp metrics has already been developed, all of which rely on precisely known contact points between object and hand. However, it remains unclear whether and how they may be combined into a general-purpose grasp stability predictor. In this paper, we analyze these questions by leveraging a large-scale database of simulated grasps on a wide variety of objects. For each grasp, we compute the value of seven metrics, and each grasp is annotated by human subjects with ground-truth stability labels. Given this data set, we train several classification methods to find out whether there is some underlying, non-trivial structure in the data that is difficult to model manually but can be learned. Quantitative and qualitative results show the complexity of the prediction problem. We found that good prediction performance critically depends on using a combination of metrics as input features, and that non-parametric, non-linear classifiers best capture the structure in the data.
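
To illustrate why combining metrics with a non-parametric classifier can matter, here is a toy example where neither of two synthetic "metrics" separates stable from unstable grasps on its own, but a k-nearest-neighbour classifier over both does (all data is invented):

```python
import random

random.seed(2)

# Stability depends on an XOR-like interaction of two synthetic metrics,
# so no single-metric threshold can separate the classes.

def knn_predict(train, x, k=5):
    """Majority vote among the k nearest training points."""
    near = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[:k]
    return 1 if 2 * sum(label for _, label in near) > k else 0

def sample():
    m1, m2 = random.random(), random.random()
    stable = 1 if (m1 > 0.5) != (m2 > 0.5) else 0
    return ([m1, m2], stable)

train = [sample() for _ in range(300)]
test = [sample() for _ in range(100)]

knn_acc = sum(knn_predict(train, x) == y for x, y in test) / len(test)
single_acc = sum((1 if x[0] > 0.5 else 0) == y for x, y in test) / len(test)
print(f"kNN on both metrics: {knn_acc:.2f}, single-metric threshold: {single_acc:.2f}")
```

Real grasp metrics interact in far more subtle ways than this XOR construction, but the same effect explains why the paper finds non-parametric, non-linear classifiers over combined metrics to perform best.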


Intelligent Robots and Systems | 2017

A new data source for inverse dynamics learning

Daniel Kappler; Franziska Meier; Nathan D. Ratliff; Stefan Schaal

Modern robotics is gravitating toward increasingly collaborative human-robot interaction. Tools such as acceleration policies can naturally support the realization of reactive, adaptive, and compliant robots. These tools require us to model the system dynamics accurately, which is a difficult task. The fundamental problem remains that simulation and reality diverge: we do not know how to accurately change a robot's state. Thus, recent research on improving inverse dynamics models has focused on making use of machine learning techniques. Traditional learning techniques train on the actually realized accelerations instead of the policy's desired accelerations, which is an indirect data source. Here we show how an additional training signal, measured at the desired accelerations, can be derived from a feedback control signal. This effectively creates a second data source for learning inverse dynamics models. Furthermore, we show how both the traditional and this new data source can be used to train task-specific models of the inverse dynamics, whether used independently or combined. We analyze the use of both data sources in simulation and demonstrate the effectiveness on a real-world robotic platform. We show that our system incrementally improves the learned inverse dynamics model and, when using both data sources combined, converges more consistently and faster.
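
A 1-D arithmetic sketch of the two data sources, assuming a plant with an unmodeled constant bias: the feedback command measures the model error at the desired acceleration, yielding a valid training pair there in addition to the usual sample at the actual acceleration. Plant and model values are invented:

```python
# Hypothetical 1-D sketch of the two inverse-dynamics data sources.

true_inertia, true_bias = 2.0, 1.5   # real plant (unknown to the model)
model_inertia = 2.0                  # analytical model misses the bias

qdd_desired = 0.8
tau_ff = model_inertia * qdd_desired           # feedforward command
tau_needed = true_inertia * qdd_desired + true_bias
u_fb = tau_needed - tau_ff                     # what feedback must add

# Traditional data source: actual acceleration vs. applied torque.
tau_applied = tau_ff                           # feedforward only
qdd_actual = (tau_applied - true_bias) / true_inertia
sample_actual = (qdd_actual, tau_applied)

# New data source: desired acceleration vs. feedforward + feedback.
sample_desired = (qdd_desired, tau_ff + u_fb)

# Both pairs lie on the true inverse dynamics tau = 2 * qdd + 1.5.
print(sample_actual, sample_desired)
```

The second pair is the valuable one: it supervises the model exactly where the policy wanted to go, rather than wherever the imperfect model happened to take the robot.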


International Conference on Robotics and Automation | 2016

Exemplar-based prediction of global object shape from local shape similarity

Jeannette Bohg; Daniel Kappler; Stefan Schaal

We propose a novel method that enables a robot to identify a graspable part of an unknown object given only noisy and partial information obtained from an RGB-D camera. Our method combines the benefits of local methods with the advantages of global ones. It learns a classifier that takes a local shape representation as input and outputs the probability that a grasp applied at this location will be successful. Given a query data point that is classified in this way, we can retrieve all the locally similar training data points and use them to predict the latent global object shape. This information may help to further prune positively labeled grasp hypotheses based on, e.g., their relation to the predicted average global shape or their suitability for a specific task. The prediction can also guide scene exploration to prune object shape hypotheses. To learn the function that maps local shape to grasp stability we use a Random Forest classifier. We show that our method reaches the same classification performance as the current state of the art on this dataset, which uses a Convolutional Neural Network. Additionally, we exploit the natural ability of the Random Forest to cluster similar data: for a positively predicted query data point, we retrieve all the locally similar training data points that are associated with the same leaf nodes of the Random Forest. The main insight from this work is that local object shape that affords a grasp is also a good predictor of global object shape. We empirically support this claim with quantitative experiments. Additionally, we demonstrate the predictive capability of the method on real data examples.
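
The leaf-based retrieval can be sketched with a toy "forest" of threshold tests: training exemplars that land in the same leaves as the query, across all trees, are returned as locally similar. The descriptors and shape labels below are invented:

```python
# Toy sketch of exemplar retrieval via shared leaf nodes: exemplars
# sharing the query's leaf in every tree count as locally similar, and
# their known global shapes could be aggregated into a prediction.

# A "forest" of two threshold tests over a 1-D local-shape descriptor;
# the tuple of branch decisions identifies the leaf in each tree.
forest = [lambda x: x < 0.4, lambda x: x < 0.7]

def leaves(x):
    return tuple(tree(x) for tree in forest)

# Training exemplars: (local-shape descriptor, known global shape).
train = [(0.1, "mug"), (0.2, "mug"), (0.5, "bowl"), (0.9, "box")]

query = 0.15
similar = [shape for x, shape in train if leaves(x) == leaves(query)]
print(similar)  # exemplars sharing all leaves with the query
```

This is the sense in which the Random Forest clusters for free: similarity falls out of the learned splits, with no extra nearest-neighbour machinery.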

Collaboration


Explore Daniel Kappler's collaborations.

Top Co-Authors

Franziska Meier

University of Southern California


Nathan D. Ratliff

Carnegie Mellon University
