
Publications


Featured research published by Christian Plagemann.


Computer Vision and Pattern Recognition | 2010

Real time motion capture using a single time-of-flight camera

Varun Ganapathi; Christian Plagemann; Daphne Koller; Sebastian Thrun

Markerless tracking of human pose is a hard yet relevant problem. In this paper, we derive an efficient filtering algorithm for tracking human pose using a stream of monocular depth images. The key idea is to combine an accurate generative model (which is achievable in this setting using programmable graphics hardware) with a discriminative model that provides data-driven evidence about body part locations. In each filter iteration, we apply a form of local model-based search that exploits the nature of the kinematic chain. As fast movements and occlusion can disrupt the local search, we utilize a set of discriminatively trained patch classifiers to detect body parts. We describe a novel algorithm for propagating this noisy evidence about body part locations up the kinematic chain using the unscented transform. The resulting distribution of body configurations allows us to reinitialize the model-based search. We provide extensive experimental results on 28 real-world sequences using automatic ground-truth annotations from a commercial motion capture system.
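
The unscented-transform step named above can be illustrated with a short sketch. This is not the authors' implementation; the forward-kinematics map fk and all parameter values below are invented for the example, whereas the paper propagates body-part evidence through the actual kinematic chain.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=1.0):
    """Propagate a Gaussian N(mean, cov) through a nonlinear map f via sigma points."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + L.T, mean - L.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))            # mean weights
    wc = wm.copy()                                      # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha ** 2 + beta)
    y = np.array([f(s) for s in sigma])                 # push points through f
    new_mean = wm @ y
    d = y - new_mean
    return new_mean, (wc[:, None] * d).T @ d

# Toy limb map: (joint angle, limb length) -> body-part position in the plane.
fk = lambda x: np.array([x[1] * np.cos(x[0]), x[1] * np.sin(x[0])])
mu, P = unscented_transform(np.array([0.5, 1.0]), np.diag([0.05, 0.01]), fk)
```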


International Conference on Robotics and Automation | 2010

Real-time identification and localization of body parts from depth images

Christian Plagemann; Varun Ganapathi; Daphne Koller; Sebastian Thrun

We deal with the problem of detecting and identifying body parts in depth images at video frame rates. Our solution involves a novel interest point detector for mesh and range data that is particularly well suited for analyzing human shape. The interest points, which are based on identifying geodesic extrema on the surface mesh, coincide with salient points of the body, which can be classified as, e.g., hand, foot or head using local shape descriptors. Our approach also provides a natural way of estimating a 3D orientation vector for a given interest point. This can be used to normalize the local shape descriptors to simplify the classification problem as well as to directly estimate the orientation of body parts in space. Experiments involving ground truth labels acquired via an active motion capture system show that our interest points in conjunction with a boosted patch classifier are significantly better in detecting body parts in depth images than state-of-the-art sliding-window based detectors.
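
As a rough sketch of the geodesic-extrema idea (not the paper's code), one can build a neighborhood graph over the point cloud and repeatedly select the point with the greatest geodesic distance to all points chosen so far. The neighbor count, centroid seed, and number of extrema below are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_extrema(points, k_neighbors=8, n_extrema=5):
    """Iteratively pick points with maximal geodesic distance to all picks so far."""
    tree = cKDTree(points)
    dist, idx = tree.query(points, k=k_neighbors + 1)   # column 0 is the point itself
    n = len(points)
    rows = np.repeat(np.arange(n), k_neighbors)
    graph = csr_matrix((dist[:, 1:].ravel(), (rows, idx[:, 1:].ravel())), shape=(n, n))
    # Seed at the point closest to the centroid (a stand-in for the body center).
    seeds = [int(np.argmin(np.linalg.norm(points - points.mean(0), axis=1)))]
    extrema = []
    for _ in range(n_extrema):
        d = dijkstra(graph, directed=False, indices=seeds).min(axis=0)
        d[~np.isfinite(d)] = -1.0          # ignore disconnected components
        far = int(np.argmax(d))            # farthest point = next geodesic extremum
        extrema.append(far)
        seeds.append(far)
    return extrema

pts = np.random.rand(500, 3)
print(geodesic_extrema(pts))
```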


Computer Vision and Pattern Recognition | 2010

Upsampling range data in dynamic environments

Jennifer Dolson; Jongmin Baek; Christian Plagemann; Sebastian Thrun

We present a flexible method for fusing information from optical and range sensors based on an accelerated high-dimensional filtering approach. Our system takes as input a sequence of monocular camera images as well as a stream of sparse range measurements as obtained from a laser or other sensor system. In contrast with existing approaches, we do not assume that the depth and color data streams have the same data rates or that the observed scene is fully static. Our method produces a dense, high-resolution depth map of the scene, automatically generating confidence values for every interpolated depth point. We describe how to integrate priors on object motion and appearance and how to achieve an efficient implementation using parallel processing hardware such as GPUs.
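
The paper's accelerated high-dimensional filter also incorporates time and motion priors; a much simpler joint bilateral fill conveys the basic fusion idea. The sketch below is a slow reference implementation with invented parameter values, and its "confidence" is just the total filter support at each pixel, a crude stand-in for the confidence values described above.

```python
import numpy as np

def joint_bilateral_upsample(sparse_depth, mask, image, radius=5,
                             sigma_s=3.0, sigma_r=0.1):
    """Fill a dense depth map from sparse samples, guided by image intensity.
    sparse_depth/mask/image are HxW arrays; mask is True where depth is known."""
    h, w = image.shape
    dense = np.zeros((h, w))
    conf = np.zeros((h, w))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            sp = spatial[y0 - y + radius:y1 - y + radius,
                         x0 - x + radius:x1 - x + radius]
            # Weight known samples by spatial proximity and color similarity.
            wgt = sp * np.exp(-(image[y0:y1, x0:x1] - image[y, x]) ** 2
                              / (2 * sigma_r ** 2)) * mask[y0:y1, x0:x1]
            total = wgt.sum()
            if total > 1e-9:
                dense[y, x] = (wgt * sparse_depth[y0:y1, x0:x1]).sum() / total
                conf[y, x] = total         # crude confidence: total filter support
    return dense, conf
```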


European Conference on Computer Vision | 2012

Real-time human pose tracking from range data

Varun Ganapathi; Christian Plagemann; Daphne Koller; Sebastian Thrun

Tracking human pose in real-time is a difficult problem with many interesting applications. Existing solutions suffer from a variety of problems, especially when confronted with unusual human poses. In this paper, we derive an algorithm for tracking human pose in real-time from depth sequences based on MAP inference in a probabilistic temporal model. The key idea is to extend the iterative closest points (ICP) objective by modeling the constraint that the observed subject cannot enter free space, the area of space in front of the true range measurements. Our primary contribution is an extension to the articulated ICP algorithm that can efficiently enforce this constraint. The resulting filter runs at 125 frames per second using a single desktop CPU core. We provide extensive experimental results on challenging real-world data, which show that the algorithm outperforms the previous state-of-the-art trackers both in computational efficiency and accuracy.
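
The free-space constraint can be caricatured as a penalty on hypothesized body points that project closer to the camera than the measured range at their pixel. The sketch below is not the paper's articulated-ICP formulation; the pinhole intrinsics and margin are placeholders.

```python
import numpy as np

def freespace_penalty(model_pts, depth_img, fx, fy, cx, cy, margin=0.02):
    """Penalize hypothesized body points lying in observed free space,
    i.e., closer to the camera than the range measurement along their ray."""
    z = model_pts[:, 2]
    u = np.round(fx * model_pts[:, 0] / z + cx).astype(int)
    v = np.round(fy * model_pts[:, 1] / z + cy).astype(int)
    h, w = depth_img.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (z > 0)
    # Violation: the model point sits in front of the measured surface.
    viol = depth_img[v[valid], u[valid]] - z[valid] - margin
    return np.sum(np.maximum(viol, 0.0) ** 2)
```

In the paper this constraint is enforced inside the articulated ICP objective itself; a penalty term like the one above only illustrates what is being forbidden.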


Autonomous Robots | 2009

Learning gas distribution models using sparse Gaussian process mixtures

Cyrill Stachniss; Christian Plagemann; Achim J. Lilienthal

In this paper, we consider the problem of learning two-dimensional spatial models of gas distributions. Building gas distribution models that accurately predict the gas concentration at query locations is challenging due to the chaotic nature of gas dispersal. We formulate this task as a regression problem. To deal with the specific properties of gas distributions, we propose a sparse Gaussian process mixture model, which allows us to accurately represent both the smooth background signal and the areas with patches of high concentration. We furthermore integrate the sparsification of the training data into an EM procedure that we use to learn the mixture components and the gating function. Our approach has been implemented and tested using datasets recorded with a real mobile robot equipped with an electronic nose. The experiments demonstrate that our technique is well suited for predicting gas concentrations at new query locations and that it outperforms alternative methods previously proposed in robotics.
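
A toy version of the mixture idea, heavily simplified relative to the paper (which uses sparse GPs and a learned gating function): alternate between fitting two GP components, one smooth "background" and one short-length-scale "plume" model, and reassigning points by predictive likelihood. The kernel parameters and hard-ownership M-step below are simplifying assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gp_mixture_responsibilities(X, y, n_iter=5):
    """EM-style loop over two GP components with soft point assignments."""
    resp = np.random.dirichlet([1, 1], size=len(X))
    gps = [GaussianProcessRegressor(RBF(2.0) + WhiteKernel(0.1)),   # background
           GaussianProcessRegressor(RBF(0.3) + WhiteKernel(0.1))]   # plumes
    for _ in range(n_iter):
        # M-step: fit each GP to the points it currently "owns" the most.
        for k, gp in enumerate(gps):
            own = resp[:, k] > 0.5
            if own.sum() > 2:
                gp.fit(X[own], y[own])
        # E-step: responsibilities from each component's predictive likelihood.
        like = np.empty((len(X), 2))
        for k, gp in enumerate(gps):
            mu, sd = gp.predict(X, return_std=True)
            sd = np.maximum(sd, 1e-3)
            like[:, k] = np.exp(-0.5 * ((y - mu) / sd) ** 2) / sd
        resp = like / like.sum(axis=1, keepdims=True)
    return resp, gps
```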


Journal of Physiology-Paris | 2009

Body Schema Learning for Robotic Manipulators from Visual Self-Perception

Jürgen Sturm; Christian Plagemann; Wolfram Burgard

We present an approach to learning the kinematic model of a robotic manipulator arm from scratch using self-observation via a single monocular camera. We introduce a flexible model based on Bayesian networks that allows a robot to simultaneously identify its kinematic structure and to learn the geometric relationships between its body parts as a function of the joint angles. Further, we show how the robot can monitor the prediction quality of its internal kinematic model and adapt it when its body changes, for example due to failure, repair, or material fatigue. In experiments carried out on both real and simulated robotic manipulators, we verified the validity of our approach for real-world problems such as end-effector pose prediction and end-effector pose control.
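
The model-monitoring idea can be sketched independently of the Bayesian-network machinery: track the prediction error of the current kinematic model over a sliding window and flag when it drifts, so re-learning can be triggered. The function below is a hypothetical illustration; predict_pose, the window size, and the threshold are all made up.

```python
import numpy as np

def monitor_model(predict_pose, observations, window=20, threshold=0.05):
    """Flag when the kinematic model's prediction error drifts (e.g., after
    damage or material fatigue), so the model can be re-learned.
    observations: iterable of (joint_angles, observed_marker_position)."""
    errors = []
    for q, observed in observations:
        errors.append(np.linalg.norm(predict_pose(q) - observed))
        recent = errors[-window:]
        if len(recent) == window and np.mean(recent) > threshold:
            yield len(errors) - 1, np.mean(recent)  # step index where drift appears

# usage sketch: for step, err in monitor_model(fk_predict, stream): re-learn model
```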


International Joint Conference on Artificial Intelligence | 2009

Learning kinematic models for articulated objects

Jürgen Sturm; Vijay Pradeep; Cyrill Stachniss; Christian Plagemann; Kurt Konolige; Wolfram Burgard

Robots operating in home environments must be able to interact with articulated objects such as doors or drawers. Ideally, robots are able to autonomously infer articulation models by observation. In this paper, we present an approach to learn kinematic models by inferring the connectivity of rigid parts and the articulation models for the corresponding links. Our method uses a mixture of parameterized and parameter-free (Gaussian process) representations and finds low-dimensional manifolds that provide the best explanation of the given observations. Our approach has been implemented and evaluated using real data obtained in various realistic home environment settings.
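
A much-simplified sketch of the model-selection idea: fit candidate articulation models (here, crude least-squares prismatic and revolute fits; the paper also considers rigid links and a parameter-free GP model) to observed part trajectories and pick the one with the best BIC score. The noise level sigma and parameter counts below are illustrative.

```python
import numpy as np

def fit_prismatic(P):
    """Fit a line (fixed sliding axis) to observed 3-D poses; return SSE and k."""
    c = P.mean(0)
    _, _, Vt = np.linalg.svd(P - c)
    axis = Vt[0]
    proj = c + np.outer((P - c) @ axis, axis)
    return np.sum((P - proj) ** 2), 6          # support point + direction

def fit_revolute(P):
    """Fit a circle in the best plane through the points; return SSE and k."""
    c = P.mean(0)
    _, _, Vt = np.linalg.svd(P - c)
    Q = (P - c) @ Vt[:2].T                     # coordinates in the fitted plane
    r = np.linalg.norm(Q, axis=1).mean()       # crude radius estimate
    residual = np.linalg.norm(Q, axis=1) - r
    out_of_plane = (P - c) @ Vt[2]
    return np.sum(residual ** 2 + out_of_plane ** 2), 7  # center, axis, radius

def select_model(P, sigma=0.005):
    """Choose the articulation model with the lowest BIC on trajectory P."""
    n = len(P)
    scores = {}
    for name, fit in [("prismatic", fit_prismatic), ("revolute", fit_revolute)]:
        sse, k = fit(P)
        loglik = -0.5 * sse / sigma ** 2
        scores[name] = -2 * loglik + k * np.log(n)   # lower BIC is better
    return min(scores, key=scores.get), scores
```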


International Conference on Robotics and Automation | 2010

A probabilistic approach to mixed open-loop and closed-loop control, with application to extreme autonomous driving

J. Zico Kolter; Christian Plagemann; David T. Jackson; Andrew Y. Ng; Sebastian Thrun

We consider the task of accurately controlling a complex system, such as autonomously sliding a car sideways into a parking spot. Although certain regions of this domain are extremely hard to model (i.e., the dynamics of the car while skidding), we observe that in practice such systems are often remarkably deterministic over short periods of time, even in difficult-to-model regions. Motivated by this intuition, we develop a probabilistic method for combining closed-loop control in the well-modeled regions and open-loop control in the difficult-to-model regions. In particular, we show that by combining 1) an inaccurate model of the system and 2) a demonstration of the desired behavior, our approach can accurately and robustly control highly challenging systems, without the need to explicitly model the dynamics in the most complex regions and without the need to hand-tune the switching control law. We apply our approach to the task of autonomous sideways sliding into a parking spot, and show that we can repeatedly and accurately control the system, placing the car within about 2 feet of the desired location; to the best of our knowledge, this represents the state of the art in terms of accurately controlling a vehicle in such a maneuver.
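
The blending idea can be caricatured in a few lines: weight the closed-loop command by confidence in the dynamics model and fall back to the demonstrated open-loop command where the model is uncertain. The exponential weighting below is an invented heuristic, not the paper's probabilistic derivation.

```python
import numpy as np

def blended_control(u_open, u_closed, pred_var, var_scale=1.0):
    """Blend an open-loop (demonstration replay) and a closed-loop (model-based)
    command, trusting the model less where its predictive variance is high."""
    w = np.exp(-pred_var / var_scale)     # w -> 1 in well-modeled regions
    return w * u_closed + (1.0 - w) * u_open

# During a skid the model variance is large, so the demonstrated open-loop
# command dominates; on entry and exit the model-based command dominates.
print(blended_control(0.8, 0.2, pred_var=5.0))    # mostly open-loop
print(blended_control(0.8, 0.2, pred_var=0.01))   # mostly closed-loop
```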


Robotics and Autonomous Systems | 2010

A nonparametric learning approach to range sensing from omnidirectional vision

Christian Plagemann; Cyrill Stachniss; Jürgen Hess; Felix Endres; Nathan Franklin

We present a novel approach to estimating depth from single omnidirectional camera images by learning the relationship between visual features and range measurements available during a training phase. Our model not only yields the most likely distance to obstacles in all directions, but also the predictive uncertainties for these estimates. This information can be utilized by a mobile robot to build an occupancy grid map of the environment or to avoid obstacles during exploration, tasks that typically require dedicated proximity sensors such as laser range finders or sonars. We show in this paper how an omnidirectional camera can be used as an alternative to such range sensors. As the learning engine, we apply Gaussian processes, a nonparametric approach to function regression, as well as a recently developed extension for dealing with input-dependent noise. In practical experiments carried out in different indoor environments with a mobile robot equipped with an omnidirectional camera system, we demonstrate that our system is able to estimate range with an accuracy comparable to that of dedicated sensors based on sonar or infrared light.
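
One way to approximate the heteroscedastic GP regression described above is scikit-learn's per-sample alpha, which injects an individual noise variance for each training point. Everything below is synthetic: the 12-D "visual features", the noise model, and the kernel choice are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical training data: visual feature vectors -> measured ranges,
# with a per-sample noise estimate standing in for input-dependent noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                 # made-up 12-D visual features
true_range = 2.0 + np.sin(X[:, 0]) + 0.3 * X[:, 1]
noise_var = 0.01 + 0.2 * (X[:, 0] > 0)         # noisier in part of feature space
y = true_range + rng.normal(scale=np.sqrt(noise_var))

# A per-sample noise variance via `alpha` is a simple stand-in for the
# input-dependent-noise GP extension the paper builds on.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=noise_var)
gp.fit(X, y)
mean, std = gp.predict(X[:5], return_std=True)  # range plus predictive uncertainty
print(mean, std)
```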


Journal of Field Robotics | 2009

A Bayesian regression approach to terrain mapping and an application to legged robot locomotion

Christian Plagemann; Sebastian Mischke; Sam Prentice; Kristian Kersting; Nicholas Roy; Wolfram Burgard
