Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Stefan Klanke is active.

Publication


Featured research published by Stefan Klanke.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

Principal surfaces from unsupervised kernel regression

Peter Meinicke; Stefan Klanke; Roland Memisevic; Helge Ritter

We propose a nonparametric approach to learning of principal surfaces based on an unsupervised formulation of the Nadaraya-Watson kernel regression estimator. As compared with previous approaches to principal curves and surfaces, the new method offers several advantages: first, it provides a practical solution to the model selection problem because all parameters can be estimated by leave-one-out cross-validation without additional computational cost. In addition, our approach allows for a convenient incorporation of nonlinear spectral methods for parameter initialization, beyond classical initializations based on linear PCA. Furthermore, it shows a simple way to fit principal surfaces in general feature spaces, beyond the usual data space setup. The experimental results illustrate these convenient features on simulated and real data.
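
The construction is compact enough to sketch. The snippet below is a minimal NumPy illustration of the unsupervised Nadaraya-Watson reconstruction with the built-in leave-one-out cross-validation; the Gaussian kernel with unit bandwidth, the function names and the PCA-style initialisation are illustrative choices, not the authors' reference implementation.

```python
import numpy as np
from scipy.optimize import minimize

def ukr_reconstruct(X, Y):
    """Leave-one-out Nadaraya-Watson reconstruction of Y from latent points X."""
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise latent distances
    K = np.exp(-0.5 * D2)            # Gaussian kernel, unit bandwidth (assumed)
    np.fill_diagonal(K, 0.0)         # leave-one-out: a sample never explains itself
    W = K / (K.sum(axis=1, keepdims=True) + 1e-12)
    return W @ Y                     # kernel-weighted average of the remaining samples

def ukr_loss(x_flat, Y, q):
    """LOO reconstruction error, minimised with respect to the latent coordinates."""
    X = x_flat.reshape(len(Y), q)
    return np.mean(np.sum((Y - ukr_reconstruct(X, Y)) ** 2, axis=1))

# Toy usage: PCA-style initialisation of the latent space, then generic optimisation.
rng = np.random.default_rng(0)
Y = rng.normal(size=(100, 3))
q = 2
X0 = np.linalg.svd(Y - Y.mean(0), full_matrices=False)[0][:, :q]
res = minimize(ukr_loss, X0.ravel(), args=(Y, q), method="L-BFGS-B")
X_opt = res.x.reshape(len(Y), q)
```

Because the diagonal of the kernel matrix is simply zeroed, the leave-one-out criterion costs no more than the plain objective, which is the model-selection advantage stressed in the abstract.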


From Motor Learning to Interaction Learning in Robots | 2010

Adaptive Optimal Feedback Control with Learned Internal Dynamics Models

Djordje Mitrovic; Stefan Klanke; Sethu Vijayakumar

Optimal Feedback Control (OFC) has been proposed as an attractive movement generation strategy in goal reaching tasks for anthropomorphic manipulator systems. Recent developments, such as the Iterative Linear Quadratic Gaussian (ILQG) algorithm, have focused on the case of non-linear, but still analytically available, dynamics. For realistic control systems, however, the dynamics may often be unknown, difficult to estimate, or subject to frequent systematic changes. In this chapter, we combine the ILQG framework with learning the forward dynamics for simulated arms that exhibit large redundancies, both in kinematics and in actuation. We demonstrate how our approach can compensate for complex dynamic perturbations in an online fashion. The specific adaptive framework introduced lends itself to a computationally more efficient implementation of the ILQG optimisation without sacrificing control accuracy, allowing the method to scale to large-DoF systems.
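
The adaptive scheme can be summarised as a receding-horizon loop that alternates between planning on the current learned model and refitting that model from observed transitions. The sketch below conveys only this structure; `ilqg_plan`, the environment interface `env_step`, and the use of an SGD regressor as the incremental forward-dynamics learner are placeholders introduced here for illustration, not the chapter's implementation.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor  # stand-in for an incremental dynamics learner

class LearnedDynamics:
    """Online model of x_{t+1} = f(x_t, u_t): one incremental regressor per state dimension."""
    def __init__(self, x_dim):
        self.models = [SGDRegressor() for _ in range(x_dim)]

    def update(self, x, u, x_next):
        z = np.concatenate([x, u]).reshape(1, -1)
        for d, model in enumerate(self.models):
            model.partial_fit(z, [x_next[d]])

    def predict(self, x, u):
        z = np.concatenate([x, u]).reshape(1, -1)
        return np.array([model.predict(z)[0] for model in self.models])

def adaptive_ilqg_episode(env_step, ilqg_plan, dynamics, x0, target, horizon=50):
    """Receding-horizon loop: plan on the learned model, apply the first command, refit.

    Assumes `dynamics` was seeded beforehand (e.g. by a short phase of random
    exploratory commands) and that `ilqg_plan` is an ILQG routine accepting an
    arbitrary forward-dynamics function; both are placeholders here.
    """
    x = x0
    for _ in range(horizon):
        u_seq, _gains = ilqg_plan(dynamics.predict, x, target)  # ILQG on the learned model
        u = u_seq[0]                        # apply only the first command of the plan
        x_next = env_step(x, u)             # true plant, possibly perturbed (e.g. force field)
        dynamics.update(x, u, x_next)       # online adaptation absorbs the perturbation
        x = x_next
    return x
```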


PLOS ONE | 2010

A Computational Model of Limb Impedance Control Based on Principles of Internal Model Uncertainty

Djordje Mitrovic; Stefan Klanke; Rieko Osu; Mitsuo Kawato; Sethu Vijayakumar

Efficient human motor control is characterized by an extensive use of joint impedance modulation, which is achieved by co-contracting antagonistic muscles in a way that is beneficial to the specific task. While there is much experimental evidence that the nervous system employs such strategies, no generally valid computational model of impedance control derived from first principles has been proposed so far. Here we develop a new impedance control model for antagonistic limb systems which is based on a minimization of uncertainties in the internal model predictions. In contrast to previously proposed models, our framework predicts a wide range of impedance control patterns during stationary and adaptive tasks. This indicates that many well-known impedance control phenomena naturally emerge from the first principles of a stochastic optimization process that minimizes internal model prediction uncertainty along with energy and accuracy demands. The insights from this computational model could be used to interpret existing experimental impedance control data from the viewpoint of optimality, or could even guide the design of future experiments based on principles of internal model uncertainty.
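
A hedged way to write down the trade-off the abstract describes (the weights $w_a$, $w_e$, $w_v$ and the exact form of each term are introduced here for illustration, not taken from the paper) is a composite cost of the form

```latex
J(\mathbf{u}) =
\underbrace{w_a\,\mathbb{E}\!\left[\lVert \mathbf{x}_T - \mathbf{x}^{*} \rVert^2\right]}_{\text{accuracy}}
+ \underbrace{w_e \int_0^T \lVert \mathbf{u}(t) \rVert^2 \,\mathrm{d}t}_{\text{energy}}
+ \underbrace{w_v \int_0^T \operatorname{tr}\,\Sigma_{\hat f}\!\left(\mathbf{x}(t), \mathbf{u}(t)\right) \mathrm{d}t}_{\text{internal-model prediction uncertainty}}
```

with $\Sigma_{\hat f}$ the predictive variance of the learned internal (forward) model. Under such a cost, co-contracting antagonistic muscles is selected exactly where the extra effort buys a sufficiently large reduction in prediction uncertainty, which is how the impedance patterns described above can emerge from stochastic optimisation.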


The International Journal of Robotics Research | 2011

Learning impedance control of antagonistic systems based on stochastic optimization principles

Djordje Mitrovic; Stefan Klanke; Sethu Vijayakumar

Novel anthropomorphic robotic systems increasingly employ variable impedance actuation with a view to achieving robustness against uncertainty, superior agility and improved efficiency that are hallmarks of biological systems. Controlling and modulating impedance profiles such that they are optimally tuned to the controlled plant is crucial in realizing these benefits. In this work, we propose a methodology to generate optimal control commands for variable impedance actuators under a prescribed tradeoff of task accuracy and energy cost. We employ a supervised learning paradigm to acquire both the plant dynamics and its stochastic properties. This enables us to prescribe an optimal impedance and command profile (i) tuned to the hard-to-model plant noise characteristics and (ii) adaptable to systematic changes. To evaluate the scalability of our framework to real hardware, we designed and built a novel antagonistic series elastic actuator (SEA) characterized by a simple mechanical architecture and we ran several evaluations on a variety of reach and hold tasks. These results highlight, for the first time on real hardware, how impedance modulation profiles tuned to the plant dynamics emerge from the first principles of stochastic optimization, achieving clear performance gains over classical methods that ignore or are incapable of incorporating stochastic information.
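
Learning "both the plant dynamics and its stochastic properties" can be pictured as fitting a heteroscedastic model: one regressor for the mean next state and one for the state- and command-dependent noise level. The snippet below is only a schematic stand-in; the choice of GradientBoostingRegressor and the log-variance trick are illustrative, not the learners used in the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_stochastic_dynamics(Z, x_next):
    """Z: (N, x_dim + u_dim) inputs z_t = (x_t, u_t); x_next: (N,) one state dimension."""
    mean_model = GradientBoostingRegressor().fit(Z, x_next)
    residuals = x_next - mean_model.predict(Z)
    # Model the input-dependent noise magnitude via the log of squared residuals.
    var_model = GradientBoostingRegressor().fit(Z, np.log(residuals ** 2 + 1e-8))
    return mean_model, var_model

def predict_with_uncertainty(mean_model, var_model, z):
    z = np.atleast_2d(z)
    return mean_model.predict(z)[0], float(np.exp(var_model.predict(z)[0]))
```

The optimiser can then penalise the predicted variance along a candidate trajectory, so stiffening through co-activation is commanded only where it measurably improves accuracy for the energy spent.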


From Motor Learning to Interaction Learning in Robots | 2010

Methods for Learning Control Policies from Variable-Constraint Demonstrations

Matthew Howard; Stefan Klanke; Michael Gienger; Christian Goerick; Sethu Vijayakumar

Many everyday human skills can be framed in terms of performing some task subject to constraints imposed by the task or the environment. Constraints are usually not observable and frequently change between contexts. In this chapter, we explore the problem of learning control policies from data containing variable, dynamic and non-linear constraints on motion. We discuss how an effective approach for doing this is to learn the unconstrained policy in a way that is consistent with the constraints. We then go on to discuss several recent algorithms for extracting policies from movement data, where observations are recorded under variable, unknown constraints. We review a number of experiments testing the performance of these algorithms and demonstrating how the resultant policy models generalise over constraints, allowing prediction of behaviour in unseen settings where new constraints apply.


Autonomous Robots | 2009

A Novel Method for Learning Policies from Variable Constraint Data

Matthew Howard; Stefan Klanke; Michael Gienger; Christian Goerick; Sethu Vijayakumar

Many everyday human skills can be framed in terms of performing some task subject to constraints imposed by the environment. Constraints are usually unobservable and frequently change between contexts. In this paper, we present a novel approach for learning (unconstrained) control policies from movement data, where observations come from movements under different constraints. As a key ingredient, we introduce a small but highly effective modification to the standard risk functional, allowing us to make a meaningful comparison between the estimated policy and constrained observations. We demonstrate our approach on systems of varying complexity, including kinematic data from the ASIMO humanoid robot with 27 degrees of freedom, and present results for learning from human demonstration.
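
One way to read the "small but highly effective modification" is that the estimated policy $\pi(\mathbf{x})$ cannot be compared with a constrained observation $\mathbf{u}_n$ directly, but it can be compared after projecting the estimate onto the direction the constraint actually let through. A sketch of such a risk functional (the notation and the rank-one projection are written here for illustration and should not be read as the paper's exact formulation):

```latex
E[\pi] = \sum_{n} \left\lVert \,\mathbf{u}_n - \hat{\mathbf{u}}_n \hat{\mathbf{u}}_n^{\top}\, \pi(\mathbf{x}_n) \right\rVert^2 ,
\qquad \hat{\mathbf{u}}_n = \frac{\mathbf{u}_n}{\lVert \mathbf{u}_n \rVert}
```

so the estimate is penalised only for disagreeing with the observed component, which makes observations recorded under different, unknown constraints mutually consistent training data for a single unconstrained policy.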


International Conference on Robotics and Automation | 2010

Optimal Feedback Control for anthropomorphic manipulators

Djordje Mitrovic; Sho Nagashima; Stefan Klanke; Takamitsu Matsubara; Sethu Vijayakumar

We study target reaching tasks of redundant anthropomorphic manipulators under the premise of minimal energy consumption and compliance during motion. We formulate this motor control problem in the framework of Optimal Feedback Control (OFC) by introducing a specific cost function that accounts for the physical constraints of the controlled plant. Using an approximate computational optimal control method, we can optimally control a high-dimensional anthropomorphic robot without having to specify an explicit inverse kinematics, inverse dynamics or feedback control law. We highlight the benefits of this biologically plausible motor control strategy over traditional (open loop) optimal controllers: the presented approach proves to be significantly more energy efficient and compliant, while remaining accurate with respect to the task at hand. These properties are crucial for the control of mobile anthropomorphic robots that are designed to interact safely in a human environment. To the best of our knowledge, this is the first OFC implementation on a high-dimensional (redundant) manipulator.
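
The abstract does not spell the cost function out; a typical OFC reaching cost of the kind such work builds on, with weights and terms that are illustrative assumptions rather than the paper's exact choice, reads

```latex
J = w_p \,\lVert \mathbf{p}(\mathbf{q}_T) - \mathbf{p}^{*} \rVert^2
  + w_v \,\lVert \dot{\mathbf{q}}_T \rVert^2
  + \int_0^T \mathbf{u}(t)^{\top} R \,\mathbf{u}(t)\,\mathrm{d}t
```

where $\mathbf{p}(\mathbf{q})$ is the forward kinematics of the end-effector, so redundancy is resolved by the optimisation itself rather than by an explicit inverse kinematics or inverse dynamics, and $R$ can encode per-actuator effort or compliance preferences.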


Simulation of Adaptive Behavior | 2008

Adaptive Optimal Control for Redundantly Actuated Arms

Djordje Mitrovic; Stefan Klanke; Sethu Vijayakumar

Optimal feedback control has been proposed as an attractive movement generation strategy in goal reaching tasks for anthropomorphic manipulator systems. Recent developments, such as the iterative Linear Quadratic Gaussian (iLQG) algorithm, have focused on the case of non-linear, but still analytically available, dynamics. For realistic control systems, however, the dynamics may often be unknown, difficult to estimate, or subject to frequent systematic changes. In this paper, we combine the iLQG framework with learning the forward dynamics for a simulated arm with two limbs and six antagonistic muscles, and we demonstrate how our approach can compensate for complex dynamic perturbations in an online fashion.


Neurocomputing | 2007

Variants of unsupervised kernel regression: General cost functions

Stefan Klanke; Helge Ritter

We present an extension to unsupervised kernel regression (UKR), a recent method for learning nonlinear manifolds, which can utilize leave-one-out cross-validation as an automatic complexity control without additional computational cost. Our extension allows us to incorporate general cost functions, by which the UKR algorithm can be made more robust or be tuned to specific noise models. We focus on Huber's loss and on the ε-insensitive loss, which we present together with a practical optimization approach. We demonstrate our method on both toy and real data.
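
The two cost functions named above replace the squared reconstruction error with a robust penalty applied per residual. A small sketch, written so it could be dropped into a leave-one-out UKR objective like the one sketched earlier (function names and default parameters are illustrative):

```python
import numpy as np

def huber(r, delta=1.0):
    """Quadratic near zero, linear in the tails: down-weights outliers."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

def eps_insensitive(r, eps=0.1):
    """Zero inside a tube of width eps, linear outside: ignores small noise."""
    return np.maximum(np.abs(r) - eps, 0.0)

def ukr_general_loss(residuals, penalty=huber):
    """residuals: (N, d) leave-one-out reconstruction errors y_i - f_{-i}(x_i)."""
    return np.mean(np.sum(penalty(residuals), axis=1))
```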


IEEE-RAS International Conference on Humanoid Robots | 2010

Exploiting sensorimotor stochasticity for learning control of variable impedance actuators

Djordje Mitrovic; Stefan Klanke; Matthew Howard; Sethu Vijayakumar

Novel anthropomorphic robotic systems increasingly employ variable impedance actuation in order to achieve the robustness to uncertainty, superior agility and efficiency that are hallmarks of biological systems. Controlling and modulating impedance profiles such that they are optimally tuned to the controlled plant is crucial to realising these benefits. In this work, we propose a methodology to generate optimal control commands for variable impedance actuators under a prescribed trade-off of task accuracy and energy cost. In contrast to classical optimal control methods that typically require an accurate analytical plant dynamics model, we employ a supervised learning paradigm to acquire both the process dynamics and its stochastic properties. This enables us to prescribe an optimal impedance and command profile that is (i) tuned to the hard-to-model stochastic characteristics of a plant and (ii) adaptable to systematic changes such as a change in load.

Collaboration


Dive into Stefan Klanke's collaborations.

Top Co-Authors

Adrian Haith
University of Edinburgh

Jochen J. Steil
Braunschweig University of Technology