Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Felix Berkenkamp is active.

Publications


Featured research published by Felix Berkenkamp.


International Conference on Robotics and Automation | 2016

Safe controller optimization for quadrotors with Gaussian processes

Felix Berkenkamp; Angela P. Schoellig; Andreas Krause

One of the most fundamental problems when designing controllers for dynamic systems is the tuning of the controller parameters. Typically, a model of the system is used to obtain an initial controller, but ultimately the controller parameters must be tuned manually on the real system to achieve the best performance. To avoid this manual tuning step, methods from machine learning, such as Bayesian optimization, have been used. However, as these methods evaluate different controller parameters on the real system, safety-critical system failures may happen. In this paper, we overcome this problem by applying, for the first time, a recently developed safe optimization algorithm, SafeOpt, to the problem of automatic controller parameter tuning. Given an initial, low-performance controller, SafeOpt automatically optimizes the parameters of a control law while guaranteeing safety. It models the underlying performance measure as a Gaussian process and only explores new controller parameters whose performance lies above a safe performance threshold with high probability. Experimental results on a quadrotor vehicle indicate that the proposed method enables fast, automatic, and safe optimization of controller parameters without human intervention.
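The SafeOpt idea described in the abstract can be sketched in a few lines: model the performance measure as a Gaussian process and evaluate only parameters whose lower confidence bound stays above the safety threshold. The quadratic performance function, the threshold, and all constants below are illustrative stand-ins; this is a minimal sketch using scikit-learn, not the authors' SafeOpt implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical performance measure (unknown to the optimizer).
def performance(theta):
    return 1.0 - (theta - 0.6) ** 2

candidates = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
safety_threshold = 0.8   # minimum acceptable performance (illustrative)
beta = 2.0               # confidence-interval scaling

# Start from a known safe, low-performance controller parameter.
X = np.array([[0.3]])
y = performance(X[:, 0])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                              alpha=1e-4, optimizer=None)
for _ in range(10):
    gp.fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    lower = mean - beta * std
    safe = lower >= safety_threshold      # certified-safe candidate set
    if not safe.any():
        break
    # Evaluate the most uncertain candidate that is certified safe.
    idx = np.flatnonzero(safe)[np.argmax(std[safe])]
    X = np.vstack([X, candidates[idx:idx + 1]])
    y = np.append(y, performance(candidates[idx, 0]))

best = X[np.argmax(y), 0]   # best parameter found without unsafe evaluations
```

Because only candidates whose pessimistic estimate clears the threshold are ever tried, the safe set grows conservatively outward from the initial controller, which mirrors the paper's "explore above a safe performance threshold with high probability" behavior.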


European Control Conference | 2015

Safe and robust learning control with Gaussian processes

Felix Berkenkamp; Angela P. Schoellig

This paper introduces a learning-based robust control algorithm that provides robust stability and performance guarantees during learning. The approach uses Gaussian process (GP) regression based on data gathered during operation to update an initial model of the system and to gradually decrease the uncertainty related to this model. Embedding this data-based update scheme in a robust control framework guarantees stability during the learning process. Traditional robust control approaches have not considered online adaptation of the model and its uncertainty before. As a result, their controllers do not improve performance during operation. Typical machine learning algorithms that have achieved similar high-performance behavior by adapting the model and controller online do not provide the guarantees presented in this paper. In particular, this paper considers a stabilization task, linearizes the nonlinear, GP-based model around a desired operating point, and solves a convex optimization problem to obtain a linear robust controller. The resulting performance improvements due to the learning-based controller are demonstrated in experiments on a quadrotor vehicle.
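A core ingredient of the approach is that GP regression on operating data shrinks the model uncertainty the robust controller must tolerate. The toy one-dimensional dynamics below are hypothetical; this sketches only the uncertainty-reduction step, not the robust controller synthesis itself.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical 1-D dynamics: a prior model plus an unknown residual.
f_true = lambda x: -0.8 * x + 0.3 * np.sin(3 * x)   # real system
f_prior = lambda x: -0.8 * x                        # initial model
residual = lambda x: f_true(x) - f_prior(x)         # what the GP learns

rng = np.random.default_rng(0)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                              alpha=1e-6, optimizer=None)
x_test = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)

def max_model_uncertainty(n_samples):
    """Worst-case predictive std after fitting n_samples operating points."""
    x_train = rng.uniform(-1.0, 1.0, size=(n_samples, 1))
    gp.fit(x_train, residual(x_train).ravel())
    _, std = gp.predict(x_test, return_std=True)
    return std.max()

# More data gathered during operation tightens the uncertainty bound
# that the robust control framework must guarantee stability against.
u_few, u_many = max_model_uncertainty(3), max_model_uncertainty(30)
```

Embedding the shrinking bound `u_many` (rather than the initial `u_few`) in a robust synthesis step is what lets performance improve during operation while stability guarantees are maintained.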


Conference on Decision and Control | 2016

Safe learning of regions of attraction for uncertain, nonlinear systems with Gaussian processes

Felix Berkenkamp; Riccardo Moriconi; Angela P. Schoellig; Andreas Krause

Control theory can provide useful insights into the properties of controlled, dynamic systems. One important property of nonlinear systems is the region of attraction (ROA), a safe subset of the state space in which a given controller renders an equilibrium point asymptotically stable. The ROA is typically estimated based on a model of the system. However, since models are only an approximation of the real world, the resulting estimated safe region can contain states outside the ROA of the real system. This is not acceptable in safety-critical applications. In this paper, we consider an approach that learns the ROA from experiments on a real system, without ever leaving the true ROA and, thus, without risking safety-critical failures. Based on regularity assumptions on the model errors in terms of a Gaussian process prior, we use an underlying Lyapunov function in order to determine a region in which an equilibrium point is asymptotically stable with high probability. Moreover, we provide an algorithm to actively and safely explore the state space in order to expand the ROA estimate. We demonstrate the effectiveness of this method in simulation.
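The Lyapunov-based certificate at the heart of the paper can be illustrated on a toy system: take the largest sublevel set of a Lyapunov function on which the function strictly decreases as an inner estimate of the ROA. The scalar dynamics and quadratic Lyapunov candidate below are hypothetical choices for illustration; the paper additionally handles model uncertainty via GP confidence bounds, which this sketch omits.

```python
import numpy as np

# Hypothetical 1-D system x' = f(x) with Lyapunov candidate V(x) = x^2.
f = lambda x: -x + x ** 3        # stable near 0; the true ROA is (-1, 1)
V = lambda x: x ** 2
Vdot = lambda x: 2 * x * f(x)    # derivative of V along trajectories

xs = np.linspace(-2.0, 2.0, 4001)
decreasing = Vdot(xs) < 0                 # where V strictly decreases
decreasing[np.isclose(xs, 0.0)] = True    # the equilibrium itself is fine

# Largest level c whose sublevel set {V <= c} lies entirely inside the
# region where V decreases: a certified inner estimate of the ROA.
values = V(xs)
c = max(l for l in np.unique(values) if decreasing[values <= l].all())
roa = xs[values <= c]   # grid points certified to lie in the ROA
```

On this example the certified set recovers nearly all of the true region of attraction while never including a state outside it, which is the one-sided guarantee the paper requires for safety-critical exploration.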


International Conference on Robotics and Automation | 2017

Virtual vs. real: Trading off simulations and physical experiments in reinforcement learning with Bayesian optimization

Alonso Marco; Felix Berkenkamp; Philipp Hennig; Angela P. Schoellig; Andreas Krause; Stefan Schaal; Sebastian Trimpe

In practice, the parameters of control policies are often tuned manually. This is time-consuming and frustrating. Reinforcement learning is a promising alternative that aims to automate this process, yet often requires too many experiments to be practical. In this paper, we propose a solution to this problem by exploiting prior knowledge from simulations, which are readily available for most robotic platforms. Specifically, we extend Entropy Search, a Bayesian optimization algorithm that maximizes information gain from each experiment, to the case of multiple information sources. The result is a principled way to automatically combine cheap, but inaccurate information from simulations with expensive and accurate physical experiments in a cost-effective manner. We apply the resulting method to a cart-pole system, which confirms that the algorithm can find good control policies with fewer experiments than standard Bayesian optimization on the physical system only.
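The core trade-off, cheap noisy simulations versus expensive accurate experiments, can be caricatured with a cost-normalized acquisition. The sketch below scores each (source, input) pair by predictive-variance reduction per unit cost; this is a deliberate simplification of the multi-source Entropy Search used in the paper, and the noise levels and costs are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

kernel = RBF(length_scale=0.2)
grid = np.linspace(0.0, 1.0, 21).reshape(-1, 1)

def total_variance(x_obs, noise_stds):
    """Summed predictive variance over the grid for a set of observations."""
    gp = GaussianProcessRegressor(kernel=kernel, optimizer=None,
                                  alpha=np.asarray(noise_stds) ** 2)
    gp.fit(x_obs, np.zeros(len(x_obs)))  # posterior variance ignores y-values
    _, std = gp.predict(grid, return_std=True)
    return float((std ** 2).sum())

# Two information sources with hypothetical noise levels and costs.
sources = {"simulation": {"noise": 0.3, "cost": 1.0},
           "experiment": {"noise": 0.01, "cost": 10.0}}

x_obs, noise_stds = [[0.5]], [0.3]   # one initial simulation at x = 0.5
base = total_variance(x_obs, noise_stds)

# Pick the (source, input) pair with the best information-per-cost ratio.
best = max(
    ((name, float(x)) for name in sources for x in grid[:, 0]),
    key=lambda p: (base - total_variance(x_obs + [[p[1]]],
                                         noise_stds + [sources[p[0]]["noise"]]))
                  / sources[p[0]]["cost"],
)
```

With these made-up numbers the cheap simulation wins: it removes almost as much uncertainty as the physical experiment at a tenth of the cost, which is the regime the paper's principled combination exploits.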


European Control Conference | 2016

Bayesian optimization for maximum power point tracking in photovoltaic power plants

Hany Abdelrahman; Felix Berkenkamp; Jan Poland; Andreas Krause

The amount of power that a photovoltaic (PV) power plant generates depends on the DC voltage that is applied to the PV panels. The relationship between this control input and the generated power is non-convex and has multiple local maxima. Moreover, since the generated power depends on time-varying environmental conditions, such as solar irradiation, the location of the global maximum changes over time. Maximizing the amount of energy that is generated over time is known as the maximum power point tracking (MPPT) problem. Traditional approaches to solve the MPPT problem rely on heuristics and data-based gradient estimates. These methods typically converge to local optima and thus waste energy. Our approach formalizes the MPPT problem as a Bayesian optimization problem. This formalization admits algorithms that can find the maximum power point after only a few evaluations at different input voltages. Specifically, we model the power-voltage curve as a Gaussian process (GP) and use the predictive uncertainty information in this model to choose control inputs that are informative about the location of the maximum. We extend the basic approach by including operational constraints and making it computationally tractable so that the method can be used on real systems. We evaluate our method together with two standard baselines in experiments, which show that our approach outperforms both.
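A minimal version of the GP-based tracking step: fit a GP to observed (voltage, power) pairs and choose the next voltage with an upper-confidence-bound rule, which escapes the local maximum that gradient-based MPPT heuristics can get stuck in. The two-peak power curve and all constants are invented for illustration; the paper's method additionally handles time-varying conditions and operational constraints, which this sketch omits.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical normalized power-voltage curve with two local maxima;
# the global maximum sits near v = 0.7.
def power(v):
    return (0.6 * np.exp(-((v - 0.25) / 0.1) ** 2)
            + np.exp(-((v - 0.7) / 0.1) ** 2))

voltages = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1),
                              alpha=1e-4, optimizer=None)

X = np.array([[0.1]])          # first measurement at a low voltage
y = power(X[:, 0])
for _ in range(20):
    gp.fit(X, y)
    mean, std = gp.predict(voltages, return_std=True)
    v_next = voltages[np.argmax(mean + 2.0 * std)]   # upper confidence bound
    X = np.vstack([X, [v_next]])
    y = np.append(y, power(v_next[0]))

v_best = X[np.argmax(y), 0]    # tracked maximum power point
```

The uncertainty term in the acquisition forces a few exploratory measurements across the voltage range before the search concentrates near the global peak, in contrast to hill-climbing heuristics that converge to whichever local maximum is nearest.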


Neural Information Processing Systems | 2017

Safe Model-based Reinforcement Learning with Stability Guarantees

Felix Berkenkamp; Matteo Turchetta; Angela P. Schoellig; Andreas Krause


arXiv: Robotics | 2016

Bayesian Optimization with Safety Constraints: Safe and Automatic Parameter Tuning in Robotics

Felix Berkenkamp; Andreas Krause; Angela P. Schoellig


Neural Information Processing Systems | 2016

Safe Exploration in Finite Markov Decision Processes with Gaussian Processes

Matteo Turchetta; Felix Berkenkamp; Andreas Krause


Archive | 2015

Derivation of a linear, robust H2 controller for systems with parametric uncertainty

Felix Berkenkamp; Angela P. Schoellig


International Conference on Robotics and Automation | 2018

Verifying Controllers Against Adversarial Examples with Bayesian Optimization

Shromona Ghosh; Felix Berkenkamp; Gireeja Ranade; Shaz Qadeer; Ashish Kapoor

Collaboration


Dive into Felix Berkenkamp's collaborations.

Top Co-Authors