
Publications


Featured research published by Bastian Bischoff.


European Conference on Machine Learning | 2015

Safe Exploration for Active Learning with Gaussian Processes

Jens Schreiter; Duy Nguyen-Tuong; Mona Eberts; Bastian Bischoff; Heiner Markert; Marc Toussaint

In this paper, the problem of safe exploration in the active learning context is considered. Safe exploration is especially important when sampling data from technical and industrial systems, e.g. combustion engines and gas turbines, where critical and unsafe measurements must be avoided. The objective is to learn data-based regression models of such technical systems using a limited budget of measured, i.e. labelled, points while ensuring that critical regions of the considered systems are avoided during measurements. We propose an approach for learning such models and exploring new data regions based on Gaussian processes (GPs). In particular, we employ a problem-specific GP classifier to identify safe and unsafe regions, while using a differential entropy criterion for exploring relevant data regions. A theoretical analysis of the proposed algorithm is given, including an upper bound on the probability of failure. To demonstrate the efficiency and robustness of our safe exploration scheme in the active learning setting, we test the approach on a policy exploration task for the inverted pendulum hold-up problem.
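The selection rule described above can be sketched in a toy 1-D setting. This is a minimal illustration, not the paper's implementation: the safety oracle stands in for the GP classifier, and all kernel and noise parameters are assumed. For a Gaussian predictive distribution, maximizing differential entropy reduces to picking the safe candidate with maximal predictive variance.

```python
# Toy sketch of safe active learning with a GP (hypothetical parameters):
# among candidates judged safe, query the one with the largest predictive
# variance, i.e. the differential-entropy criterion for a Gaussian.
import math

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel with assumed lengthscale ls."""
    return math.exp(-((a - b) ** 2) / (2 * ls ** 2))

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (A small, well-conditioned)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_variance(x_star, X, noise=1e-3):
    """GP predictive variance k(x*,x*) - k*^T (K + noise I)^{-1} k*."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(X)] for i, a in enumerate(X)]
    k_star = [rbf(x_star, a) for a in X]
    alpha = solve(K, k_star)
    return rbf(x_star, x_star) - sum(k * a for k, a in zip(k_star, alpha))

X_train = [0.0, 0.2, 0.4]            # labelled inputs so far
is_safe = lambda x: x < 0.9          # stand-in for the GP safety classifier

candidates = [0.1, 0.5, 0.8, 0.95]
safe = [x for x in candidates if is_safe(x)]
next_x = max(safe, key=lambda x: gp_variance(x, X_train))
print(next_x)  # the safe candidate farthest from the existing data
```

The unsafe candidate is excluded before the entropy criterion is ever evaluated, which is the core of the safety guarantee the paper analyzes.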


European Conference on Machine Learning | 2013

Learning Throttle Valve Control Using Policy Search

Bastian Bischoff; Duy Nguyen-Tuong; Torsten Koller; Heiner Markert; Alois Knoll

The throttle valve is a technical device for regulating a fluid or gas flow. Throttle valve control is a challenging task due to the valve's complex dynamics and the demanding constraints placed on the controller. With state-of-the-art throttle valve control, such as model-free PID controllers, time-consuming manual tuning of the controller is necessary. In this paper, we investigate how reinforcement learning (RL) can help alleviate the effort of manual controller design by automatically learning a control policy from experience. To obtain a valid control policy for the throttle valve, several constraints must be addressed, such as avoiding overshoot. Furthermore, the learned controller must be able to follow given desired trajectories while moving the valve from any start to any goal position; thus, multi-target policy learning needs to be considered. In this study, we employ a policy search RL approach, Pilco [2], to learn a throttle valve control policy. We adapt the Pilco algorithm to take the practical requirements and constraints of the controller into account. For evaluation, we apply the resulting algorithm to several control tasks in simulation, as well as on a physical throttle valve system. The results show that policy search RL is able to learn a consistent control policy for complex, real-world systems.
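The episodic policy-search idea can be illustrated with a deliberately simple stand-in: a first-order valve model, a proportional policy, and random search over its gain in place of Pilco's model-based gradient update. The dynamics, cost weights, and penalty value are all assumptions for the sketch; the overshoot penalty mimics how the no-overshoot constraint can be encoded in the cost.

```python
# Toy policy search for valve control (hypothetical model, not Pilco):
# roll out a proportional policy, penalise overshoot heavily, and keep
# the best gain found by random search.
import random

def rollout(kp, target=1.0, steps=50, dt=0.05):
    """Simulate the (simplistic) valve and return the episode cost."""
    pos, cost = 0.0, 0.0
    for _ in range(steps):
        u = kp * (target - pos)        # proportional policy
        pos += dt * u                  # first-order valve dynamics
        cost += (target - pos) ** 2    # tracking error
        if pos > target:               # overshoot: large penalty
            cost += 100.0
    return cost

random.seed(0)
best_kp, best_cost = None, float("inf")
for _ in range(200):                   # random search over the gain
    kp = random.uniform(0.0, 40.0)
    c = rollout(kp)
    if c < best_cost:
        best_kp, best_cost = kp, c
print(round(best_kp, 2))
```

In this toy model, any gain above 20 overshoots on the first step and incurs the penalty, so the search settles on a fast gain just below that threshold; Pilco reaches an analogous result far more data-efficiently by learning a GP dynamics model and optimizing the policy against it.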


International Conference on Control, Automation, Robotics and Vision | 2012

Fusing vision and odometry for accurate indoor robot localization

Bastian Bischoff; Duy Nguyen-Tuong; Felix Streichert; Marlon Ramon Ewert; Alois Knoll

For service robotics, localization is an essential component in many applications, e.g. indoor robot navigation. Today, accurate localization relies mostly on high-end devices, such as A.R.T. DTrack, VICON systems or laser scanners. These systems are often expensive and thus require substantial investment. In this paper, our focus is on developing a localization method that uses low-priced devices, such as cameras, while remaining sufficiently accurate in tracking performance. Vision data contains much information and potentially yields high tracking accuracy. However, due to high computational requirements, vision-based localization can only be performed at a low frequency. To speed up visual localization and increase accuracy, we combine vision information with the robot's odometry using a Kalman filter. The resulting approach enables sufficiently accurate tracking (errors in the range of a few centimeters) at a frequency of about 35 Hz. To evaluate the proposed method, we compare our tracking performance against the high-precision A.R.T. DTrack localization as ground truth. The evaluation on a real robot shows that our low-priced localization approach is competitive for indoor robot localization tasks.
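The fusion scheme can be sketched with a scalar Kalman filter: odometry drives the high-rate predict step, and the slower vision measurements drive the correct step whenever they arrive. This is a 1-D toy with assumed noise values, not the paper's multi-dimensional filter.

```python
# 1-D Kalman filter fusing fast odometry with sparse vision fixes
# (all noise values are illustrative assumptions).
def kf_predict(x, P, u, Q):
    """Predict with odometry increment u; uncertainty grows by Q."""
    return x + u, P + Q

def kf_update(x, P, z, R):
    """Correct with vision measurement z of noise variance R."""
    K = P / (P + R)                    # Kalman gain
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 1.0                        # initial state and variance
Q, R = 0.01, 0.05                      # odometry / vision noise (assumed)
odometry = [0.1] * 10                  # ten high-rate odometry steps ...
vision = {4: 0.52, 9: 1.01}            # ... vision fixes arrive sparsely

for t, u in enumerate(odometry):
    x, P = kf_predict(x, P, u, Q)
    if t in vision:
        x, P = kf_update(x, P, vision[t], R)
print(round(x, 3))
```

Between vision fixes the variance P grows with each odometry step (drift), and each vision update shrinks it again, which is exactly why the combination runs at the odometry rate while staying anchored to the accurate but slow vision estimates.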


International Conference on Robotics and Automation | 2014

Policy Search for Learning Robot Control Using Sparse Data

Bastian Bischoff; Duy Nguyen-Tuong; H. van Hoof; A. McHutchon; Carl Edward Rasmussen; Alois Knoll; Jan Peters; Marc Peter Deisenroth

In many complex robot applications, such as grasping and manipulation, it is difficult to program desired task solutions beforehand, as robots operate in uncertain and dynamic environments. In such cases, learning tasks from experience can be a useful alternative. To obtain sound learning and generalization performance, machine learning, especially reinforcement learning, usually requires sufficient data. However, when only little data is available for learning, due to system constraints and practical issues, reinforcement learning can perform suboptimally. In this paper, we investigate how model-based reinforcement learning, in particular the probabilistic inference for learning control method (Pilco), can be tailored to cope with sparse data and thereby speed up learning. The basic idea is to incorporate additional prior knowledge into the learning process. As Pilco is built on the probabilistic Gaussian process framework, additional system knowledge can be incorporated by defining appropriate prior distributions, e.g. a linear mean Gaussian prior. The resulting Pilco formulation remains in closed form and analytically tractable. The proposed approach is evaluated in simulation as well as on a physical robot, the Festo Robotino XT. For the robot evaluation, we employ the approach to learn an object pick-up task. The results show that including prior knowledge speeds up policy learning in the presence of sparse data.
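The effect of a linear prior mean can be shown in a tiny 1-D GP regression. This is a toy illustration with assumed kernel parameters, not the paper's dynamics model: with only two observations, predictions near the data follow the observations, while predictions far from the data fall back to the linear prior model rather than to zero.

```python
# GP regression with a linear prior mean (toy, hypothetical parameters):
# far from data the posterior reverts to the prior model instead of zero.
import math

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel with assumed lengthscale ls."""
    return math.exp(-((a - b) ** 2) / (2 * ls ** 2))

prior_mean = lambda x: 2.0 * x         # assumed linear system knowledge

# Only two sparse observations of a true system y = 2x + sin(x).
X = [0.0, 0.5]
y = [prior_mean(x) + math.sin(x) for x in X]

def predict(x_star, noise=1e-4):
    """Posterior mean: prior mean plus GP correction on the residuals."""
    k11 = rbf(X[0], X[0]) + noise
    k22 = rbf(X[1], X[1]) + noise
    k12 = rbf(X[0], X[1])
    det = k11 * k22 - k12 * k12        # closed-form 2x2 inverse
    r = [y[i] - prior_mean(X[i]) for i in range(2)]
    a0 = (k22 * r[0] - k12 * r[1]) / det
    a1 = (k11 * r[1] - k12 * r[0]) / det
    return prior_mean(x_star) + rbf(x_star, X[0]) * a0 + rbf(x_star, X[1]) * a1

print(round(predict(0.25), 3))  # near data: close to the observations
print(round(predict(5.0), 3))   # far from data: reverts to the linear prior
```

A zero-mean GP would instead predict 0 at x = 5, a far worse extrapolation; this is the mechanism by which the prior knowledge compensates for sparse data while keeping Pilco's computations in closed form.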


International Conference on Learning Representations | 2017

On Detecting Adversarial Perturbations

Jan Hendrik Metzen; Tim Genewein; Volker Fischer; Bastian Bischoff


International Conference on Machine Learning | 2016

Stability of Controllers for Gaussian Process Forward Models

Julia Vinogradska; Bastian Bischoff; Duy Nguyen-Tuong; Henner Schmidt; Jan Peters


European Symposium on Artificial Neural Networks | 2013

Hierarchical Reinforcement Learning for Robot Navigation

Bastian Bischoff; Duy Nguyen-Tuong; I-Hsuan Lee; Felix Streichert; Alois Knoll


Journal of Machine Learning Research | 2017

Stability of Controllers for Gaussian Process Dynamics

Julia Vinogradska; Bastian Bischoff; Duy Nguyen-Tuong; Jan Peters


Archive | 2014

Method for Aging-Efficient and Energy-Efficient Operation, in Particular of a Motor Vehicle

Udo Schulz; Christian Staengle; Bastian Bischoff; Jochen Pflueger; Oliver Dieter Koller


Scandinavian Conference on AI | 2013

Solving the 15-Puzzle Game Using Local Value-Iteration

Bastian Bischoff; Duy Nguyen-Tuong; Heiner Markert; Alois Knoll

