
Publication


Featured research published by Alexander Hentschel.


Engineering Applications of Artificial Intelligence | 2017

Particle swarm optimization for generating interpretable fuzzy reinforcement learning policies

Daniel Hein; Alexander Hentschel; Thomas A. Runkler; Steffen Udluft

Fuzzy controllers are efficient and interpretable system controllers for continuous state and action spaces. To date, such controllers have been constructed manually or trained automatically, either using expert-generated, problem-specific cost functions or incorporating detailed knowledge about the optimal control strategy. Neither requirement is met in most real-world reinforcement learning (RL) problems. In such applications, online learning is often prohibited for safety reasons because it requires exploration of the problem's dynamics during policy training. We introduce a fuzzy particle swarm reinforcement learning (FPSRL) approach that constructs fuzzy RL policies solely by training parameters on world models that simulate real system dynamics. These world models are created by employing an autonomous machine learning technique that uses previously generated transition samples of the real system. To the best of our knowledge, this approach is the first to relate self-organizing fuzzy controllers to model-based batch RL. FPSRL is intended for domains where online learning is prohibited, system dynamics are relatively easy to model from previously generated default policy transition samples, and a relatively easily interpretable control policy is expected to exist. The efficiency of the proposed approach is demonstrated on three standard RL benchmarks, i.e., mountain car, cart-pole balancing, and cart-pole swing-up. Our experimental results demonstrate high-performing, interpretable fuzzy policies.
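The core FPSRL loop described above — evaluating candidate fuzzy-policy parameters by rollouts on a world model and letting particle swarm optimization search the parameter space — can be sketched as follows. This is a minimal toy illustration: the 1-D system standing in for the learned world model, the two-rule policy structure, and all parameter names and PSO settings are invented for this sketch, not taken from the paper.

```python
import math
import random

random.seed(0)

# Toy stand-in for a learned world model: a 1-D point mass that should be
# driven to the origin.  (In the paper, such models are learned from
# transition samples of the real system.)
def world_model_step(pos, vel, action):
    vel = 0.9 * vel + 0.1 * max(-1.0, min(1.0, action))
    return pos + vel, vel

# Interpretable fuzzy policy with two Gaussian rules; each rule has a
# membership center and an action consequent.
def fuzzy_policy(pos, params):
    c1, a1, c2, a2 = params
    w1 = math.exp(-(pos - c1) ** 2)
    w2 = math.exp(-(pos - c2) ** 2)
    return (w1 * a1 + w2 * a2) / (w1 + w2 + 1e-9)

# Fitness of a parameter vector = (negative) cost of rolling the policy
# out on the world model -- the model-based evaluation step of FPSRL.
def episode_return(params, steps=50):
    pos, vel, cost = 2.0, 0.0, 0.0
    for _ in range(steps):
        pos, vel = world_model_step(pos, vel, fuzzy_policy(pos, params))
        cost += pos ** 2
    return -cost

# Minimal global-best PSO over the fuzzy policy parameters.
def pso(fitness, dim=4, swarm=20, iters=60):
    xs = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(swarm)]
    vs = [[0.0] * dim for _ in range(swarm)]
    pbest = [list(x) for x in xs]
    pfit = [fitness(x) for x in xs]
    best = max(range(swarm), key=lambda i: pfit[i])
    gbest, gfit = list(pbest[best]), pfit[best]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.4 * random.random() * (pbest[i][d] - xs[i][d])
                            + 1.4 * random.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = fitness(xs[i])
            if f > pfit[i]:
                pfit[i], pbest[i] = f, list(xs[i])
                if f > gfit:
                    gfit, gbest = f, list(xs[i])
    return gbest, gfit

best_params, best_fit = pso(episode_return)
# The optimized fuzzy policy should beat the trivial "always act 0" policy.
print(best_fit, episode_return([0.0, 0.0, 0.0, 0.0]))
```

Note that no interaction with the (toy) real system occurs during training — only the world model is queried, which mirrors the offline, batch setting motivated in the abstract.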


International Journal of Swarm Intelligence Research | 2016

Reinforcement Learning with Particle Swarm Optimization Policy (PSO-P) in Continuous State and Action Spaces

Daniel Hein; Alexander Hentschel; Thomas A. Runkler; Steffen Udluft

This article introduces a model-based reinforcement learning (RL) approach for continuous state and action spaces. While most RL methods try to find closed-form policies, the approach taken here employs numerical online optimization of control action sequences. First, a general method for reformulating RL problems as optimization tasks is provided. Subsequently, Particle Swarm Optimization (PSO) is applied to search for optimal solutions. The resulting Particle Swarm Optimization Policy (PSO-P) is effective for high-dimensional state spaces and does not require a priori assumptions about adequate policy representations. Furthermore, by translating RL problems into optimization tasks, the rich collection of real-world-inspired RL benchmarks is made available for benchmarking numerical optimization techniques. The effectiveness of PSO-P is demonstrated on two standard benchmarks: mountain car and cart-pole.
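The PSO-P idea — no closed-form policy, but an online PSO search over a finite horizon of future actions, replanned at every step — can be sketched as follows. This is a hedged toy example: the 1-D system model, horizon, and swarm settings are invented for illustration and are not from the article.

```python
import random

random.seed(1)

# Stand-in system model (the article assumes a model of the dynamics):
# a 1-D point mass with lag that should reach the origin.
def model_step(pos, vel, action):
    vel = 0.9 * vel + 0.1 * max(-1.0, min(1.0, action))
    return pos + vel, vel

# Cost of an action sequence simulated from (pos, vel) on the model.
def rollout_cost(pos, vel, actions):
    cost = 0.0
    for a in actions:
        pos, vel = model_step(pos, vel, a)
        cost += pos ** 2
    return cost

# Minimal global-best PSO over action sequences of a fixed horizon.
def pso_plan(pos, vel, horizon=10, swarm=15, iters=30):
    xs = [[random.uniform(-1, 1) for _ in range(horizon)] for _ in range(swarm)]
    vs = [[0.0] * horizon for _ in range(swarm)]
    pbest = [list(x) for x in xs]
    pcost = [rollout_cost(pos, vel, x) for x in xs]
    g = min(range(swarm), key=lambda i: pcost[i])
    gbest, gcost = list(pbest[g]), pcost[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(horizon):
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.4 * random.random() * (pbest[i][d] - xs[i][d])
                            + 1.4 * random.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            c = rollout_cost(pos, vel, xs[i])
            if c < pcost[i]:
                pcost[i], pbest[i] = c, list(xs[i])
                if c < gcost:
                    gcost, gbest = c, list(xs[i])
    return gbest[0]  # execute only the first action, then replan

# Closed loop: no explicit policy representation, just online planning.
pos, vel = 3.0, 0.0
for _ in range(25):
    pos, vel = model_step(pos, vel, pso_plan(pos, vel))
print(abs(pos))
```

Because the optimizer works directly on action sequences, no assumption about an adequate policy class (fuzzy rules, neural network, etc.) is needed — the price is the online optimization cost at every control step.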


International Symposium on Neural Networks | 2017

Batch reinforcement learning on the industrial benchmark: First experiences

Daniel Hein; Steffen Udluft; Michel Tokic; Alexander Hentschel; Thomas A. Runkler; Volkmar Sterzing

The Particle Swarm Optimization Policy (PSO-P) was recently introduced and has produced remarkable results on academic reinforcement learning benchmarks in an off-policy, batch-based setting. To further investigate its properties and feasibility for real-world applications, this paper evaluates PSO-P on the so-called Industrial Benchmark (IB), a novel reinforcement learning (RL) benchmark that aims at being realistic by including a variety of aspects found in industrial applications, such as continuous state and action spaces, a high-dimensional, partially observable state space, delayed effects, and complex stochasticity. The experimental results of PSO-P on the IB are compared to closed-form control policies derived from the model-based Recurrent Control Neural Network (RCNN) and the model-free Neural Fitted Q-Iteration (NFQ). The experiments show that PSO-P is of interest not only for academic benchmarks but also for real-world industrial applications, since it yielded the best-performing policy in our IB setting. Compared to other well-established RL techniques, PSO-P produced outstanding results in performance and robustness while requiring relatively little effort in finding adequate parameters or making complex design decisions.


Neurocomputing | 2015

Exploiting Similarity in System Identification Tasks with Recurrent Neural Networks

Sigurd Spieckermann; Siegmund Düll; Steffen Udluft; Alexander Hentschel; Thomas A. Runkler

A novel dual-task learning approach based on recurrent neural networks with factored tensor components for system identification tasks is presented. The goal is to identify the dynamics of a system given few observations which are augmented by auxiliary data from a similar system. The problem is motivated by real-world use cases and a mathematical problem description is given. Further, our proposed model—the factored tensor recurrent neural network (FTRNN)—and two alternative models are introduced which are benchmarked on the cart-pole and mountain car simulations. We show that the FTRNN consistently and significantly outperformed the competing models in accuracy and data-efficiency.
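A minimal sketch of the factored-tensor idea in the abstract: instead of learning a fully separate recurrence matrix per system, each system's matrix is composed from shared factors plus a small task-specific scaling vector, so most parameters are shared between the data-rich auxiliary system and the data-poor target system. The exact factorization, dimensions, and names below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper).
n_tasks, n_in, n_hidden, rank = 2, 3, 16, 8

# Shared factors U, V and a per-task scaling vector t_k.  The per-task
# recurrence matrix is W_k = U @ diag(t_k) @ V, so only `rank` numbers
# are specific to each system.
U = rng.normal(size=(n_hidden, rank))
V = rng.normal(size=(rank, n_hidden))
task_scales = rng.normal(size=(n_tasks, rank))
W_in = rng.normal(size=(n_hidden, n_in))

def step(h, x, task):
    # Build the task-specific recurrence matrix on the fly.
    W_k = U @ np.diag(task_scales[task]) @ V
    return np.tanh(W_k @ h + W_in @ x)

# The same input drives different hidden updates per task,
# while almost all weights are shared between the two tasks.
h = rng.normal(size=n_hidden)
x = rng.normal(size=n_in)
h_task0 = step(h, x, 0)
h_task1 = step(h, x, 1)

# Parameter count of the shared factorization vs. two separate matrices:
factored = U.size + V.size + task_scales.size   # 16*8 + 8*16 + 2*8 = 272
separate = n_tasks * n_hidden * n_hidden        # 2 * 16 * 16     = 512
print(factored, separate)
```

The reduced per-task parameter count is what lets the few observations of the target system suffice: the shared factors are fit mostly from the auxiliary system's data.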


Archive | 2012

METHOD FOR THE COMPUTER-SUPPORTED GENERATION OF A DATA-DRIVEN MODEL OF A TECHNICAL SYSTEM, IN PARTICULAR OF A GAS TURBINE OR WIND TURBINE

Siegmund Düll; Alexander Hentschel; Volkmar Sterzing; Steffen Udluft


IEEE Symposium Series on Computational Intelligence | 2017

A benchmark environment motivated by industrial control problems

Daniel Hein; Stefan Depeweg; Michel Tokic; Steffen Udluft; Alexander Hentschel; Thomas A. Runkler; Volkmar Sterzing


arXiv: Learning | 2016

Introduction to the "Industrial Benchmark".

Daniel Hein; Alexander Hentschel; Volkmar Sterzing; Michel Tokic; Steffen Udluft


Archive | 2015

METHOD FOR CONTROLLING AND/OR REGULATING A TECHNICAL SYSTEM IN A COMPUTER-ASSISTED MANNER

Siegmund Düll; Alexander Hentschel; Steffen Udluft


Archive | 2017

METHOD FOR COMPUTER-AIDED INSTALLATION CONTROL OPTIMIZATION USING A SIMULATION MODULE

Siegmund Düll; Alexander Hentschel; Jatinder Singh; Volkmar Sterzing; Steffen Udluft


arXiv: Neural and Evolutionary Computing | 2016

Particle Swarm Optimization for Generating Fuzzy Reinforcement Learning Policies.

Daniel Hein; Alexander Hentschel; Thomas A. Runkler; Steffen Udluft
