Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Anton Maximilian Schäfer is active.

Publication


Featured research published by Anton Maximilian Schäfer.


International Conference on Artificial Neural Networks | 2006

Recurrent neural networks are universal approximators

Anton Maximilian Schäfer; Hans Georg Zimmermann

Recurrent Neural Networks (RNNs) have been developed for a better understanding and analysis of open dynamical systems. Still, the question often arises whether RNNs are able to map every open dynamical system, which would be desirable for a broad spectrum of applications. In this article we prove the universal approximation ability of RNNs in state space model form and extend it to Error Correction and Normalized Recurrent Neural Networks.
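The state space model form referred to in the abstract can be sketched as the recursion s_{t+1} = tanh(A s_t + B x_t) with output y_t = C s_t. The dimensions and random parameters below are purely illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper)
state_dim, input_dim, output_dim = 8, 3, 1

# Parameter matrices of the state space model
A = rng.normal(scale=0.3, size=(state_dim, state_dim))   # state transition
B = rng.normal(scale=0.3, size=(state_dim, input_dim))   # input coupling
C = rng.normal(scale=0.3, size=(output_dim, state_dim))  # output mapping

def rnn_state_space(inputs):
    """Run the recursion s_{t+1} = tanh(A s_t + B x_t), y_t = C s_t."""
    s = np.zeros(state_dim)
    outputs = []
    for x in inputs:
        s = np.tanh(A @ s + B @ x)
        outputs.append(C @ s)
    return np.array(outputs)

ys = rnn_state_space(rng.normal(size=(20, input_dim)))
print(ys.shape)  # (20, 1)
```

The universal approximation result concerns what functions such recursions can represent for suitable A, B, C and state dimension; this sketch only shows the forward dynamics.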


International Journal of Neural Systems | 2007

Recurrent Neural Networks are universal approximators

Anton Maximilian Schäfer; Hans-Georg Zimmermann



International Conference on Artificial Neural Networks | 2006

Learning long term dependencies with recurrent neural networks

Anton Maximilian Schäfer; Steffen Udluft; Hans Georg Zimmermann

Recurrent neural networks (RNNs) unfolded in time are in theory able to map any open dynamical system. Still, they are often claimed to be unable to identify long-term dependencies in the data. Especially when trained with backpropagation through time (BPTT), RNNs unfolded in time are said to fail to learn inter-temporal influences more than ten time steps apart. This paper provides a disproof of this often-cited statement. We show that RNNs, and especially normalised recurrent neural networks (NRNNs), unfolded in time are indeed very capable of learning time lags of at least a hundred time steps. We further demonstrate that the problem of a vanishing gradient does not apply to these networks.
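The vanishing-gradient issue discussed in the abstract can be illustrated numerically: for the recurrence s_t = tanh(W s_{t-1}), the sensitivity of the final state to an early state is a product of per-step Jacobians, and its norm decays with the spectral radius of W. The sketch below compares a contractive W with one rescaled to spectral radius one; treating that rescaling as a rough stand-in for the normalisation idea is an assumption, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps = 16, 100

def gradient_norms(W):
    """Norm of d s_T / d s_0 for the recurrence s_t = tanh(W s_{t-1})."""
    s = rng.normal(scale=0.1, size=n)
    J = np.eye(n)
    norms = []
    for _ in range(steps):
        s = np.tanh(W @ s)
        J = (np.diag(1 - s**2) @ W) @ J  # chain rule through one step
        norms.append(np.linalg.norm(J))
    return norms

W = rng.normal(scale=0.5, size=(n, n))
radius = np.max(np.abs(np.linalg.eigvals(W)))
small = W * (0.5 / radius)  # spectral radius 0.5: gradient vanishes
unit  = W * (1.0 / radius)  # spectral radius 1.0: gradient persists

norms_small = gradient_norms(small)
norms_unit = gradient_norms(unit)
print(norms_small[-1], norms_unit[-1])
```

After a hundred steps the contractive case has collapsed to numerically negligible gradients, while the rescaled case retains a usable magnitude, matching the abstract's claim about time lags of at least a hundred steps.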


International Conference on Artificial Neural Networks | 2006

A technical trading indicator based on dynamical consistent neural networks

Hans Georg Zimmermann; Lorenzo Bertolini; Ralph Grothmann; Anton Maximilian Schäfer; Christoph Tietz

In econometrics, the behaviour of financial markets is described by quantitative variables. Mathematical and statistical methods are used to explore economic relationships and to forecast future market development. However, econometric modeling is often limited to a single financial market. In the age of globalisation, financial markets are highly interrelated, and single-market analyses are therefore misleading. In this paper we present a new way to model the dynamics of coherent financial markets. Our approach is based on so-called dynamical consistent neural networks (DCNNs), which are able to map multiple scales and different sub-dynamics of the coherent market movement. Unlike standard econometric methods, small market movements are not treated as noise but as valuable market information. We apply the DCNN to forecast monthly movements of major foreign exchange (FX) rates. Based on the DCNN forecasts we develop a technical trading indicator to support investment decisions.
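The paper's DCNN model itself is not reproduced here. As a minimal sketch, a forecast-based trading indicator of the general kind described can be as simple as thresholding predicted returns into long, short, or flat positions; the forecast values, threshold, and function name below are illustrative assumptions, not the paper's method.

```python
# Hypothetical monthly FX return forecasts (illustrative numbers,
# not output of the paper's DCNN).
forecasts = [0.012, -0.004, 0.008, -0.015, 0.003]

def trading_signal(forecast, threshold=0.005):
    """Map a return forecast to a position: +1 long, -1 short, 0 flat.

    The threshold filters weak signals; its value is an assumption.
    """
    if forecast > threshold:
        return 1
    if forecast < -threshold:
        return -1
    return 0

signals = [trading_signal(f) for f in forecasts]
print(signals)  # [1, 0, 1, -1, 0]
```

A flat position for weak forecasts reflects the common design choice of trading only when the predicted movement exceeds expected noise and transaction costs.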


European Symposium on Artificial Neural Networks | 2008

Safe exploration for reinforcement learning

Alexander Hans; Daniel Schneegaß; Anton Maximilian Schäfer; Steffen Udluft


Archive | 2007

Method for the computer-assisted control and/or regulation of a technical system

Anton Maximilian Schäfer; Steffen Udluft


Archive | 2007

Method for computer aided control and regulation of technical system, involves carrying out characterization of dynamic behavior of technical systems multiple times by state and action of system

Anton Maximilian Schäfer; Steffen Udluft; Hans-Georg Zimmermann


Archive | 2007

Technical system e.g. gas turbine, controlling and/or regulating method, involves executing learning and/or optimizing procedure based on concealed states in state space to control and/or regulate system

Anton Maximilian Schäfer; Steffen Udluft


European Symposium on Artificial Neural Networks | 2007

The Recurrent Control Neural Network

Anton Maximilian Schäfer; Steffen Udluft; Hans-Georg Zimmermann


Lecture Notes in Computer Science | 2006

Learning Long Term Dependencies with Recurrent Neural Networks

Anton Maximilian Schäfer; Steffen Udluft; Hans Georg Zimmermann

Collaboration


Dive into Anton Maximilian Schäfer's collaborations.
