Hans Georg Zimmermann
Siemens
Publication
Featured research published by Hans Georg Zimmermann.
neural information processing systems | 1998
Ralph Neuneier; Hans Georg Zimmermann
The purpose of this paper is to give guidance in neural network modeling. Starting with the preprocessing of the data, we discuss different types of network architecture and show how these can be combined effectively. We analyze several cost functions to avoid unstable learning due to outliers and heteroscedasticity. The Observer-Observation Dilemma is solved by forcing the network to construct smooth approximation functions. Furthermore, we propose some pruning algorithms to optimize the network architecture. All these features and techniques are linked up to a complete and consistent training procedure (see figure 17.25 for an overview), such that the synergy of the methods is maximized.
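The paper's own cost functions are not reproduced here, but the idea of limiting the influence of outliers can be illustrated with a robust loss such as the Huber function in place of the plain squared error (a minimal sketch; the threshold delta is an assumed free parameter, not a value from the paper):

import numpy as np

def huber_loss(y_true, y_pred, delta=1.0):
    # Quadratic near zero, linear for large residuals, so single outliers
    # cannot dominate the gradient the way they do under a squared-error cost.
    residual = np.asarray(y_true) - np.asarray(y_pred)
    small = np.abs(residual) <= delta
    quadratic = 0.5 * residual ** 2
    linear = delta * (np.abs(residual) - 0.5 * delta)
    return float(np.where(small, quadratic, linear).mean())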
international conference on artificial neural networks | 2006
Anton Maximilian Schäfer; Hans Georg Zimmermann
Recurrent Neural Networks (RNN) have been developed for a better understanding and analysis of open dynamical systems. Still, the question often arises whether RNN are able to map every open dynamical system, which would be desirable for a broad spectrum of applications. In this article we give a proof of the universal approximation ability of RNN in state space model form and even extend it to Error Correction and Normalized Recurrent Neural Networks.
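For reference, the state space form referred to above can be written as

    s_t = f(A s_{t-1} + B x_t),    y_t = C s_t,

with external inputs x_t, internal state s_t, outputs y_t, weight matrices A, B, C and a sigmoidal transition function f such as tanh. This is a sketch of the standard formulation; the error correction and normalized variants to which the proof is extended are defined in the article itself.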
international symposium on neural networks | 1992
Ferdinand Hergert; William Finnoff; Hans Georg Zimmermann
Three methods are examined for reducing complexity in potentially oversized networks. These consist of removing redundant elements based on some measure of saliency, adding a further term to the cost function penalizing complexity, or observing the error on a further validation set of examples and stopping training as soon as this performance begins to deteriorate. It was demonstrated on a series of simulation examples that all of these methods can significantly improve generalization, but their performance can prove to be domain dependent.
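As a minimal sketch of the third strategy, stopped training on a validation set can be outlined as follows (train_one_epoch, validation_error and model.copy() are assumed placeholders, not interfaces from the paper):

def train_with_early_stopping(model, train_one_epoch, validation_error,
                              max_epochs=1000, patience=5):
    # Stop once the validation error has not improved for `patience`
    # consecutive epochs and return the best weights seen so far.
    best_error = float("inf")
    best_state = None
    epochs_without_improvement = 0
    for _ in range(max_epochs):
        train_one_epoch(model)
        error = validation_error(model)
        if error < best_error:
            best_error = error
            best_state = model.copy()
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # validation performance has begun to deteriorate
    return best_state if best_state is not None else model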
international symposium on neural networks | 2005
Hans Georg Zimmermann; Ralph Grothmann; A. M. Schäfer; C. Tietz
Recurrent neural networks are typically considered relatively simple architectures which come along with complicated learning algorithms. Most researchers focus on the improvement of these algorithms. Our approach is different: rather than focusing on learning and optimization algorithms, we concentrate on the design of the network architecture. As we will show, many difficulties in the modeling of dynamical systems can be solved with a pre-design of the network architecture. We will focus on large networks with the task of modeling complete high-dimensional systems (e.g. financial markets) instead of small sets of time series. Standard neural networks tend to overfit like any other statistical learning system. We will introduce a new recurrent neural network architecture in which overfitting and the associated loss of generalization abilities is not a major problem. We will enhance these networks by dynamical consistency.
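The dynamical consistency mentioned above can be sketched as follows (an outline under the assumption that the network's own expectations replace unavailable future inputs; the exact architecture is defined in the paper): in the observed past the state evolves as s_{t+1} = f(A s_t + B x_t), while in the forecast horizon the unknown external inputs x_t are replaced by the network's own expectations \hat{x}_t, i.e. s_{t+1} = f(A s_t + B \hat{x}_t), so that the model is iterated on an input dynamics that is consistent with its own forecasts.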
International Journal of Intelligent Systems in Accounting, Finance & Management | 2005
Hans Georg Zimmermann; Ralph Grothmann
This paper introduces a stock-picking algorithm that can be used to perform an optimal asset allocation for a large number of investment opportunities. The allocation scheme is based upon the idea of causal risk. Instead of referring to the volatility of the assets' time series, the stock-picking algorithm determines the risk exposure of the portfolio by considering the non-forecastability of the assets. The underlying expected return forecasts are based on time-delay recurrent error correction neural networks, which utilize the last model error as an auxiliary input to evaluate their own misspecification. We demonstrate the profitability of our stock-picking approach by constructing portfolios from 68 different assets of the German stock market. It turns out that our approach is superior to a preset benchmark portfolio.
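The error correction mechanism mentioned above can be sketched by the state space equations

    s_t = tanh(A s_{t-1} + B x_t + D (C s_{t-1} - y^d_{t-1})),    y_t = C s_t,

where y^d_{t-1} is the observed target, so the previous model error (C s_{t-1} - y^d_{t-1}) enters the state transition as an auxiliary input. This is a sketch of the generic error correction neural network form; the concrete time-delay structure used for the return forecasts is specified in the paper.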
international conference on acoustics, speech, and signal processing | 2002
Çaglayan Erdem; Hans Georg Zimmermann
The analysis and selection of input features within machine learning techniques is an important problem whenever a new system has to be established or an existing system has to be trained for a new task. Within a text-to-speech (TTS) application this task has to be handled when adapting a system to a new language or a new speaker.
Archive | 1998
Rudolf Kruse; Stefan Siekmann; Ralph Neuneier; Hans Georg Zimmermann
We present an extended neuro-fuzzy system, combined with a semantic-preserving backpropagation-based learning algorithm, which makes effective use of prior knowledge and of historical data. After the network is initialized with a set of rules, the learning algorithm optimizes the rule base without destroying the initial semantics. Due to a sparse initial network structure, the effective number of parameters is small, which prevents the network from overfitting.
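A minimal sketch of the kind of rule-based forward pass such a neuro-fuzzy system computes is given below, assuming Gaussian membership functions and Takagi-Sugeno-style rule conclusions; the semantic-preserving constraints of the learning algorithm are not shown:

import numpy as np

def neuro_fuzzy_forward(x, centers, widths, conclusions):
    # x: input vector of shape (n_inputs,)
    # centers, widths: membership parameters of shape (n_rules, n_inputs)
    # conclusions: rule outputs of shape (n_rules,)
    membership = np.exp(-0.5 * ((x - centers) / widths) ** 2)  # fuzzification
    firing = membership.prod(axis=1)                           # rule AND via product
    return float(firing @ conclusions / firing.sum())          # normalized defuzzification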
international conference on artificial neural networks | 1992
Ferdinand Hergert; Hans Georg Zimmermann; U. Kramer; William Finnoff
Using a variety of simulation examples we demonstrate that naive comparisons of learning methods (often found in the literature) are highly unreliable. It is apparent from our results that in many cases the generalization performance of a training method is highly domain dependent and that significance testing is essential to derive meaningful performance comparisons.
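The kind of significance testing meant here can be illustrated with a paired test over per-run generalization errors (a sketch with assumed illustrative numbers, not results from the paper):

import numpy as np
from scipy.stats import wilcoxon

# Test errors of two training methods, paired per simulated data set
errors_method_a = np.array([0.21, 0.19, 0.25, 0.22, 0.18, 0.24, 0.20, 0.23])
errors_method_b = np.array([0.20, 0.21, 0.24, 0.21, 0.19, 0.25, 0.19, 0.22])

statistic, p_value = wilcoxon(errors_method_a, errors_method_b)
print(f"p-value = {p_value:.3f}")  # a large p-value means no reliable difference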
international conference on artificial neural networks | 2006
Anton Maximilian Schäfer; Steffen Udluft; Hans Georg Zimmermann
Recurrent neural networks (RNNs) unfolded in time are in theory able to map any open dynamical system. Still, they are often claimed to be unable to identify long-term dependencies in the data. Especially when they are trained with backpropagation through time (BPTT), it is claimed that RNNs unfolded in time fail to learn inter-temporal influences more than ten time steps apart. This paper provides a disproof of this often-cited statement. We show that RNNs, and especially normalised recurrent neural networks (NRNNs), unfolded in time are indeed very capable of learning time lags of at least a hundred time steps. We further demonstrate that the problem of a vanishing gradient does not apply to these networks.
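The vanishing gradient argument addressed here can be summarized in one line (a standard sketch, not the paper's own derivation): for a state transition s_t = f(A s_{t-1} + B x_t), backpropagation through time carries the error over a lag of \tau steps via the product

    \partial s_t / \partial s_{t-\tau} = \prod_{k=1}^{\tau} diag(f'(\cdot)) A,

whose norm shrinks or grows exponentially in \tau whenever its factors are uniformly smaller or larger than one. The claim of the paper is that, for the unfolded and normalised architectures considered, this decay does not prevent learning lags of a hundred time steps.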
Archive | 2002
Hans Georg Zimmermann; Ralph Grothmann; Ralph Neuneier
A market is basically driven by a superposition of agents' decisions. The price dynamics result from the excess demand/supply created on the micro level. The behavior of a few agents is well understood by game theory. In the case of a large number of agents, one may refer to the assumption of an atomic market structure, which allows the aggregation of agents by statistics. We can omit the latter assumption if we model the market by a multi-agent approach.
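In its simplest form, the price formation described above can be written as

    p_{t+1} - p_t = c \sum_a d_a(t),

where d_a(t) is the demand (positive) or supply (negative) of agent a at time t and c is an assumed market depth constant, so the price moves in proportion to the aggregate excess demand on the micro level. This is a sketch of the mechanism only; the agents' decision models themselves are not reproduced here.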