Publication


Featured research published by Oliver Bänfer.


IEEE Transactions on Fuzzy Systems | 2011

Supervised Hierarchical Clustering in Fuzzy Model Identification

Benjamin Hartmann; Oliver Bänfer; Oliver Nelles; Anton Sodja; Luka Teslić; Igor Škrjanc

This paper presents a new supervised hierarchical clustering algorithm (SUHICLUST) for fuzzy model identification. The presented algorithm solves the problem of global model accuracy together with the interpretability of local models as valid linearizations of the modeled nonlinear system. It combines the advantages of supervised hierarchical algorithms, which are based on heuristic tree construction, with the advantages of fuzzy product space clustering. The high flexibility of the validity functions obtained by fuzzy clustering, combined with supervised learning, results in an efficient partitioning algorithm that is independent of initialization and yields a parsimonious fuzzy model. Furthermore, SUHICLUST is undemanding to use because, in contrast with many other methods, it delivers reproducible results. To obtain reasonable results, the user only has to set a termination criterion: either a maximum number of local models or a maximum allowed global model error. For fine-tuning, the interpolation smoothness controls the degree of regularization. The performance is illustrated on both analytical examples and benchmark problems from the literature.
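For orientation, the sketch below shows the kind of model such an algorithm produces: a local model network whose local affine models are blended by normalized validity functions. The Gaussian validity functions and all variable names here are illustrative assumptions; SUHICLUST's actual clustering and splitting logic is not reproduced.

```python
# Minimal sketch of a local model network prediction, assuming Gaussian
# validity functions; the SUHICLUST training procedure itself is omitted.
import numpy as np

def network_output(x, centers, sigmas, thetas):
    """Weighted sum of local affine models.

    x       : (n_in,) input point
    centers : (M, n_in) centers of the M validity functions
    sigmas  : (M, n_in) widths of the validity functions
    thetas  : (M, n_in + 1) affine parameters [offset, slopes] per local model
    """
    # Unnormalized Gaussian memberships mu_i(x)
    mu = np.exp(-0.5 * np.sum(((x - centers) / sigmas) ** 2, axis=1))
    phi = mu / mu.sum()                       # normalized validity functions
    y_loc = thetas[:, 0] + thetas[:, 1:] @ x  # local affine model outputs
    return phi @ y_loc                        # global model: fuzzy interpolation
```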


artificial intelligence and computational intelligence | 2010

Multilayer perceptron network with modified sigmoid activation functions

Tobias Ebert; Oliver Bänfer; Oliver Nelles

Models in today's microcontrollers, e.g. engine control units, are realized with a multitude of characteristic curves and look-up tables. The increasing complexity of these models causes an exponential growth of the required calibration memory. Hence, neural networks, e.g. multilayer perceptron (MLP) networks, which provide a solution to this problem, become more important for modeling. Usually, sigmoid functions are used as activation functions, and computing the exponential function they require is very demanding on low-performance microcontrollers. Thus, in this paper a modified activation function for the efficient implementation of MLP networks is proposed. Its advantages compared to standard look-up tables are illustrated by application to an intake manifold model of a combustion engine.
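The paper's exact modified activation function is not reproduced here. The sketch below contrasts the standard logistic sigmoid with a common exponential-free stand-in (the so-called fast sigmoid), which needs only addition, absolute value, and division and is therefore cheap on low-performance microcontrollers.

```python
import numpy as np

def sigmoid(x):
    """Standard logistic function: needs exp(), costly on small MCUs."""
    return 1.0 / (1.0 + np.exp(-x))

def fast_sigmoid(x):
    """exp-free sigmoid-like activation (a stand-in, not the paper's exact
    modified function): uses only add, abs and divide, all of which are
    cheap in fixed-point arithmetic on a microcontroller."""
    return 0.5 * (x / (1.0 + np.abs(x)) + 1.0)

x = np.linspace(-6, 6, 7)
print(np.max(np.abs(sigmoid(x) - fast_sigmoid(x))))  # rough approximation error
```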


international conference on control and automation | 2009

Polynomial model tree (POLYMOT) — A new training algorithm for local model networks with higher degree polynomials

Oliver Bänfer; Oliver Nelles

A new training algorithm for nonlinear system identification with local models of higher polynomial degree is presented in this paper. Usually, the local models are linearly parameterized, and those parameters are typically estimated by some least squares approach. For the utilization of higher degree polynomials, this procedure is no longer feasible since the number of parameters grows rapidly with the number of physical inputs and the polynomial degree. Thus, a new learning strategy based on stepwise regression is developed to estimate only the most significant parameters. The included partitioning algorithm, which is based on the LOLIMOT algorithm, decides in each step between increasing the number of parameters of the worst local model and splitting this model to create two new ones. A comparison of LOLIMOT- and POLYMOT-trained exhaust emission models shows the benefits of the proposed new learning strategy.
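As a rough illustration of the selection step, the following sketch implements plain forward stepwise regression over a regressor matrix. It is a simplified stand-in (purely greedy, no removal step, no statistical stopping test) and omits POLYMOT's partitioning entirely.

```python
import numpy as np

def forward_stepwise(X, y, max_terms):
    """Greedy forward selection: in each step, add the regressor column
    (e.g. a polynomial term) that most reduces the residual sum of squares."""
    selected = []
    for _ in range(max_terms):
        best_j, best_rss = None, np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            cols = X[:, selected + [j]]
            theta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = np.sum((y - cols @ theta) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
    theta, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
    return selected, theta  # indices of significant terms and their estimates
```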


international conference on control, automation and systems | 2010

Adaptive local model networks with higher degree polynomials

Oliver Bänfer; Marlon Franke; Oliver Nelles

A new adaptation method for local model networks with higher degree polynomials, trained by the polynomial model tree (POLYMOT) algorithm, is presented in this paper. Usually, the local models are linearly parameterized, and those parameters are typically adapted by a recursive least squares approach. For the utilization of higher degree polynomials, a subset selection method, which is part of the POLYMOT algorithm, selects and estimates the most significant parameters from a huge parameter matrix. This matrix contains one parameter w_{i,nx} for each input u_{lp} up to the maximal polynomial degree and for all combinations of the inputs; it is created during the training procedure of the local model network. For the online adaptation of the trained local model network, only the selected parameters should be used; otherwise, the local model network would be too flexible and the idea of subset selection would be lost. Therefore, the presented adaptation method first generates a new parameter matrix containing only the selected, most significant parameters, which are unequal to zero. Afterwards, the local model parameters can be adapted with a standard recursive least squares method.
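A minimal sketch of the adaptation idea follows, assuming the indices of the selected parameters are known from training. The class name and the initialization of the inverse correlation matrix are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class SubsetRLS:
    """Recursive least squares over only the selected (nonzero) parameters.
    The regressor vector is reduced to the selected entries, so adaptation
    cannot re-activate terms that subset selection discarded."""

    def __init__(self, theta0, selected, lam=0.99):
        self.idx = np.asarray(selected)        # indices of selected parameters
        self.theta = theta0[self.idx].astype(float)
        self.P = 1e3 * np.eye(len(self.idx))   # inverse correlation matrix
        self.lam = lam                         # forgetting factor

    def update(self, x_full, y):
        x = x_full[self.idx]                             # reduced regressor
        k = self.P @ x / (self.lam + x @ self.P @ x)     # gain vector
        self.theta += k * (y - x @ self.theta)           # parameter update
        self.P = (self.P - np.outer(k, x @ self.P)) / self.lam
        return self.theta
```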


IFAC Proceedings Volumes | 2012

POLYMOT versus HILOMOT - A Comparison of Two Different Training Algorithms for Local Model Networks

Oliver Bänfer; Benjamin Hartmann; Oliver Nelles

A comparison of the POLYnomial MOdel Tree (POLYMOT) and the HIerarchical LOcal MOdel Tree (HILOMOT) algorithms for the construction of local model networks is presented in this paper. A comprehensive benchmark study with different two-dimensional test functions as well as four popular measured datasets demonstrates the robustness of both algorithms against noise and overfitting. The larger number of axis-oblique local linear models used by HILOMOT is often compensated by a smaller number of more complex axis-orthogonal local polynomial models used by POLYMOT, so that both methods achieve the same model quality. However, for high-dimensional input spaces, HILOMOT demonstrates the advantage of its axis-oblique partitioning.
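The contrast between the two partitioning styles can be sketched as follows; the crisp threshold and the sigmoidal split direction are illustrative stand-ins for the LOLIMOT/POLYMOT and HILOMOT constructions, respectively.

```python
import numpy as np

def axis_orthogonal_split(x, dim, threshold):
    """POLYMOT/LOLIMOT-style: crisp halving along one input axis."""
    return x[:, dim] > threshold

def axis_oblique_split(x, w, b, kappa=10.0):
    """HILOMOT-style: smooth sigmoidal split along direction w, i.e. the
    boundary w @ x + b = 0 may lie oblique to the coordinate axes."""
    return 1.0 / (1.0 + np.exp(-kappa * (x @ w + b)))

x = np.random.rand(5, 2)
print(axis_orthogonal_split(x, dim=0, threshold=0.5))
print(axis_oblique_split(x, w=np.array([1.0, -1.0]), b=0.0))
```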


artificial intelligence and computational intelligence | 2009

Local Model Networks with Modified Parabolic Membership Functions

Oliver Bänfer; Oliver Nelles; Josef Kainz; Johannes Beer

Models in today's microcontrollers, e.g. engine control units, are realized with a multitude of characteristic curves and look-up tables. The increasing complexity of these models causes an exponential growth of the required calibration memory. Hence, neural networks, e.g. local model networks, which provide a solution to this problem, become more important for modeling. Usually, Gaussians are used as membership functions, and computing the exponential function they require is very demanding on low-performance microcontrollers. Thus, in this paper a modified membership function for the efficient implementation of local model networks is proposed. Its advantages compared to standard local model networks and to look-up tables are illustrated by application to an intake manifold model of a combustion engine.
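The paper's exact modified parabolic function is not reproduced here. The sketch below contrasts a Gaussian membership function with a plausible exponential-free stand-in, a clipped inverted parabola that needs only multiplication and subtraction.

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Standard Gaussian membership function: requires exp()."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def parabolic_mf(x, c, width):
    """exp-free parabolic membership function (a stand-in for the paper's
    modified parabola): 1 - ((x - c) / width)^2 clipped at zero, so it uses
    only multiplication and subtraction, cheap in fixed point."""
    return np.maximum(0.0, 1.0 - ((x - c) / width) ** 2)

x = np.linspace(-3, 3, 7)
print(gaussian_mf(x, 0.0, 1.0))
print(parabolic_mf(x, 0.0, 2.0))
```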


american control conference | 2011

Multi-step-ahead optimal learning strategy for local model networks with higher degree polynomials

Oliver Bänfer; Geritt Kampmann; Oliver Nelles

The idea of a learning strategy extension for nonlinear system identification with local polynomial model networks is presented in this paper. Usually, the polynomial model tree (POLYMOT) algorithm utilizes a one-step-ahead optimal learning strategy. A demonstration example shows that this greedy behavior is not the best choice for reaching a satisfying global model. Thus, the strategy should be extended to multi-step-ahead optimal learning, which makes it possible to find the optimal global model in a special case.
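A minimal sketch of the difference between greedy and two-step-ahead selection follows, abstracted away from the actual tree construction; all helper callables are hypothetical placeholders.

```python
# Sketch of one-step-ahead (greedy) vs. two-step-ahead split selection,
# abstracting POLYMOT's tree construction. candidate_actions(model),
# apply(model, a) and loss(model) are hypothetical placeholders.
def greedy_step(model, candidate_actions, apply, loss):
    # Pick the action that is best after a single step.
    return min(candidate_actions(model), key=lambda a: loss(apply(model, a)))

def two_step_lookahead(model, candidate_actions, apply, loss):
    # Pick the action whose best follow-up action yields the lowest loss;
    # costlier, but avoids some dead ends the greedy strategy runs into.
    def best_followup(m):
        return min((loss(apply(m, a)) for a in candidate_actions(m)),
                   default=loss(m))
    return min(candidate_actions(model),
               key=lambda a: best_followup(apply(model, a)))
```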


international conference on control, automation, robotics and vision | 2010

Comparison of different subset selection algorithms for learning local model networks with higher degree polynomials

Oliver Bänfer; Benjamin Hartmann; Oliver Nelles

A comparison of three different subset selection methods in combination with a new learning algorithm for nonlinear system identification with local models of higher polynomial degree is presented in this paper. Usually, the local models are linearly parameterized, and those parameters are typically estimated by some least squares approach. For the utilization of higher degree polynomials, this procedure is no longer feasible since the number of parameters grows rapidly with the number of physical inputs and the polynomial degree. Thus, a new learning strategy based on subset selection methods is developed to estimate only the most significant parameters. A forward selection method with orthogonal least squares, stepwise regression, and least angle regression are used for training different neural networks. A comparison of the trained networks shows the benefits of each subset selection method.
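Of the three methods, least angle regression is readily available off the shelf; a minimal sketch with scikit-learn's Lars on synthetic data is shown below (the data and the stopping choice are illustrative assumptions). A plain forward selection is sketched earlier, after the 2009 POLYMOT abstract.

```python
import numpy as np
from sklearn.linear_model import Lars

# Synthetic sparse regression problem: only regressors 2 and 7 matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 2] - 2.0 * X[:, 7] + 0.1 * rng.normal(size=200)

# Least angle regression, stopped after 2 nonzero coefficients.
lars = Lars(n_nonzero_coefs=2).fit(X, y)
print(np.nonzero(lars.coef_)[0])  # indices of the selected regressors
```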


international conference on systems | 2009

Learning Strategies for Local Model Networks with Higher Degree Polynomials

Oliver Bänfer; Oliver Nelles

In this paper, a new algorithm for nonlinear system identification with local models of higher polynomial degree is proposed. Usually, the local models are linearly parameterized, and those parameters are typically estimated by some least squares approach. For the utilization of higher degree polynomials, this procedure is no longer feasible since the number of parameters grows exponentially with the number of physical inputs. Thus, a new learning strategy based on stepwise regression is developed to estimate only the most significant parameters. The included partitioning algorithm decides in each step between increasing the number of parameters of the worst local model and splitting this model to create two new ones, as sketched below. Its advantages are illustrated by a demonstration example.
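The grow-versus-split decision can be sketched as follows. The helper functions are hypothetical placeholders for the corresponding POLYMOT operations, and the acceptance rule (keep whichever candidate lowers the global error more) is an assumption, not the paper's exact criterion.

```python
# Sketch of the grow-vs-split loop described above; all helpers are
# hypothetical placeholders for the corresponding POLYMOT operations.
def train(model, max_models, worst_local_model, add_term, split, global_error):
    while model.n_local_models < max_models:
        worst = worst_local_model(model)
        grown = add_term(model, worst)   # more polynomial parameters
        parted = split(model, worst)     # replace worst model by two new ones
        # Keep whichever candidate lowers the global model error more.
        model = min((grown, parted), key=global_error)
    return model
```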


Archive | 2008

Local model networks

Oliver Nelles; Oliver Bänfer; Josef Kainz; Johannes Beer

A multitude of functions in modern electronic control units (ECUs) uses characteristic curves and look-up tables (LUTs) for modeling. The increasing complexity of these models causes an exponential growth of the required calibration memory. Hence, this way of modeling with LUTs is no longer feasible. For this reason, the University of Siegen, in cooperation with Continental Automotive GmbH, developed local model networks in fixed-point arithmetic for series ECUs, which provide a solution to this problem.
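As an illustration of the fixed-point constraint, the sketch below evaluates a parabolic membership function in Q15 integer arithmetic. The Q-format, the scaling, and the parabolic form are assumptions for illustration, not Continental's actual series code.

```python
# Illustrative Q15 fixed-point evaluation of a parabolic membership
# function, mimicking series-ECU integer arithmetic.
Q = 15                      # Q15: value = integer / 2**15, range [-1, 1)
ONE = 1 << Q

def q_mul(a, b):
    return (a * b) >> Q     # fixed-point multiply with rescaling

def parabolic_mf_q15(x_q, c_q, inv_width_q):
    """max(0, 1 - ((x - c) * inv_width)^2), all in Q15 integers."""
    d = q_mul(x_q - c_q, inv_width_q)
    return max(0, ONE - q_mul(d, d))

# x = 0.25, center = 0.0, 1/width = 0.5  ->  1 - 0.125^2 = 0.984375
print(parabolic_mf_q15(int(0.25 * ONE), 0, int(0.5 * ONE)) / ONE)
```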

Collaboration


Dive into Oliver Bänfer's collaborations.

Top Co-Authors

Johannes Beer, Continental Automotive Systems
Josef Kainz, Continental Automotive Systems
Anton Sodja, University of Ljubljana
Luka Teslić, University of Ljubljana