
Publications


Featured research published by Andreas Buschermöhle.


Systems, Man and Cybernetics | 2010

A generic concept to increase the robustness of embedded systems by trust management

Werner Brockmann; Andreas Buschermöhle; Jens Hülsmann

Embedded systems permeate increasingly complex, unstructured and non-stationary environments, e.g. car driver-assistance systems and mobile robots. The demands on their dependability hence rise, especially in case of human interaction. But as the complexity increases, several sources of uncertainty are likely to grow as well, up to an unacceptable level. Reasons are, for instance, noise, vagueness and ambiguity of information about the environment, as well as disturbances and anomalies in the interaction with it. Because many operations are safety-critical, this paper presents trust management as a generic approach for addressing these uncertainties. It builds on explicitly modeling the uncertainty of the information about the state of the system and about the interaction with its environment by so-called trust signals. These are propagated and processed throughout the whole embedded system in order to switch from a high-performance behavior in case of certain, hence trustworthy, information to a robust behavior with less performance in case of uncertain information. The trust management approach works without a formal model of the application and is hence easy to use. Its application is outlined by a simple sensor fusion example.
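To make the idea concrete, the following minimal Python sketch illustrates trust-weighted sensor fusion of the kind the abstract hints at. The function names and the specific weighting scheme are illustrative assumptions, not the paper's actual trust management architecture.

```python
# Minimal sketch of trust-weighted sensor fusion, assuming trust signals in [0, 1].
# Names (fuse_readings, choose_speed) are hypothetical, not from the paper.

def fuse_readings(readings, trusts):
    """Fuse sensor readings by their trust signals (trust-weighted average)."""
    total = sum(trusts)
    if total == 0.0:
        return None, 0.0            # no trustworthy information at all
    fused = sum(r * t for r, t in zip(readings, trusts)) / total
    fused_trust = max(trusts)       # one simple way to propagate a trust signal
    return fused, fused_trust

def choose_speed(fused_trust, fast=1.0, safe=0.3):
    """Blend between high-performance and robust behavior depending on trust."""
    return fused_trust * fast + (1.0 - fused_trust) * safe

distance, trust = fuse_readings([2.1, 2.4], [0.9, 0.4])
speed = choose_speed(trust)
```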


Computational Intelligence for Modelling, Control and Automation | 2008

Stable Classification in Environments with Varying Degrees of Uncertainty

Andreas Buschermöhle; Nils Rosemann; Werner Brockmann

Most practical signal processing problems have to deal with uncertainties, e.g., due to noisy input data. Usual strategies are based on estimating these uncertainties in advance by statistical methods. For some systems with multi-staged signal processing it is possible to identify these estimates at runtime and to relate a degree of certainty to them. If such degrees of certainty are known for input signals, e.g. from earlier stages of processing, this knowledge can be used to obtain a more robust or accurate result in classification tasks in the later stages, even if the certainties vary at runtime. In this paper we thus introduce an approach to extend support vector machines to incorporate such known uncertainties at runtime, given as certainty degrees. Based on the known certainty of each input, classification depends more on certain inputs and gradually less on uncertain input data. This is done by changing the decision (kernel) function online, i.e., during operation. An artificial two-dimensional dataset is used to visualize the effects of this extension, and the application to three different datasets serves as a first benchmark showing that the resulting classification quality increases when known uncertainties are considered.
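As an illustration of a certainty-dependent kernel, the sketch below scales each input dimension by its certainty degree inside an RBF kernel, so uncertain dimensions contribute less to the distance and hence to the decision function. The weighting scheme and names are assumptions for illustration; the paper's exact kernel modification may differ.

```python
import numpy as np

# Illustrative sketch only: down-weight uncertain input dimensions inside an
# RBF kernel so the classifier relies less on them at runtime.

def certainty_rbf(x, z, certainty, gamma=1.0):
    """RBF kernel over inputs x, z with per-dimension certainty in [0, 1]."""
    w = np.asarray(certainty, dtype=float)   # 1.0 = fully certain, 0.0 = ignore
    d = (x - z) * w                          # uncertain dimensions fade out
    return np.exp(-gamma * np.dot(d, d))

x = np.array([0.2, 1.5])
z = np.array([0.3, -0.8])
print(certainty_rbf(x, z, certainty=[1.0, 0.1]))  # second feature barely matters
```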


Systems, Man and Cybernetics | 2013

The Incremental Risk Functional: Basics of a Novel Incremental Learning Approach

Andreas Buschermöhle; Jan H. Schoenke; Nils Rosemann; Werner Brockmann

Incremental learning is receiving increasing attention in research and practice, as it offers continuous adaptation and handles big data with low computation and memory demands at the same time. Several approaches have been proposed recently for online learning, but little work has been done on the influence of the approximation structure. Hence, we introduce the incremental risk functional, which directly incorporates knowledge about the approximation structure into its parameter update. As an example, we apply this approach to regression estimation with linear-in-parameters approximators. We show that the resulting learning algorithm converges and changes the global functional behavior only as little as necessary with every learning step, thus resulting in a stable incremental learning approach.
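The flavor of such a structure-aware incremental update can be sketched for a linear-in-parameters model y ≈ w · phi(x): incorporate the new example while changing the parameter vector, and hence the global function, as little as possible. The least-norm correction below is only an illustrative stand-in under that assumption, not the paper's incremental risk functional.

```python
import numpy as np

# Sketch of a minimal-change incremental update for a linear-in-parameters
# model y ≈ w·phi(x). Assumption: each step fits the new example exactly while
# changing w as little as possible (least-norm correction).

def basis(x):
    """Hypothetical fixed basis functions (here: polynomial features)."""
    return np.array([1.0, x, x**2])

def incremental_update(w, x, y):
    phi = basis(x)
    error = y - w @ phi
    return w + error * phi / (phi @ phi)   # smallest ||delta w|| that removes the error

w = np.zeros(3)
for x, y in [(0.0, 1.0), (1.0, 2.0), (2.0, 5.0)]:
    w = incremental_update(w, x, y)
print(w @ basis(1.5))
```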


International Conference on Machine Learning and Applications | 2013

Stable On-Line Learning with Optimized Local Learning, But Minimal Change of the Global Output

Andreas Buschermöhle; Werner Brockmann

This work presents a novel approach to on-line learning regression. The well-known risk functional is formulated in an incremental manner that is aggressive in incorporating a new example locally as much as possible, and at the same time passive in the sense that the overall output is changed as little as possible. To achieve this localized learning, knowledge about the model structure of the approximator is utilized to steer the adaptation of the parameter vector. We present a continuously adapting first-order learning algorithm that is stable even for complex model structures and low data densities. Additionally, we present an approach to extend this algorithm to a second-order version with greater robustness but lower flexibility. Both algorithms are compared to state-of-the-art methods on synthetic data as well as on benchmark datasets to show the benefits of the new approach.


Organic Computing | 2011

Trust Management—Handling Uncertainties in Embedded Systems

Werner Brockmann; Andreas Buschermöhle; Jens Hülsmann; Nils Rosemann

This article summarises the current status of the Trust-Management project and gives an outlook on further research.


International Conference on Information Processing | 2012

Uncertainty and Trust Estimation in Incrementally Learning Function Approximation

Andreas Buschermöhle; Jan H. Schoenke; Werner Brockmann

Incremental learning is becoming increasingly important to cope with systems of high complexity or to adapt to changing environmental conditions. But to assure safety, the process of incremental learning must be supervised so that no knowledge learned incorrectly or under uncertain conditions influences the system in a counterproductive way. Hence we consider two principles to estimate different kinds of uncertainty, or in other words the trustworthiness, of an incrementally learning system. They are investigated primarily for a simplified scenario that explicitly covers all the different kinds of uncertainty. Finally, a combined measure reflecting all uncertainties of an incrementally learning system is presented.


Conference of the European Society for Fuzzy Logic and Technology | 2011

Incorporating Dynamic Uncertainties into a Fuzzy Classifier

Jens Hülsmann; Andreas Buschermöhle; Werner Brockmann

Classification problems in practice often have to cope with uncertain information, either in the training phase, in the operation phase, or both. Modeling these uncertainties makes it possible to enhance the robustness or performance of the classifier. In this paper we focus on the operation phase and present a general but simple extension of rule-based fuzzy classifiers to do so. To this end, uncertain features are gradually and dimension-wise faded out of the classification process. An artificial two-dimensional dataset is used to visualize the effectiveness of this approach, and investigations on three benchmark datasets show the performance and the gain in robustness.
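A minimal sketch of this fading idea, assuming a product t-norm whose neutral element is 1: blending an uncertain dimension's membership degree toward 1 removes that dimension's influence on the rule activation. Names and numbers are made up for illustration and do not reproduce the paper's classifier.

```python
# Sketch of fading an uncertain feature out of a rule-based fuzzy classifier.
# Assumption: with a product t-norm, blending a membership degree toward 1.0
# (the neutral element) removes that dimension's influence.

def faded_membership(mu, certainty):
    """Blend membership toward the neutral element as certainty drops."""
    return certainty * mu + (1.0 - certainty) * 1.0

def rule_activation(memberships, certainties):
    """Product t-norm over per-dimension memberships after fading."""
    act = 1.0
    for mu, c in zip(memberships, certainties):
        act *= faded_membership(mu, c)
    return act

# Second feature is uncertain, so its low membership hardly affects the rule.
print(rule_activation([0.8, 0.1], certainties=[1.0, 0.2]))
```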


Scalable Uncertainty Management | 2012

A structured view on sources of uncertainty in supervised learning

Andreas Buschermöhle; Jens Hülsmann; Werner Brockmann

In supervised learning, different sources of uncertainty influence the resulting functional behavior of the learning system, which increases the risk of misbehavior. Still, a learning system is often the only way to handle complex systems and large data sets. Hence it is important to consider the sources of uncertainty and to tackle them as far as possible. In this paper we categorize the sources of uncertainty and give a brief overview of uncertainty handling in supervised learning.


Evolving Systems | 2015

On-line learning with minimized change of the global mapping

Andreas Buschermöhle; Werner Brockmann

On-line learning regression has been extensively studied, as it offers continuous adaptation to nonstationary environments, handles big data, and requires only a fixed, low amount of computation and memory. Most research deals with direct linear regression, but the influence of a nonlinear transformation of the inputs through a fixed model structure is still an open problem. We present an on-line learning approach which is able to deal with all kinds of nonlinear model structures. Its emphasis is on minimizing the effect of local training examples on changes of the global mapping. Thus it yields a robust behavior by preventing overfitting on sparse data as well as fatal forgetting. This paper presents a first-order version, called the incremental risk minimization algorithm (IRMA), in detail. It then extends this approach to a second-order version of IRMA, which continuously adapts the learning process itself to the data at hand. For both versions it is proven that every learning step minimizes the worst-case loss. We finally demonstrate the effectiveness by a series of experiments with synthetic and real data sets.
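For contrast with a plain first-order correction, a second-order on-line learner for a linear-in-parameters model typically maintains a matrix that scales the correction per parameter. The standard recursive least squares sketch below illustrates that general idea only; it is not the second-order IRMA update from the paper.

```python
import numpy as np

# Illustrative second-order on-line update (standard recursive least squares),
# shown only to contrast first-order corrections with updates that maintain a
# per-parameter scaling matrix. This is NOT the paper's second-order IRMA.

def rls_step(w, P, phi, y, lam=0.99):
    """One RLS step with forgetting factor lam for the model y ≈ w·phi."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)          # gain vector
    w = w + k * (y - w @ phi)              # correction scaled per parameter
    P = (P - np.outer(k, Pphi)) / lam      # update the inverse-covariance proxy
    return w, P

dim = 3
w, P = np.zeros(dim), np.eye(dim) * 100.0
for x, y in [(0.0, 1.0), (1.0, 2.0), (2.0, 5.0)]:
    phi = np.array([1.0, x, x**2])
    w, P = rls_step(w, P, phi, y)
```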


IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS) | 2014

Reliable localized on-line learning in non-stationary environments

Andreas Buschermöhle; Werner Brockmann

On-line learning allows a system to adapt to changing, nonstationary environments. But typically, with on-line learning a hypothesis of the data relation is adapted based on a stream of single local training examples, continuously changing the global input-output relation. Hence, with these single examples the whole hypothesis is revised incrementally, which might be harmful to the overall predictive quality of the learned model. Nevertheless, for a reliable adaptation, the learned model must yield good predictions in every step. Therefore, the IRMA approach to on-line learning enables an adaptation that reliably incorporates a new example with a stringent local, but minimal global, influence on the input-output relation. The main contribution of this paper is twofold. First, it presents an extension of IRMA regarding the setup of the stiffness, i.e. its hyper-parameter. Second, the IRMA approach is investigated for the first time on a non-trivial real-world application in a non-stationary environment. It is compared with state-of-the-art algorithms on predicting future electric loads in a power grid, where a continuous adaptation is necessary to adapt to seasonal and weather conditions. The results show that the performance is increased significantly by IRMA.

Collaboration


Dive into Andreas Buschermöhle's collaborations.

Top Co-Authors

Jens Hülsmann

University of Osnabrück

Nils Rosemann

University of Osnabrück
