Matthias Steinbrecher
Otto-von-Guericke University Magdeburg
Publications
Featured research published by Matthias Steinbrecher.
Archive | 2016
Rudolf Kruse; Christian Borgelt; Christian Braune; Sanaz Mostaghim; Matthias Steinbrecher
(Artificial) neural networks are information processing systems whose structure and operating principles are inspired by the nervous system and the brain of animals and humans. They consist of a large number of fairly simple units, the so-called neurons, which work in parallel. These neurons communicate by sending information to each other, in the form of activation signals, along directed connections.
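A minimal sketch of a single such neuron, assuming a weighted-sum input and a logistic activation function (both common textbook choices, not fixed by the text above):

```python
# Minimal sketch of one artificial neuron: weighted-sum input, logistic activation.
import math

def neuron_output(inputs, weights, bias):
    """Compute the activation of one unit from its incoming signals."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum of inputs
    return 1.0 / (1.0 + math.exp(-net))                       # logistic activation

# Example: three incoming activation signals along directed connections
print(neuron_output([0.5, -1.0, 2.0], [0.8, 0.2, -0.4], bias=0.1))
```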
Archive | 2013
Rudolf Kruse; Christian Borgelt; Frank Klawonn; Christian Moewes; Matthias Steinbrecher; Pascal Held
Having described the structure, the operation, and the training of (artificial) neural networks in a general fashion in the preceding chapter, we turn in this and the subsequent chapters to specific forms of (artificial) neural networks. We start with the best-known and most widely used form, the so-called multi-layer perceptron (MLP), which is closely related to the networks of threshold logic units we studied in a previous chapter. Multi-layer perceptrons exhibit a strictly layered structure and may employ activation functions other than a step at a crisp threshold.
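A minimal sketch of a forward pass through a small MLP using a sigmoid activation in place of a crisp threshold step; the layer sizes and weights are illustrative assumptions:

```python
# Sketch of a strictly layered MLP forward pass with sigmoid activations.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums followed by the activation."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Two inputs -> two hidden units -> one output unit (weights are made up)
hidden = layer([1.0, 0.0], [[0.5, -0.3], [0.8, 0.2]], [0.1, -0.1])
output = layer(hidden, [[1.2, -0.7]], [0.05])
print(output)
```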
Archive | 2013
Rudolf Kruse; Christian Borgelt; Frank Klawonn; Christian Moewes; Matthias Steinbrecher; Pascal Held
We have already discussed how set-theoretic operations like intersection, union, and complement can be generalized to fuzzy sets. This chapter is devoted to the issue of extending the concept of mappings or functions to fuzzy sets. These ideas allow us to define operations like addition, subtraction, multiplication, division, or taking squares, as well as set-theoretic concepts like the composition of relations, for fuzzy sets.
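A minimal sketch of how such an operation can be carried over to fuzzy sets via the extension principle, here the addition of two discrete fuzzy sets; the membership values below are made-up illustrations:

```python
# Extension principle for adding two discrete fuzzy sets:
# mu_{A+B}(z) = max over all x + y = z of min(mu_A(x), mu_B(y)).
def fuzzy_add(a, b):
    result = {}
    for x, mu_x in a.items():
        for y, mu_y in b.items():
            z = x + y
            result[z] = max(result.get(z, 0.0), min(mu_x, mu_y))
    return result

about_2 = {1: 0.5, 2: 1.0, 3: 0.5}   # fuzzy number "about 2" (illustrative)
about_3 = {2: 0.5, 3: 1.0, 4: 0.5}   # fuzzy number "about 3" (illustrative)
print(fuzzy_add(about_2, about_3))   # peaks at 5 with membership 1.0
```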
north american fuzzy information processing society | 2008
Matthias Steinbrecher; Rudolf Kruse
We propose a novel postprocessing technique for identifying sets of association rules that expose a user-specified temporal development. We explicitly do not use a learning approach that requires the database to be subdivided into time frames. Instead, a global probabilistic learning method is used for induction. The resulting association rules are then matched against a set of fuzzy concepts. These concepts comprise user-built linguistic propositions that describe the evolution of rules that might be considered interesting. The proposed technique is evaluated on a real-world data set. To present the results, we introduce along the way a modified rule visualization that extends our previous work.
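As a rough illustration of the matching idea, the sketch below scores a rule's confidence development against a fuzzy concept such as "increasing"; the trapezoidal membership function and the change-rate summary are assumptions made for this example, not the paper's exact formulation:

```python
# Degree to which a rule's confidence series fits the concept "increasing".
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function over the average change rate."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def match_increasing(confidences):
    deltas = [c2 - c1 for c1, c2 in zip(confidences, confidences[1:])]
    avg_change = sum(deltas) / len(deltas)
    return trapezoid(avg_change, 0.0, 0.05, 1.0, 1.1)

print(match_increasing([0.42, 0.48, 0.55, 0.61]))  # high membership
print(match_increasing([0.60, 0.58, 0.55, 0.50]))  # zero membership
```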
Journal of Computer and System Sciences | 2010
Matthias Steinbrecher; Rudolf Kruse
We propose a user-centric rule filtering method that makes it possible to identify association rules that exhibit a certain user-specified temporal behavior with respect to rule evaluation measures. The method can considerably reduce the number of association rules that have to be assessed manually after rule induction. This is especially necessary if the rule set contains many rules, as is the case for the task of finding rare patterns inside the data. For the proposed method, we reuse earlier work on the visualization of association rules [M. Steinbrecher, R. Kruse, Visualization of possibilistic potentials, in: Foundations of Fuzzy Logic and Soft Computing, in: Lecture Notes in Comput. Sci., vol. 4529, Springer-Verlag, Berlin/Heidelberg, 2007, pp. 295-303] and use an extension of it to motivate and assess the presented filtering technique. We put the focus on rules that are induced from a data set that contains a temporal variable and build our approach on the requirement that temporally ordered sets of association rules are available, i.e., one set for every time frame. To illustrate this, we propose an ad hoc learning method along the way. The actual rule filtering is accomplished by means of fuzzy concepts. These concepts use linguistic variables to partition rule-related domains of interest, such as the confidence change rate. The original rule sets are then matched against these user concepts, and the result contains only those rules that match the respective concepts to a predefined extent. We provide empirical evidence by applying the proposed methods to hand-crafted as well as real-world data sets and critically discuss the current state and further prospects.
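A hedged sketch of the filtering step described above: keep only those rules whose membership in a linguistic term over the confidence change rate reaches a predefined extent. The term "strongly increasing", its membership function, and the threshold are illustrative placeholders:

```python
# Filter a set of rules by how well their confidence development matches a concept.
def membership_strongly_increasing(change_rate):
    # piecewise-linear term of a linguistic variable over the change rate
    return max(0.0, min(1.0, (change_rate - 0.02) / 0.08))

def filter_rules(rule_series, threshold=0.7):
    """rule_series maps rule ids to confidence values, one per time frame."""
    kept = {}
    for rule, confidences in rule_series.items():
        rate = (confidences[-1] - confidences[0]) / (len(confidences) - 1)
        degree = membership_strongly_increasing(rate)
        if degree >= threshold:
            kept[rule] = degree
    return kept

rules = {"A=>B": [0.30, 0.45, 0.62], "C=>D": [0.70, 0.69, 0.71]}
print(filter_rules(rules))  # only the rule with rising confidence remains
```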
Computational Intelligence in Automotive Applications | 2008
Matthias Steinbrecher; Frank Rügheimer; Rudolf Kruse
The production pipeline of present-day automobile manufacturers consists of a highly heterogeneous and intricate assembly workflow that is driven by a considerable degree of interdependencies between the participating parties, such as suppliers, manufacturing engineers, marketing analysts, and development researchers. Therefore, it is of paramount importance to enable all production experts to quickly respond to potential on-time delivery failures, ordering peaks, or other disturbances that may interfere with the ideal assembly process. Moreover, the rapid evolution of new vehicle models requires well-designed investigations regarding the collection and analysis of vehicle maintenance data. It is crucial to track down complicated interactions between car components or external failure causes in the shortest time possible to meet customer-requested quality claims.

To summarize these requirements, let us turn to an example which reveals some of the dependencies mentioned in this chapter. As we will see later, a normal car model can be described by hundreds of variables, each of which represents a feature or technical property. Since only a small number of combinations (compared to all possible ones) will represent a valid car configuration, we will present a means of reducing the model space by imposing restrictions. These restrictions enter the mathematical treatment in the form of dependencies, since a restriction may cancel out some options, thus rendering two attributes (more) dependent. This early step produces qualitative dependencies like “engine type and transmission type are dependent.” To quantify these dependencies, some uncertainty calculus is necessary to establish the dependence strengths. In our case, probability theory is used to augment the model, e.g., “whenever engine type 1 is ordered, the probability is 56% of having transmission type 2 ordered as well.”

There is a multitude of sources to estimate or extract this information from. When ordering peaks occur, such as an increased demand for convertibles during the spring, or supply shortages arise due to a strike in the transport industry, the model is used to predict vehicle configurations that may run into delivery delays in order to forestall such a scenario by, e.g., acquiring alternative supply chains or temporarily shifting production load. Another part of the model may contain similar information for the aftercare, e.g., “whenever a warranty claim contained battery type 3, there is a 30% chance of having radio type 1 in the car.” In this case, dependencies are contained in the quality assessment data and are not known beforehand but are extracted to reveal possible hidden design flaws.

These examples – both in the realm of planning and subsequent maintenance measures – call for treatment methods that exploit the dependence structures embedded inside the application domains. Furthermore, these methods need to be equipped with dedicated updating, revision, and refinement techniques in order to cope with the above-mentioned possible supply and demand irregularities. Since every production and planning stage involves highly specialized domain experts, it is necessary to offer intuitive system interfaces that are less prone to inter-domain misunderstandings. The next section will sketch the underlying theoretical frameworks, after which we will present and discuss successfully applied planning and analysis methods that have been rolled out to production sites of two large automobile manufacturers.
Section 3 deals with the handling of production planning at Volkswagen.
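As an aside, a quantified dependency such as "whenever engine type 1 is ordered, the probability is 56% of having transmission type 2 ordered as well" could be stored in a small conditional probability table and queried as sketched below; all figures other than the 56% are invented:

```python
# (engine type, transmission type) -> conditional probability of the transmission
# given the engine; only the 0.56 entry is taken from the text, the rest is made up.
cond_prob = {
    ("engine_1", "transmission_2"): 0.56,
    ("engine_1", "transmission_1"): 0.44,
    ("engine_2", "transmission_2"): 0.20,
    ("engine_2", "transmission_1"): 0.80,
}

def p_transmission_given_engine(transmission, engine):
    """Look up P(transmission | engine) in the conditional probability table."""
    return cond_prob[(engine, transmission)]

print(p_transmission_given_engine("transmission_2", "engine_1"))  # 0.56
```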
soft computing | 2007
Matthias Steinbrecher; Rudolf Kruse
The constantly increasing capabilities of database storage systems lead to an ever-growing collection of data by business organizations. The research area of Data Mining has become a paramount requirement in order to cope with the acquired information by locating and extracting patterns from these data volumes. Possibilistic networks comprise one prominent Data Mining technique that is capable of encoding dependence and independence relations between variables as well as dealing with imprecision. It will be argued that learning the network structure only provides an overview of the qualitative component, yet the more interesting information is contained in the network parameters, namely the potential tables. In this paper, we introduce a new visualization technique that allows for a detailed inspection of the quantitative component of possibilistic networks.
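For illustration, a possibilistic potential table over two variables can be inspected, for instance, by projecting it onto a single variable with the maximum operator; the degrees below are invented and the code is only a sketch of the idea, not of the visualization technique itself:

```python
# Possibilistic potential over two variables; degrees are illustrative.
potential = {
    ("A=low",  "B=low"):  1.0,
    ("A=low",  "B=high"): 0.3,
    ("A=high", "B=low"):  0.6,
    ("A=high", "B=high"): 1.0,
}

def project(potential, keep_index):
    """Marginalize a two-dimensional potential onto one variable via max."""
    marginal = {}
    for assignment, degree in potential.items():
        key = assignment[keep_index]
        marginal[key] = max(marginal.get(key, 0.0), degree)
    return marginal

print(project(potential, 0))  # {'A=low': 1.0, 'A=high': 1.0}
```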
Archive | 2013
Rudolf Kruse; Christian Borgelt; Frank Klawonn; Christian Moewes; Matthias Steinbrecher; Pascal Held
In this chapter we address how graphical models can be learned from given data. So far, the graphical structure was assumed to be given; now we introduce heuristics that allow us to induce these structures.
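One generic illustration of such a heuristic (not the specific algorithms of the chapter) is to estimate the mutual information of every attribute pair from the data and insert an edge wherever it exceeds a threshold:

```python
# Simple dependence-based skeleton learning from a table of discrete attributes.
from collections import Counter
from itertools import combinations
from math import log2

def mutual_information(data, i, j):
    n = len(data)
    pi = Counter(row[i] for row in data)
    pj = Counter(row[j] for row in data)
    pij = Counter((row[i], row[j]) for row in data)
    return sum((c / n) * log2((c / n) / ((pi[x] / n) * (pj[y] / n)))
               for (x, y), c in pij.items())

def learn_skeleton(data, num_attrs, threshold=0.1):
    """Return attribute pairs whose estimated mutual information exceeds the threshold."""
    return [(i, j) for i, j in combinations(range(num_attrs), 2)
            if mutual_information(data, i, j) > threshold]

data = [("a", "x", 0), ("a", "x", 1), ("b", "y", 0), ("b", "y", 1)]
print(learn_skeleton(data, 3))  # attributes 0 and 1 are strongly dependent
```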
Archive | 2016
Rudolf Kruse; Christian Borgelt; Christian Braune; Sanaz Mostaghim; Matthias Steinbrecher
Swarm Intelligence (SI) is about the collective behavior of a population of individuals. The main property of such populations is that all individuals follow the same simple rules, from which the global collective behavior cannot be predicted. Moreover, the individuals can only communicate within their local neighborhoods. The outcome of this local interaction defines the collective behavior, which is unknown to single individuals. The world of Computational Swarm Intelligence contains several approaches for dealing with optimization problems. Particle Swarm Optimization (PSO) (Kennedy and Eberhart, Particle swarm optimization, 1995) and Ant Colony Optimization (ACO) (Dorigo and Stützle, Ant Colony Optimization, 2004) will be addressed in this chapter. After the introduction, we explain the basic principles of computational swarm intelligence for PSO in Sect. 14.2, upon which the following Sects. 14.3 to 14.5 build. The second part of the chapter, Sect. 14.6, is about the Ant Colony Optimization method.
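A bare-bones PSO sketch for minimizing a one-dimensional function; the inertia and acceleration coefficients are common textbook defaults rather than values prescribed by the chapter:

```python
# Minimal Particle Swarm Optimization: each particle follows the same simple update rule.
import random

def pso(f, lo, hi, n_particles=20, iterations=100, w=0.7, c1=1.5, c2=1.5):
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    best = pos[:]                       # personal best positions
    gbest = min(best, key=f)            # global best position
    for _ in range(iterations):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # velocity update combines inertia, cognitive, and social terms
            vel[i] = (w * vel[i] + c1 * r1 * (best[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if f(pos[i]) < f(best[i]):
                best[i] = pos[i]
        gbest = min(best, key=f)
    return gbest

print(pso(lambda x: (x - 3) ** 2, -10, 10))  # converges near 3
```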
Archive | 2013
Rudolf Kruse; Christian Borgelt; Frank Klawonn; Christian Moewes; Matthias Steinbrecher; Pascal Held
Like multi-layer perceptrons, radial basis function networks are feed-forward neural networks with a strictly layered structure. However, the number of layers is always three, that is, there is exactly one hidden layer. In addition, radial basis function networks differ from multi-layer perceptrons in the network input and activation functions, especially in the hidden layer. In this hidden layer, radial basis functions are employed, which give this type of neural network its name. With these functions, a kind of “catchment region” is assigned to each neuron, in which it mainly influences the output of the neural network.
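A minimal sketch of a forward pass through such a network with Gaussian basis functions in the hidden layer; the centers, widths, and output weights are illustrative:

```python
# Radial basis function network: Gaussian hidden units, linear output unit.
import math

def rbf_forward(x, centers, widths, out_weights, out_bias=0.0):
    """Hidden layer: Gaussian of the distance to each center; output: linear combination."""
    hidden = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * s ** 2))
              for c, s in zip(centers, widths)]
    return sum(w * h for w, h in zip(out_weights, hidden)) + out_bias

centers = [(0.0, 0.0), (1.0, 1.0)]   # each center defines a "catchment region"
widths = [0.5, 0.5]
out_weights = [1.0, -1.0]
print(rbf_forward((0.1, 0.1), centers, widths, out_weights))
```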