Erich Fuchs
University of Passau
Publications
Featured research published by Erich Fuchs.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010
Erich Fuchs; Thiemo Gruber; Jiri Nitschke; Bernhard Sick
The paper presents SwiftSeg, a novel technique for online time series segmentation and piecewise polynomial representation. The segmentation approach is based on a least-squares approximation of time series in sliding and/or growing time windows utilizing a basis of orthogonal polynomials. This allows the definition of fast update steps for the approximating polynomial, where the computational effort depends only on the degree of the approximating polynomial and not on the length of the time window. The coefficients of the orthogonal expansion of the approximating polynomial, obtained by means of the update steps, can be interpreted as optimal (in the least-squares sense) estimators for average, slope, curvature, change of curvature, etc., of the signal in the time window considered. These coefficients, as well as the approximation error, may be used in a very intuitive way to define segmentation criteria. The properties of SwiftSeg are evaluated by means of some artificial and real benchmark time series. It is compared to three different offline and online techniques to assess its accuracy and runtime. It is shown that SwiftSeg, which is suitable for many data streaming applications, offers high accuracy at very low computational costs.
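For illustration, a minimal sketch of the core idea, least-squares polynomial approximation in a growing time window with an error-based segmentation criterion, is given below. It uses plain numpy.polyfit and refits the whole window at every step, whereas SwiftSeg's orthogonal-polynomial update steps avoid exactly that; all function names and thresholds are hypothetical.

```python
import numpy as np

def segment_stream(y, degree=2, max_err=0.5, min_len=8):
    """Toy growing-window segmentation: fit a polynomial to the current
    window and cut a segment whenever the RMS residual exceeds max_err.
    (Illustrative only; SwiftSeg uses orthogonal-polynomial update steps
    whose cost is independent of the window length.)"""
    segments, start = [], 0
    end = start + min_len
    while end <= len(y):
        t = np.arange(start, end)
        coeffs = np.polyfit(t, y[start:end], degree)
        rms = np.sqrt(np.mean((np.polyval(coeffs, t) - y[start:end]) ** 2))
        if rms > max_err and end - start > min_len:
            # close the segment just before the error threshold was exceeded
            t_prev = np.arange(start, end - 1)
            segments.append((start, end - 1, np.polyfit(t_prev, y[start:end - 1], degree)))
            start = end - 1
            end = start + min_len
        else:
            end += 1
    # final (possibly short) segment
    t = np.arange(start, len(y))
    if t.size > degree:
        segments.append((start, len(y), np.polyfit(t, y[start:], degree)))
    return segments

if __name__ == "__main__":
    t = np.linspace(0, 6 * np.pi, 600)
    y = np.sin(t) + 0.05 * np.random.randn(t.size)
    for s, e, c in segment_stream(y):
        print(f"segment [{s}, {e}) coeffs {np.round(c, 3)}")
```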
Pattern Recognition | 2009
Erich Fuchs; Thiemo Gruber; Jiri Nitschke; Bernhard Sick
This article presents SwiftMotif, a novel technique for on-line motif detection in time series. This technique can be used, for instance, to discover frequently occurring temporal patterns or anomalies. The motif detection is based on a fusion of methods from two worlds: probabilistic modeling and similarity measurement techniques are combined with extremely fast polynomial least-squares approximation techniques. A time series is segmented with a data stream segmentation method, the segments are modeled by means of normal distributions with time-dependent means and constant variances, and these models are compared using a divergence measure for probability densities. Then, using suitable clustering algorithms based on these similarity measures, motifs may be defined. The fast time series segmentation and modeling techniques then allow for an on-line detection of previously defined motifs in new time series with very low run-times. SwiftMotif is suitable for real-time applications, accounts for the uncertainty associated with the occurrence of certain motifs, e.g., due to noise, and considers local variability (i.e., uniform scaling) in the time domain. This article focuses on the mathematical foundations and the demonstration of the properties of SwiftMotif, in particular its accuracy and run-time, using some artificial and real benchmark time series.
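The following sketch shows one way such segment models could be compared: each segment is summarised by a polynomial mean and a constant variance, and two models are compared with a symmetrised Kullback-Leibler divergence averaged over the segment. The choice of divergence, the threshold, and all names are assumptions for illustration, not necessarily the paper's exact measure.

```python
import numpy as np

def segment_divergence(coeffs_a, var_a, coeffs_b, var_b, length):
    """Symmetrised KL divergence between two segment models, each a normal
    distribution whose mean follows a polynomial over the segment and whose
    variance is constant, averaged over the segment length."""
    t = np.arange(length)
    mu_a, mu_b = np.polyval(coeffs_a, t), np.polyval(coeffs_b, t)
    kl_ab = np.log(np.sqrt(var_b / var_a)) + (var_a + (mu_a - mu_b) ** 2) / (2 * var_b) - 0.5
    kl_ba = np.log(np.sqrt(var_a / var_b)) + (var_b + (mu_a - mu_b) ** 2) / (2 * var_a) - 0.5
    return float(np.mean(kl_ab + kl_ba))

def matches_motif(segment_model, motif_model, length, threshold=1.0):
    """A segment matches a stored motif if the divergence stays below a threshold."""
    return segment_divergence(*segment_model, *motif_model, length) < threshold

if __name__ == "__main__":
    rising = (np.array([0.1, 0.0]), 0.04)   # mean 0.1*t, variance 0.04
    flat = (np.array([0.0, 0.5]), 0.04)     # constant mean 0.5
    print(matches_motif(rising, rising, 20), matches_motif(rising, flat, 20))
```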
Neurocomputing | 2010
Erich Fuchs; Thiemo Gruber; Helmuth Pree; Bernhard Sick
Subspace representations that preserve essential information of high-dimensional data may be advantageous for many reasons, such as improved interpretability, avoidance of overfitting, and acceleration of machine learning techniques. In this article, we describe a new subspace representation of time series which we call the polynomial shape space representation. This representation consists of optimal (in a least-squares sense) estimators of trend aspects of a time series such as average, slope, curvature, change of curvature, etc. The shape space representation of time series allows for the definition of a novel similarity measure for time series which we call the shape space distance measure. Depending on the application, time series segmentation techniques can be applied to obtain a piecewise shape space representation of the time series in subsequent segments. In this article, we investigate the properties of the polynomial shape space representation and the shape space distance measure by means of some benchmark time series and discuss possible application scenarios in the field of temporal data mining.
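A minimal sketch of the idea follows, assuming a Legendre basis for the polynomial expansion and a (hypothetical) weighted Euclidean distance between the coefficient vectors; the paper's exact basis and weighting may differ.

```python
import numpy as np

def shape_coefficients(window, degree=3):
    """Least-squares expansion of a window in a Legendre basis on [-1, 1];
    the coefficients act as estimators of average, slope, curvature, ...
    (a stand-in for the polynomial shape space representation)."""
    t = np.linspace(-1.0, 1.0, len(window))
    return np.polynomial.legendre.legfit(t, window, degree)

def shape_space_distance(a, b, degree=3, weights=None):
    """Weighted Euclidean distance between the shape coefficients of two
    equally long windows (hypothetical weighting, uniform by default)."""
    ca, cb = shape_coefficients(a, degree), shape_coefficients(b, degree)
    w = np.ones(degree + 1) if weights is None else np.asarray(weights)
    return float(np.sqrt(np.sum(w * (ca - cb) ** 2)))

if __name__ == "__main__":
    t = np.linspace(0, 1, 100)
    print(shape_space_distance(np.sin(2 * np.pi * t), np.sin(2 * np.pi * t + 0.2)))
```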
International Conference on Acoustics, Speech, and Signal Processing | 1997
Erich Fuchs; Klaus Donner
Only a few time series methods are applicable to signal trend analysis under real-time conditions. The use of orthogonal polynomials for least-squares approximations on discrete data turned out to be very efficient for providing estimators in the time domain. A polynomial extrapolation considering signal trends in a certain time window is obtainable even for high sampling rates. The presented method can be used as a prediction algorithm, e.g. in threshold monitoring systems, or as a trend correction step that prepares the analysis of the remaining signal. In the theoretical derivation, the recursive computation of orthogonal polynomials allows the development of these fast algorithms for least-squares approximations in moving time windows.
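The sketch below illustrates the application side only: a polynomial is fitted to a moving window and extrapolated one step ahead for threshold monitoring. It refits with numpy.polyfit instead of the recursive orthogonal-polynomial updates derived in the paper; names and parameters are hypothetical.

```python
import numpy as np

def predict_next(window, degree=2):
    """Fit a least-squares polynomial to the samples in the current time
    window and extrapolate it one step ahead."""
    t = np.arange(len(window))
    coeffs = np.polyfit(t, window, degree)
    return np.polyval(coeffs, len(window))

def threshold_monitor(signal, window_len=25, degree=2, threshold=1.0):
    """Yield the indices at which the one-step prediction crosses the threshold."""
    for i in range(window_len, len(signal)):
        if predict_next(signal[i - window_len:i], degree) > threshold:
            yield i

if __name__ == "__main__":
    t = np.linspace(0, 10, 500)
    x = 0.1 * t ** 2 + 0.05 * np.random.randn(t.size)
    alarms = list(threshold_monitor(x, threshold=5.0))
    print("first alarm index:", alarms[0] if alarms else None)
```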
IEEE Transactions on Neural Networks | 2009
Erich Fuchs; Christian Gruber; Tobias Reitmaier; Bernhard Sick
Neural networks are often used to process temporal information, i.e., any kind of information related to time series. In many cases, time series contain short-term and long-term trends or behavior. This paper presents a new approach to capture temporal information with various reference periods simultaneously. A least-squares approximation of the time series with orthogonal polynomials will be used to describe short-term trends contained in a signal (average, increase, curvature, etc.). Long-term behavior will be modeled with the tapped delay lines of a time-delay neural network (TDNN). This network takes the coefficients of the orthogonal expansion of the approximating polynomial as inputs, thus considering short-term and long-term information efficiently. The advantages of the method will be demonstrated by means of artificial data and two real-world application examples: the prediction of the number of users in a computer network and online tool wear classification in turning.
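A rough sketch of how the inputs of such a network could be assembled: polynomial coefficients of consecutive windows provide the short-term features, and a tapped delay line of these coefficient vectors provides the long-term context. A linear least-squares readout stands in here for the TDNN of the paper, and all names and window sizes are hypothetical.

```python
import numpy as np

def window_coefficients(signal, window_len=20, degree=2):
    """Short-term trend features: least-squares polynomial coefficients of
    consecutive, non-overlapping windows (average, slope, curvature for degree 2)."""
    t = np.arange(window_len)
    wins = [signal[i:i + window_len]
            for i in range(0, len(signal) - window_len + 1, window_len)]
    return np.array([np.polyfit(t, w, degree) for w in wins])

def tapped_delay_inputs(coeffs, delays=3):
    """Long-term context: concatenate the coefficient vectors of the last
    `delays` windows, as a tapped delay line would present them to a TDNN."""
    rows = [coeffs[i - delays:i].ravel() for i in range(delays, len(coeffs))]
    return np.array(rows)

if __name__ == "__main__":
    # toy target: predict the constant coefficient of the next window from
    # delayed coefficients, with a linear readout in place of the network
    t = np.linspace(0, 20 * np.pi, 4000)
    x = np.sin(t) + 0.01 * t
    C = window_coefficients(x)
    X = tapped_delay_inputs(C, delays=3)
    y = C[3:, -1]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("training MSE:", float(np.mean((X @ w - y) ** 2)))
```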
IEEE Intelligent Vehicles Symposium | 2007
Ullrich Scheunert; Philipp Lindner; Eric Richter; Thomas Tatschke; Dominik Schestauber; Erich Fuchs; Gerd Wanielik
The fusion of data from different sensor sources is today the most promising method to increase the robustness and reliability of environmental perception. The ProFusion2 project advances sensor data fusion for automotive applications in the field of driver assistance systems. ProFusion2 was created to enhance fusion techniques and algorithms beyond the current state of the art. It is a horizontal subproject in the Integrated Project PReVENT (funded by the EC). The paper presents two approaches concerning the detection of vehicles in road environments. An early fusion and a multi-level fusion processing strategy are described. The common framework for the representation of the environment model and the representation of perception results is introduced. The key feature of this framework is that all data involved are stored and represented in a single perception memory with a common data structure, making them holistically accessible.
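As a loose illustration of such a common data structure, the sketch below stores all perception results in one "perception memory" that can be queried across fusion levels. The classes and fields are hypothetical and are not taken from the ProFusion2 framework.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class PerceivedObject:
    """One entry of the perception memory: a hypothesis about an object in
    the environment, regardless of which fusion level produced it."""
    object_id: int
    position: Tuple[float, float]   # x, y in the vehicle frame [m]
    velocity: Tuple[float, float]   # vx, vy [m/s]
    source: str                     # e.g. "early_fusion", "track_level"
    confidence: float               # 0..1

@dataclass
class PerceptionMemory:
    """All perception results live in one structure and are queried holistically."""
    objects: Dict[int, PerceivedObject] = field(default_factory=dict)

    def update(self, obj: PerceivedObject) -> None:
        self.objects[obj.object_id] = obj

    def by_source(self, source: str) -> List[PerceivedObject]:
        return [o for o in self.objects.values() if o.source == source]

if __name__ == "__main__":
    memory = PerceptionMemory()
    memory.update(PerceivedObject(1, (12.0, -1.5), (0.0, 0.0), "early_fusion", 0.8))
    print(memory.by_source("early_fusion"))
```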
International Conference on Acoustics, Speech, and Signal Processing | 1998
Andreas Sicheneder; Armin Bender; Erich Fuchs; Roland Mandl; Bernhard Sick
A framework with a tool-supported high-level specification technique is very important for the development of complex signal processing applications containing software-intensive parts (e.g. hybrid systems in automated production processes) in order to provide safe and reliable systems. In this paper we present the concept of a framework, an object-oriented CASE tool offering a graphical specification capability to model and validate a given application and to control its execution. Users with different programming skills can use this visual specification technique effectively. In particular, users who are not interested in implementation details can specify their application at a high abstraction level by connecting reusable and reliable components (modules representing basic algorithms). As a result, complex signal graphs representing the dataflow between the modules are created. The tool supports this software specification technique through automatic type checking for the connections between modules and through changeable module parameters. Conversely, it is easy for software engineers to integrate additional signal processing algorithms into the framework, thus building suitable module libraries without considering a specific high-level application.
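A small sketch of the underlying idea, a signal graph whose module connections are type-checked automatically, is given below; the class and port names are hypothetical and do not reflect the tool's actual design.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Module:
    """A reusable processing component with typed input and output ports."""
    name: str
    inputs: Dict[str, type]
    outputs: Dict[str, type]

@dataclass
class SignalGraph:
    """Dataflow graph of modules; connections are checked for type compatibility."""
    modules: Dict[str, Module] = field(default_factory=dict)
    edges: List[Tuple[str, str, str, str]] = field(default_factory=list)

    def add(self, module: Module) -> None:
        self.modules[module.name] = module

    def connect(self, src: str, out_port: str, dst: str, in_port: str) -> None:
        out_type = self.modules[src].outputs[out_port]
        in_type = self.modules[dst].inputs[in_port]
        if not issubclass(out_type, in_type):
            raise TypeError(f"{src}.{out_port} ({out_type.__name__}) does not match "
                            f"{dst}.{in_port} ({in_type.__name__})")
        self.edges.append((src, out_port, dst, in_port))

if __name__ == "__main__":
    g = SignalGraph()
    g.add(Module("sensor", {}, {"samples": list}))
    g.add(Module("filter", {"samples": list}, {"trend": float}))
    g.connect("sensor", "samples", "filter", "samples")
    print(g.edges)
```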
IEEE Intelligent Vehicles Symposium | 2013
Florian Janda; Sebastian Pangerl; Eva Lang; Erich Fuchs
An approach for detecting the road boundary on different types of roads without any prior knowledge is presented. We fuse information obtained from an algorithm which detects road markings and road edges in images acquired by a video camera as well as data from a radar sensor. Each road marking, each road edge, and each road barrier is tracked individually. Hence we can even capture exits or laybys. We use an edge image for road marking detection and texture information for road edge detection. Additional data provided by a radar sensor are used to measure targets corresponding to static barriers along the roadside, such as guardrails. The output of each processing unit is fused into a Kalman filter framework, where the confidence of each subsystem influences the innovation of the overall system. The underlying geometric road model comprises parameters for multiple lanes, the flanking road edge, and the vehicle's relative pose. The work is part of the project Interactive.
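The confidence-weighted fusion can be pictured with the following minimal sketch: a one-dimensional Kalman filter updates a road-boundary offset, and each subsystem's confidence scales its measurement noise. The actual system uses a multi-parameter road model; the values and names here are hypothetical.

```python
import numpy as np

def fuse_measurements(x, P, measurements, base_noise=0.25):
    """Sequential 1-D Kalman updates of a road-boundary offset estimate.
    Each subsystem contributes (value, confidence); the confidence scales the
    measurement noise, so low-confidence inputs change the estimate less."""
    for value, confidence in measurements:
        R = base_noise / max(confidence, 1e-3)   # noisy subsystems get large R
        K = P / (P + R)                          # Kalman gain
        x = x + K * (value - x)                  # innovation weighted by K
        P = (1.0 - K) * P
    return x, P

if __name__ == "__main__":
    # prior: road edge about 3.5 m to the right, fairly uncertain
    x, P = 3.5, 1.0
    obs = [(3.7, 0.9),   # lane-marking detector, high confidence
           (3.2, 0.3),   # texture-based edge detector, low confidence
           (3.6, 0.7)]   # radar guardrail measurement
    x, P = fuse_measurements(x, P, obs)
    print(round(x, 3), round(P, 3))
```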
International Conference on Information Fusion | 2007
Thomas Tatschke; Franz-Josef Färber; Erich Fuchs; Leonhard F. Walchshäusl; Rudi Lindl
In the development phase of perception systems (e.g. for advanced driver assistance systems), interest generally focuses on the performance of the respective detection and tracking algorithms. One common way to evaluate such systems relies on simulated data used as a reference. We present a semi-autonomous method which allows the extraction of reference data from sensor recordings (including data at least from a camera and a distance-measuring sensor device). Furthermore, we show how to combine these reference data with the output of the object detection system and how to derive performance statistics (detection and miss rates) of the system. As the generated reference information can be stored along with the sensor recordings, this method also facilitates the comparison of different software versions or algorithm parameters.
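One plausible way to derive such statistics is sketched below: detections are matched greedily to reference objects within a distance gate, and per-frame detection and miss rates are computed. The matching rule and gate are assumptions for illustration, not the paper's method.

```python
import numpy as np

def frame_rates(reference, detections, gate=1.5):
    """Greedy nearest-neighbour matching of detections to reference objects
    within a distance gate [m]; returns (detection_rate, miss_rate) for one frame."""
    ref = [np.asarray(r, float) for r in reference]
    det = [np.asarray(d, float) for d in detections]
    matched = 0
    for r in ref:
        if not det:
            break
        dists = [np.linalg.norm(r - d) for d in det]
        j = int(np.argmin(dists))
        if dists[j] <= gate:
            matched += 1
            det.pop(j)                 # each detection may explain one object
    n = len(ref)
    detection_rate = matched / n if n else 1.0
    return detection_rate, 1.0 - detection_rate

if __name__ == "__main__":
    ref = [(10.0, 0.0), (25.0, -3.0), (40.0, 3.5)]
    det = [(10.4, 0.2), (41.0, 3.0)]
    print(frame_rates(ref, det))   # -> (0.666..., 0.333...)
```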
Archive | 1998
Roland Mandl; Johann Nommer; Erich Fuchs; Bernhard Sick