Andre Gensler
University of Kassel
Publications
Featured research published by Andre Gensler.
Systems, Man, and Cybernetics | 2016
Andre Gensler; Janosch Henze; Bernhard Sick; Nils Raabe
Power forecasting for renewable energy power plants is a very active research field, as reliable information about future power generation allows for safe operation of the power grid and helps to minimize the operational costs of these energy sources. Deep Learning algorithms have been shown to be very powerful in forecasting tasks such as economic time series prediction or speech recognition. Up to now, however, Deep Learning algorithms have only been applied sparsely to forecasting for renewable energy power plants. Using different Deep Learning and Artificial Neural Network algorithms, such as Deep Belief Networks, AutoEncoders, and LSTMs, we introduce these powerful algorithms to the field of renewable energy power forecasting. In our experiments, we use combinations of these algorithms to show their forecasting strength compared to a standard MLP and a physical forecasting model in forecasting the energy output of 21 solar power plants. Our results show that the Deep Learning algorithms deliver superior forecasting performance compared to Artificial Neural Networks as well as other reference models such as physical models.
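As a hedged illustration of the LSTM building block mentioned in this abstract, one forward step of an LSTM cell can be written in plain NumPy. The weights, sizes, and input sequence below are invented for the sketch and are not from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM cell step: input, forget, and output gates plus a candidate
    cell state. W has shape (4*H, D+H); b has shape (4*H,)."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2*H])        # forget gate
    o = sigmoid(z[2*H:3*H])      # output gate
    g = np.tanh(z[3*H:4*H])      # candidate cell state
    c = f * c_prev + i * g       # new cell state
    h = o * np.tanh(c)           # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 3, 4                      # illustrative input and hidden sizes
W = rng.normal(scale=0.1, size=(4 * H, D + H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):               # run the cell over a short random sequence
    h, c = lstm_step(rng.normal(size=D), h, c, W, b)
print(h.shape)
```

The gating structure is what lets the cell carry information across many time steps, which is the property exploited for time series forecasting.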
International Conference on Pattern Recognition | 2014
Michael Goldhammer; Konrad Doll; Ulrich Brunsmann; Andre Gensler; Bernhard Sick
This paper focuses on forecasting pedestrians' short-term trajectories up to 2.5 s for traffic safety applications. We present a self-learning approach based on artificial neural network movement models and compare it to traditional constant-velocity Kalman filter prediction and to extrapolation of polynomials fitted with a least-squares error criterion. Trajectories of uninstructed pedestrians in public traffic at a real urban intersection are acquired by a wide-angle stereo camera setup in combination with a 3D head tracking framework. Results on this real-world data show that the artificial neural network significantly improves forecast quality compared to the other approaches, especially for critical traffic scenes involving velocity changes such as starting and stopping. For these velocity changes, a reduction of position estimation errors of about 21% compared to the Kalman filter and to polynomial extrapolation is obtained. By means of a concrete pedestrian-vehicle scenario, we demonstrate the benefit of the proposed approach for an advanced driver assistance system in terms of reaction time.
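The two classical baselines compared in this abstract, constant-velocity extrapolation and least-squares polynomial extrapolation, can be sketched on synthetic data. The trajectory below is an invented decelerating walk, not the paper's stereo-camera measurements:

```python
import numpy as np

# Simulated 1-D pedestrian position with mild deceleration (illustrative).
def pos(t):
    return 1.4 * t - 0.1 * t ** 2

t_obs = np.arange(1.5, 2.5, 0.05)   # 1 s observation window
y_obs = pos(t_obs)
t_pred = 3.0                        # forecast about 0.5 s ahead

# Constant-velocity baseline: extrapolate the last observed velocity.
v_last = (y_obs[-1] - y_obs[-2]) / (t_obs[-1] - t_obs[-2])
cv_pred = y_obs[-1] + v_last * (t_pred - t_obs[-1])

# Least-squares polynomial extrapolation (degree 2 captures the deceleration).
poly_pred = np.polyval(np.polyfit(t_obs, y_obs, 2), t_pred)

err_cv = abs(cv_pred - pos(t_pred))
err_poly = abs(poly_pred - pos(t_pred))
print(err_poly < err_cv)
```

On this toy trajectory the quadratic fit captures the velocity change exactly, mirroring the abstract's point that constant-velocity models struggle precisely in starting and stopping scenes.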
International Conference on Data Mining | 2013
Andre Gensler; Thiemo Gruber; Bernhard Sick
Segmentation is an important step in processing and analyzing time series. In this article, we present an approach to speed up some standard time series segmentation techniques. Often, time series segmentation is based on piecewise polynomial approximations of the time series (including piecewise constant or linear approximations as special cases). Basically, a least-squares fit with a polynomial has a computational complexity that depends on the number of observations, i.e., the length of the time series. To improve the computational complexity of segmentation techniques, we exploit the fact that approximations have to be repeated in sliding (moving) or growing time windows. Therefore, we suggest using update techniques that determine the approximating polynomial in a sliding or growing time window from an already existing one, with a computational complexity that is independent of the number of observations, i.e., the length of the window. For that purpose, bases of orthogonal polynomials must be used instead of standard bases such as monomials. We take two standard segmentation techniques, the online algorithm SWAB (Sliding Window And Bottom-up) and the offline technique OptSeg (Optimal Segmentation), and show that their run-times can be reduced substantially for a given polynomial degree. If run-time constraints are given, e.g., in real-time applications, it would also be possible to adapt the degree of the approximating polynomials; higher polynomial degrees typically result in lower modeling errors or longer segments. The various properties of the new realizations of these segmentation techniques are outlined by means of some benchmark time series. The experimental results show that, depending on the chosen parameterization, OptSeg can be accelerated by several orders of magnitude and SWAB by a factor of up to about ten.
International Joint Conference on Neural Networks | 2016
Andre Gensler; Bernhard Sick
The prediction of the power generation of wind farms is a non-trivial problem whose importance has grown during the last decade due to the rapid increase of wind power generation in the power grid. The prediction task is commonly addressed using numerical weather predictions, statistical methods, or machine learning techniques. Various articles have shown that ensemble techniques for forecasting can yield better forecasting accuracy than single techniques alone. Typical ensembles use a parameter or data diversity approach to build the models. In this article, we propose a novel ensemble technique that uses both cooperative and competitive characteristics of ensembles to gradually adjust the influence of the single forecasting algorithms in the ensemble based on their individual strengths, using a “coopetitive” weighting formula. The quality of the models observed during training is used to adaptively weight the models depending on the location in the input data space (i.e., depending on the weather situation). We compute the overall weights for a particular weather situation using both a spatial and a global weighting term. The experimental evaluation is performed on a data set consisting of data from 45 wind farms, which is made publicly available. We demonstrate that the technique is among the best-performing algorithms compared to other state-of-the-art algorithms and ensembles. Furthermore, the practical applicability of the proposed technique is discussed.
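A minimal sketch of situation-dependent ensemble weighting in the spirit of this abstract: a kernel-local error term is blended with a global one. The formula, bandwidth, and mixing parameter below are illustrative assumptions, not the paper's coopetitive weighting formula:

```python
import numpy as np

def situation_weights(x_query, X_hist, model_errors, bandwidth=1.0, global_mix=0.3):
    """Illustrative weighting (not the paper's exact formula): blend a local,
    weather-situation-dependent skill estimate with a global one."""
    # Kernel similarity of the query weather situation to historic situations.
    sim = np.exp(-np.sum((X_hist - x_query) ** 2, axis=1) / (2 * bandwidth ** 2))
    sim /= sim.sum()
    local_err = sim @ model_errors          # per-model error near this situation
    global_err = model_errors.mean(axis=0)  # per-model error over all situations
    skill = 1.0 / (global_mix * global_err + (1 - global_mix) * local_err + 1e-12)
    return skill / skill.sum()              # normalized ensemble weights

rng = np.random.default_rng(2)
X_hist = rng.normal(size=(200, 4))               # historic weather features
model_errors = np.abs(rng.normal(size=(200, 3))) # |error| of 3 models per situation
w = situation_weights(X_hist[0], X_hist, model_errors)
print(np.round(w, 3))
```

The ensemble forecast would then be the weighted sum of the member forecasts, with weights recomputed per weather situation.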
International Symposium on Temporal Representation and Reasoning | 2015
Andre Gensler; Thiemo Gruber; Bernhard Sick
Many efforts have been made during the past decades to investigate the use of orthogonal basis functions in the field of least-squares approximation. Certain bases of orthogonal functions allow for the definition of fast update algorithms for approximations of time series in sliding or growing time windows. In fields such as technical data analytics, temporal data mining, or pattern recognition in time series, appropriate time series representations are needed to measure the similarity of time series or to segment them. This article bridges the gap between mathematical basic research and applications by making fast update techniques for standard polynomials and trigonometric polynomials accessible for time series classification or regression (e.g., forecasting), anomaly or motif detection in time series, and similar tasks. This is of utmost importance for online or big data applications. Time series or segments of time series are represented by features derived from the orthogonal expansion coefficients of the approximating polynomials, which capture the essential behavior in the time or spectral domain, i.e., trends and periodic behavior, using standard or trigonometric polynomials. Our experiments show that a reliable computation is possible at a very low runtime compared to a conventional least-squares approach. The algorithms are implemented in Java, C(++), Matlab, and Python and made publicly available.
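For the trigonometric case, the expansion coefficients are dot products of the window with fixed orthogonal sine and cosine basis vectors, which is what makes fast updates possible. A small illustrative sketch, with window length and number of harmonics chosen arbitrarily:

```python
import numpy as np

def trig_features(y, K):
    """Expansion coefficients w.r.t. an orthogonal trigonometric basis on the
    window; each coefficient is a single dot product with a fixed basis vector."""
    N = len(y)
    t = np.arange(N)
    feats = [np.mean(y)]                         # constant (mean) term
    for k in range(1, K + 1):
        c = np.cos(2 * np.pi * k * t / N)
        s = np.sin(2 * np.pi * k * t / N)
        feats += [y @ c * 2 / N, y @ s * 2 / N]  # cosine and sine coefficients
    return np.array(feats)

t = np.arange(64)
y = 3.0 + 2.0 * np.sin(2 * np.pi * 2 * t / 64)   # offset plus one harmonic
f = trig_features(y, 3)
print(np.round(f, 3))
```

The feature vector recovers the offset and the amplitude of the second harmonic while the other coefficients vanish, i.e., it captures exactly the periodic behavior of the window.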
International Conference on Signal and Information Processing | 2014
Andre Gensler; Bernhard Sick; Jens Willkomm
For temporal data analytics, it is essential to assess the similarity of time series numerically. Similarity measures, in turn, require appropriate time series representation techniques. We present and discuss two techniques for time series representation. Eigenspace representations are based on a principal component analysis of time series. Shape space representations are based on polynomial least-squares approximations. Both aim at capturing the essential characteristics of time series while abstracting from less significant information such as noise. The similarity of time series can then be measured using a standard Euclidean distance in the eigenspace or the shape space, respectively. Experiments on a number of benchmark data sets for time series classification show that the measure based on a shape space representation outperforms several other linear (non-elastic) similarity measures, including a standard Euclidean measure applied to the raw time series (a standard approach in temporal data analytics), regarding both classification accuracy and run-time.
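A shape space representation can be sketched as follows: each series is mapped to its polynomial least-squares coefficients, and similarity is a Euclidean distance between coefficient vectors. The two shape classes below are invented toy data, not one of the paper's benchmark sets:

```python
import numpy as np

def shape_features(y, degree=3):
    """Polynomial least-squares coefficients as a low-dimensional
    'shape space' representation of a time series."""
    x = np.linspace(-1, 1, len(y))
    return np.polyfit(x, y, degree)

rng = np.random.default_rng(3)

def make(kind):
    t = np.linspace(0, 1, 100)
    base = t ** 2 if kind == 0 else 1 - t       # two toy shape classes
    return base + 0.05 * rng.normal(size=100)   # plus noise

train = [(make(k), k) for k in (0, 1) for _ in range(10)]
query = make(0)
qf = shape_features(query)
# 1-NN classification with Euclidean distance in the shape space.
pred = min(train, key=lambda s: np.linalg.norm(shape_features(s[0]) - qf))[1]
print(pred)
```

Because the coefficient vector is short, the distance computation is much cheaper than comparing raw series, while the noise is largely filtered out by the fit.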
Pattern Analysis and Applications | 2018
Andre Gensler; Bernhard Sick
The automated detection of points in a time series with a special meaning to a user, commonly referred to as the detection of events, is an important aspect of temporal data mining. These events are often points in a time series such as peaks, level changes, or sudden changes of spectral characteristics. Fast algorithms are needed for event detection in online applications or applications with huge time series data sets. In this article, we present a very fast algorithm for event detection that learns detection criteria from labeled sample time series (i.e., time series in which events are marked). The algorithm is based on fast transformations of time series into low-dimensional feature spaces and on probabilistic modeling techniques to identify the criteria in a supervised manner. Events are then found in a single fast pass over the signal (hence the name SwiftEvent) by evaluating learned thresholds on Mahalanobis distances in the feature space. We analyze the run-time complexity of SwiftEvent and demonstrate its application in several use cases with artificial and real-world data sets in comparison with other state-of-the-art techniques.
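The detection principle, thresholding Mahalanobis distances of window features against a model of normal behavior, can be sketched as follows. The mean/std features, the fixed threshold, and the synthetic level-shift signal are illustrative stand-ins for SwiftEvent's learned transformations and criteria:

```python
import numpy as np

rng = np.random.default_rng(4)
signal = rng.normal(size=1000)
for e in (200, 600):                 # inject two level-shift events
    signal[e:e + 50] += 4.0

W = 20
# Sliding-window features: mean and std (stand-ins for the paper's transforms).
feats = np.array([[signal[i:i + W].mean(), signal[i:i + W].std()]
                  for i in range(len(signal) - W)])

# Probabilistic model of "normal" windows from an event-free prefix.
normal = feats[:150]
mu = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal.T))

# Mahalanobis distance of every window to the normal model, in one pass.
diff = feats - mu
d = np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))
detected = np.where(d > 6.0)[0]      # threshold chosen by hand here
print(detected.size)
```

Both injected events produce windows far outside the normal model, so a single pass with one distance computation per window is enough to flag them.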
Systems, Man, and Cybernetics | 2016
Andre Gensler; Bernhard Sick; Vitali Pankraz
Power forecasting for renewable energy power plants has been a highly active field of research during the past decade. In order to support the operation of the power grid, sophisticated algorithms have to predict the future development of power generation. Algorithms in the class of analog ensembles forecast by finding historically similar situations (e.g., by comparing weather situations) and merging the historic power generation time series of similar periods into an overall power forecast. However, these algorithms often use only very simple similarity measures, which do not make optimal use of the available historic information. In this article, we propose and compare advanced search strategies for similarity assessment. These strategies include the assessment of forecasting time periods as a whole instead of granular points in time, and joint time windows of historic and future weather situations. Also, historic power time series are used directly in the comparison strategy. Furthermore, we propose a combined scheme to perform automated feature selection and weighting for individual weather parameters. We evaluate the proposed technique on a solar farm data set consisting of 21 photovoltaic power plants, which is made publicly available. The evaluation shows that the advanced comparison strategies not only offer an advantage over simple strategies, they even outperform other reference techniques, e.g., those based on physical models.
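A basic analog ensemble with a simple Euclidean similarity measure, the kind of baseline this abstract argues can be improved upon, might look like this. The features, data, and choice of k are invented for illustration:

```python
import numpy as np

def analog_forecast(weather_now, W_hist, p_hist, k=10):
    """Analog-ensemble sketch: find the k historically most similar weather
    situations and average their observed power outputs."""
    dist = np.linalg.norm(W_hist - weather_now, axis=1)  # simple Euclidean similarity
    idx = np.argsort(dist)[:k]                           # k best analogs
    return p_hist[idx].mean()

rng = np.random.default_rng(5)
W_hist = rng.uniform(size=(500, 3))   # historic weather features (e.g., irradiance, temperature, cloud cover)
p_hist = 2.0 * W_hist[:, 0] + 0.1 * rng.normal(size=500)  # power driven mostly by the first feature
w_now = np.array([0.8, 0.5, 0.5])
forecast = analog_forecast(w_now, W_hist, p_hist)
print(round(forecast, 2))
```

The advanced strategies described above replace the single-point Euclidean comparison with whole-period comparisons, joint historic/future windows, and learned feature weights.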
IEEE Symposium Series on Computational Intelligence | 2016
Andre Gensler; Bernhard Sick; Stephan Vogt
The evaluation of the performance of forecasting algorithms in the area of power forecasting for renewable power plants is the basis for model comparison. There is a multitude of different evaluation scores, which, however, do not seem to be universally applied. In this article, we aim to broaden the understanding of the function of, and the relationship between, different deterministic error scores. A categorization by normalization technique is introduced, which simplifies the process of choosing the appropriate error score for an application. A number of popular error scores are investigated in a case study that details how the scores develop given different forms of error distributions. Furthermore, the behavior of different error scores on a real-world wind farm data set is analyzed. A correlation analysis between the evaluated scores gives insights into how these scores relate to each other. Properties of, and notes on the applicability of, the presented scores are detailed in a discussion. Finally, an outlook on future work in the area of probabilistic error scores is given.
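A few common deterministic error scores of the kind discussed here, sketched in NumPy. Normalization by installed capacity is one convention among several covered by the categorization; the toy numbers are illustrative:

```python
import numpy as np

def error_scores(y_true, y_pred, capacity):
    """Common deterministic power-forecasting error scores."""
    e = y_pred - y_true
    return {
        "BIAS": e.mean(),                              # systematic over-/underestimation
        "MAE": np.abs(e).mean(),                       # mean absolute error
        "RMSE": np.sqrt((e ** 2).mean()),              # penalizes large errors more strongly
        "nRMSE": np.sqrt((e ** 2).mean()) / capacity,  # RMSE normalized by installed capacity
    }

y_true = np.array([0.0, 1.0, 2.0, 3.0])
y_pred = np.array([0.5, 1.0, 1.5, 3.0])
scores = error_scores(y_true, y_pred, capacity=4.0)
print({k: round(v, 4) for k, v in scores.items()})
```

Note how the same error vector yields a zero BIAS but nonzero MAE and RMSE, which is exactly why comparing models on a single score can be misleading.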
LWA | 2014
Andre Gensler; Bernhard Sick