
Publications


Featured research published by Fabian Gieseke.


Soft Computing | 2011

Short-Term Wind Energy Forecasting Using Support Vector Regression

Oliver Kramer; Fabian Gieseke

Wind energy prediction has an important part to play in a smart energy grid for load balancing and capacity planning. In this paper we explore whether wind measurements based on the existing infrastructure of windmills in neighboring wind parks can be learned with a soft computing approach for wind energy prediction in the ten-minute to six-hour range. To this end we employ Support Vector Regression (SVR) for time series forecasting and run experimental analyses on real-world wind data from the NREL western wind resource dataset. In the experimental part of the paper we concentrate on the loss function parameterization of SVR. We try to answer how far ahead a reliable wind forecast is possible and how much information from the past is necessary. We demonstrate the capabilities of SVR-based wind energy forecasting on the micro-scale level of a single wind grid point and on the larger scale of a whole wind park.
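The forecasting setup in the abstract above can be sketched with off-the-shelf tools. The following is a minimal illustration, not the paper's implementation: the NREL data is replaced by a synthetic wind-like series, and the window size, horizon, and SVR parameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in for a wind power series (the NREL data is not used here).
rng = np.random.default_rng(0)
t = np.arange(2000)
series = np.sin(2 * np.pi * t / 144) + 0.1 * rng.standard_normal(len(t))

def make_lagged(series, n_lags, horizon):
    """Turn a 1-D series into (past window, future value) training pairs."""
    X, y = [], []
    for i in range(len(series) - n_lags - horizon + 1):
        X.append(series[i:i + n_lags])
        y.append(series[i + n_lags + horizon - 1])
    return np.array(X), np.array(y)

# 12 past measurements as features, forecasting 3 steps ahead (assumed values).
X, y = make_lagged(series, n_lags=12, horizon=3)
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:-200], y[:-200])
preds = model.predict(X[-200:])
```

The epsilon parameter of the loss function, which the paper's experiments focus on, is the natural knob to vary in this setup.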


Neurocomputing | 2013

Wind energy prediction and monitoring with neural computation

Oliver Kramer; Fabian Gieseke; Benjamin Satzger

Wind energy has an important part to play as a renewable energy resource in a sustainable world. For a reliable integration of wind energy, high-dimensional wind time series have to be analyzed. Fault analysis and prediction are an important aspect in this context. The objective of this work is to show how methods from neural computation can serve as forecasting and monitoring techniques, contributing to a successful integration of wind into sustainable and smart energy grids. We employ support vector regression as a prediction method for wind energy time series. Furthermore, we use dimensionality reduction techniques like self-organizing maps for monitoring high-dimensional wind time series. The methods are briefly introduced, related work is presented, and experimental case studies are described by way of example. The experimental parts are based on real wind energy time-series data from the National Renewable Energy Laboratory (NREL) western wind resource data set.
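The monitoring side can be illustrated with a self-organizing map. This is not the authors' setup: the sketch below trains a minimal one-dimensional SOM on synthetic data, and the grid size, learning-rate, and neighborhood schedules are assumptions. Unusual states then show up as large quantization errors.

```python
import numpy as np

# Minimal 1-D self-organizing map: maps high-dimensional measurements
# onto a small chain of prototypes.
rng = np.random.default_rng(1)
data = rng.normal(size=(500, 10))          # stand-in for wind time-series windows

n_units = 8
weights = rng.normal(size=(n_units, data.shape[1]))
epochs = 20
for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs)                  # decaying learning rate
    sigma = max(2.0 * (1 - epoch / epochs), 0.5)     # shrinking neighborhood
    for x in data:
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
        dist = np.abs(np.arange(n_units) - bmu)
        h = np.exp(-dist ** 2 / (2 * sigma ** 2))             # neighborhood kernel
        weights += lr * h[:, None] * (x - weights)

def quantization_error(x):
    """Distance to the closest prototype; large values flag anomalies."""
    return np.linalg.norm(weights - x, axis=1).min()
```

After training, comparing the quantization error of incoming samples against the error distribution of normal data gives a simple monitoring signal.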


International Conference on Machine Learning | 2009

Fast evolutionary maximum margin clustering

Fabian Gieseke; Tapio Pahikkala; Oliver Kramer

The maximum margin clustering approach is a recently proposed extension of the concept of support vector machines to the clustering problem. Briefly stated, it aims at finding an optimal partition of the data into two classes such that the margin induced by a subsequent application of a support vector machine is maximal. We propose a method based on stochastic search to address this hard optimization problem. While a direct implementation would be infeasible for large data sets, we present an efficient computational shortcut for assessing the quality of intermediate solutions. Experimental results show that our approach outperforms existing methods in terms of clustering accuracy.
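The search scheme can be sketched as follows. This is a toy version of the idea only: the paper's contribution is an efficient computational shortcut for evaluating candidates, whereas this sketch refits a linear SVM from scratch for every candidate partition. Data and search budget are made up.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, _ = make_blobs(n_samples=60, centers=2, cluster_std=0.6, random_state=0)
rng = np.random.default_rng(0)

def margin(labels):
    """Margin induced by an SVM trained on the candidate partition."""
    if len(set(labels)) < 2:
        return -np.inf                       # degenerate partition
    svm = SVC(kernel="linear", C=1.0).fit(X, labels)
    return 1.0 / np.linalg.norm(svm.coef_)   # geometric margin = 1 / ||w||

labels = rng.integers(0, 2, size=len(X))
best = margin(labels)
for _ in range(300):                         # simple (1+1) stochastic search
    cand = labels.copy()
    cand[rng.integers(len(cand))] ^= 1       # flip one cluster assignment
    m = margin(cand)
    if m > best:
        labels, best = cand, m
```

Practical maximum margin clustering additionally constrains the class balance so the search does not favor splitting off single outliers; that constraint is omitted here for brevity.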


European Conference on Applications of Evolutionary Computation | 2013

Towards non-linear constraint estimation for expensive optimization

Fabian Gieseke; Oliver Kramer

Constraints can render a numerical optimization problem much more difficult to address. In many real-world optimization applications, however, such constraints are not explicitly given. Instead, one has access to some kind of a black-box that represents the (unknown) constraint function. Recently, we proposed a fast linear constraint estimator that was based on binary search. This paper extends these results by (a) providing an alternative scheme that resorts to the effective use of support vector machines and by (b) addressing the more general task of non-linear decision boundaries. In particular, we make use of active learning strategies from the field of machine learning to select reasonable training points for the recurrent application of the classifier. We compare both constraint estimation schemes on linear and non-linear constraint functions, and depict opportunities and pitfalls concerning the effective integration of such models into a global optimization process.
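The general scheme can be sketched as below. This is a hedged illustration: the hidden constraint, query budgets, and search ranges are made-up examples, and the uncertainty-sampling rule (query where the SVM's decision value is closest to zero) is one standard active-learning strategy, not necessarily the paper's exact choice.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
feasible = lambda x: x[0] ** 2 + x[1] ** 2 <= 1.0   # hidden non-linear constraint

# Seed with random queries plus one known-feasible and one known-infeasible
# point so both classes are present from the start.
X = np.vstack([[0.0, 0.0], [2.0, 2.0], rng.uniform(-2, 2, size=(18, 2))])
y = np.array([int(feasible(x)) for x in X])

for _ in range(10):                                  # active-learning rounds
    clf = SVC(kernel="rbf").fit(X, y)
    pool = rng.uniform(-2, 2, size=(200, 2))
    scores = np.abs(clf.decision_function(pool))     # small |f(x)| = uncertain
    x_new = pool[np.argmin(scores)]                  # query nearest the boundary
    X = np.vstack([X, x_new])
    y = np.append(y, int(feasible(x_new)))

clf = SVC(kernel="rbf").fit(X, y)                    # final constraint estimate
```

Each round spends one black-box evaluation where the current model is least certain, which concentrates the labeled points along the unknown decision boundary.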


International Conference on Data Mining | 2010

Resilient K-d Trees: K-Means in Space Revisited

Fabian Gieseke; Gabriel Moruz; Jan Vahrenhold

We develop a k-d tree variant that is resilient to a prescribed number of memory corruptions while still using only linear space. We show how to use this data structure in the context of clustering in high-radiation environments and demonstrate that our approach leads to a significantly higher resiliency rate compared to previous results.
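For background only: one classic ingredient of resilient data structures is replicating a value 2δ+1 times and reading it back by majority vote, which tolerates up to δ corruptions. The paper's k-d tree is more refined than this naive blow-up, keeping the overall space linear; the sketch below illustrates the baseline idea, not the paper's method.

```python
from collections import Counter

DELTA = 2                                   # assumed corruption bound

def resilient_store(value):
    """Store 2*DELTA + 1 redundant copies of a value."""
    return [value] * (2 * DELTA + 1)

def resilient_read(copies):
    """Majority vote survives up to DELTA corrupted copies."""
    return Counter(copies).most_common(1)[0][0]

copies = resilient_store(42)
copies[0], copies[3] = 7, -1                # simulate two memory corruptions
recovered = resilient_read(copies)
```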


Expert Systems with Applications | 2012

Evolutionary kernel density regression

Oliver Kramer; Fabian Gieseke

Highlights:
- We optimize the bandwidths of kernel regression with evolution strategies.
- We extend the Nadaraya-Watson estimator by local models with independent parameterization.
- The approach is more flexible than standard kernel regression.
- We optimize unsupervised kernel regression for dimensionality reduction with evolution strategies.

The Nadaraya-Watson estimator, also known as kernel regression, is a density-based regression technique. It weights output values with the relative densities in input space. The density is measured with kernel functions that depend on bandwidth parameters. In this work we present an evolutionary bandwidth optimizer for kernel regression. The approach is based on a robust loss function, leave-one-out cross-validation, and the CMSA-ES as optimization engine. A variant with locally parameterized Nadaraya-Watson models enhances the approach and allows the adaptation of the model to local characteristics of the data space. The unsupervised counterpart of kernel regression is an approach to learning principal manifolds. The learning problem of unsupervised kernel regression (UKR) is based on optimizing the latent variables, which is a multimodal problem with many local optima. We propose an evolutionary framework for the optimization of UKR based on scaling of initial locally linear embedding solutions and minimization of the cross-validation error. Both methods are analyzed experimentally.
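The two core ingredients can be sketched on toy data. This uses a simple (1+1)-ES instead of the paper's CMSA-ES, a plain squared loss instead of a robust one, and made-up data; it only shows how leave-one-out cross-validation drives the bandwidth search.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 6, 80))
y_obs = np.sin(X) + 0.1 * rng.standard_normal(len(X))

def nw_predict(x_query, X_train, y_train, h):
    """Nadaraya-Watson: kernel-weighted average of outputs (Gaussian, bandwidth h)."""
    w = np.exp(-((x_query[:, None] - X_train[None, :]) ** 2) / (2 * h ** 2))
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

def loo_error(h):
    """Mean squared leave-one-out error for bandwidth h."""
    idx = np.arange(len(X))
    errs = [(nw_predict(X[i:i + 1], X[idx != i], y_obs[idx != i], h)[0]
             - y_obs[i]) ** 2 for i in idx]
    return float(np.mean(errs))

h, step = 1.0, 0.5                     # start bandwidth and mutation strength
best = loo_error(h)
for _ in range(50):                    # (1+1)-ES: mutate, keep if better
    cand = max(abs(h + step * rng.standard_normal()), 0.05)
    e = loo_error(cand)
    if e < best:
        h, best = cand, e
```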


KI'11: Proceedings of the 34th Annual German Conference on Advances in Artificial Intelligence | 2011

Variance scaling for EDAs revisited

Oliver Kramer; Fabian Gieseke

Estimation of distribution algorithms (EDAs) are derivative-free optimization approaches based on the successive estimation of the probability density function of the best solutions and their subsequent sampling. It turns out that the success of EDAs in numerical optimization strongly depends on the scaling of the variance. The contribution of this paper is a comparison of various adaptive and self-adaptive variance scaling techniques for a Gaussian EDA. The analysis includes: (1) the Gaussian EDA without scaling, but with different selection pressures and population sizes, (2) the variance adaptation technique known as Silverman's rule of thumb, (3) σ-self-adaptation known from evolution strategies, and (4) transformation of the solution space by estimation of the Hessian. We discuss the results for the sphere function and its constrained counterpart.
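A minimal Gaussian EDA on the sphere function makes the estimate-and-sample loop concrete. The scaling constant, population sizes, and iteration count below are illustrative assumptions, not settings from the paper; the constant factor on the variance is the knob the paper's comparison is about.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, pop, elite = 10, 100, 25
c_var = 1.05                              # variance scaling counters collapse

mean = np.full(dim, 3.0)
std = np.full(dim, 1.0)
for _ in range(200):
    samples = mean + std * rng.standard_normal((pop, dim))
    fitness = (samples ** 2).sum(axis=1)  # sphere: f(x) = sum_i x_i^2
    best = samples[np.argsort(fitness)[:elite]]
    mean = best.mean(axis=0)              # ML estimate of the new mean
    std = c_var * best.std(axis=0)        # rescaled estimate of the std
```

With c_var = 1 the maximum-likelihood variance estimate shrinks faster than the mean can travel, which is the premature-convergence effect the scaling techniques address.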


Expert Systems with Applications | 2013

Learning morphological maps of galaxies with unsupervised regression

Oliver Kramer; Fabian Gieseke; Kai Lars Polsterer

Hubble's morphological classification of galaxies has found broad acceptance in astronomy for decades. Numerous extensions have been proposed in the past, mostly based on galaxy prototypes. In this work, we automatically learn morphological maps of galaxies with unsupervised machine learning methods that preserve neighborhood relations and data space distances. To this end, we focus on a stochastic variant of unsupervised nearest neighbors (UNN) for arranging galaxy prototypes on a two-dimensional map. UNN regression is the unsupervised counterpart of nearest neighbor regression for dimensionality reduction. In the experimental part of this article, we visualize the embeddings and compare the learning results achieved by various UNN parameterizations and related dimensionality reduction methods.
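A rough sketch of the UNN idea on random stand-in features (not the paper's stochastic variant): each point is placed, in random order, on the free node of a 2-D lattice where its nearest already-placed latent neighbor is most similar to it in data space. The lattice size and greedy placement rule are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(25, 5))            # stand-in for galaxy feature vectors
lattice = [(i, j) for i in range(5) for j in range(5)]

latent = {}                                # data index -> lattice node
for idx in rng.permutation(len(data)):
    free = [n for n in lattice if n not in latent.values()]
    if not latent:
        latent[idx] = free[0]              # first point goes anywhere
        continue

    def reconstruction_error(node):
        # The nearest already-placed latent neighbor reconstructs point idx;
        # a good node makes that neighbor similar in data space.
        placed = list(latent.items())
        dists = [np.hypot(node[0] - n[0], node[1] - n[1]) for _, n in placed]
        nn_idx = placed[int(np.argmin(dists))][0]
        return np.linalg.norm(data[idx] - data[nn_idx])

    latent[idx] = min(free, key=reconstruction_error)
```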


Journal of Discrete Algorithms | 2010

Pruning spanners and constructing well-separated pair decompositions in the presence of memory hierarchies

Fabian Gieseke; Joachim Gudmundsson; Jan Vahrenhold

Given a geometric graph G = (S, E) in R^d with constant dilation t and a positive constant ε, we show how to construct a (1+ε)-spanner of G with O(|S|) edges using O(sort(|E|)) memory transfers in the cache-oblivious model of computation. The main building block of our algorithm, and of independent interest in itself, is a new cache-oblivious algorithm for constructing a well-separated pair decomposition, which builds such a data structure for a given point set S ⊂ R^d using O(sort(|S|)) memory transfers.
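The guarantee a well-separated pair decomposition provides can be stated compactly: each pair of point sets fits in balls of some radius r whose distance apart is at least s·r. The checker below only illustrates that definition (with balls centered at the centroids, a simple over-approximation of the minimum enclosing balls); the cache-oblivious construction from the paper is not reproduced here.

```python
import numpy as np

def well_separated(A, B, s):
    """Return True if point sets A and B satisfy the s-well-separated condition."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    r = max(np.linalg.norm(A - ca, axis=1).max(),
            np.linalg.norm(B - cb, axis=1).max())
    # Distance between the two enclosing balls must be at least s * r.
    return np.linalg.norm(ca - cb) - 2 * r >= s * r
```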


International Conference on Pattern Recognition Applications and Methods | 2012

Sparse Quasi-Newton Optimization for Semi-Supervised Support Vector Machines

Fabian Gieseke; Antti Airola; Tapio Pahikkala; Oliver Kramer

Collaboration


Dive into Fabian Gieseke's collaborations.

Top Co-Authors

Jan Vahrenhold

Technical University of Dortmund


Christian Igel

University of Copenhagen


Antti Airola

University of Oldenburg


Gabriel Moruz

Goethe University Frankfurt


Benjamin Satzger

Vienna University of Technology
