Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Stefan Oehmcke is active.

Publication


Featured research published by Stefan Oehmcke.


Joint German/Austrian Conference on Artificial Intelligence (Künstliche Intelligenz) | 2015

Event Detection in Marine Time Series Data

Stefan Oehmcke; Oliver Zielinski; Oliver Kramer

Automatic detection of special events in large data sets is often more interesting for data analysis than regular patterns. In particular, the processes behind multivariate time series data can be better understood if deviations from normal behavior are found. In this work, we apply a machine learning event detection method to a new application in the marine domain. The marine long-term data from the stationary platform at Spiekeroog, called the Time Series Station, are a challenge, because noise, sensor drift, and missing data complicate the analysis. We acquire labels for evaluation with the help of experts and test different approaches that incorporate time context into the patterns. The event detection method used is the local outlier factor (LOF). To improve results, we apply dimensionality reduction to the data. The analysis of the results shows that the machine learning techniques can find special events that are of interest to experts in the field.
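The pipeline described above, windowed time-series features scored by the local outlier factor, can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the paper's setup: the window width, `k`, and the test series are arbitrary choices.

```python
import numpy as np

def window_features(series, w=3):
    """Embed a univariate series into overlapping windows to add time context."""
    return np.lib.stride_tricks.sliding_window_view(series, w)

def lof_scores(X, k=5):
    """Minimal Local Outlier Factor: scores well above 1 indicate outliers."""
    n = len(X)
    # pairwise Euclidean distances, self-distance masked out
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    knn = np.argsort(D, axis=1)[:, :k]            # indices of the k nearest neighbours
    k_dist = D[np.arange(n), knn[:, -1]]          # distance to the k-th neighbour
    # reachability distance from p to o: max(k_dist(o), d(p, o))
    reach = np.maximum(k_dist[knn], D[np.arange(n)[:, None], knn])
    lrd = k / reach.sum(axis=1)                   # local reachability density
    return lrd[knn].mean(axis=1) / lrd            # LOF = mean neighbour lrd / own lrd
```

A spike injected into a smooth series then yields its highest LOF score in the windows covering the spike.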


International Joint Conference on Neural Networks | 2016

kNN ensembles with penalized DTW for multivariate time series imputation

Stefan Oehmcke; Oliver Zielinski; Oliver Kramer

The imputation of partially missing multivariate time series data is critical for its correct analysis. The biggest problem in time series data is consecutive missing values, which would result in serious information loss if simply dropped from the dataset. To address this problem, we adapt the k-Nearest Neighbors algorithm in a novel way for multivariate time series imputation. The algorithm employs Dynamic Time Warping as its distance metric instead of point-wise distance measurements. We preprocess the data with linear interpolation to create complete windows for Dynamic Time Warping. The algorithm derives global distance weights from the correlation between features, and consecutive missing values are penalized by individual distance weights to reduce error transfer from the linear interpolation. Finally, efficient ensemble methods improve the accuracy. Experimental results show accurate imputations on datasets with a high correlation between features. Furthermore, our algorithm handles consecutive missing values better than state-of-the-art algorithms.
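A minimal, single-feature sketch of the core idea, imputing a gap from the k DTW-nearest complete windows, might look like this. The paper's correlation-based distance weights, penalties for consecutive gaps, and ensembling are omitted; all names are illustrative.

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def knn_dtw_impute(window, mask, references, k=3):
    """Fill missing positions (mask==True) in `window` from the k DTW-nearest
    complete reference windows.  The gap is linearly interpolated first so a
    complete window exists for DTW, mirroring the preprocessing step above."""
    filled = window.copy()
    idx = np.arange(len(window))
    filled[mask] = np.interp(idx[mask], idx[~mask], window[~mask])
    order = np.argsort([dtw(filled, r) for r in references])
    neighbours = references[order[:k]]
    filled[mask] = neighbours[:, mask].mean(axis=0)   # replace interpolation by kNN estimate
    return filled
```

On phase-shifted sinusoid references, the imputed gap lands close to the true values.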


Neurocomputing | 2018

Input quality aware convolutional LSTM networks for virtual marine sensors

Stefan Oehmcke; Oliver Zielinski; Oliver Kramer

The harsh environmental conditions of marine areas make continuous observation challenging. To temporarily or permanently replace faulty hardware sensors, reliable virtual sensors are becoming more and more important. This paper introduces a deep learning architecture for a marine virtual sensor application that is able to utilize input quality information. We propose a novel input-quality-based dropout layer to take advantage of information about the quality of the input sensors. The virtual sensor models are built upon convolutional and recurrent long short-term memory layers. We apply a time dimensionality reduction method called exPAA that retains finer details from recent values while keeping coarser information about the past available to the model. As interpretable reliability is important for virtual sensors, we include predictive uncertainty based on dropout and Monte Carlo predictions in our neural network. Experimental results show that we perform better than the baseline, and our input-quality-based dropout layer improves on these results even further. We also provide insights into the learned uncertainty intervals as well as the linear and non-linear correlations utilized by the first convolutional layer.
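The input-quality idea can be illustrated with a small NumPy sketch. This is a simplification and not the paper's layer: here the per-sensor drop probability simply grows linearly as reported quality falls, with standard inverted-dropout rescaling.

```python
import numpy as np

def quality_dropout(x, quality, p_max=0.5, rng=None):
    """Drop each input feature with probability growing as its quality falls.
    A perfectly reliable sensor (quality=1) is never dropped; quality=0 is
    dropped with probability p_max.  Kept values are rescaled as in inverted
    dropout so the expected activation is unchanged."""
    rng = np.random.default_rng(rng)
    p_drop = p_max * (1.0 - np.asarray(quality))
    keep = rng.random(x.shape) >= p_drop
    return x * keep / (1.0 - p_drop)
```

With quality 1.0 the channel passes through untouched, while a quality-0.5 channel loses roughly a quarter of its values but keeps its mean.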


Human-Computer Interaction with Mobile Devices and Services | 2013

Storyteller: in-situ reflection on study experiences

Benjamin Poppinga; Stefan Oehmcke; Wilko Heuten; Susanne Boll

Diary studies are often applied in HCI research to collect qualitative user impressions. Unfortunately, the period between the creation of a diary entry and its later reflection can be too long, which limits currentness and contextuality. This eventually results in incomplete or misinterpreted data. In this paper we present Storyteller, a mobile application that allows quick creation of diary entries and encourages users to reflect on them in situ through a storytelling approach. We argue that this can lead to more accurate and substantial qualitative insights.


International Symposium on Neural Networks | 2017

Recurrent neural networks and exponential PAA for virtual marine sensors

Stefan Oehmcke; Oliver Zielinski; Oliver Kramer

Virtual sensors are becoming more and more important as replacement and quality-control tools for expensive and fragile hardware sensors. We introduce a virtual sensor application with marine sensor data from two data sources. The virtual sensor models are built upon recurrent neural networks (RNNs). To take full advantage of past data, we employ the time dimensionality reduction method piecewise aggregate approximation (PAA). We present an extension of this method, called exponential PAA (ExPAA), that retains finer details from recent values while preserving less exact information about the past. Experimental results demonstrate that RNNs benefit from this extension and confirm the stability and usability of our virtual sensor models over a five-month period of multivariate marine time series data.
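A toy sketch contrasting plain PAA with an exponential variant helps make the idea concrete. The exact width schedule of the paper's ExPAA may differ; this only illustrates "finer near the present, coarser in the past".

```python
import numpy as np

def paa(series, n_segments):
    """Piecewise Aggregate Approximation: mean of equal-width segments."""
    return np.array([seg.mean() for seg in np.array_split(series, n_segments)])

def exp_paa(series, n_segments, base=2.0):
    """Exponential PAA sketch: segment widths shrink geometrically toward the
    present, so recent values are summarised at finer resolution than old ones."""
    w = base ** np.arange(n_segments, 0, -1)              # oldest segment widest
    bounds = np.round(np.cumsum(w / w.sum()) * len(series)).astype(int)
    return np.array([seg.mean() for seg in np.split(series, bounds[:-1])])
```

On a ramp of 60 values with 4 segments, plain PAA averages four 15-value blocks, while the exponential variant uses blocks of 32, 16, 8, and 4 values, so the most recent segment carries the finest detail.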


European Conference on Applications of Evolutionary Computation | 2015

Analysis of Diversity Methods for Evolutionary Multi-objective Ensemble Classifiers

Stefan Oehmcke; Justin Heinermann; Oliver Kramer

Ensemble classifiers are strong and robust methods for classification and regression tasks. Considering the balance between runtime and classifier accuracy, the learning problem becomes a multi-objective optimization problem. In this work, we propose an evolutionary multi-objective algorithm based on non-dominated sorting that balances the runtime and accuracy properties of nearest neighbor classifier ensembles and decision tree ensembles. We identify relevant ensemble parameters with a significant impact on accuracy and runtime. In the experimental part of this paper, we analyze the behavior on typical classification benchmark problems.
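The core of such an approach, fast non-dominated sorting over (runtime, error) objective vectors, can be sketched as follows. This is a generic NSGA-II-style routine, not the paper's exact implementation.

```python
import numpy as np

def non_dominated_sort(objectives):
    """Fast non-dominated sorting for minimisation problems.
    Returns a list of fronts, each a list of row indices of `objectives`."""
    obj = np.asarray(objectives, dtype=float)
    n = len(obj)
    dominates = lambda a, b: np.all(obj[a] <= obj[b]) and np.any(obj[a] < obj[b])
    dominated_by = [set() for _ in range(n)]   # solutions each index dominates
    counts = np.zeros(n, dtype=int)            # how many dominate each solution
    for i in range(n):
        for j in range(n):
            if dominates(i, j):
                dominated_by[i].add(j)
                counts[j] += 1
    fronts, current = [], [i for i in range(n) if counts[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
            # once a front is removed, newly undominated points form the next one
        current = nxt
    return fronts
```

For ensemble selection, each index would correspond to one parameterisation, with its measured runtime and validation error as the two objectives.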


International Symposium on Neural Networks | 2017

Manifold learning with iterative dimensionality photo-projection

Daniel Lückehe; Stefan Oehmcke; Oliver Kramer

In this work, we propose a new dimensionality reduction approach for generating low-dimensional embeddings of high-dimensional data based on an iterative procedure. The data set's dimensions are sorted by their variance. Starting with the highest variance, the dimensions are iteratively projected onto the embedding. The projection can be seen as taking a photo of a two-dimensional motif with a depth effect. The approach is flexible and offers numerous extensions for future work. We introduce a basic variant and illustrate its working mechanism with numerous visualizations. The approach is experimentally analyzed on a small set of benchmark problems. Exemplary embeddings and evaluations based on the Shepard-Kruskal measure and the co-ranking matrix complement the analysis. The new approach shows competitive results in comparison to well-established dimensionality reduction methods.


International Conference on Artificial Neural Networks | 2018

Direct Training of Dynamic Observation Noise with UMarineNet

Stefan Oehmcke; Oliver Zielinski; Oliver Kramer

Accurate uncertainty predictions are crucial to assess the reliability of a model, especially for neural networks. Part of this uncertainty is the observation noise, which is dynamic in our marine virtual sensor task. Typically, dynamic noise is not trained directly but approximated through terms in the loss function. Unfortunately, this noise loss function needs to be scaled by a trade-off parameter to achieve accurate uncertainties. In this paper we propose an upgrade to the existing architecture that increases interpretability and introduces a novel direct training procedure for dynamic noise modelling. To that end, we train the point prediction model and the noise model separately. We present a new loss function that requires Monte Carlo runs of the model to train directly for uncertainty prediction accuracy. In an experimental evaluation, we show that in most tested cases the uncertainty prediction is more accurate than with the manually tuned trade-off parameter. Because of the architectural changes, we are able to analyze the importance of individual parts of the time series for our prediction.
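The Monte-Carlo flavour of the training signal can be illustrated with a small sketch: aggregate stochastic forward passes into a point prediction and spread, then score a predicted noise level by the Gaussian likelihood of the residuals. The NLL here is a stand-in, not the paper's loss, and all names are illustrative.

```python
import numpy as np

def mc_predict(predict, x, n_runs=100):
    """Aggregate stochastic (e.g. dropout-enabled) forward passes into a
    point prediction and a spread estimate."""
    runs = np.stack([predict(x) for _ in range(n_runs)])
    return runs.mean(axis=0), runs.std(axis=0)

def noise_nll(y_true, y_pred, sigma):
    """Gaussian negative log-likelihood of the residuals under a predicted
    dynamic noise level sigma: low only when sigma matches the real errors."""
    var = np.asarray(sigma) ** 2
    return np.mean(0.5 * np.log(2 * np.pi * var) + (y_true - y_pred) ** 2 / (2 * var))
```

Minimising such a score over the noise model's output directly rewards uncertainty estimates that match the observed residuals.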


KI | 2018

Knowledge Sharing for Population Based Neural Network Training

Stefan Oehmcke; Oliver Kramer

Finding good hyper-parameter settings for training neural networks is challenging, as the optimal settings can change during the training phase and also depend on random factors such as weight initialization or random batch sampling. Most state-of-the-art methods for adapting these settings are either static (e.g. learning rate schedulers) or dynamic (e.g. the Adam optimizer), but they only change some of the hyper-parameters and do not deal with the initialization problem. In this paper, we extend the asynchronous evolutionary algorithm population based training, which modifies all given hyper-parameters during training and inherits weights. We introduce a novel knowledge distillation scheme: only the best individuals of the population are allowed to share part of their knowledge about the training data with the whole population. This embraces the randomness between the models rather than avoiding it, because the resulting diversity of models is important for the population's evolution. Our experiments on MNIST, fashionMNIST, and EMNIST (MNIST split) with two classic model architectures show significant improvements in convergence and model accuracy compared to the original algorithm. In addition, we conduct experiments on EMNIST (balanced split) employing a ResNet and a WideResNet architecture to include complex architectures and data as well.
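A toy version of the underlying population based training loop, run on a 1-D quadratic rather than a network, might look like the sketch below. The paper's contribution additionally distills knowledge from the best models' predictions, which is omitted here; only the exploit/explore skeleton is shown, and all names are illustrative.

```python
import numpy as np

def pbt_minimise(f, grad, pop_size=8, steps=60, rng=0):
    """Toy population based training on a 1-D objective: each member trains
    with its own learning rate; periodically the worst members inherit the
    best member's weights (exploit) and perturb their learning rate (explore)."""
    rng = np.random.default_rng(rng)
    theta = rng.normal(0, 3, pop_size)          # the "weights"
    lr = 10 ** rng.uniform(-3, -1, pop_size)    # per-member hyper-parameter
    for step in range(1, steps + 1):
        theta -= lr * grad(theta)               # each member does its own training step
        if step % 10 == 0:                      # exploit/explore phase
            order = np.argsort(f(theta))
            best, worst = order[0], order[-pop_size // 2:]
            theta[worst] = theta[best]          # inherit weights from the best
            lr[worst] = lr[best] * rng.choice([0.8, 1.25], size=len(worst))
    return theta, lr
```

Even this minimal loop finds a near-optimal solution while simultaneously drifting the population's learning rates toward values that work.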


International Conference on Neural Information Processing | 2017

Spatio-Temporal Wind Power Prediction Using Recurrent Neural Networks

Wei Lee Woon; Stefan Oehmcke; Oliver Kramer

While wind is an abundant source of energy, integrating wind power into existing electricity grids is a major challenge due to its inherent variability. The ability to accurately predict future generation output would greatly mitigate this problem and is thus extremely valuable. Numerical Weather Prediction (NWP) techniques have been the basis of many wind prediction approaches, but the use of machine learning techniques is steadily gaining ground. Deep Learning (DL) is a sub-class of machine learning which has been particularly successful and is now the state of the art for a variety of classification and regression problems, notably image processing and natural language processing. In this paper, we demonstrate the use of Recurrent Neural Networks, a type of DL architecture, to extract patterns from the spatio-temporal information collected from neighboring turbines. These are used to generate short-term wind energy forecasts, which are then benchmarked against various prediction algorithms. The results show significant improvements over forecasts produced using state-of-the-art algorithms.

Collaboration


Dive into Stefan Oehmcke's collaborations.

Top Co-Authors

Susanne Boll

University of Oldenburg


Wei Lee Woon

Masdar Institute of Science and Technology


Manish Aggarwal

Indian Institute of Management Ahmedabad
