Publications


Featured research published by Katharina Eggensperger.


Human Brain Mapping | 2017

Deep learning with convolutional neural networks for EEG decoding and visualization

Robin Tibor Schirrmeister; Jost Tobias Springenberg; Lukas Dominique Josef Fiederer; Martin Glasstetter; Katharina Eggensperger; Michael Tangermann; Frank Hutter; Wolfram Burgard; Tonio Ball

Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end‐to‐end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end‐to‐end EEG analysis, but a better understanding of how to design and train ConvNets for end‐to‐end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task‐related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG‐based brain mapping. Hum Brain Mapp 38:5391–5420, 2017.
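
As a rough illustration of the architecture family studied in the paper, the PyTorch sketch below combines a temporal convolution, a spatial convolution across electrodes, batch normalization, and ELU activations applied to raw EEG. All layer sizes here are illustrative assumptions, not the paper's exact configuration; the authors' reference models are available in their braindecode library.

```python
# Minimal sketch of a deep ConvNet for raw-EEG decoding in the spirit of
# Schirrmeister et al. (2017). Layer sizes and counts are illustrative
# assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class EEGConvNet(nn.Module):
    def __init__(self, n_channels=22, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolution over raw samples, then spatial filtering
            # across electrodes -- the split used in the paper's first block.
            nn.Conv2d(1, 25, kernel_size=(1, 10)),
            nn.Conv2d(25, 25, kernel_size=(n_channels, 1), bias=False),
            nn.BatchNorm2d(25),   # batch normalization, as in the paper
            nn.ELU(),             # exponential linear units, as in the paper
            nn.MaxPool2d(kernel_size=(1, 3)),
            nn.Conv2d(25, 50, kernel_size=(1, 10), bias=False),
            nn.BatchNorm2d(50),
            nn.ELU(),
            nn.MaxPool2d(kernel_size=(1, 3)),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d((1, 1)),
            nn.Flatten(),
            nn.Linear(50, n_classes),
        )

    def forward(self, x):
        # x: (batch, 1, n_channels, n_samples) -- raw EEG, no handcrafted features
        return self.classifier(self.features(x))

# One crop of raw EEG: 16 trials, 22 electrodes, 500 samples.
logits = EEGConvNet()(torch.randn(16, 1, 22, 500))
print(logits.shape)  # torch.Size([16, 4])
```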


International Conference on Robotics and Automation | 2016

Automatic bone parameter estimation for skeleton tracking in optical motion capture

Tobias Schubert; Katharina Eggensperger; Alexis Gkogkidis; Frank Hutter; Tonio Ball; Wolfram Burgard

Motion analysis is important in a broad range of contexts, including animation, biomechanics, robotics, and experiments investigating animal behavior. For applications in which tracking accuracy is a main requirement, passive optical motion capture systems are widely used. Many skeleton tracking methods based on such systems use a predefined skeleton model, which is scaled once, in the initialization step, to the individual size of the character to be tracked. However, there are considerable differences in bone length relations across genders, and even more so across mammal species. In practice, the optimal skeleton model therefore has to be determined in a manual and time-consuming process. In this paper, we reformulate this task as an optimization problem: rescaling a rough hierarchical skeleton structure so as to optimize probabilistic skeleton tracking performance. We solve this problem with state-of-the-art blackbox optimization methods based on sequential model-based Bayesian optimization (SMBO). We compare different SMBO methods on three real-world datasets involving humans and an animal, demonstrating that we can automatically find skeleton structures for previously unseen mammals. The same methods also enable an automated choice of a suitable starting frame for initializing tracking.
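
To make the blackbox formulation concrete, here is a minimal sketch of bone-length scaling as an SMBO problem using scikit-optimize's gp_minimize. The tracking_error objective is a hypothetical stand-in for evaluating the probabilistic tracker with a given set of per-bone scale factors; the bone count and bounds are assumptions.

```python
# Sketch of the paper's idea -- treating bone-length scaling as blackbox
# optimization -- using scikit-optimize's SMBO loop.
from skopt import gp_minimize
from skopt.space import Real

N_BONES = 5  # illustrative; a real skeleton model has more segments

def tracking_error(scales):
    # Placeholder objective: run the tracker with the rescaled skeleton and
    # return a loss (e.g., a marker residual or negative tracking likelihood).
    return sum((s - 1.1) ** 2 for s in scales)  # dummy quadratic for the demo

# Search space: one multiplicative scale factor per bone of the rough
# hierarchical skeleton structure.
space = [Real(0.5, 2.0, name=f"bone_{i}") for i in range(N_BONES)]

result = gp_minimize(tracking_error, space, n_calls=30, random_state=0)
print("best scales:", [round(s, 3) for s in result.x])
print("best error: ", result.fun)
```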


International Joint Conference on Artificial Intelligence | 2018

Neural Networks for Predicting Algorithm Runtime Distributions

Katharina Eggensperger; Marius Lindauer; Frank Hutter

Many state-of-the-art algorithms for solving hard combinatorial problems in artificial intelligence (AI) include elements of stochasticity that lead to high variations in runtime, even for a fixed problem instance. Knowledge about the resulting runtime distributions (RTDs) of algorithms on given problem instances can be exploited in various meta-algorithmic procedures, such as algorithm selection, portfolios, and randomized restarts. Previous work has shown that machine learning can be used to individually predict mean, median and variance of RTDs. To establish a new state-of-the-art in predicting RTDs, we demonstrate that the parameters of an RTD should be learned jointly and that neural networks can do this well by directly optimizing the likelihood of an RTD given runtime observations. In an empirical study involving five algorithms for SAT solving and AI planning, we show that neural networks predict the true RTDs of unseen instances better than previous methods, and can even do so when only few runtime observations are available per training instance.
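
The core idea, a network that jointly outputs the parameters of a runtime distribution and is trained by maximizing the likelihood of observed runtimes, can be sketched in a few lines of PyTorch. The lognormal family, layer sizes, and training data below are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: a neural network maps instance features to the parameters
# of a runtime distribution (here a lognormal, a common choice for runtimes)
# and is trained by directly minimizing the negative log-likelihood.
import torch
import torch.nn as nn

class RTDNet(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 2),  # jointly predict both lognormal parameters
        )

    def forward(self, x):
        out = self.net(x)
        mu, log_sigma = out[:, 0], out[:, 1]
        return mu, log_sigma.exp()  # sigma must be positive

def nll(mu, sigma, runtimes):
    # Negative log-likelihood of observed runtimes under LogNormal(mu, sigma).
    dist = torch.distributions.LogNormal(mu, sigma)
    return -dist.log_prob(runtimes).mean()

# Dummy data: 64 instances, 10 features each, one observed runtime apiece.
x, y = torch.randn(64, 10), torch.rand(64) * 100 + 0.1
model = RTDNet(n_features=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nll(*model(x), y)
    loss.backward()
    opt.step()
```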


Machine Learning | 2018

Efficient benchmarking of algorithm configurators via model-based surrogates

Katharina Eggensperger; Marius Lindauer; Holger H. Hoos; Frank Hutter; Kevin Leyton-Brown

The optimization of algorithm (hyper-)parameters is crucial for achieving peak performance across a wide range of domains, ranging from deep neural networks to solvers for hard combinatorial problems. However, the proper evaluation of new algorithm configuration (AC) procedures (or configurators) is hindered by two key hurdles. First, AC scenarios are hard to set up, including the target algorithm to be optimized and the problem instances to be solved. Second, and even more significantly, they are computationally expensive: a single configurator run involves many costly runs of the target algorithm. Here, we propose a benchmarking approach that uses surrogate scenarios, which are computationally cheap while remaining close to the original AC scenarios. These surrogate scenarios approximate the response surface corresponding to true target algorithm performance using a regression model. In our experiments, we construct and evaluate surrogate scenarios for hyperparameter optimization as well as for AC problems that involve performance optimization of solvers for hard combinatorial problems. We generalize previous work by building surrogates for AC scenarios with multiple problem instances, stochastic target algorithms and censored running time observations. We show that our surrogate scenarios capture overall important characteristics of the original AC scenarios from which they were derived, while being much easier to use and orders of magnitude cheaper to evaluate.
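
A minimal sketch of the surrogate idea: fit a regression model on logged (configuration, instance features) → runtime data, then answer a configurator's queries from the model instead of running the real target algorithm. The random forest, synthetic data, and log-runtime transform below are illustrative assumptions in the spirit of the paper, not its exact pipeline.

```python
# Model-based surrogate sketch: the regression model replaces costly target
# algorithm runs during configurator benchmarking. Data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_logged = 2000
configs = rng.uniform(0, 1, size=(n_logged, 4))    # 4 target-algorithm parameters
instances = rng.uniform(0, 1, size=(n_logged, 3))  # 3 instance features
X = np.hstack([configs, instances])
runtime = np.exp(X @ rng.normal(size=7) + rng.normal(scale=0.1, size=n_logged))

# Fit in log-runtime space, a common choice for heavy-tailed runtimes.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X, np.log(runtime))

def surrogate_run(config, instance_features):
    """Cheap stand-in for one costly run of the target algorithm."""
    x = np.hstack([config, instance_features]).reshape(1, -1)
    return float(np.exp(surrogate.predict(x)[0]))

print(surrogate_run(rng.uniform(0, 1, 4), rng.uniform(0, 1, 3)))
```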


Neural Information Processing Systems | 2015

Efficient and robust automated machine learning

Matthias Feurer; Aaron Klein; Katharina Eggensperger; Jost Tobias Springenberg; Manuel Blum; Frank Hutter


National Conference on Artificial Intelligence | 2015

Efficient benchmarking of hyperparameter optimizers via surrogates

Katharina Eggensperger; Frank Hutter; Holger H. Hoos; Kevin Leyton-Brown


arXiv: Learning | 2017

Deep learning with convolutional neural networks for brain mapping and decoding of movement-related information from the human EEG

Robin Tibor Schirrmeister; Jost Tobias Springenberg; Lukas Dominique Josef Fiederer; Martin Glasstetter; Katharina Eggensperger; Michael Tangermann; Frank Hutter; Wolfram Burgard; Tonio Ball


MLAS'14: Proceedings of the 2014 International Conference on Meta-learning and Algorithm Selection, Volume 1201 | 2014

Surrogate benchmarks for hyperparameter optimization

Katharina Eggensperger; Frank Hutter; Holger H. Hoos; Kevin Leyton-Brown


National Conference on Artificial Intelligence | 2017

Efficient Parameter Importance Analysis via Ablation with Surrogates

Andre Biedenkapp; Marius Lindauer; Katharina Eggensperger; Frank Hutter; Chris Fawcett; Holger H. Hoos


IEEE Signal Processing in Medicine and Biology Symposium | 2017

Deep learning with convolutional neural networks for decoding and visualization of EEG pathology

Robin Tibor Schirrmeister; Lukas Gemein; Katharina Eggensperger; Frank Hutter; Tonio Ball

Collaboration


Dive into Katharina Eggensperger's collaborations.

Top Co-Authors

Holger H. Hoos

University of British Columbia

Tonio Ball

University of Freiburg

Kevin Leyton-Brown

University of British Columbia