Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Olaf Mersmann is active.

Publications


Featured research published by Olaf Mersmann.


Genetic and Evolutionary Computation Conference | 2011

Exploratory landscape analysis

Olaf Mersmann; Bernd Bischl; Heike Trautmann; Mike Preuss; Claus Weihs; Günter Rudolph

Exploratory Landscape Analysis subsumes a number of techniques employed to obtain knowledge about the properties of an unknown optimization problem, especially insofar as these properties are important for the performance of optimization algorithms. Whereas a first attempt might rely on high-level features designed by experts, we approach the problem from a different angle here, namely by using relatively cheap, low-level, computer-generated features. Interestingly, very few features are needed to separate the BBOB problem groups and also to relate a problem to high-level, expert-designed features, paving the way for automatic algorithm selection.
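The idea of cheap, computer-generated features can be illustrated with a toy sketch. This is not the paper's feature set; it merely computes two classic "y-distribution" statistics (skewness and excess kurtosis of sampled objective values) from a uniform random sample of the search space:

```python
import random
import statistics

def y_distribution_features(f, dim, n_samples=200, seed=0):
    """Sample the search space uniformly and summarize the distribution
    of objective values -- two cheap, low-level landscape features
    (illustrative only, not the paper's exact feature set)."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_samples)]
    ys = [f(x) for x in xs]
    mu = statistics.fmean(ys)
    sd = statistics.pstdev(ys)
    skew = sum(((y - mu) / sd) ** 3 for y in ys) / n_samples
    kurt = sum(((y - mu) / sd) ** 4 for y in ys) / n_samples - 3.0
    return {"y_skewness": skew, "y_kurtosis": kurt}

sphere = lambda x: sum(v * v for v in x)  # BBOB-style sphere function
print(y_distribution_features(sphere, dim=2))
```

Feature vectors of this kind, computed for many problems, can then feed a classifier that separates problem groups.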


Evolutionary Computation | 2012

Resampling methods for meta-model validation with recommendations for evolutionary computation

Bernd Bischl; Olaf Mersmann; Heike Trautmann; Claus Weihs

Meta-modeling has become a crucial tool in solving expensive optimization problems. Much of the work in the past has focused on finding a good regression method to model the fitness function. Examples include classical linear regression, splines, neural networks, Kriging and support vector regression. This paper specifically draws attention to the fact that assessing model accuracy is a crucial aspect in the meta-modeling framework. Resampling strategies such as cross-validation, subsampling, bootstrapping, and nested resampling are prominent methods for model validation and are systematically discussed with respect to possible pitfalls, shortcomings, and specific features. A survey of meta-modeling techniques within evolutionary optimization is provided. In addition, practical examples illustrating some of the pitfalls associated with model selection and performance assessment are presented. Finally, recommendations are given for choosing a model validation technique for a particular setting.
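The core of the cross-validation strategy discussed here can be sketched in a few lines of plain Python; the least-squares line below is a toy stand-in for whatever regression method (Kriging, splines, etc.) actually serves as the meta-model:

```python
import random

def k_fold_cv(xs, ys, fit, predict, k=5, seed=1):
    """Estimate a meta-model's prediction error by k-fold cross-validation:
    fit on k-1 folds, measure squared error on the held-out fold, average."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    fold_errors = []
    for fold in folds:
        held_out = set(fold)
        train = [i for i in idx if i not in held_out]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        mse = sum((predict(model, xs[i]) - ys[i]) ** 2 for i in fold) / len(fold)
        fold_errors.append(mse)
    return sum(fold_errors) / k

# Toy "meta-model": an ordinary least-squares line on noisy data.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def predict_line(model, x):
    intercept, slope = model
    return intercept + slope * x

rng = random.Random(0)
xs = [i / 10 for i in range(50)]
ys = [2 * x + 1 + rng.gauss(0, 0.1) for x in xs]
print(k_fold_cv(xs, ys, fit_line, predict_line))
```

One of the pitfalls the paper warns about is reusing the same data for model selection and final performance assessment, which nested resampling avoids.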


Annals of Mathematics and Artificial Intelligence | 2013

A novel feature-based approach to characterize algorithm performance for the traveling salesperson problem

Olaf Mersmann; Bernd Bischl; Heike Trautmann; Markus Wagner; Jakob Bossek; Frank Neumann

Meta-heuristics are frequently used to tackle NP-hard combinatorial optimization problems. With this paper we contribute to the understanding of the success of 2-opt based local search algorithms for solving the traveling salesperson problem (TSP). Although 2-opt is widely used in practice, it is hard to understand its success from a theoretical perspective. We take a statistical approach and examine the features of TSP instances that make the problem either hard or easy to solve. As a measure of problem difficulty for 2-opt we use the approximation ratio that it achieves on a given instance. Our investigations point out important features that make TSP instances hard or easy to be approximated by 2-opt.
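A minimal version of the 2-opt local search studied here can be sketched as follows (an illustrative implementation, not the authors' code): repeatedly reverse the tour segment between two cut points whenever the reversal shortens the tour, until no improving move remains.

```python
import math
import random

def tour_length(cities, tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(cities, tour):
    """Plain 2-opt local search: replace edges (a,b) and (c,d) by (a,c)
    and (b,d) -- i.e. reverse the segment between them -- whenever that
    shortens the tour, until a 2-opt local optimum is reached."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = (math.dist(cities[a], cities[c])
                         + math.dist(cities[b], cities[d])
                         - math.dist(cities[a], cities[b])
                         - math.dist(cities[c], cities[d]))
                if delta < -1e-12:
                    tour[i + 1:j + 1] = tour[i + 1:j + 1][::-1]
                    improved = True
    return tour

rng = random.Random(42)
cities = [(rng.random(), rng.random()) for _ in range(30)]
tour = list(range(30))
before = tour_length(cities, tour)
after = tour_length(cities, two_opt(cities, tour))
print(before, after)
```

The approximation ratio used as the hardness measure is then the length of such a local optimum divided by the length of an optimal tour.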


Foundations of Genetic Algorithms | 2013

A feature-based comparison of local search and the Christofides algorithm for the travelling salesperson problem

Samadhi Nallaperuma; Markus Wagner; Frank Neumann; Bernd Bischl; Olaf Mersmann; Heike Trautmann

Understanding the behaviour of well-known algorithms for classical NP-hard optimisation problems is still a difficult task. With this paper, we contribute to this research direction and carry out a feature-based comparison of local search and the well-known Christofides approximation algorithm for the Traveling Salesperson Problem. We use an evolutionary algorithm approach to construct easy and hard instances for the Christofides algorithm, where we measure hardness in terms of approximation ratio. Our results point out important features and lead to hard and easy instances for this famous algorithm. Furthermore, our cross-comparison gives new insights into the complementary benefits of the different approaches.
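The instance-construction idea can be sketched with a (1+1)-EA over city coordinates. This is an assumption-laden toy, not the paper's setup: a nearest-neighbour construction heuristic stands in for Christofides (which would need a minimum-weight matching solver), and brute-force enumeration on a tiny instance stands in for an exact TSP solver.

```python
import itertools
import math
import random

def nn_tour_length(cities):
    """Length of the nearest-neighbour tour from city 0 (a simple
    stand-in for the Christofides algorithm)."""
    unvisited = set(range(1, len(cities)))
    cur, total = 0, 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(cities[cur], cities[j]))
        total += math.dist(cities[cur], cities[nxt])
        unvisited.remove(nxt)
        cur = nxt
    return total + math.dist(cities[cur], cities[0])

def opt_tour_length(cities):
    """Exact optimum by brute force -- only viable for tiny instances."""
    n = len(cities)
    def length(perm):
        order = (0,) + perm + (0,)
        return sum(math.dist(cities[order[i]], cities[order[i + 1]])
                   for i in range(n))
    return min(length(p) for p in itertools.permutations(range(1, n)))

def evolve_hard_instance(n_cities=7, iters=100, seed=3):
    """(1+1)-EA: perturb one city with Gaussian noise and keep the
    offspring if the heuristic's approximation ratio does not drop."""
    rng = random.Random(seed)
    cities = [(rng.random(), rng.random()) for _ in range(n_cities)]
    ratio = nn_tour_length(cities) / opt_tour_length(cities)
    for _ in range(iters):
        cand = list(cities)
        i = rng.randrange(n_cities)
        x, y = cand[i]
        cand[i] = (min(1.0, max(0.0, x + rng.gauss(0, 0.1))),
                   min(1.0, max(0.0, y + rng.gauss(0, 0.1))))
        r = nn_tour_length(cand) / opt_tour_length(cand)
        if r >= ratio:
            cities, ratio = cand, r
    return cities, ratio

hard_instance, ratio = evolve_hard_instance()
print(ratio)
```

Maximising the ratio evolves hard instances for the heuristic; minimising it (flipping the acceptance condition) evolves easy ones.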


Learning and Intelligent Optimization | 2012

Local search and the traveling salesman problem: a feature-based characterization of problem hardness

Olaf Mersmann; Bernd Bischl; Jakob Bossek; Heike Trautmann; Markus Wagner; Frank Neumann

With this paper we contribute to the understanding of the success of 2-opt based local search algorithms for solving the traveling salesman problem (TSP). Although 2-opt is widely used in practice, it is hard to understand its success from a theoretical perspective. We take a statistical approach and examine the features of TSP instances that make the problem either hard or easy to solve. As a measure of problem difficulty for 2-opt we use the approximation ratio that it achieves on a given instance. Our investigations point out important features that make TSP instances hard or easy to be approximated by 2-opt.
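A feature-based characterization starts from cheap statistics of the instance itself. The features below are hypothetical illustrations, not the paper's actual feature set: summary statistics of the pairwise distance distribution, of the kind such studies feed into models predicting 2-opt hardness.

```python
import math
import random
import statistics

def distance_features(cities):
    """Summary statistics of the pairwise city-distance distribution
    (illustrative instance features, not the paper's exact set)."""
    d = [math.dist(a, b)
         for i, a in enumerate(cities)
         for b in cities[i + 1:]]
    return {
        "dist_mean": statistics.fmean(d),
        "dist_sd": statistics.pstdev(d),
        "dist_min": min(d),
        "dist_max": max(d),
    }

rng = random.Random(0)
cities = [(rng.random(), rng.random()) for _ in range(20)]
print(distance_features(cities))
```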


Evolutionary Computation | 2015

Analyzing the BBOB results by means of benchmarking concepts

Olaf Mersmann; Mike Preuss; Heike Trautmann; Bernd Bischl; Claus Weihs

We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first one is: which algorithm is the “best” one? and the second one is: which algorithm should I use for my real-world problem? Both are connected and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments. This represents a first step in answering the aforementioned questions. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework and we derive insight regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus, its theoretical background and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these are reflected by already proposed test problem characteristics, finding that this is not always the case.
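One standard way to aggregate per-problem rankings into a consensus is the Borda count; it is a sketch of the aggregation idea only (the paper discusses several consensus methods and their pitfalls), and the algorithm names below are made up:

```python
def borda_consensus(rankings):
    """Aggregate per-problem algorithm rankings into a consensus:
    on each problem an algorithm in position p (0 = best) scores
    n - 1 - p points; algorithms are ordered by total score."""
    algos = rankings[0]
    n = len(algos)
    score = {a: 0 for a in algos}
    for ranking in rankings:
        for pos, a in enumerate(ranking):
            score[a] += n - 1 - pos
    return sorted(algos, key=lambda a: -score[a])

# Rankings of three hypothetical algorithms on four problems (best first).
rankings = [
    ["cma-es", "de", "pso"],
    ["cma-es", "pso", "de"],
    ["de", "cma-es", "pso"],
    ["cma-es", "de", "pso"],
]
print(borda_consensus(rankings))
```

A pitfall the paper highlights is that such aggregations can hide the fact that different problem groups induce very different rankings, which motivates grouping the test problems first.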


Congress on Evolutionary Computation | 2010

Benchmarking evolutionary multiobjective optimization algorithms

Olaf Mersmann; Heike Trautmann; Boris Naujoks; Claus Weihs

Choosing and tuning an optimization procedure for a given class of nonlinear optimization problems is not an easy task. One way to proceed is to consider this as a tournament, where each procedure will compete in different ‘disciplines’. Here, disciplines could either be different functions, which we want to optimize, or specific performance measures of the optimization procedure. We would then be interested in the algorithm that performs best in a majority of cases or whose average performance is maximal. We will focus on evolutionary multiobjective optimization algorithms (EMOA), and will present a novel approach to the design and analysis of evolutionary multiobjective benchmark experiments based on similar work from the context of machine learning. We focus on deriving a consensus among several benchmarks over different test problems and illustrate the methodology by reanalyzing the results of the CEC 2007 EMOA competition.
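The two selection criteria mentioned above, "wins a majority of disciplines" and "best average performance", can disagree, which is part of why the design of such tournaments needs care. A tiny hypothetical example (scores and algorithm names invented for illustration):

```python
from statistics import fmean

def majority_winner(scores):
    """Algorithm winning the most 'disciplines' (higher score wins)."""
    algos = list(scores)
    n_disciplines = len(next(iter(scores.values())))
    wins = {a: 0 for a in algos}
    for d in range(n_disciplines):
        wins[max(algos, key=lambda a: scores[a][d])] += 1
    return max(algos, key=lambda a: wins[a])

def average_winner(scores):
    """Algorithm with the best mean score across disciplines."""
    return max(scores, key=lambda a: fmean(scores[a]))

# A wins two of three disciplines narrowly; B loses those but wins
# the third by a wide margin, so the two criteria pick different winners.
scores = {"A": [1.0, 1.0, 0.0], "B": [0.9, 0.9, 10.0]}
print(majority_winner(scores), average_winner(scores))
```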


learning and intelligent optimization | 2010

On the distribution of EMOA hypervolumes

Olaf Mersmann; Heike Trautmann; Boris Naujoks; Claus Weihs

In recent years, new approaches for multi-modal and multiobjective stochastic optimisation have been developed. It is quite normal for such experimental fields to develop independently from other scientific areas. However, the connection between stochastic optimisation and statistics is obvious and highly appreciated. Recent works, such as sequential parameter optimisation (SPO, cf. Bartz-Beielstein [1]) or online convergence detection (OCD, cf. Trautmann et al. [2]), have combined methods from evolutionary computation and statistics. One important aspect of statistics is the analysis of the stochastic outcomes of experiments and optimisation methods, respectively. To this end, the optimisation runs of different evolutionary multi-objective optimisation algorithms (EMOA, cf. Deb [3] or Coello Coello et al. [4]) are treated as experiments to analyse the stochastic behaviour of the results and to approximate the distribution of the performance of the EMOA. To combine the outcome of an EMOA into a single performance indicator value, the hypervolume (HV) indicator is considered, which is the only known unary quality indicator in this field (cf. Zitzler et al. [5]). The paper at hand investigates and compares the HV indicator outcomes of multiple runs of two EMOA on different mathematical test cases.
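For two objectives the hypervolume indicator has a simple sweep computation; the sketch below (minimisation, illustrative rather than a library implementation) sums the rectangles between consecutive non-dominated points and the reference point:

```python
def hypervolume_2d(points, ref):
    """Dominated hypervolume of a 2-D front (minimisation) w.r.t.
    reference point `ref`: sort by the first objective, then add the
    rectangle each non-dominated point contributes."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(points):
        if f2 < prev_f2:  # dominated points contribute nothing
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Front {(1,3), (2,2), (3,1)} with reference point (4,4).
print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4)))
```

Computing this value for each of many independent EMOA runs yields the empirical HV distribution the paper analyses.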


International Conference on Evolutionary Multi-Criterion Optimization | 2013

Do Hypervolume Regressions Hinder EMOA Performance? Surprise and Relief

Leonard Judt; Olaf Mersmann; Boris Naujoks

Decreases in dominated hypervolume w.r.t. a fixed reference point can occur for the (μ + 1)-SMS-EMOA. We examine the impact of these decreases and of different reference point handling techniques by providing four algorithmic variants for selection. In addition, we show that further decreases can occur due to numerical instabilities that were previously unexpected. Fortunately, our findings indicate that none of the detected decreases has a negative effect on the overall performance.


Archive | 2013

Foundations of Statistical Algorithms: With References to R Packages

Claus Weihs; Olaf Mersmann; Uwe Ligges

Contents:
Introduction
Computation: Motivation and History; Models for Computing: What Can a Computer Compute?; Floating-Point Computations: How Does a Computer Compute?; Precision of Computations: How Exact Does a Computer Compute?; Implementation in R
Verification: Motivation and History; Theory; Practice and Simulation; Implementation in R
Iteration: Motivation; Preliminaries; Univariate Optimization; Multivariate Optimization; Example: Neural Nets; Constrained Optimization; Evolutionary Computing; Implementation in R
Deduction of Theoretical Properties: PLS: From Algorithm to Optimality; EM Algorithm; Implementation in R
Randomization: Motivation and History; Theory: Univariate Randomization; Theory: Multivariate Randomization; Practice and Simulation: Stochastic Modeling; Implementation in R
Repetition: Motivation and Overview; Model Selection; Model Selection in Classification; Model Selection in Continuous Models; Implementation in R
Scalability and Parallelization: Introduction; Motivation and History; Optimization; Parallel Computing; Implementation in R
Bibliography
Index
A conclusion and exercises appear at the end of each chapter.
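The book's theme of "how exactly does a computer compute?" is classically illustrated by compensated summation. The sketch below (in Python rather than R, for illustration) shows Kahan's algorithm recovering precision that a naive running sum loses:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: the rounding error of each
    addition is carried in `c` and fed back into the next addition,
    keeping the running sum close to the exact sum of the inputs."""
    total, c = 0.0, 0.0
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y  # the part of y lost in the addition
        total = t
    return total

# Summing 0.1 a million times: the naive left-to-right sum drifts
# noticeably further from 100000 than the compensated sum does.
values = [0.1] * 1_000_000
naive = sum(values)
compensated = kahan_sum(values)
print(abs(naive - 100000.0), abs(compensated - 100000.0))
```

Even the compensated sum is not exactly 100000, because 0.1 itself is not exactly representable in binary floating point; compensation removes the accumulation error, not the representation error.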

Collaboration


Dive into Olaf Mersmann's collaborations.

Top Co-Authors

Claus Weihs
Technical University of Dortmund

Boris Naujoks
Cologne University of Applied Sciences

Mike Preuss
University of Münster

Leonard Judt
Technical University of Dortmund

Michel Lang
Technical University of Dortmund

Oliver Flasch
Cologne University of Applied Sciences