Jörg Stork
Cologne University of Applied Sciences
Publications
Featured research published by Jörg Stork.
Genetic and Evolutionary Computation Conference | 2014
Martin Zaefferer; Jörg Stork; Martina Friese; Andreas Fischbach; Boris Naujoks; Thomas Bartz-Beielstein
Real-world optimization problems may require time-consuming and expensive measurements or simulations. Recently, the application of surrogate model-based approaches was extended from continuous to combinatorial spaces. This extension is based on the utilization of suitable distance measures such as Hamming or Swap Distance. In this work, such an extension is implemented for Kriging (Gaussian Process) models. Kriging provides a measure of uncertainty when determining predictions. This can be harnessed to calculate the Expected Improvement (EI) of a candidate solution. In continuous optimization, EI is used in the Efficient Global Optimization (EGO) approach to balance exploitation and exploration for expensive optimization problems. Employing the extended Kriging model, we show for the first time that EGO can successfully be applied to combinatorial optimization problems. We describe necessary adaptations and arising issues as well as experimental results on several test problems. All surrogate models are optimized with a Genetic Algorithm (GA). To yield a comprehensive comparison, EGO and Kriging are compared to an earlier suggested Radial Basis Function Network, a linear modeling approach, as well as model-free optimization with random search and GA. EGO clearly outperforms the competing approaches on most of the tested problem instances.
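The two ingredients named in the abstract can be sketched in a few lines: a discrete distance (here Hamming) and the closed-form EI criterion for minimization. This is a minimal illustration under the assumption of a Gaussian predictive distribution, not the authors' implementation.

```python
import math

def hamming_distance(a, b):
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def expected_improvement(mu, sd, best):
    """Closed-form EI for minimization: mu, sd are the surrogate's
    predicted mean and uncertainty; best is the best observed value."""
    if sd <= 0.0:
        return 0.0
    z = (best - mu) / sd
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (best - mu) * cdf + sd * pdf
```

In an EGO loop, the GA mentioned in the abstract would search for the candidate maximizing `expected_improvement`, which is then evaluated on the expensive objective.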
Parallel Problem Solving from Nature | 2014
Martin Zaefferer; Jörg Stork; Thomas Bartz-Beielstein
For expensive black-box optimization problems, surrogate model-based approaches like Efficient Global Optimization are frequently used in continuous optimization. Their main advantage is the reduction of function evaluations by exploiting cheaper, data-driven models of the actual target function. The utilization of such methods in combinatorial or mixed search spaces is less common. Efficient Global Optimization and related methods were recently extended to such spaces by replacing continuous distance (or similarity) measures with measures suited for the respective problem representations.
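The replacement of continuous measures described here boils down to building a kernel matrix from an arbitrary distance function. A minimal sketch, assuming an exponential kernel k(a, b) = exp(-theta * d(a, b)); the Hamming example below is illustrative, not taken from the paper:

```python
import numpy as np

def distance_kernel(X, dist, theta=1.0):
    """Gram matrix over samples X from an arbitrary distance measure."""
    n = len(X)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = np.exp(-theta * dist(X[i], X[j]))
    return K

# Hamming distance on fixed-length strings as one possible discrete measure
hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
K = distance_kernel(["abc", "abd", "xyz"], hamming)
```

Any representation-specific measure (e.g., a swap distance on permutations) can be plugged in for `dist` without changing the surrounding model code.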
ECDA | 2015
Jörg Stork; Ricardo Ramos; Patrick Koch; Wolfgang Konen
Support vector machines (SVM) are strong classifiers, but large datasets might lead to prohibitively long computation times and high memory requirements. SVM ensembles, where each single SVM sees only a fraction of the data, can be an approach to overcome this barrier. In continuation of related work in this field, we construct SVM ensembles with Bagging and Boosting. As a new idea, we analyze SVM ensembles with different kernel types (linear, polynomial, RBF) involved inside the ensemble. The goal is to train one strong SVM ensemble classifier for large datasets with less time and memory requirements than a single SVM on all data. From our experiments we find evidence for the following facts: Combining different kernel types can lead to an ensemble classifier stronger than each individual SVM on all training data and stronger than ensembles from a single kernel type alone. Boosting is only productive if we make each single SVM sufficiently weak, otherwise we observe overfitting. Even for very small training sample sizes, and thus greatly reduced time and memory requirements, the ensemble approach often delivers accuracies similar or close to a single SVM trained on all data.
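The mixed-kernel bagging idea can be sketched with scikit-learn (assumed available; the dataset, subsample fraction, and majority vote below are illustrative choices, not the paper's exact setup):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, random_state=0)

# One SVM per kernel type, each trained on a bagged third of the data
models = []
for kernel in ["linear", "poly", "rbf"]:
    idx = rng.choice(len(X), size=len(X) // 3, replace=True)
    models.append(SVC(kernel=kernel).fit(X[idx], y[idx]))

# Majority vote over the three binary predictions
votes = np.stack([m.predict(X) for m in models])
pred = (votes.mean(axis=0) > 0.5).astype(int)
```

Each member sees only a fraction of the data, which is what keeps the per-SVM training time and memory footprint low.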
Genetic and Evolutionary Computation Conference | 2014
Martin Zaefferer; Beate Breiderhoff; Boris Naujoks; Martina Friese; Jörg Stork; Andreas Fischbach; Oliver Flasch; Thomas Bartz-Beielstein
Cyclone separators are filtration devices frequently used in industry, e.g., to filter particles from flue gas. Optimizing the cyclone geometry is a demanding task. Accurate simulations of cyclone separators are based on time-consuming computational fluid dynamics simulations. Thus, the need for exploiting cheap information from analytical, approximative models is evident. Here, we employ two multi-objective optimization algorithms on such cheap, approximative models to analyze their optimization performance on this problem. Under various limitations, we tune both algorithms with Sequential Parameter Optimization (SPO) to achieve the best possible results in the shortest time. The resulting optimal settings are validated with different seeds, as well as with a different approximative model for collection efficiency. Their optimal performance is compared against a model-based approach, where multi-objective SPO is directly employed to optimize the cyclone model, rather than tuning the optimization algorithms. It is shown that SPO finds improved parameter settings of the concerned algorithms and performs excellently when directly used as an optimizer.
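The multi-objective comparison underlying the cyclone problem (e.g., minimizing pressure drop while maximizing collection efficiency, the latter negated so both are minimized) rests on Pareto dominance. A minimal generic sketch; the objective tuples are placeholders, not values from the cited models:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b: no worse in every
    objective and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

Multi-objective algorithms like the two tuned in the paper return such a front of trade-off geometries rather than a single best design.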
Parallel Problem Solving from Nature | 2018
Martin Zaefferer; Jörg Stork; Oliver Flasch; Thomas Bartz-Beielstein
Surrogate models are a well established approach to reduce the number of expensive function evaluations in continuous optimization. In the context of genetic programming, surrogate modeling still poses a challenge, due to the complex genotype-phenotype relationships. We investigate how different genotypic and phenotypic distance measures can be used to learn Kriging models as surrogates. We compare the measures and suggest to use their linear combination in a kernel.
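The suggested linear combination of distance measures can be sketched as a single kernel; the weight `w` is a hypothetical hyperparameter standing in for whatever is fitted (e.g., by maximum likelihood) in the actual Kriging model, and the distance functions are left abstract:

```python
import math

def combined_kernel(a, b, d_geno, d_pheno, w=0.5):
    """Exponential kernel over a weighted sum of a genotypic and a
    phenotypic distance between two genetic-programming individuals."""
    d = w * d_geno(a, b) + (1.0 - w) * d_pheno(a, b)
    return math.exp(-d)
```

With `w = 1` the surrogate relies purely on genotypic structure, with `w = 0` purely on phenotypic behavior; intermediate weights let the model exploit both views of the complex genotype-phenotype relationship.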
Genetic and Evolutionary Computation Conference | 2018
Frederik Rehbach; Martin Zaefferer; Jörg Stork; Thomas Bartz-Beielstein
The availability of several CPU cores on current computers enables parallelization and increases the computational power significantly. Optimization algorithms have to be adapted to exploit these highly parallelized systems and evaluate multiple candidate solutions in each iteration. This issue is especially challenging for expensive optimization problems, where surrogate models are employed to reduce the load of objective function evaluations. This paper compares different approaches for surrogate model-based optimization in parallel environments. Additionally, an easy-to-use method, which was developed for an industrial project, is proposed. All described algorithms are tested with a variety of standard benchmark functions. Furthermore, they are applied to a real-world engineering problem, the electrostatic precipitator problem. Expensive computational fluid dynamics simulations are required to estimate the performance of the precipitator. The task is to optimize a gas-distribution system so that a desired velocity distribution is achieved for the gas flow throughout the precipitator. The vast amount of possible configurations leads to a complex discrete-valued optimization problem. The experiments indicate that a hybrid approach works best, which proposes candidate solutions based on different surrogate model-based infill criteria and evolutionary operators.
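The hybrid batch idea can be sketched as follows: with several parallel workers, fill the batch with candidates chosen by different criteria instead of one. The surrogate here is a stand-in dictionary of (mean, sd) predictions, and the three criteria (predicted best, lower confidence bound, random pick as a proxy for an evolutionary operator) are illustrative assumptions, not the paper's exact method:

```python
import random

def propose_batch(candidates, surrogate, q=3, seed=0):
    """Propose q candidates for parallel evaluation, one per infill criterion."""
    rng = random.Random(seed)
    mean = lambda c: surrogate[c][0]
    lcb = lambda c: surrogate[c][0] - surrogate[c][1]  # uncertainty-seeking
    batch = [min(candidates, key=mean),  # exploit the predicted best
             min(candidates, key=lcb),   # lower confidence bound
             rng.choice(candidates)]     # exploratory / evolutionary slot
    return batch[:q]
```

Mixing criteria this way hedges against any single infill criterion stalling: one slot exploits, one chases uncertainty, one keeps diversity in the batch.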
Genetic and Evolutionary Computation Conference | 2017
Jacqueline Heinerman; Jörg Stork; Margarita Alejandra Rebolledo Coy; Julien Hubert; A. E. Eiben; Thomas Bartz-Beielstein; Evert Haasdijk
Social learning enables multiple robots to share learned experiences while completing a task. The literature offers examples where robots trained with social learning reach a higher performance compared to their individual learning counterparts [e.g., 2, 4]. No explanation has been advanced for that observation. In this research, we present experimental results suggesting that a lack of tuning of the parameters in social learning experiments could be the cause. In other words: the better the parameter settings are tuned, the less social learning can improve the system performance.
European Conference on Artificial Life | 2017
Jacqueline Heinerman; Jörg Stork; Margarita Alejandra Rebolledo Coy; Julien Hubert; Thomas Bartz-Beielstein; A. E. Eiben; Evert Haasdijk
Social learning enables multiple robots to share learned experiences while completing a task. The literature offers contradictory examples of its benefits; robots trained with social learning reach a higher performance, an increased learning speed, or both, compared to their individual learning counterparts. No general explanation has been advanced for the difference in observations, which makes the results highly dependent on the particular system and parameter settings. In this research, we show that even within one system, the observed advantages of social learning can vary between parameter settings. Using Evolutionary Robotics, we train robots individually in a foraging task. We compare the performance of 50 parameter instances of the evolutionary algorithm obtained by a definitive screening design. We apply social learning in groups of two and four robots to the parameter settings that lead to the best and median performance. Our results show that the observed advantages of social learning differ highly between parameter settings, but in general, median-quality parameter settings experience more benefit from social learning. These results serve as a reminder that tuning of the parameters should not be left as an afterthought, because they can drastically impact the conclusions on the advantages of social learning. Additionally, these results suggest that social learning reduces the sensitivity of the learning process to the choice of parameters.
Parallel Problem Solving from Nature | 2016
Carola Doerr; Nicolas Bredeche; Enrique Alba; Thomas Bartz-Beielstein; Dimo Brockhoff; Benjamin Doerr; Gusz Eiben; Michael G. Epitropakis; Carlos M. Fonseca; Andreia P. Guerreiro; Evert Haasdijk; Jacqueline Heinerman; Julien Hubert; Per Kristian Lehre; Luigi Malagò; Juan J. Merelo; Julian F. Miller; Boris Naujoks; Pietro Simone Oliveto; Stjepan Picek; Nelishia Pillay; Mike Preuss; Patricia Ryser-Welch; Giovanni Squillero; Jörg Stork; Dirk Sudholt; Alberto Paolo Tonda; Darrell Whitley; Martin Zaefferer
PPSN 2016 hosts a total of 16 tutorials covering a broad range of current research in evolutionary computation. The tutorials range from introductory to advanced and specialized, but can all be attended without prior requirements. All PPSN attendees are cordially invited to take this opportunity to learn about ongoing research activities in our field!
European Conference on Applications of Evolutionary Computation | 2013
Oliver Flasch; Martina Friese; Katya Vladislavleva; Thomas Bartz-Beielstein; Olaf Mersmann; Boris Naujoks; Jörg Stork; Martin Zaefferer
This work provides a preliminary study on applying state-of-the-art time-series forecasting methods to electrical energy consumption data recorded by smart metering equipment. We compare a custom-built commercial baseline method to modern ensemble-based methods from statistical time-series analysis and to a modern commercial GP system. Our preliminary results indicate that modern ensemble-based methods, as well as GP, are an attractive alternative to custom-built approaches for electrical energy consumption forecasting.
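The ensemble-based forecasting idea can be illustrated with a toy sketch that averages three simple baselines (naive, seasonal-naive, drift); the component models, the 24-step season, and the equal weights are illustrative assumptions, not the methods compared in the study:

```python
import numpy as np

def ensemble_forecast(series, horizon, season=24):
    """Average the forecasts of three simple baselines over a horizon."""
    series = np.asarray(series, dtype=float)
    naive = np.repeat(series[-1], horizon)                      # last value
    seasonal = np.array([series[-season + (h % season)]         # last season
                         for h in range(horizon)])
    slope = (series[-1] - series[0]) / (len(series) - 1)
    drift = series[-1] + slope * np.arange(1, horizon + 1)      # linear trend
    return (naive + seasonal + drift) / 3.0
```

Averaging several weak forecasters tends to smooth out the individual models' failure modes, which is the intuition behind the ensemble methods evaluated in the study.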