Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jörn Grahl is active.

Publication


Featured research published by Jörn Grahl.


parallel problem solving from nature | 2008

Enhancing the Performance of Maximum-Likelihood Gaussian EDAs Using Anticipated Mean Shift

Peter A. N. Bosman; Jörn Grahl; Dirk Thierens

Many Estimation-of-Distribution Algorithms use maximum-likelihood (ML) estimates. For discrete variables this has met with great success. For continuous variables the use of ML estimates for the normal distribution does not directly lead to successful optimization in most landscapes. It was previously found that an important reason for this is the premature shrinking of the variance at an exponential rate. Remedies were subsequently successfully formulated (i.e. Adaptive Variance Scaling (AVS) and Standard-Deviation Ratio triggering (SDR)). Here we focus on a second source of inefficiency that is not removed by existing remedies. We then provide a simple, but effective technique called Anticipated Mean Shift (AMS) that removes this inefficiency.
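The AMS idea described in the abstract can be sketched as a toy maximum-likelihood Gaussian EDA in which part of each new population is shifted along the direction the mean moved last generation. This is a minimal illustration under assumed settings (population size, truncation ratio, shift multiplier, and test function are all chosen here for demonstration), not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(X):
    return np.sum(X ** 2, axis=1)

def gaussian_eda_with_ams(f, dim=2, pop=100, tau=0.35, delta=2.0,
                          ams_fraction=0.5, generations=80):
    """Toy ML Gaussian EDA with Anticipated Mean Shift (AMS).

    After sampling, a fraction of the offspring is shifted by
    delta * (current mean - previous mean), so that on slope-like
    regions the selected set keeps the mean moving instead of
    letting the ML variance estimate shrink the search prematurely.
    All constants are illustrative.
    """
    mean = rng.uniform(-3, 3, dim)
    prev_mean = mean.copy()
    cov = np.eye(dim)
    for _ in range(generations):
        X = rng.multivariate_normal(mean, cov, size=pop)
        # AMS: move part of the population further along the
        # anticipated direction of progress.
        k = int(ams_fraction * pop)
        X[:k] += delta * (mean - prev_mean)
        # Truncation selection, then ML re-estimation of the model.
        sel = X[np.argsort(f(X))[: int(tau * pop)]]
        prev_mean = mean
        mean = sel.mean(axis=0)
        cov = np.cov(sel, rowvar=False) + 1e-12 * np.eye(dim)
    return f(mean.reshape(1, -1))[0]

best = gaussian_eda_with_ams(sphere)
```

On the sphere function this sketch converges close to the optimum; removing the AMS shift (setting `ams_fraction=0`) makes the variance collapse faster than the mean can travel, which is exactly the failure mode the paper targets.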


European Journal of Operational Research | 2008

Matching inductive search bias and problem structure in continuous Estimation-of-Distribution Algorithms

Peter A. N. Bosman; Jörn Grahl

Research into the dynamics of Genetic Algorithms (GAs) has led to the field of Estimation-of-Distribution Algorithms (EDAs). For discrete search spaces, EDAs have been developed that have obtained very promising results on a wide variety of problems. In this paper we investigate the conditions under which the adaptation of this technique to continuous search spaces fails to perform optimization efficiently. We show that without careful interpretation and adaptation of lessons learned from discrete EDAs, continuous EDAs will fail to perform efficient optimization on even some of the simplest problems. We reconsider the most important lessons to be learned in the design of EDAs and subsequently show how we can use this knowledge to extend continuous EDAs that were obtained by straightforward adaptation from the discrete domain so as to obtain an improvement in performance. Experimental results are presented to illustrate this improvement and to additionally confirm experimentally that a proper adaptation of discrete EDAs to the continuous case indeed requires careful consideration.


congress on evolutionary computation | 2005

Behaviour of UMDA_c with truncation selection on monotonous functions

Jörn Grahl; Stefan Minner; Franz Rothlauf

Of late, much progress has been made in developing estimation of distribution algorithms (EDA), algorithms that use probabilistic modelling of high quality solutions to guide their search. While experimental results on EDA behaviour are widely available, theoretical results are still rare. This is especially the case for continuous EDA. In this article, we develop theory that predicts the behaviour of the univariate marginal distribution algorithm in the continuous domain (UMDA_c) with truncation selection on monotonous fitness functions. Monotonous functions are commonly used to model the algorithm behaviour far from the optimum. Our result includes formulae to predict population statistics in a specific generation as well as population statistics after convergence. We find that population statistics develop identically for monotonous functions. We show that if assuming monotonous fitness functions, the distance that UMDA_c travels across the search space is bounded and solely relies on the percentage of selected individuals and not on the structure of the fitness landscape. This can be problematic if this distance is too small for the algorithm to find the optimum. Also, by wrongly setting the selection intensity, one might not be able to explore the whole search space.
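The bounded-travel effect the abstract describes is easy to reproduce with a minimal UMDA_c sketch (independent normal per variable, truncation selection; all constants here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def umda_c(f, dim=2, pop=200, tau=0.3, generations=40):
    """Toy UMDA_c: each variable is modelled by an independent normal
    distribution fitted by maximum likelihood to the truncation-selected
    individuals. Constants are illustrative."""
    mu = np.zeros(dim)
    sigma = np.full(dim, 10.0)
    for _ in range(generations):
        X = rng.normal(mu, sigma, size=(pop, dim))
        sel = X[np.argsort(f(X))[: int(tau * pop)]]
        mu, sigma = sel.mean(axis=0), sel.std(axis=0) + 1e-12
    return mu, sigma

# A monotonous (linear) fitness: selection always favours larger x.
mu, sigma = umda_c(lambda X: -X.sum(axis=1))
```

Because the standard deviation shrinks by a roughly constant factor per generation under truncation selection, the mean travels a geometrically bounded total distance that depends only on the selection ratio `tau` and the initial `sigma`, not on the fitness landscape, matching the paper's analysis.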


Evolutionary Computation | 2013

Benchmarking parameter-free AMaLGaM on functions with and without noise

Peter A. N. Bosman; Jörn Grahl; Dirk Thierens

We describe a parameter-free estimation-of-distribution algorithm (EDA) called the adapted maximum-likelihood Gaussian model iterated density-estimation evolutionary algorithm (AMaLGaM-IDEA, or AMaLGaM for short) for numerical optimization. AMaLGaM is benchmarked within the 2009 black box optimization benchmarking (BBOB) framework and compared to a variant with incremental model building (iAMaLGaM). We study the implications of factorizing the covariance matrix in the Gaussian distribution, to use only a few or no covariances. Further, AMaLGaM and iAMaLGaM are also evaluated on the noisy BBOB problems and we assess how well multiple evaluations per solution can average out noise. Experimental evidence suggests that parameter-free AMaLGaM can solve a wide range of problems efficiently with perceived polynomial scalability, including multimodal problems, obtaining the best or near-best results among all algorithms tested in 2009 on functions such as the step-ellipsoid and Katsuuras, but failing to locate the optimum within the time limit on skew Rastrigin-Bueche separable and Lunacek bi-Rastrigin in higher dimensions. AMaLGaM is found to be more robust to noise than iAMaLGaM due to the larger required population size. Using few or no covariances hinders the EDA from dealing with rotations of the search space. Finally, the use of noise averaging is found to be less efficient than the direct application of the EDA unless the noise is uniformly distributed. AMaLGaM was among the best performing algorithms submitted to the BBOB workshop in 2009.


genetic and evolutionary computation conference | 2009

AMaLGaM IDEAs in noiseless black-box optimization benchmarking

Peter A. N. Bosman; Jörn Grahl; Dirk Thierens

This paper describes the application of a Gaussian Estimation-of-Distribution Algorithm (EDA) for real-valued optimization to the noiseless part of a benchmark introduced in 2009 called BBOB (Black-Box Optimization Benchmarking). Specifically, the EDA considered here is the recently introduced parameter-free version of the Adapted Maximum-Likelihood Gaussian Model Iterated Density-Estimation Evolutionary Algorithm (AMaLGaM-IDEA). The version with incremental model building (iAMaLGaM-IDEA) is also considered.


genetic and evolutionary computation conference | 2004

PolyEDA: Combining Estimation of Distribution Algorithms and Linear Inequality Constraints

Jörn Grahl; Franz Rothlauf

Estimation of distribution algorithms (EDAs) are population-based heuristic search methods that use probabilistic models of good solutions to guide their search. When applied to constrained optimization problems, most evolutionary algorithms use special techniques for handling invalid solutions. This paper presents PolyEDA, a new EDA approach that is able to directly consider linear inequality constraints by using Gibbs sampling. Gibbs sampling allows us to sample new individuals inside the boundaries of the polyhedral search space described using a set of linear inequality constraints by iteratively constructing a density approximation that lies entirely inside the polyhedron. Gibbs sampling prevents the creation of infeasible solutions. Thus, no additional techniques for handling infeasible solutions are needed in PolyEDA. Due to its ability to consider linear inequality constraints, PolyEDA can be used for highly constrained optimization problems, where even the generation of valid solutions is a non-trivial task. Results for different variants of a constrained Rosenbrock problem show a higher performance of PolyEDA in comparison to a standard EDA using rejection sampling.
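The coordinate-wise Gibbs step that keeps every sample inside the polyhedron can be sketched as follows. For simplicity this sketch resamples each coordinate uniformly within its feasible interval (PolyEDA uses the EDA's Gaussian conditionals); the constraint matrix and step count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def gibbs_polytope(A, b, x0, steps=500):
    """Gibbs sampler over the polyhedron {x : A @ x <= b}.

    For one coordinate at a time, compute the interval allowed by the
    linear inequalities given the other coordinates, then resample the
    coordinate inside that interval. Every sample stays feasible, so
    no rejection of infeasible individuals is ever needed."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        for i in range(x.size):
            # Slack of each constraint if coordinate i were zero:
            # a_ji * x_i <= b_j - sum_{k != i} a_jk * x_k = r_j
            r = b - A @ x + A[:, i] * x[i]
            lo, hi = -np.inf, np.inf
            for a_ji, r_j in zip(A[:, i], r):
                if a_ji > 0:
                    hi = min(hi, r_j / a_ji)
                elif a_ji < 0:
                    lo = max(lo, r_j / a_ji)
            x[i] = rng.uniform(lo, hi)
    return x

# Unit box 0 <= x <= 1 written as A x <= b.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
x = gibbs_polytope(A, b, x0=[0.5, 0.5])
```

Because each conditional interval is derived from the inequalities themselves, the sampler never generates a point outside the polytope, which is the property that lets PolyEDA dispense with repair or rejection schemes.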


genetic and evolutionary computation conference | 2014

An implicitly parallel EDA based on restricted boltzmann machines

Malte Probst; Franz Rothlauf; Jörn Grahl

We present a parallel version of RBM-EDA. RBM-EDA is an Estimation of Distribution Algorithm (EDA) that models dependencies between decision variables using a Restricted Boltzmann Machine (RBM). In contrast to other EDAs, RBM-EDA mainly uses matrix-matrix multiplications for model estimation and sampling. Hence, for implementation, standard libraries for linear algebra can be used. This allows an easy parallelization and leads to a high utilization of parallel architectures. The probabilistic model of the parallel version and the version on a single core are identical. We explore the speedups gained from running RBM-EDA on a Graphics Processing Unit. For problems of bounded difficulty like deceptive traps, parallel RBM-EDA is faster by several orders of magnitude (up to 750 times) in comparison to a single-threaded implementation on a CPU. As the speedup grows linearly with problem size, parallel RBM-EDA may be particularly useful for large problems.
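The key structural point of the abstract, that RBM sampling over a whole population reduces to matrix-matrix products, can be illustrated with a minimal block-Gibbs pass (sizes, steps, and initialization are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rbm_sample(W, b_vis, b_hid, X, gibbs_steps=1):
    """One block-Gibbs sampling pass of a binary RBM, expressed as
    matrix-matrix products over the whole population at once. Each
    line below is a single large matmul plus an elementwise
    nonlinearity, which is what maps well onto BLAS or GPU kernels."""
    for _ in range(gibbs_steps):
        # Sample all hidden units for all individuals in one matmul.
        H = (sigmoid(X @ W + b_hid) > rng.random((X.shape[0], W.shape[1]))).astype(float)
        # Sample all visible units back, again as one matmul.
        X = (sigmoid(H @ W.T + b_vis) > rng.random((X.shape[0], W.shape[0]))).astype(float)
    return X

n_vis, n_hid, pop = 20, 8, 64
W = rng.normal(0.0, 0.1, (n_vis, n_hid))
X0 = (rng.random((pop, n_vis)) > 0.5).astype(float)
X1 = rbm_sample(W, np.zeros(n_vis), np.zeros(n_hid), X0)
```

Since the entire population is processed in two matrix products per Gibbs step, the same code runs unchanged on any linear-algebra backend, which is the source of the reported GPU speedups.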


Fundamenta Informaticae | 2007

Learning Structure Illuminates Black Boxes – An Introduction to Estimation of Distribution Algorithms

Jörn Grahl; Stefan Minner; Peter A. N. Bosman

This chapter serves as an introduction to estimation of distribution algorithms (EDAs). Estimation of distribution algorithms are a new paradigm in evolutionary computation. They combine statistical learning with population-based search in order to automatically identify and exploit certain structural properties of optimization problems. State-of-the-art EDAs consistently outperform classical genetic algorithms on a broad range of hard optimization problems. We review fundamental terms, concepts, and algorithms which facilitate the understanding of EDA research. The focus is on EDAs for combinatorial and continuous non-linear optimization and the major differences between the two fields are discussed.
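The basic EDA loop the chapter introduces, estimate a distribution from selected solutions, then sample the next population from it, can be sketched with a univariate marginal model on the OneMax problem (a toy illustration with assumed parameters, not any specific published algorithm):

```python
import numpy as np

rng = np.random.default_rng(4)

def umda_onemax(n=30, pop=100, tau=0.5, generations=50):
    """Minimal discrete EDA loop: fit per-bit probabilities to the
    truncation-selected individuals, then sample the next population
    from them. OneMax fitness is simply the number of ones."""
    p = np.full(n, 0.5)
    for _ in range(generations):
        X = (rng.random((pop, n)) < p).astype(int)   # sample population
        fitness = X.sum(axis=1)                      # OneMax
        sel = X[np.argsort(-fitness)[: int(tau * pop)]]
        p = np.clip(sel.mean(axis=0), 0.05, 0.95)    # margins avoid fixation
    return p

p = umda_onemax()
```

After a few dozen generations the per-bit probabilities concentrate near their upper clip, i.e. the model has learned that every bit should be one; replacing the independent per-bit model with a multivariate one is what distinguishes the more powerful EDAs the chapter surveys.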


Archive | 2007

Fitness Landscape Analysis of Dynamic Multi-Product Lot-Sizing Problems with Limited Storage

Jörn Grahl; Alexander Radtke; Stefan Minner

Multi-item lot-sizing with dynamic demand and shared warehouse capacity is a common problem in practice. Only small instances of this problem can be solved with exact algorithms in reasonable time. Although metaheuristic search is widely accepted to tackle large instances of combinatorial optimization problems, we are not aware of metaheuristics for multi-item single level lot-sizing with shared warehouse capacity. The main contribution of this paper is that it evaluates the benefits of several mutation operators and that of recombination by means of a fitness landscape analysis. The obtained results are useful for optimization practitioners who design a metaheuristic for the problem, e.g., for use in an advanced planning system.


European Journal of Operational Research | 2017

Scalability of using Restricted Boltzmann Machines for combinatorial optimization

Malte Probst; Franz Rothlauf; Jörn Grahl

Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Restricted Boltzmann Machines (RBMs) are generative neural networks with these desired properties. We integrate an RBM into an EDA and evaluate the performance of this system in solving combinatorial optimization problems with a single objective. We assess how the number of fitness evaluations and the CPU time scale with problem size and complexity. The results are compared to the Bayesian Optimization Algorithm (BOA), a state-of-the-art multivariate EDA, and the Dependency Tree Algorithm (DTA), which uses a simpler probability model requiring less computational effort for training the model. Although RBM–EDA requires larger population sizes and a larger number of fitness evaluations than BOA, it outperforms BOA in terms of CPU times, in particular if the problem is large or complex. This is because RBM–EDA requires less time for model building than BOA. DTA with its restricted model is a good choice for small problems but fails for larger and more difficult problems. These results highlight the potential of using generative neural networks for combinatorial optimization.

Collaboration


Dive into Jörn Grahl's collaborations.

Top Co-Authors

Jens Arndt

University of Mannheim
