Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Dirk Thierens is active.

Publication


Featured research published by Dirk Thierens.


IEEE Transactions on Evolutionary Computation | 2003

The balance between proximity and diversity in multiobjective evolutionary algorithms

Peter A. N. Bosman; Dirk Thierens

Over the last decade, a variety of evolutionary algorithms (EAs) have been proposed for solving multiobjective optimization problems. Especially more recent multiobjective evolutionary algorithms (MOEAs) have been shown to be efficient and superior to earlier approaches. An important question, however, is whether we can expect such improvements to converge onto a specific efficient MOEA that behaves best on a large variety of problems. In this paper, we argue that the development of new MOEAs cannot converge onto a single new most efficient MOEA because the performance of MOEAs shows characteristics of multiobjective problems. While we point out the most important aspects for designing competent MOEAs in this paper, we also indicate the inherent multiobjective tradeoff in multiobjective optimization between proximity and diversity preservation. We discuss the impact of this tradeoff on the concepts and design of exploration and exploitation operators. We also present a general framework for competent MOEAs and show how current state-of-the-art MOEAs can be obtained by making choices within this framework. Furthermore, we show an example of how we can separate nondomination selection pressure from diversity preservation selection pressure and discuss the impact of changing the ratio between these components.


Genetic and Evolutionary Computation Conference | 2005

An adaptive pursuit strategy for allocating operator probabilities

Dirk Thierens

Learning the optimal probabilities of applying an exploration operator from a set of alternatives can be done by self-adaptation or by adaptive allocation rules. In this paper we consider the latter option. The allocation strategies discussed in the literature basically belong to the class of probability matching algorithms. These strategies adapt the operator probabilities in such a way that they match the reward distribution. In this paper we introduce an alternative adaptive allocation strategy, called the adaptive pursuit method. We compare this method with the probability matching approach in a non-stationary environment. Calculations and experimental results show the superior performance of the adaptive pursuit algorithm. If the reward distributions stay stationary for some time, the adaptive pursuit method converges rapidly and accurately to an operator probability distribution that results in a much higher probability of selecting the current optimal operator and a much higher average reward than with the probability matching strategy. Yet most importantly, the adaptive pursuit scheme remains sensitive to changes in the reward distributions, and reacts swiftly to non-stationary shifts in the environment.
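The pursuit update described above can be sketched as follows. This is a minimal illustration assuming K operators with stochastic rewards; the parameter values (beta, alpha, p_min) and the optimistic initial reward estimates are illustrative choices, not the paper's exact settings.

```python
import random

def adaptive_pursuit(rewards, n_steps=2000, beta=0.8, alpha=0.8, p_min=0.1, seed=0):
    """Sketch of an adaptive pursuit scheme over K operators.

    rewards: list of K zero-argument functions returning a stochastic reward.
    beta:    learning rate for the running reward estimates.
    alpha:   pursuit rate pushing probabilities toward the current best.
    p_min:   floor probability (must satisfy p_min < 1/K) so that no
             operator is ever starved and the scheme stays adaptive.
    """
    rng = random.Random(seed)
    K = len(rewards)
    p_max = 1.0 - (K - 1) * p_min       # probabilities always sum to 1
    p = [1.0 / K] * K                   # operator selection probabilities
    q = [1.0] * K                       # optimistic initial reward estimates
    for _ in range(n_steps):
        op = rng.choices(range(K), weights=p)[0]
        r = rewards[op]()
        q[op] += beta * (r - q[op])     # exponential recency-weighted average
        best = max(range(K), key=lambda i: q[i])
        for i in range(K):              # pursue the currently best operator
            target = p_max if i == best else p_min
            p[i] += alpha * (target - p[i])
    return p
```

Because the winner is pushed toward p_max while all others decay toward p_min, the scheme reacts quickly when the best operator changes, unlike probability matching, which only tracks the reward proportions.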


Evolutionary Computation | 1999

Scalability problems of simple genetic algorithms

Dirk Thierens

Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight into the scalability problems of simple genetic algorithms. In particular, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm, namely elitism, niching, and restricted mating, do not significantly improve its scalability.
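The "problems of bounded difficulty" discussed above are commonly illustrated with concatenated deceptive trap functions, where each tightly linked block of k bits forms one building block. The sketch below assumes an order-k trap; this is a standard example in the literature, not necessarily the exact test function used in the paper.

```python
def deceptive_trap(block, k=5):
    """Order-k deceptive trap: fitness k for the all-ones block, otherwise
    k - 1 - u where u is the number of ones, so flipping single bits leads
    hill-climbers away from the optimum."""
    u = sum(block)
    return k if u == k else k - 1 - u

def concatenated_traps(x, k=5):
    """A bounded-difficulty test function: the sum of independent traps over
    consecutive (tightly linked) blocks of k bits."""
    return sum(deceptive_trap(x[i:i + k], k) for i in range(0, len(x), k))
```

Because each block must be solved as a whole, a GA only scales on such functions if crossover mixes complete building blocks, which is exactly the mixing boundary the abstract refers to.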


Parallel Problem Solving from Nature | 2000

Expanding from Discrete to Continuous Estimation of Distribution Algorithms: The IDEA

Peter A. N. Bosman; Dirk Thierens

The direct application of statistics to stochastic optimization based on iterated density estimation has become more important and present in evolutionary computation over the last few years. The estimation of densities over selected samples and the sampling from the resulting distributions is a combination of the recombination and mutation steps used in evolutionary algorithms. We introduce the framework named IDEA to formalize this notion. By combining continuous probability theory with techniques from existing algorithms, this framework allows us to define new continuous evolutionary optimization algorithms.
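The density-estimation-and-sampling loop described above can be sketched with its simplest continuous instance, a univariate-normal EDA. The function names, parameter values, and the sphere test function below are illustrative assumptions, not part of the IDEA framework itself.

```python
import math
import random

def univariate_gaussian_eda(f, dim, pop_size=100, sel_frac=0.3,
                            n_gens=60, init=(-5.0, 5.0), seed=0):
    """Minimal univariate-normal EDA in the spirit of iterated density
    estimation: estimate a density over the selected samples, then sample
    new solutions from it, replacing explicit recombination and mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(*init) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(n_gens):
        pop.sort(key=f)                               # truncation selection
        sel = pop[: int(sel_frac * pop_size)]
        # maximum-likelihood normal estimate per dimension
        mu = [sum(x[d] for x in sel) / len(sel) for d in range(dim)]
        sd = [math.sqrt(sum((x[d] - mu[d]) ** 2 for x in sel) / len(sel)) + 1e-12
              for d in range(dim)]
        # keep the elite, resample the rest from the estimated density
        pop = [sel[0]] + [[rng.gauss(mu[d], sd[d]) for d in range(dim)]
                          for _ in range(pop_size - 1)]
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)   # simple unimodal test function
```

On a symmetric unimodal landscape like the sphere this converges quickly; richer density models (the point of the IDEA framework) are needed once variables interact.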


IEEE International Conference on Evolutionary Computation | 1998

Domino convergence, drift, and the temporal-salience structure of problems

Dirk Thierens; David E. Goldberg; A.G. Pereira

The convergence speed of building blocks depends on their marginal fitness contribution or on the salience structure of the problem. We use a sequential parameterization approach to build models of the differential convergence behavior, and derive time complexities for the boundary case which is obtained with an exponentially scaled problem (BinInt). We show that this domino convergence time complexity is linear in the number of building blocks (O(l)) for selection algorithms with constant selection intensity (such as tournament selection and (μ,λ)- or truncation selection), and exponential (O(2^l)) for proportional selection. These complexities should be compared with the convergence speed for uniformly salient problems, which are respectively O(√l) and O(l ln l). In addition, we relate this facetwise model to a genetic drift model, and identify where and when the stochastic fluctuations due to drift overwhelm the domino convergence, resulting in drift stall. The combined models interrelate the strong convergence of salient building blocks and the stochastic drift of less salient ones.


Congress on Evolutionary Computation | 2002

Adaptive mutation rate control schemes in genetic algorithms

Dirk Thierens

The adaptation of mutation rate parameter values is important to allow the search process to optimize its performance during run time. In addition, it frees the user from the need to make non-trivial decisions beforehand. Contrary to real vector coded genotypes, for discrete genotypes most users still prefer to use a fixed mutation rate. Here we propose two simple adaptive mutation rate control schemes, and show their feasibility in comparison with a fixed mutation rate, a self-adaptive mutation rate and a deterministically scheduled dynamic mutation rate.
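One way such an adaptive control scheme can work is sketched below: try a lower, equal, and higher mutation rate on each parent, keep the best offspring, and adopt the rate that produced it. This is a hypothetical illustration in the spirit of the abstract; the parameter values (gamma, the rate bounds, the fixed seed) are not the paper's exact scheme.

```python
import random

def adaptive_pm_step(parent, fitness, pm, gamma=2.0,
                     pm_min=0.001, pm_max=0.5, rng=None):
    """One step of a sketched adaptive mutation-rate control scheme for
    bitstring genotypes: generate one offspring each with rate pm/gamma,
    pm, and pm*gamma, return the best offspring together with the
    (clamped) rate that produced it."""
    rng = rng or random.Random(0)      # fixed seed keeps the sketch reproducible

    def mutate(x, rate):
        return [(1 - b) if rng.random() < rate else b for b in x]

    candidates = []
    for rate in (pm / gamma, pm, pm * gamma):
        rate = min(max(rate, pm_min), pm_max)   # keep the rate in bounds
        child = mutate(parent, rate)
        candidates.append((fitness(child), rate, child))
    best_fit, best_rate, best_child = max(candidates, key=lambda t: t[0])
    return best_child, best_rate
```

Repeating this step lets the rate drift toward whatever value currently yields the best offspring, with no schedule fixed in advance.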


International Journal of Approximate Reasoning | 2002

Multi-objective optimization with diversity preserving mixture-based iterated density estimation evolutionary algorithms

Peter A. N. Bosman; Dirk Thierens

Stochastic optimization by learning and using probabilistic models has received an increasing amount of attention over the last few years. Algorithms within this field estimate the probability distribution of a selection of the available solutions and subsequently draw more samples from the estimated probability distribution. The resulting algorithms have displayed a good performance on a wide variety of single-objective optimization problems, for both binary and real-valued variables. Mixture distributions offer a powerful tool for modeling complicated dependencies between the problem variables. Moreover, they allow for elegant and parallel exploration of a multi-objective front. This parallel exploration aids the important preservation of diversity in multi-objective optimization. In this paper, we propose a new algorithm for evolutionary multi-objective optimization by learning and using probabilistic mixture distributions. We name this algorithm Multi-objective Mixture-based Iterated Density Estimation Evolutionary Algorithm (MIDEA). To further improve and maintain the diversity that is obtained by the mixture distribution, we use a specialized diversity preserving selection operator. We verify the effectiveness of our approach in two different problem domains and compare it with two other well-known efficient multi-objective evolutionary algorithms.
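The mixture-distribution idea can be sketched as follows: cluster the selected solutions, fit a simple normal model per cluster, and sample from the mixture. Plain k-means with deterministic seeding and univariate normals are simplifying assumptions made here for brevity; they are not the exact model class used by MIDEA.

```python
import random

def mixture_sample(selected, n_samples, k=2, iters=10, seed=0):
    """Sketch of the mixture-modelling step: partition the selected
    solutions into k clusters (plain k-means, seeded with points spread
    across the input for determinism), fit a univariate normal per cluster
    and dimension, and sample new solutions from the mixture. Each
    component can track a different part of the multi-objective front."""
    rng = random.Random(seed)
    dim = len(selected[0])
    step = max(1, (len(selected) - 1) // max(1, k - 1))
    centers = [selected[min(i * step, len(selected) - 1)] for i in range(k)]
    groups = [list(selected)]
    for _ in range(iters):                        # k-means refinement
        groups = [[] for _ in range(k)]
        for x in selected:
            j = min(range(k), key=lambda c: sum((x[d] - centers[c][d]) ** 2
                                                for d in range(dim)))
            groups[j].append(x)
        centers = [[sum(x[d] for x in g) / len(g) for d in range(dim)]
                   if g else centers[j] for j, g in enumerate(groups)]
    nonempty = [g for g in groups if g]
    out = []
    for _ in range(n_samples):
        g = nonempty[rng.randrange(len(nonempty))]    # pick a component
        mu = [sum(x[d] for x in g) / len(g) for d in range(dim)]
        sd = [(sum((x[d] - mu[d]) ** 2 for x in g) / len(g)) ** 0.5 + 1e-9
              for d in range(dim)]
        out.append([rng.gauss(mu[d], sd[d]) for d in range(dim)])
    return out
```

Because each component is fitted and sampled independently, separated niches of the selected set keep producing offspring in their own region, which is how the mixture aids diversity preservation.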


Parallel Problem Solving from Nature | 2008

Enhancing the Performance of Maximum-Likelihood Gaussian EDAs Using Anticipated Mean Shift

Peter A. N. Bosman; Jörn Grahl; Dirk Thierens

Many Estimation-of-Distribution Algorithms use maximum-likelihood (ML) estimates. For discrete variables this has met with great success. For continuous variables the use of ML estimates for the normal distribution does not directly lead to successful optimization in most landscapes. It was previously found that an important reason for this is the premature shrinking of the variance at an exponential rate. Remedies were subsequently successfully formulated (i.e. Adaptive Variance Scaling (AVS) and Standard-Deviation Ratio triggering (SDR)). Here we focus on a second source of inefficiency that is not removed by existing remedies. We then provide a simple, but effective technique called Anticipated Mean Shift (AMS) that removes this inefficiency.
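The AMS idea itself is simple enough to sketch: after sampling from the estimated normal, shift a fraction of the new samples further along the direction the mean moved in the previous generation. The fraction and multiplier below are illustrative defaults, not the paper's tuned values.

```python
import random

def sample_with_ams(mu, mu_prev, sd, pop_size, frac_ams=1 / 3,
                    delta=2.0, seed=0):
    """Sketch of Anticipated Mean Shift: sample per dimension from
    N(mu, sd), then shift a fraction of the new samples by
    delta * (mu - mu_prev), anticipating where the mean is heading.
    On slope-like landscapes this keeps the population moving even
    when the ML variance has already shrunk."""
    rng = random.Random(seed)
    dim = len(mu)
    shift = [delta * (mu[d] - mu_prev[d]) for d in range(dim)]
    pop = [[rng.gauss(mu[d], sd[d]) for d in range(dim)]
           for _ in range(pop_size)]
    n_shift = int(frac_ams * pop_size)
    for x in pop[:n_shift]:            # apply the anticipated shift to a subset
        for d in range(dim):
            x[d] += shift[d]
    return pop
```

If the shifted solutions survive selection, the next estimated mean moves further in the same direction, counteracting the stalling effect the abstract identifies.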


Parallel Problem Solving from Nature | 2010

The linkage tree genetic algorithm

Dirk Thierens

We introduce the Linkage Tree Genetic Algorithm (LTGA), a competent genetic algorithm that learns the linkage between the problem variables. Each generation, the LTGA builds a linkage tree using a hierarchical clustering algorithm. To generate new offspring solutions, the LTGA selects two parent solutions and traverses the linkage tree starting from the root. At each branching point, the parent pair is recombined using a crossover mask defined by the clustering at that particular tree node. The parent pair competes with the offspring pair, and the LTGA continues traversing the linkage tree with the pair that has the fittest solution. Once the entire tree is traversed, the best solution of the current pair is copied to the next generation. In this paper we use the normalized variation of information metric as the distance measure for the clustering process. Experimental results for fully deceptive functions and nearest-neighbor NK-landscape problems with tunable overlap show that the LTGA can solve these hard functions efficiently without knowing the actual position of the linked variables in the problem representation.
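The hierarchical clustering step can be sketched as follows, using average linkage over an arbitrary precomputed distance matrix for brevity (the paper uses the normalized variation-of-information metric between problem variables, estimated from the population):

```python
def linkage_tree(dist):
    """Sketch of the LTGA's tree-building step: repeatedly merge the two
    closest clusters (average linkage here for simplicity) until one
    cluster remains. Returns every cluster created along the way; these
    clusters are the crossover masks used when traversing the tree.

    dist: symmetric matrix of pairwise distances between variables."""
    clusters = [frozenset([i]) for i in range(len(dist))]

    def d(a, b):
        # average-linkage distance between two clusters of variables
        return sum(dist[i][j] for i in a for j in b) / (len(a) * len(b))

    masks = list(clusters)
    while len(clusters) > 1:
        a, b = min(((a, b) for i, a in enumerate(clusters)
                    for b in clusters[i + 1:]), key=lambda ab: d(*ab))
        clusters.remove(a)
        clusters.remove(b)
        merged = a | b
        clusters.append(merged)
        masks.append(merged)
    return masks
```

Tightly linked variables end up merged early and therefore appear together in small masks, so crossover tends to exchange them as a unit; the final root mask covers all variables.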


International Conference on Evolutionary Multi-Criterion Optimization | 2005

The naive MIDEA: a baseline multi-objective EA

Peter A. N. Bosman; Dirk Thierens

Estimation of distribution algorithms have been shown to perform well on a wide variety of single-objective optimization problems. Here, we look at a simple, yet effective, extension of this paradigm for multi-objective optimization, called the naive MIDEA.

Collaboration


Dive into Dirk Thierens's collaborations.

Top Co-Authors

Jörn Grahl

University of Mannheim


Mark de Berg

Eindhoven University of Technology
