
Publication


Featured research published by Peter A. N. Bosman.


IEEE Transactions on Evolutionary Computation | 2003

The balance between proximity and diversity in multiobjective evolutionary algorithms

Peter A. N. Bosman; Dirk Thierens

Over the last decade, a variety of evolutionary algorithms (EAs) have been proposed for solving multiobjective optimization problems. Especially the more recent multiobjective evolutionary algorithms (MOEAs) have been shown to be efficient and superior to earlier approaches. An important question, however, is whether we can expect such improvements to converge onto a single efficient MOEA that behaves best on a large variety of problems. In this paper, we argue that the development of new MOEAs cannot converge onto a single most efficient MOEA because the performance of MOEAs itself shows the characteristics of a multiobjective problem. While we point out the most important aspects for designing competent MOEAs in this paper, we also indicate the inherent tradeoff in multiobjective optimization between proximity and diversity preservation. We discuss the impact of this tradeoff on the concepts and design of exploration and exploitation operators. We also present a general framework for competent MOEAs and show how current state-of-the-art MOEAs can be obtained by making choices within this framework. Furthermore, we show an example of how we can separate nondomination selection pressure from diversity preservation selection pressure and discuss the impact of changing the ratio between these components.
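
To make the separation of the two selection pressures concrete, here is a minimal Python sketch (an illustration, not the paper's exact operator): survivors are chosen partly by nondomination rank (proximity pressure) and partly by a distance-based diversity score (diversity pressure), with `ratio` controlling the balance between the two components.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondomination_ranks(objs):
    """Rank 0 = nondominated; higher ranks found by peeling off fronts."""
    remaining, ranks, r = set(range(len(objs))), [0] * len(objs), 0
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)}
        for i in front:
            ranks[i] = r
        remaining -= front
        r += 1
    return ranks

def diversity(objs, i):
    """Crude diversity proxy: squared distance to the nearest other solution."""
    return min(sum((a - b) ** 2 for a, b in zip(objs[i], objs[j]))
               for j in range(len(objs)) if j != i)

def select(objs, n_survivors, ratio=0.5):
    """Fill a `ratio` share of the survivor slots by nondomination rank
    (proximity pressure) and the remainder by diversity (spread pressure)."""
    ranks = nondomination_ranks(objs)
    by_rank = sorted(range(len(objs)), key=lambda i: ranks[i])
    n_prox = int(ratio * n_survivors)
    chosen = by_rank[:n_prox]
    rest = sorted((i for i in range(len(objs)) if i not in chosen),
                  key=lambda i: -diversity(objs, i))
    return chosen + rest[:n_survivors - n_prox]
```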


Parallel Problem Solving from Nature | 2000

Expanding from Discrete to Continuous Estimation of Distribution Algorithms: The IDEA

Peter A. N. Bosman; Dirk Thierens

The direct application of statistics to stochastic optimization based on iterated density estimation has become increasingly prominent in evolutionary computation over the last few years. Estimating densities over selected samples and then sampling from the resulting distributions combines the roles of the recombination and mutation steps used in evolutionary algorithms. We introduce the IDEA framework to formalize this notion. By combining continuous probability theory with techniques from existing algorithms, this framework allows us to define new continuous evolutionary optimization algorithms.
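
As a minimal illustration of the estimate-and-sample loop that the IDEA framework formalizes (a 1-D toy sketch, not the framework itself; all parameter values are illustrative):

```python
import random
import statistics

def eda_step(population, f, tau=0.3):
    """One iterated density estimation step: select the best fraction tau,
    fit a normal distribution by maximum likelihood, and sample a fresh
    population from it (replacing recombination and mutation)."""
    selected = sorted(population, key=f)[:max(2, int(tau * len(population)))]
    mu = statistics.fmean(selected)
    sigma = statistics.pstdev(selected) or 1e-12
    return [random.gauss(mu, sigma) for _ in population]

# usage: minimize f(x) = (x - 3)^2
pop = [random.uniform(-10.0, 10.0) for _ in range(100)]
for _ in range(50):
    pop = eda_step(pop, lambda x: (x - 3) ** 2)
print(min(pop, key=lambda x: (x - 3) ** 2))  # close to 3
```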


International Journal of Approximate Reasoning | 2002

Multi-objective optimization with diversity preserving mixture-based iterated density estimation evolutionary algorithms

Peter A. N. Bosman; Dirk Thierens

Stochastic optimization by learning and using probabilistic models has received an increasing amount of attention over the last few years. Algorithms within this field estimate the probability distribution of a selection of the available solutions and subsequently draw more samples from the estimated distribution. The resulting algorithms have displayed good performance on a wide variety of single-objective optimization problems, for both binary and real-valued variables. Mixture distributions offer a powerful tool for modeling complicated dependencies between the problem variables. Moreover, they allow for an elegant and parallel exploration of a multi-objective front. This parallel exploration aids the preservation of diversity, which is important in multi-objective optimization. In this paper, we propose a new algorithm for evolutionary multi-objective optimization by learning and using probabilistic mixture distributions. We name this algorithm the Multi-objective Mixture-based Iterated Density Estimation Evolutionary Algorithm (MIDEA). To further improve and maintain the diversity obtained through the mixture distribution, we use a specialized diversity-preserving selection operator. We verify the effectiveness of our approach in two different problem domains and compare it with two other well-known efficient multi-objective evolutionary algorithms.
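
A heavily simplified sketch of the mixture idea (the partitioning below is a stand-in for the paper's actual clustering, and the per-cluster model is a 1-D normal):

```python
import random
import statistics

def mixture_sample(xs, objs, n, k=3):
    """Partition the selected solutions into k clusters along the front
    (here simply by sorting on the first objective), fit a normal
    distribution per cluster, and sample from the resulting mixture so
    different regions of the front are explored in parallel."""
    order = sorted(range(len(xs)), key=lambda i: objs[i][0])
    size = max(2, len(order) // k)
    clusters = [order[j:j + size] for j in range(0, len(order), size)][:k]
    models = []
    for cl in clusters:
        vals = [xs[i] for i in cl]
        models.append((statistics.fmean(vals), statistics.pstdev(vals) or 1e-12))
    return [random.gauss(*random.choice(models)) for _ in range(n)]
```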


Genetic and Evolutionary Computation Conference | 2005

Exploiting gradient information in numerical multi-objective evolutionary optimization

Peter A. N. Bosman; Edwin D. de Jong

Various multi-objective evolutionary algorithms (MOEAs) have obtained promising results on numerical multi-objective optimization problems. The combination with gradient-based local search operators has, however, been limited to only a few studies. In the single-objective case it is known that the additional use of gradient information can be beneficial. In this paper we provide an analytical parametric description of the set of all non-dominated (i.e. most promising) directions in which a solution can be moved such that its objectives either improve or remain the same. Moreover, the parameters describing this set can be computed efficiently using only the gradients of the individual objectives. We use this result to hybridize an existing MOEA with a local search operator that moves a solution in a randomly chosen non-dominated improving direction. We test the resulting algorithm on a few well-known benchmark problems and compare the results with the same MOEA without local search and the same MOEA with gradient-based techniques that use only one objective at a time. The results indicate that exploiting gradient information based on the non-dominated improving directions is superior to using the gradients of the objectives separately, and that, given enough evaluations, it can furthermore improve the results of MOEAs in which no local search is used.
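
The exact parametric description is derived in the paper; the sketch below only conveys the flavor (all names and the acceptance rule are illustrative assumptions): pick random nonnegative weights over the negated objective gradients, move along the combined direction, and accept the move only if no objective worsens.

```python
import random

def combined_direction(grads):
    """Random convex combination of the negated objective gradients; the
    weights play the role of the parameters that (in the paper) describe
    the set of non-dominated improving directions."""
    w = [random.random() for _ in grads]
    s = sum(w)
    return [-sum(wi * g[d] for wi, g in zip(w, grads)) / s
            for d in range(len(grads[0]))]

def local_search(x, objectives, gradients, step=0.1, shrink=0.5, tries=10):
    """Move x along a combined-gradient direction with backtracking; accept
    only if every objective is no worse (a crude stand-in for accepting
    non-dominated improving moves)."""
    d = combined_direction([g(x) for g in gradients])
    f0 = [f(x) for f in objectives]
    for _ in range(tries):
        cand = [xi + step * di for xi, di in zip(x, d)]
        if all(f(cand) <= fi for f, fi in zip(objectives, f0)):
            return cand
        step *= shrink
    return x
```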


IEEE Transactions on Evolutionary Computation | 2012

On Gradients and Hybrid Evolutionary Algorithms for Real-Valued Multiobjective Optimization

Peter A. N. Bosman

Algorithms that make use of the gradient, i.e., the direction of maximum improvement, to search for the optimum of a single-objective function have been around for decades. They are commonly accepted to be important and have been applied to tackle single-objective optimization problems in many fields. For multiobjective optimization problems, much less is known about the gradient and its algorithmic use. In this paper, we aim to contribute to the understanding of gradients for numerical, i.e., real-valued, multiobjective optimization. Specifically, we provide an analytical parametric description of the set of all nondominated, i.e., most promising, directions in which a solution can be moved such that the objective values either improve or remain the same. This result completes previous work where this set is described only for one particular case, namely when some of the nondominated directions have positive, i.e., nonimproving, components and the final set can be computed by taking the subset of directions that are all nonpositive. In addition we use our result to assess the utility of using gradient information for multiobjective optimization where the goal is to obtain a Pareto set of solutions that approximates the optimal Pareto set. To this end, we design and consider various multiobjective gradient-based optimization algorithms. One of these algorithms uses the description of the multiobjective gradient provided here. Also, we hybridize an existing multiobjective evolutionary algorithm (MOEA) with the various multiobjective gradient-based optimization algorithms. During optimization, the performance of the gradient-based optimization algorithms is monitored and the available computational resources are redistributed to allow the (currently) most effective algorithm to spend the most resources. We perform an experimental analysis using a few well-known benchmark problems to compare the performance of different optimization methods. The results underline that the use of a population of solutions that is characteristic of MOEAs is a powerful concept if the goal is to obtain a good Pareto set, i.e., instead of only a single solution. This makes it hard to increase convergence speed in the initial phase using gradient information to improve any single solution. However, in the longer run, the use of gradient information does ultimately allow for better fine-tuning of the results and thus better overall convergence.
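
The abstract does not specify the redistribution scheme; a minimal proportional-allocation sketch under that caveat (the function name and the floor parameter are assumptions, not the paper's method):

```python
def redistribute(budget, improvements, floor_share=0.1):
    """Split an evaluation budget over local-search operators in proportion
    to their recently observed improvement, with a small guaranteed floor so
    a currently weak operator can still prove itself later."""
    n = len(improvements)
    floor = floor_share * budget / n
    total = sum(improvements) or 1.0
    return [floor + (budget - floor * n) * imp / total for imp in improvements]

# e.g. three gradient-based operators, the second currently most effective
print(redistribute(1000, [0.2, 1.5, 0.3]))
```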


Genetic and Evolutionary Computation Conference | 2005

Learning, anticipation and time-deception in evolutionary online dynamic optimization

Peter A. N. Bosman

In this paper, we focus on an important source of problem difficulty in (online) dynamic optimization problems that has so far received significantly less attention than the traditional shifting of optima. Intuitively put, decisions taken now (i.e. setting the problem variables to certain values) may influence the score that can be obtained in the future. We indicate how such time-linkage can deceive an optimizer and cause it to find a suboptimal solution trajectory. We then propose a means of addressing time-linkage: predict the future by learning from the past. We formalize this approach in an algorithmic framework and indicate why evolutionary algorithms are of specific interest within it. We have performed experiments with two new benchmark problems that contain time-linkage. The results show, as a proof of principle, that in the presence of time-linkage, EAs based upon this framework can obtain better results than classic EAs that do not predict the future.
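
To see what time-linkage means, consider this toy (entirely hypothetical, not one of the paper's benchmarks): the reward of a decision depends on the previous decision, so greedy per-step optimization is deceived, while scoring a decision together with its best predicted follow-up is not.

```python
def reward(x, prev):
    """Toy time-linked reward: jumps away from the previous decision are
    penalized, so the best move now depends on what comes later."""
    return x - (x - prev) ** 2

grid = [i / 10 for i in range(-20, 21)]  # candidate decisions in [-2, 2]

# greedy (time-deceived): maximize each step's immediate reward in isolation
def greedy_step(prev):
    return max(grid, key=lambda x: reward(x, prev))

# anticipating: add the best predicted follow-up reward (here the predictor
# is simply the true reward function; in the paper it would be learned)
def anticipating_step(prev):
    return max(grid, key=lambda x: reward(x, prev)
               + max(reward(y, x) for y in grid))

prev_g = prev_a = 0.0
total_g = total_a = 0.0
for _ in range(10):
    xg, xa = greedy_step(prev_g), anticipating_step(prev_a)
    total_g += reward(xg, prev_g)
    total_a += reward(xa, prev_a)
    prev_g, prev_a = xg, xa
print(total_g, total_a)  # anticipation accumulates more total reward
```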


Parallel Problem Solving from Nature | 2008

Enhancing the Performance of Maximum-Likelihood Gaussian EDAs Using Anticipated Mean Shift

Peter A. N. Bosman; Jörn Grahl; Dirk Thierens

Many Estimation-of-Distribution Algorithms use maximum-likelihood (ML) estimates. For discrete variables this has met with great success. For continuous variables the use of ML estimates for the normal distribution does not directly lead to successful optimization in most landscapes. It was previously found that an important reason for this is the premature shrinking of the variance at an exponential rate. Remedies were subsequently successfully formulated (i.e. Adaptive Variance Scaling (AVS) and Standard-Deviation Ratio triggering (SDR)). Here we focus on a second source of inefficiency that is not removed by existing remedies. We then provide a simple but effective technique called Anticipated Mean Shift (AMS) that removes this inefficiency.
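
A 1-D sketch of the AMS idea (parameter values are illustrative): part of each newly sampled population is moved further along the direction in which the distribution mean just moved.

```python
import random
import statistics

def ams_generation(population, f, prev_mean, tau=0.35, frac=0.5, delta=2.0):
    """Fit a maximum-likelihood normal to the selected solutions, sample a
    new population, then shift a fraction of the samples by delta times the
    anticipated mean shift, so the search keeps traveling along a slope
    instead of stalling as its variance shrinks."""
    selected = sorted(population, key=f)[:max(2, int(tau * len(population)))]
    mu = statistics.fmean(selected)
    sigma = statistics.pstdev(selected) or 1e-12
    shift = delta * (mu - prev_mean)          # anticipated mean shift
    new = [random.gauss(mu, sigma) for _ in population]
    k = int(frac * len(new))
    return [x + shift for x in new[:k]] + new[k:], mu
```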


European Journal of Operational Research | 2008

Matching inductive search bias and problem structure in continuous Estimation-of-Distribution Algorithms

Peter A. N. Bosman; Jörn Grahl

Research into the dynamics of Genetic Algorithms (GAs) has led to the field of Estimation-of-Distribution Algorithms (EDAs). For discrete search spaces, EDAs have been developed that have obtained very promising results on a wide variety of problems. In this paper we investigate the conditions under which the adaptation of this technique to continuous search spaces fails to perform optimization efficiently. We show that without careful interpretation and adaptation of lessons learned from discrete EDAs, continuous EDAs will fail to perform efficient optimization on even some of the simplest problems. We reconsider the most important lessons to be learned in the design of EDAs and subsequently show how we can use this knowledge to extend continuous EDAs that were obtained by straightforward adaptation from the discrete domain so as to obtain an improvement in performance. Experimental results are presented to illustrate this improvement and to additionally confirm experimentally that a proper adaptation of discrete EDAs to the continuous case indeed requires careful consideration.
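
A quick demonstration of one simple failure mode consistent with this discussion, on about the simplest problem possible (maximizing a linear slope): under truncation selection, the ML variance shrinks geometrically, so the population stalls at a finite point even though the slope never ends.

```python
import random
import statistics

# maximize f(x) = x: a maximum-likelihood Gaussian EDA's variance shrinks
# faster than its mean can travel, so progress stalls on an endless slope
pop = [random.uniform(0.0, 1.0) for _ in range(100)]
for _ in range(80):
    selected = sorted(pop, reverse=True)[:30]     # truncation selection
    mu, sigma = statistics.fmean(selected), statistics.pstdev(selected)
    pop = [random.gauss(mu, sigma) for _ in range(100)]
print(f"population mean after 80 generations: {statistics.fmean(pop):.3f}")
```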


International Conference on Evolutionary Multi-Criterion Optimization | 2005

The naive MIDEA: a baseline multi-objective EA

Peter A. N. Bosman; Dirk Thierens

Estimation of distribution algorithms have been shown to perform well on a wide variety of single-objective optimization problems. Here, we look at a simple yet effective extension of this paradigm for multi-objective optimization, called the naive MIDEA.


Genetic and Evolutionary Computation Conference | 2007

Learning and anticipation in online dynamic optimization with evolutionary algorithms: the stochastic case

Peter A. N. Bosman; Han La Poutré


Collaboration


Dive into Peter A. N. Bosman's collaborations.

Top Co-Authors

A. Bel, University of Amsterdam
Jörn Grahl, University of Mannheim
Jan-Jakob Sonke, Netherlands Cancer Institute
Cees Witteveen, Delft University of Technology
Kleopatra Pirpinia, Netherlands Cancer Institute