Jürgen Branke
University of Warwick
Publications
Featured research published by Jürgen Branke.
IEEE Transactions on Evolutionary Computation | 2005
Yaochu Jin; Jürgen Branke
Evolutionary algorithms often have to solve optimization problems in the presence of a wide range of uncertainties. Generally, uncertainties in evolutionary computation can be divided into four categories. First, the fitness function is noisy. Second, the design variables and/or the environmental parameters may change after optimization, so the quality of the obtained optimal solution should be robust against environmental changes or deviations from the optimal point. Third, the fitness function is approximated and therefore suffers from approximation errors. Fourth, the optimum of the problem to be solved changes over time, and thus the optimizer should be able to track the optimum continuously. In all these cases, additional measures must be taken so that evolutionary algorithms can still work satisfactorily. This paper attempts to provide a comprehensive overview, within a unified framework, of the related work, which has been scattered across a variety of research areas. Existing approaches to addressing the different uncertainties are presented and discussed, and the relationships between the different categories of uncertainty are investigated. Finally, topics for future research are suggested.
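The first category, for instance, is commonly handled by explicit resampling: each candidate is evaluated several times and the sample mean replaces a single noisy measurement. A minimal sketch in Python (the quadratic test function, noise level, and sample size are illustrative assumptions, not taken from the survey):

```python
import random

def noisy_fitness(x, sigma=0.1):
    # True fitness -x^2, corrupted by additive Gaussian noise (illustrative).
    return -x * x + random.gauss(0.0, sigma)

def resampled_fitness(x, n_samples=5, sigma=0.1):
    # Average several evaluations; the noise variance shrinks by a factor
    # of n_samples, at the cost of n_samples times the evaluation budget.
    return sum(noisy_fitness(x, sigma) for _ in range(n_samples)) / n_samples
```

The same wrapper pattern extends naturally to the second category, e.g. averaging over deliberately perturbed design variables to estimate the robustness of a solution.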
Archive | 2001
Jürgen Branke
Preface
1. Brief Introduction to Evolutionary Algorithms
Part I: Enabling Continuous Adaptation
2. Optimization in Dynamic Environments
3. Survey: State of the Art
4. From Memory to Self-Organization
5. Empirical Evaluation
6. Summary of Part I
Part II: Considering Adaptation Cost
7. Adaptation Cost vs. Solution Quality
Part III: Robustness and Flexibility - Precaution against Changes
8. Searching for Robust Solutions
9. From Robustness to Flexibility
10. Summary and Outlook
References
Index
Congress on Evolutionary Computation | 1999
Jürgen Branke
Recently, there has been increased interest in evolutionary computation applied to changing optimization problems. The paper surveys a number of approaches that extend the evolutionary algorithm with implicit or explicit memory, suggests a new benchmark problem and examines under which circumstances a memory may be helpful. From these observations, we derive a new way to explore the benefits of a memory while minimizing its negative side effects.
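The explicit-memory idea can be sketched as follows; this is a generic illustration of the scheme, not the exact mechanism or parameter settings from the paper:

```python
import math

def distance(a, b):
    # Euclidean distance between two real-valued genomes.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def update_memory(memory, best, capacity=10):
    # Store the current best individual; when the memory is full, replace
    # the most similar stored entry so the memory stays diverse.
    if len(memory) < capacity:
        memory.append(best)
    else:
        closest = min(range(len(memory)), key=lambda i: distance(memory[i], best))
        memory[closest] = best

def reinsert_memory(population, memory, fitness):
    # After a detected environment change, re-evaluate the stored solutions
    # in the new environment and let them replace worse population members.
    for m in memory:
        worst = min(range(len(population)), key=lambda i: fitness(population[i]))
        if fitness(m) > fitness(population[worst]):
            population[worst] = m
```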
Lecture Notes in Computer Science | 2004
Jürgen Branke
Many real-world problems are dynamic, requiring an optimization algorithm which is able to continuously track a changing optimum over time. In this paper, we present new variants of Particle Swarm Optimization (PSO) specifically designed to work well in dynamic environments. The main idea is to extend the single population PSO and Charged Particle Swarm Optimization (CPSO) methods by constructing interacting multi-swarms. In addition, a new algorithmic variant, which broadens the implicit atomic analogy of CPSO to a quantum model, is introduced. The multi-swarm algorithms are tested on a multi-modal dynamic function – the moving peaks benchmark – and results are compared to the single population approach of PSO and CPSO, and to results obtained by a state-of-the-art evolutionary algorithm, namely self-organizing scouts (SOS). We show that our multi-swarm optimizer significantly outperforms single population PSO on this problem, and that multi-quantum swarms are superior to multi-charged swarms and SOS.
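The quantum idea can be illustrated in a few lines: instead of updating a velocity, a quantum particle is re-sampled around the swarm attractor each iteration. A sketch under the assumption of uniform sampling inside a hypersphere (the paper's operator may differ in detail):

```python
import math
import random

def quantum_position(gbest, r_cloud):
    # Re-sample a quantum particle uniformly inside a hypersphere of radius
    # r_cloud centred on the swarm attractor gbest; no velocity is used.
    d = len(gbest)
    direction = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(c * c for c in direction))
    radius = r_cloud * random.random() ** (1.0 / d)  # uniform in the ball
    return [g + radius * c / norm for g, c in zip(gbest, direction)]
```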
Archive | 2000
Jürgen Branke; Christian Smidt; Hartmut Schmeck
Time-dependent optimization problems pose a new challenge to evolutionary algorithms, since they not only require a search for the optimum, but also continuous tracking of the optimum over time. In this paper, we use concepts from the "forking GA" (a multi-population evolutionary algorithm proposed to find multiple peaks in a multi-modal landscape) to enhance search in a dynamic landscape. The algorithm uses a number of smaller populations to track the most promising peaks over time, while a larger parent population continuously searches for new peaks. We show that this approach is indeed suitable for dynamic optimization problems by testing it on the recently proposed Moving Peaks Benchmark.
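A generic sketch of this parent/child division of labour (the convergence test and the parameters are illustrative, not the paper's exact rules):

```python
import math

def spread(population):
    # Maximum pairwise distance -- a crude measure of convergence.
    return max((math.dist(a, b) for a in population for b in population),
               default=0.0)

def maybe_fork(parent, children, radius, fitness, random_individual):
    # When the parent population has clustered inside a small region, split
    # off a child population around its best member to keep tracking that
    # peak, and restart the parent so it can search for new peaks.
    if spread(parent) < radius:
        best = max(parent, key=fitness)
        children.append([ind for ind in parent
                         if math.dist(ind, best) <= radius])
        parent[:] = [random_individual() for _ in parent]
```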
Parallel Problem Solving from Nature | 2004
Jürgen Branke; Kalyanmoy Deb; Henning Dierolf; Matthias Osswald
Many real-world optimization problems have several, usually conflicting objectives. Evolutionary multi-objective optimization usually solves this predicament by searching for the whole Pareto-optimal front of solutions, and relies on a decision maker to finally select a single solution. However, in particular if the number of objectives is large, the number of Pareto-optimal solutions may be huge, and it may be very difficult to pick one “best” solution out of this large set of alternatives. As we argue in this paper, the most interesting solutions of the Pareto-optimal front are solutions where a small improvement in one objective would lead to a large deterioration in at least one other objective. These solutions are sometimes also called “knees”. We then introduce a new modified multi-objective evolutionary algorithm which is able to focus search on these knee regions, resulting in a smaller set of solutions which are likely to be more relevant to the decision maker.
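One simple way to make the notion of a knee concrete for two minimised objectives: on a Pareto front, the knee candidate is the point farthest from the straight line through the two extreme solutions. This is a common textbook heuristic used here only to illustrate the concept; the paper's own measures differ:

```python
import math

def knee_point(front):
    # front: list of (f1, f2) points on a bi-objective Pareto front,
    # both objectives minimised. Returns the point with the largest
    # perpendicular distance to the line through the two extremes.
    front = sorted(front)
    (x1, y1), (x2, y2) = front[0], front[-1]
    a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2  # line: a*x + b*y + c = 0
    denom = math.hypot(a, b)
    return max(front, key=lambda p: abs(a * p[0] + b * p[1] + c) / denom)
```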
Advances in Engineering Software | 2001
Jürgen Branke; T Kaußler; Hartmut Schmeck
Many real-world design problems involve multiple, usually conflicting optimization criteria. Often, it is very difficult to weight the criteria exactly before alternatives are known. Multi-objective evolutionary algorithms based on the principle of Pareto optimality are designed to explore the complete set of non-dominated solutions, which then allows the user to choose among many alternatives. However, although it is very difficult to define the weighting of different optimization criteria exactly, the user usually has some notion of what range of weightings might be reasonable. In this paper, we present a novel, simple, and intuitive way to integrate the user's preferences into the evolutionary algorithm by allowing the user to define linear maximum and minimum trade-off functions. On a number of test problems, we show that the proposed algorithm efficiently guides the population towards the interesting region, allowing faster convergence and better coverage of this area of the Pareto-optimal front.
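The trade-off bounds can be folded into a modified dominance relation. In the commonly cited formulation of guided domination (reproduced here as a sketch; the constants should be checked against the paper), two auxiliary objectives are built from the user's acceptable trade-offs:

```python
def guided_dominates(p, q, a12, a21):
    # p, q: objective vectors (f1, f2), both minimised.
    # a12, a21 >= 0 encode the user's linear trade-off bounds.
    # p guided-dominates q iff p is no worse on both auxiliary objectives
    #   omega1 = f1 + a12 * f2   and   omega2 = a21 * f1 + f2
    # and strictly better on at least one.
    w_p = (p[0] + a12 * p[1], a21 * p[0] + p[1])
    w_q = (q[0] + a12 * q[1], a21 * q[0] + q[1])
    return all(x <= y for x, y in zip(w_p, w_q)) and w_p != w_q
```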
Management Science | 2007
Jürgen Branke; Stephen E. Chick; Christian Schmidt
Selection procedures are used in a variety of applications to select the best of a finite set of alternatives. “Best” is defined with respect to the largest mean, but the mean is inferred through statistical sampling, as in simulation optimization. There is a wide variety of procedures, which raises the question of which selection procedure to select. The main contribution of this paper is to identify, through extensive experimentation, the most effective selection procedures when samples are independent and normally distributed. We also (a) summarize the main structural approaches to deriving selection procedures, (b) formalize new sampling allocations and stopping rules, (c) identify strengths and weaknesses of the procedures, (d) identify some theoretical links between them, and (e) present an innovative empirical test bed with the most extensive numerical comparison of selection procedures to date. The most efficient and easiest-to-control procedures allocate samples with a Bayesian model for uncertainty about the means and use new adaptive stopping rules proposed here.
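For orientation, a deliberately naive sequential selection loop is sketched below; it is not one of the procedures compared in the paper, only an illustration of the sample-allocate-stop structure that all such procedures share (every parameter here is an illustrative assumption):

```python
import statistics

def select_best(sample, k, n0=10, batch=5, budget=2000, delta=0.05):
    # sample(i) draws one observation of alternative i's performance.
    # Stage 1: n0 initial samples per alternative.
    data = {i: [sample(i) for _ in range(n0)] for i in range(k)}
    spent = k * n0
    while spent + 2 * batch <= budget:
        means = {i: statistics.mean(v) for i, v in data.items()}
        ranked = sorted(means, key=means.get, reverse=True)
        best, runner = ranked[0], ranked[1]
        # Stop once the two leading alternatives are clearly separated.
        if means[best] - means[runner] > delta:
            break
        # Otherwise, spend the next batch on the hardest-to-separate pair.
        for i in (best, runner):
            data[i].extend(sample(i) for _ in range(batch))
            spent += batch
    return max(data, key=lambda i: statistics.mean(data[i]))
```

For example, with `sample = lambda i: random.gauss(i, 1.0)` and `k = 3`, the loop concentrates its sampling on alternatives 1 and 2 and returns 2 with high probability.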
Genetic and Evolutionary Computation Conference | 2006
Xiaodong Li; Jürgen Branke
This paper describes an extension to a speciation-based particle swarm optimizer (SPSO) to improve performance in dynamic environments. The improved SPSO adopts several techniques that have proven useful. In particular, SPSO is shown to be able to adapt to a series of dynamic test cases with a varying number of peaks (assuming maximization). Inspired by the concept of quantum swarms, this paper also proposes a particle diversification method that promotes particle diversity within each converged species. Our results on the moving peaks benchmark functions suggest that SPSO incorporating this particle diversification method can greatly improve its adaptability and hence its optimum-tracking performance.
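The speciation step at the heart of SPSO can be sketched as follows (a generic version of the procedure; radius handling and tie-breaking in the paper may differ):

```python
import math

def form_species(particles, fitness, r_s):
    # Sort particles by fitness (best first); the best unassigned particle
    # becomes a species seed, and every unassigned particle within radius
    # r_s of that seed joins its species.
    order = sorted(range(len(particles)),
                   key=lambda i: fitness(particles[i]), reverse=True)
    species, assigned = [], set()
    for i in order:
        if i in assigned:
            continue
        members = [j for j in order if j not in assigned
                   and math.dist(particles[i], particles[j]) <= r_s]
        assigned.update(members)
        species.append(members)
    return species
```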
Congress on Evolutionary Computation | 2005
Lam Thu Bui; Hussein A. Abbass; Jürgen Branke
This paper investigates the use of evolutionary multi-objective optimization methods (EMOs) for solving single-objective optimization problems in dynamic environments. A number of authors have proposed the use of EMOs for maintaining diversity in a single-objective optimization task, transforming the single-objective optimization problem into a multi-objective one by adding an artificial objective function. We extend this work by looking at the dynamic single-objective task and examining a number of different possibilities for the artificial objective function. We adopt the non-dominated sorting genetic algorithm version 2 (NSGA-II). The results show that the resulting formulations are promising and competitive with other methods for handling dynamic environments.
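One of the simplest artificial objectives of this kind is a diversity measure such as the distance to the nearest other individual; the sketch below illustrates the idea (the paper compares several alternative artificial objectives, and this particular choice is only one option):

```python
import math

def bi_objective_scores(population, fitness):
    # Pair each individual's original fitness (maximised) with an artificial
    # second objective: its distance to the nearest other individual (also
    # maximised, rewarding diversity). The resulting (f1, f2) pairs can be
    # fed to NSGA-II's non-dominated sorting.
    scores = []
    for i, ind in enumerate(population):
        nearest = min(math.dist(ind, other)
                      for j, other in enumerate(population) if j != i)
        scores.append((fitness(ind), nearest))
    return scores
```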