Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Juergen Branke is active.

Publication


Featured research published by Juergen Branke.


genetic and evolutionary computation conference | 2007

Multi-objective particle swarm optimization on computer grids

Sanaz Mostaghim; Juergen Branke; Hartmut Schmeck

In recent years, a number of authors have successfully extended particle swarm optimization to problem domains with multiple objectives. This paper addresses the issue of parallelizing multi-objective particle swarms. We propose and empirically compare two parallel versions which differ in the way they divide the swarm into subswarms that can be processed independently on different processors. One of the variants works asynchronously and is thus particularly suitable for heterogeneous computer clusters, such as those occurring in modern grid computing platforms.
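As an illustration of the subswarm idea described above, the following minimal Python sketch divides a swarm into subswarms and processes them in separate processes; the toy objectives, the perturbation step and all parameter values are assumptions for illustration, not the authors' implementation.

# Minimal sketch: split a swarm into subswarms and process them in parallel.
import random
from multiprocessing import Pool

def evaluate(position):
    # Toy bi-objective problem standing in for the real objectives.
    return (sum(x * x for x in position),
            sum((x - 2.0) ** 2 for x in position))

def run_subswarm(particles, iterations=50):
    # Each worker evolves its subswarm in isolation; a random perturbation
    # stands in for the full particle update to keep the sketch short.
    for _ in range(iterations):
        particles = [[x + random.gauss(0.0, 0.1) for x in p] for p in particles]
    return [(p, evaluate(p)) for p in particles]

if __name__ == "__main__":
    swarm = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(40)]
    n_sub = 4
    subswarms = [swarm[i::n_sub] for i in range(n_sub)]   # divide the swarm
    with Pool(processes=n_sub) as pool:
        results = pool.map(run_subswarm, subswarms)       # one subswarm per process
    print(sum(len(sub) for sub in results), "particles evaluated in", n_sub, "subswarms")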


congress on evolutionary computation | 2009

Empirical comparison of MOPSO methods - Guide selection and diversity preservation -

Nikhil Padhye; Juergen Branke; Sanaz Mostaghim

In this paper, we review several proposals for guide selection in Multi-Objective Particle Swarm Optimization (MOPSO) and compare them with each other in terms of convergence, diversity and computational time. The new proposals made for guide selection, for both the personal best ('pbest') and the global best ('gbest'), are found to be extremely effective and perform well compared to existing methods. The combination of selection methods for choosing 'gbest' and 'pbest' is also studied, and it turns out that certain combinations yield overall superior performance, outperforming the others on the tested benchmark problems. Furthermore, two new proposals are made, namely a velocity trigger (as a substitute for the 'turbulence operator') and a new scheme for boundary handling.
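A common way to realise 'gbest' selection in MOPSO is a density-based pick from an external archive of non-dominated solutions. The sketch below uses a binary tournament on crowding distance, which is an assumed, illustrative rule and not necessarily one of the proposals compared in the paper.

# Illustrative 'gbest' selection from an archive of objective vectors.
import random

def crowding_distance(archive):
    # Larger distance = more isolated (less crowded) archive member.
    n, m = len(archive), len(archive[0])
    dist = [0.0] * n
    for obj in range(m):
        order = sorted(range(n), key=lambda i: archive[i][obj])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = (archive[order[-1]][obj] - archive[order[0]][obj]) or 1.0
        for k in range(1, n - 1):
            dist[order[k]] += (archive[order[k + 1]][obj]
                               - archive[order[k - 1]][obj]) / span
    return dist

def select_gbest(archive):
    # Binary tournament that prefers the less crowded member.
    dist = crowding_distance(archive)
    a, b = random.sample(range(len(archive)), 2)
    return archive[a] if dist[a] >= dist[b] else archive[b]

archive = [(0.1, 0.9), (0.3, 0.6), (0.5, 0.4), (0.8, 0.2)]
print(select_gbest(archive))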


genetic and evolutionary computation conference | 2003

Selection in the presence of noise

Juergen Branke; Christian Schmidt

For noisy optimization problems, there is generally a trade-off between the effort spent to reduce the noise (in order to allow the optimization algorithm to run properly) and the number of solutions evaluated during optimization. However, for stochastic search algorithms like evolutionary optimization, noise is not always a bad thing. On the contrary, in many cases noise has an effect very similar to the randomness that is deliberately introduced, e.g. during selection. Using the example of stochastic tournament selection, we show that the noise inherent in the optimization problem should be taken into account by the selection operator, and that one should not reduce noise further than necessary.
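The core idea, that evaluation noise can play the role of randomness otherwise introduced on purpose, can be sketched as follows; the objective, the noise model and the tournament rule are illustrative assumptions, not the procedure derived in the paper.

# Sketch: tournament selection that compares single noisy evaluations,
# letting the evaluation noise supply (part of) the selection randomness
# instead of averaging it away with many extra samples.
import random

def noisy_fitness(x, sigma=0.5):
    return -(x ** 2) + random.gauss(0.0, sigma)   # true fitness plus noise

def tournament_select(population, sigma=0.5):
    a, b = random.sample(population, 2)
    return a if noisy_fitness(a, sigma) >= noisy_fitness(b, sigma) else b

population = [random.uniform(-3, 3) for _ in range(20)]
parents = [tournament_select(population) for _ in range(10)]
print(parents)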


genetic and evolutionary computation conference | 2007

Performance measures and particle swarm methods for dynamic multi-objective optimization problems

Xiaodong Li; Juergen Branke; Michael Kirley

Introduction: Multiobjective optimization represents an important class of optimization techniques with direct relevance to many real-world problems. In recent years, using evolutionary algorithms to solve multiobjective optimization problems, commonly known as EMO (Evolutionary Multi-objective Optimization), has rapidly gained popularity. Since Evolutionary Algorithms (EAs) make use of a population of candidate solutions, a diverse set of optimal solutions, the so-called Pareto-optimal solutions, can be found within a single run. This gives EAs a distinct advantage over many traditional optimization methods, in which multiple solutions must be found through multiple separate runs.
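Since the argument rests on Pareto optimality, a small sketch of a dominance check and of extracting the non-dominated set from a population may be helpful (minimization is assumed; the example vectors are arbitrary).

# Pareto dominance and the non-dominated subset of a population.
def dominates(a, b):
    # a dominates b if it is no worse in every objective
    # and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(population):
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

population = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(non_dominated(population))   # (3.0, 3.0) is dominated by (2.0, 2.0)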


world congress on computational intelligence | 2008

Asynchronous multiple objective particle swarm optimisation in unreliable distributed environments

Ian Scriven; David John Ireland; Andrew Lewis; Sanaz Mostaghim; Juergen Branke

This paper examines the performance characteristics of both asynchronous and synchronous parallel particle swarm optimisation algorithms in heterogeneous, fault-prone environments. Algorithm convergence is measured as a function of both iterations completed and time elapsed, allowing the two particle update mechanisms to be comprehensively evaluated and compared in such an environment. Asynchronous particle updates are shown to slow convergence with respect to iterations completed; however, the increased parallel efficiency of the asynchronous model appears to counter this performance reduction, ensuring that the asynchronous update mechanism performs comparably to the synchronous mechanism in fault-free environments. When faults are introduced, the synchronous update method is shown to suffer significant performance drops, suggesting that at least partly asynchronous algorithms should be used in real-world environments where faults occur regularly.
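The difference between the two update schemes can be sketched with a thread pool: the synchronous variant waits at a barrier for all evaluations, while the asynchronous variant handles each particle as soon as its evaluation returns. The objective, the simulated delays and the pool size are assumptions for illustration only.

# Synchronous vs. asynchronous evaluation of a swarm (illustrative).
import random, time
from concurrent.futures import ThreadPoolExecutor, as_completed

def evaluate(particle):
    time.sleep(random.uniform(0.01, 0.05))       # heterogeneous worker speeds
    return sum(x * x for x in particle)

def synchronous_step(swarm, pool):
    # Barrier: no particle is updated before every evaluation has finished.
    return list(pool.map(evaluate, swarm))

def asynchronous_step(swarm, pool):
    # Each particle is updated as soon as its own evaluation returns,
    # so slow (or failed) workers do not block the rest of the swarm.
    futures = {pool.submit(evaluate, p): i for i, p in enumerate(swarm)}
    fitness = [None] * len(swarm)
    for fut in as_completed(futures):
        fitness[futures[fut]] = fut.result()
    return fitness

swarm = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    print(synchronous_step(swarm, pool))
    print(asynchronous_step(swarm, pool))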


genetic and evolutionary computation conference | 2007

Addressing sampling errors and diversity loss in UMDA

Juergen Branke; Clemens Lode; Jonathan Shapiro

Estimation of distribution algorithms replace the typical crossover and mutation operators by constructing a probabilistic model and generating offspring according to this model. In previous studies, it has been shown that this generally leads to diversity loss due to sampling errors. In this paper, for the case of the simple Univariate Marginal Distribution Algorithm (UMDA), we propose and test several methods for counteracting diversity loss. The diversity loss can come in two phases: sampling from the probability model (offspring generation) and selection. We show that it is possible to completely remove the sampling error during offspring generation. Furthermore, we examine several plausible model construction variants which counteract diversity loss during selection and demonstrate that these update rules work better than the standard update on a variety of simple test problems.
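For reference, a minimal UMDA loop on bit strings looks roughly as follows; the sketch uses the standard model update and a OneMax objective, and does not reproduce the corrected variants proposed in the paper.

# Minimal UMDA: select, estimate per-bit marginals, sample offspring.
import random

def onemax(x):
    return sum(x)

def umda(n_bits=20, pop_size=50, n_select=25, generations=30):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection of the best individuals.
        selected = sorted(pop, key=onemax, reverse=True)[:n_select]
        # Univariate model: per-bit frequency among the selected parents.
        p = [sum(ind[i] for ind in selected) / n_select for i in range(n_bits)]
        # Sampling step: this is where additional sampling error (and hence
        # diversity loss) can enter on top of the loss caused by selection.
        pop = [[1 if random.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
    return max(pop, key=onemax)

print(onemax(umda()))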


Applied Soft Computing | 2007

Evolutionary design of en-route caching strategies

Juergen Branke; Pablo Funes; Frederik Thiele

Nowadays, large distributed databases are commonplace. Client applications increasingly rely on accessing objects from multiple remote hosts. The Internet itself is a huge network of computers, sending documents point-to-point by routing packetized data over multiple intermediate relays. As hubs in the network become overutilized, slowdowns and timeouts can disrupt the process. It is thus worthwhile to think about ways to minimize these effects. Caching, i.e. storing replicas of previously seen objects for later reuse, has the potential to generate large bandwidth savings and, in turn, a significant decrease in response time. En-route caching is the concept that all nodes in a network are equipped with a cache and may opt to keep copies of some documents for future reuse [X. Tang, S.T. Chanson, Coordinated en-route web caching, IEEE Transactions on Computers 51(6) (2002) 595-607]. The rules used for such decisions are called caching strategies. Designing such strategies is a challenging task, because the different nodes interact, resulting in a complex, dynamic system. In this paper, we use genetic programming to evolve good caching strategies, both for specific networks and for network classes. An important result is a new, innovative caching strategy that outperforms current state-of-the-art methods.
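A caching strategy in this sense is simply a rule that each node applies to decide whether to keep a passing document. The sketch below shows the shape of such a rule; the features, weights and threshold are invented for illustration and are not the strategy evolved in the paper.

# Shape of an en-route caching decision rule (illustrative only).
def cache_decision(doc_size, request_rate, distance_to_origin,
                   cache_free_fraction, threshold=1.0):
    # Favour small, popular documents fetched from far away, and be more
    # willing to cache while plenty of cache space is still free.
    score = (request_rate * distance_to_origin / max(doc_size, 1.0)
             * (0.5 + cache_free_fraction))
    return score > threshold

print(cache_decision(doc_size=10.0, request_rate=2.0,
                     distance_to_origin=8.0, cache_free_fraction=0.7))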


genetic and evolutionary computation conference | 2009

Analysis of coevolution for worst-case optimization

Philipp Stuermer; Anthony Bucci; Juergen Branke; Pablo Funes; Elena Popovici

The problem of finding entities with the best worst-case performance across multiple scenarios arises in domains ranging from job shop scheduling to designing physical artifacts. In spite of previous successful applications of evolutionary computation techniques, particularly coevolution, to such domains, little work has examined utilizing coevolution for optimizing worst-case behavior. Previous work assesses certain algorithm mechanisms using aggregate performance on test problems. We examine fitness and population trajectories of individual algorithm runs, making two observations: first, that aggregate plots wash out important effects that call into question what these algorithms can produce; and second, that none of the mechanisms is generally better than the rest. More importantly, our dynamics analysis explains how the interplay of algorithm properties and problem properties influences performance. These contributions argue in favor of a reassessment of what makes for a good worst-case coevolutionary algorithm and suggest how to design one.
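Worst-case optimization itself can be stated very compactly: a candidate is scored by its performance under the scenario that is worst for it. The sketch below shows this plain max-min evaluation with toy functions and values; the coevolutionary mechanisms analysed in the paper are not reproduced.

# Max-min (worst-case) evaluation over a set of scenarios.
def performance(candidate, scenario):
    # Toy stand-in for, e.g., a schedule evaluated under a disturbance.
    return -abs(candidate - scenario)

def worst_case_fitness(candidate, scenarios):
    return min(performance(candidate, s) for s in scenarios)

candidates = [0.0, 1.0, 2.5]
scenarios = [0.0, 2.0, 4.0]
best = max(candidates, key=lambda c: worst_case_fitness(c, scenarios))
print(best, worst_case_fitness(best, scenarios))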


genetic and evolutionary computation conference | 2005

Evolutionary algorithms for dynamic optimization problems: workshop preface

Shengxiang Yang; Juergen Branke

Evolutionary algorithms (EAs) have been widely applied to solve stationary optimization problems. However, many real-world optimization problems are actually dynamic: new jobs have to be added to a schedule, the quality of the raw material may change, or new orders have to be incorporated into a vehicle routing problem. In such cases, when the problem changes over the course of the optimization, the purpose of the optimization algorithm shifts from finding an optimal solution to continuously tracking the movement of the optimum over time. This seriously challenges traditional EAs, since once converged they cannot adapt well to a changing environment.


genetic and evolutionary computation conference | 2003

Ant-based crossover for permutation problems

Juergen Branke; Christiane Barz; Ivesa Behrens

Crossover for evolutionary algorithms applied to permutation problems is a difficult and widely discussed topic. In this paper we use ideas from ant colony optimization to design a new permutation crossover operator. One advantage of the new crossover operator is the ease with which problem-specific heuristic knowledge can be introduced. Empirical tests on a travelling salesperson problem show that the new crossover operator yields excellent results and significantly outperforms evolutionary algorithms with the edge recombination operator as well as pure ant colony optimization.
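The construction principle, building the offspring tour city by city while favouring edges taken from the parents and short edges as heuristic knowledge, can be sketched as follows; the weighting scheme and all parameters are illustrative assumptions rather than the operator defined in the paper.

# Rough sketch of an ant-colony-style crossover for permutations.
import math, random

def ant_based_crossover(parent1, parent2, dist, parent_bonus=5.0):
    n = len(parent1)
    def neighbours(tour, city):
        i = tour.index(city)
        return {tour[(i + 1) % n], tour[(i - 1) % n]}
    current = random.choice(parent1)
    child, unvisited = [current], set(parent1) - {current}
    while unvisited:
        cities = list(unvisited)
        weights = []
        for city in cities:
            w = 1.0 / max(dist(current, city), 1e-9)   # heuristic: prefer short edges
            if city in neighbours(parent1, current) or city in neighbours(parent2, current):
                w *= parent_bonus                       # prefer edges present in a parent
            weights.append(w)
        current = random.choices(cities, weights=weights)[0]
        child.append(current)
        unvisited.remove(current)
    return child

coords = {c: (random.random(), random.random()) for c in range(8)}
distance = lambda a, b: math.dist(coords[a], coords[b])
p1, p2 = list(range(8)), random.sample(range(8), 8)
print(ant_based_crossover(p1, p2, distance))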

Collaboration


An overview of Juergen Branke's collaborations.

Top Co-Authors

Sanaz Mostaghim (Otto-von-Guericke University Magdeburg)
Christian Schmidt (Karlsruhe Institute of Technology)
Clemens Lode (Karlsruhe Institute of Technology)
Frederik Thiele (Karlsruhe Institute of Technology)
Hartmut Schmeck (Karlsruhe Institute of Technology)
Ivesa Behrens (Karlsruhe Institute of Technology)
Philipp Stuermer (Karlsruhe Institute of Technology)