Dirk Sudholt
University of Sheffield
Publications
Featured research published by Dirk Sudholt.
IEEE Transactions on Evolutionary Computation | 2013
Dirk Sudholt
In this paper a new method for proving lower bounds on the expected running time of evolutionary algorithms (EAs) is presented. It is based on fitness-level partitions and an additional condition on transition probabilities between fitness levels. The method is versatile, intuitive, elegant, and very powerful. It yields exact or near-exact lower bounds for LO, OneMax, long k-paths, and all functions with a unique optimum. Most lower bounds are very general; they hold for all EAs that only use bit-flip mutation as variation operator, i.e., for all selection operators and population models. The lower bounds are stated with their dependence on the mutation rate. These results have very strong implications. They allow us to determine the optimal mutation-based algorithm for LO and OneMax, i.e., the algorithm that minimizes the expected number of fitness evaluations. This includes the choice of the optimal mutation rate.
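For illustration, here is a minimal Python sketch (not taken from the paper) of one mutation-based algorithm covered by such lower bounds: a (1+1) EA using standard bit-flip mutation on OneMax. The mutation rate p and the evaluation budget are illustrative choices.

```python
import random

def one_max(x):
    """OneMax: number of ones in the bit string."""
    return sum(x)

def one_plus_one_ea(n, p=None, max_evals=100_000):
    """(1+1) EA with standard bit-flip mutation, one example of the
    mutation-based algorithms the lower bounds apply to.
    p is the mutation rate; p = 1/n is the classical default."""
    if p is None:
        p = 1.0 / n
    x = [random.randint(0, 1) for _ in range(n)]
    fx = one_max(x)
    evals = 1
    while fx < n and evals < max_evals:
        # flip each bit independently with probability p
        y = [1 - b if random.random() < p else b for b in x]
        fy = one_max(y)
        evals += 1
        if fy >= fx:  # elitist selection: keep the better (or equally good) offspring
            x, fx = y, fy
    return evals

print(one_plus_one_ea(50))
```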
genetic and evolutionary computation conference | 2005
Dirk Sudholt
Due to experimental evidence it is incontestable that crossover is essential for some fitness functions. However, theoretical results without assumptions are difficult. So-called real royal road functions are known where crossover is proved to be essential, i.e., mutation-based algorithms have an exponential expected runtime while the expected runtime of a genetic algorithm is polynomially bounded. However, these functions are artificial and have been designed in such a way that crossover is essential only at the very end (or at other well-specified points) of the optimization process. Here, a more natural fitness function based on a generalized Ising model is presented where crossover is essential throughout the whole optimization process. Mutation-based algorithms such as (μ+λ) EAs with constant population size are proved to have an exponential expected runtime, while the expected runtime of a simple genetic algorithm with population size 2 and fitness sharing is polynomially bounded.
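To make the fitness function concrete, the following sketch evaluates an Ising-style fitness that rewards edges whose endpoints agree; the example ring graph is purely illustrative and not necessarily the graph class studied in the paper.

```python
def ising_fitness(x, edges):
    """Generalized Ising model: each edge whose endpoints carry equal
    bit values ('aligned spins') contributes 1 to the fitness."""
    return sum(1 for u, v in edges if x[u] == x[v])

# Example: a ring of n vertices; both all-zeros and all-ones are optimal,
# which is what makes recombining complementary building blocks attractive.
n = 8
ring = [(i, (i + 1) % n) for i in range(n)]
print(ising_fitness([0] * n, ring))            # 8, optimal
print(ising_fitness([0] * 4 + [1] * 4, ring))  # 6, two "domain walls"
```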
Evolutionary Computation | 2009
Tobias Friedrich; Pietro Simone Oliveto; Dirk Sudholt; Carsten Witt
Maintaining diversity is important for the performance of evolutionary algorithms. Diversity-preserving mechanisms can enhance global exploration of the search space and enable crossover to find dissimilar individuals for recombination. We focus on the global exploration capabilities of mutation-based algorithms. Using a simple bimodal test function and rigorous runtime analyses, we compare well-known diversity-preserving mechanisms like deterministic crowding, fitness sharing, and others with a plain algorithm without diversification. We show that diversification is necessary for global exploration, but not all mechanisms succeed in finding both optima efficiently. Our theoretical results are accompanied by additional experiments for different population sizes.
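As an illustration of one of the mechanisms mentioned, here is a sketch of textbook fitness sharing with a Hamming-distance niche; the sharing radius sigma and exponent alpha are assumed names, and the paper's exact variant may differ.

```python
def hamming(x, y):
    """Hamming distance between two bit strings of equal length."""
    return sum(a != b for a, b in zip(x, y))

def shared_fitness(x, population, f, sigma, alpha=1.0):
    """Textbook fitness sharing: an individual's fitness is divided by its
    niche count, so crowded regions of the search space look less attractive."""
    niche_count = sum(
        max(0.0, 1.0 - (hamming(x, y) / sigma) ** alpha)
        for y in population
    )
    return f(x) / niche_count  # >= 1 when x is a member of the population

pop = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 0, 1]]
print(shared_fitness(pop[0], pop, f=sum, sigma=2))
```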
Swarm Intelligence | 2009
Frank Neumann; Dirk Sudholt; Carsten Witt
Recently, the first rigorous runtime analyses of ACO algorithms appeared, covering variants of the MAX–MIN ant system and their runtime on pseudo-Boolean functions. Interestingly, a variant called 1-ANT is very sensitive to the evaporation factor, while Gutjahr and Sebastiani proved partly opposite results for their variant MMASbs. These algorithms differ in their pheromone update mechanisms and, moreover, 1-ANT accepts equally fit solutions, in contrast to MMASbs. By analyzing variants of MMASbs, we prove that the different behavior of 1-ANT and MMASbs results from the different pheromone update mechanisms. Building upon results by Gutjahr and Sebastiani, we extend their analyses of MMASbs to the class of unimodal functions and show improved results for test functions using new and specialized techniques; in particular, we present new lower bounds. Finally, we compare MMASbs with a variant that also accepts equally fit solutions, as this enables the exploration of plateaus. For well-known plateau functions we prove that this drastically reduces the optimization time. Our findings are complemented by experiments that support our asymptotic analyses and yield additional insights.
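A hedged sketch of a best-so-far MMAS-style pheromone update for pseudo-Boolean problems is given below; the pheromone bounds 1/n and 1 - 1/n are a common assumption, not necessarily the exact parameters analyzed in the paper.

```python
import random

def mmas_update(tau, best, rho, n):
    """Best-so-far MMAS-style pheromone update on a pseudo-Boolean problem:
    evaporate with factor rho, reinforce towards the best-so-far solution,
    and clamp to the assumed bounds [1/n, 1 - 1/n]."""
    lo, hi = 1.0 / n, 1.0 - 1.0 / n
    return [min(hi, max(lo, (1 - rho) * t + rho * b)) for t, b in zip(tau, best)]

def construct(tau):
    """Sample a solution: bit i is set to 1 with probability tau[i]."""
    return [1 if random.random() < t else 0 for t in tau]

n = 10
tau = [0.5] * n
best_so_far = [1] * n
tau = mmas_update(tau, best_so_far, rho=0.1, n=n)
print(construct(tau))
```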
genetic and evolutionary computation conference | 2007
Benjamin Doerr; Frank Neumann; Dirk Sudholt; Carsten Witt
The runtime analysis of randomized search heuristics is a growing field where, in the last two decades, many rigorous results have been obtained. These results, however, apply mainly to classical search heuristics such as Evolutionary Algorithms (EAs) and Simulated Annealing. First runtime analyses of modern search heuristics have been conducted only recently, w.r.t. a simple Ant Colony Optimization (ACO) algorithm called 1-ANT. In particular, the influence of the evaporation factor in the pheromone update mechanism and the robustness of this parameter w.r.t. the runtime behavior have been determined for the example function OneMax. This paper puts forward the rigorous runtime analysis of the 1-ANT on further example functions, namely LeadingOnes and BinVal. With respect to EAs, such analyses have been essential to develop methods for the analysis of more complicated problems. The proof techniques required for the 1-ANT, unfortunately, differ significantly from those for EAs, which means that a new reservoir of methods has to be built up. Again, the influence of the evaporation factor is analyzed rigorously, and it is proved that its choice can be crucial for achieving efficient runtimes. Moreover, the analyses provide insight into the working principles of ACO algorithms and, in terms of their robustness, describe essential differences to other randomized search heuristics.
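For readers unfamiliar with the benchmarks, these are the standard definitions of LeadingOnes and BinVal, sketched in Python.

```python
def leading_ones(x):
    """LeadingOnes: length of the longest prefix consisting only of ones."""
    count = 0
    for bit in x:
        if bit == 1:
            count += 1
        else:
            break
    return count

def bin_val(x):
    """BinVal: interpret the bit string as a binary number, so the
    leftmost bit carries the highest weight 2^(n-1)."""
    n = len(x)
    return sum(bit << (n - 1 - i) for i, bit in enumerate(x))

print(leading_ones([1, 1, 0, 1]))  # 2
print(bin_val([1, 1, 0, 1]))       # 13
```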
genetic and evolutionary computation conference | 2011
Timo Kötzing; Dirk Sudholt; Madeleine Theile
Understanding the impact of crossover on performance is a major problem in the theory of genetic algorithms (GAs). We present new insight into the working principles of crossover by analyzing the performance of crossover-based GAs on the simple functions OneMax and Jump. First, we assess the potential speedup by crossover when combined with a fitness-invariant bit-shuffling operator that simulates a lineage of independent evolution on a function of unitation. Theoretical and empirical results show drastic speedups for both functions. Second, we consider a simple GA without shuffling and investigate the interplay of mutation and crossover on Jump. If the crossover probability is small, subsequent mutations create sufficient diversity, even for very small populations. In contrast, with high crossover probabilities crossover tends to lose diversity more quickly than mutation can create it. This has a drastic impact on the performance on Jump. We complement our theoretical findings by Monte Carlo simulations of the population diversity.
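To clarify the setting, the sketch below gives the standard Jump_k function and a uniform crossover operator; both are textbook definitions, not code from the paper.

```python
import random

def jump(x, k):
    """Jump_k: like OneMax plus k, except for a 'gap' of solutions with
    more than n-k but fewer than n ones, which forms a fitness valley
    around the global optimum (the all-ones string)."""
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones

def uniform_crossover(x, y):
    """Uniform crossover: each offspring bit is taken from either parent
    with equal probability; two parents on the local optimum with different
    zero-positions can produce the global optimum in a single step."""
    return [a if random.random() < 0.5 else b for a, b in zip(x, y)]
```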
Theoretical Computer Science | 2009
Dirk Sudholt
Memetic (evolutionary) algorithms integrate local search into the search process of evolutionary algorithms. As computational resources have to be spread adequately among local and evolutionary search, one has to decide when to apply local search and how much computational effort to devote to it. Often local search is called with a fixed frequency and run for a fixed number of iterations, the local search depth. There is empirical evidence that these parameters have a significant impact on performance, but a theoretical understanding as well as concrete design guidelines are missing. We initiate the rigorous theoretical analysis of memetic algorithms. To this end, we consider a simple memetic algorithm for pseudo-Boolean optimization that captures the basic working principles of memetic algorithms: the interplay of genetic operators like mutation and selection with local search. We present function classes where even small changes of the parametrization have a strong impact on performance. For almost every reasonable parameter setting we construct a function that, with high probability, can be optimized in polynomial time. However, changing the local search depth by a small additive term in either direction yields a superpolynomial optimization time, with high probability. For another class of functions, altering the local search frequency by a factor of 2 even yields exponential optimization times. Our results show exemplarily that parametrizing memetic evolutionary algorithms can be extremely hard. Moreover, this work yields insights into the dynamic behavior of memetic algorithms and contributes to a theoretical foundation of hybrid metaheuristics.
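A minimal sketch of a (1+1)-style memetic algorithm illustrates the two parameters discussed, local search frequency and depth; the names freq and depth and the first-improvement hill climber are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def local_search(x, f, depth):
    """First-improvement hill climbing for at most `depth` improving steps."""
    x, fx = x[:], f(x)
    for _ in range(depth):
        improved = False
        for i in random.sample(range(len(x)), len(x)):
            x[i] ^= 1               # tentatively flip bit i
            if f(x) > fx:
                fx = f(x)
                improved = True
                break
            x[i] ^= 1               # undo the flip
        if not improved:
            break                   # local optimum reached
    return x

def memetic_ea(f, n, freq, depth, max_gens=10_000):
    """(1+1) memetic EA sketch: standard bit-flip mutation every generation,
    local search applied every `freq` generations for `depth` iterations."""
    x = [random.randint(0, 1) for _ in range(n)]
    for gen in range(1, max_gens + 1):
        y = [b ^ 1 if random.random() < 1.0 / n else b for b in x]
        if gen % freq == 0:
            y = local_search(y, f, depth)
        if f(y) >= f(x):
            x = y
    return x

print(sum(memetic_ea(sum, 30, freq=10, depth=5)))  # best OneMax value found
```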
genetic and evolutionary computation conference | 2012
Jonathan E. Rowe; Dirk Sudholt
We extend the theory of non-elitist evolutionary algorithms (EAs) by considering the offspring population size in the (1,λ) EA. We establish a sharp threshold at λ = log_{e/(e-1)} n ≈ 5 log_{10} n between exponential and polynomial running times on OneMax. For any smaller value, the (1,λ) EA needs exponential time on every function that has only one global optimum. We also consider arbitrary unimodal functions and show that the threshold can shift towards larger offspring population sizes. Finally, we investigate the relationship between the offspring population size and arbitrary mutation rates on OneMax. We get sharp thresholds for λ that decrease with the mutation rate. This illustrates the balance between selection and mutation.
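A minimal sketch of the (1,λ) EA with comma selection, assuming standard bit-flip mutation with rate 1/n; it shows the non-elitist replacement step that makes the offspring population size critical.

```python
import random

def one_comma_lambda_ea(f, n, lam, generations=1000):
    """(1,λ) EA sketch: the parent is always replaced by the best of its
    λ offspring (comma selection), so fitness can decrease; with too few
    offspring this non-elitism prevents progress on OneMax."""
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            y = [b ^ 1 if random.random() < 1.0 / n else b for b in x]
            offspring.append(y)
        x = max(offspring, key=f)  # best offspring replaces the parent unconditionally
    return x

best = one_comma_lambda_ea(sum, 100, lam=20)  # f = sum is OneMax on bit lists
print(sum(best))
```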
foundations of genetic algorithms | 2011
Jörg Lässig; Dirk Sudholt
We present two adaptive schemes for dynamically choosing the number of parallel instances in parallel evolutionary algorithms. This includes the choice of the offspring population size in a (1+λ) EA as a special case. Our schemes are parameterless and work in a black-box setting where no knowledge of the problem is available. Both schemes double the number of instances when a generation ends without finding an improvement. In a successful generation, the first scheme resets the system to one instance, while the second scheme halves the number of instances. Both schemes provide near-optimal speed-ups in terms of the parallel time. We give upper bounds on the asymptotic sequential time (i.e., the total number of function evaluations) that are not larger than the upper bounds for a corresponding non-parallel algorithm derived by the fitness-level method.
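The two adaptation rules can be summarized in a few lines; the sketch below uses the assumed scheme names "reset" and "halve" purely for illustration.

```python
def adapt_instances(current, improvement_found, scheme="halve"):
    """Adaptive choice of the number of parallel instances (or offspring):
    double after an unsuccessful generation; on success, either reset to
    one instance (scheme 'reset') or halve the count (scheme 'halve')."""
    if not improvement_found:
        return 2 * current
    if scheme == "reset":
        return 1
    return max(1, current // 2)

m = 1
for success in [False, False, True, False]:
    m = adapt_instances(m, success, scheme="halve")
    print(m)  # 2, 4, 2, 4
```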
genetic and evolutionary computation conference | 2008
Tobias Friedrich; Pietro Simone Oliveto; Dirk Sudholt; Carsten Witt
Maintaining diversity is important for the performance of evolutionary algorithms. Diversity mechanisms can enhance global exploration of the search space and enable crossover to find dissimilar individuals for recombination. We focus on the global exploration capabilities of mutation-based algorithms. Using a simple bimodal test function and rigorous runtime analyses, we compare well-known diversity mechanisms like deterministic crowding, fitness sharing, and others with a plain algorithm without diversification. We show that diversification is necessary for global exploration, but not all mechanisms succeed in finding both optima efficiently.