Andrei Lissovoi
Technical University of Denmark
Publications
Featured research published by Andrei Lissovoi.
foundations of genetic algorithms | 2015
Timo Kötzing; Andrei Lissovoi; Carsten Witt
Evolutionary algorithms (EAs) perform well in settings involving uncertainty, including settings with stochastic or dynamic fitness functions. In this paper, we analyze the (1+1) EA on dynamically changing OneMax, as introduced by Droste (2003). We re-prove the known results on first hitting times using the modern tool of drift analysis, and extend these results to search spaces which allow for more than two values per dimension. Furthermore, we perform an anytime analysis, as suggested by Jansen and Zarges (2014), examining how closely the (1+1) EA can track the dynamically moving optimum over time. We obtain tight bounds both for bit strings and for the case of more than two values per position. Surprisingly, in the latter setting, the expected quality of the search point maintained by the (1+1) EA does not depend on the number of values per dimension.
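To make the dynamic-OneMax setting concrete, here is a minimal Python sketch of a (1+1) EA tracking a moving target string. The drift model (the hidden target flips one uniformly random position with probability p_move per iteration), the parameter names, and the returned quantity are illustrative assumptions, not the exact model analysed in the paper.

```python
import random


def one_plus_one_ea_dynamic_onemax(n, steps, p_move=0.01, seed=0):
    """(1+1) EA tracking a dynamically moving OneMax-style optimum.

    Illustrative dynamics: with probability p_move per iteration, one
    uniformly chosen position of the hidden target string flips. Fitness is
    the number of positions agreeing with the current target.
    """
    rng = random.Random(seed)
    target = [rng.randint(0, 1) for _ in range(n)]
    x = [rng.randint(0, 1) for _ in range(n)]

    def fitness(y):
        return sum(a == b for a, b in zip(y, target))

    for _ in range(steps):
        # The optimum may move before the EA performs its next step.
        if rng.random() < p_move:
            target[rng.randrange(n)] ^= 1

        # Standard bit mutation with rate 1/n and elitist acceptance.
        y = [b ^ 1 if rng.random() < 1.0 / n else b for b in x]
        if fitness(y) >= fitness(x):
            x = y

    return n - fitness(x)  # distance to the current optimum after `steps`
```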
Theoretical Computer Science | 2015
Andrei Lissovoi; Carsten Witt
A simple ACO algorithm called λ-MMAS for dynamic variants of the single-destination shortest paths problem is studied by rigorous runtime analyses. Building upon previous results for the special case of 1-MMAS, it is studied to what extent an enlarged colony using λ ants per vertex helps in tracking an oscillating optimum. It is shown that easy cases of oscillations can be tracked by a constant number of ants. However, the paper also identifies more involved oscillations that with overwhelming probability cannot be tracked with any polynomial-size colony. Finally, parameters of dynamic shortest-path problems which make the optimum difficult to track are discussed. Experiments illustrate theoretical findings and conjectures.
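The following Python sketch illustrates the core λ-MMAS idea of sampling λ ants and reinforcing the choice that led to the best sampled path, reduced to the pheromone update at a single vertex. It is a heavy simplification: the function name, the parameters rho, tau_min, tau_max, and the interface via precomputed path lengths are assumptions for illustration, not the paper's construction.

```python
import random


def lambda_mmas_vertex_step(pheromone, path_lengths, lam, rho=0.1,
                            tau_min=0.05, tau_max=0.95, rng=random):
    """One λ-MMAS-style pheromone update at a single vertex.

    pheromone: dict mapping each outgoing arc to its pheromone value.
    path_lengths: dict mapping each outgoing arc to the length of the best
        known path starting with that arc (under the current, possibly
        changed, edge weights).
    lam ants each sample an outgoing arc proportionally to pheromone; the
    arc leading to the shortest sampled path is reinforced, the rest
    evaporate, and all values are clamped to [tau_min, tau_max].
    """
    arcs = list(pheromone)
    weights = [pheromone[a] for a in arcs]
    sampled = {rng.choices(arcs, weights=weights)[0] for _ in range(lam)}
    best = min(sampled, key=lambda a: path_lengths[a])

    for a in arcs:
        reinforced = 1.0 if a == best else 0.0
        updated = (1 - rho) * pheromone[a] + rho * reinforced
        pheromone[a] = min(tau_max, max(tau_min, updated))
    return best
```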
genetic and evolutionary computation conference | 2017
Andrei Lissovoi; Pietro Simone Oliveto; John Alasdair Warwicker
Selection hyper-heuristics are randomised search methodologies which choose and execute heuristics from a set of low-level heuristics. Recent time complexity analyses for the LeadingOnes benchmark function have shown that the standard simple random, permutation, random gradient, greedy and reinforcement learning selection mechanisms show no effects of learning. The idea behind the learning mechanisms is to continue to exploit the currently selected heuristic as long as it is successful. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. In this paper we generalise the classical selection-perturbation mechanisms so success can be measured over some fixed period of length r, rather than in a single iteration. We present a benchmark function where it is necessary to learn to exploit a particular low-level heuristic, rigorously proving that it makes the difference between an efficient and an inefficient algorithm. For LeadingOnes we prove that the generalised random gradient mechanism approaches optimal performance while generalised greedy, although not as fast, still outperforms random local search. An experimental analysis shows that combining the two generalised mechanisms leads to even better performance.
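A rough Python sketch of the generalised random gradient mechanism on LeadingOnes is given below: a randomly chosen low-level heuristic is retained for as long as each period of r iterations produces at least one improvement. The helper names and the default 1-bit/2-bit flip heuristics are illustrative; the paper defines the precise mechanisms and parameter choices.

```python
import random


def leading_ones(x):
    """Length of the longest prefix of ones."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count


def flip_k_bits(x, k, rng):
    """Return a copy of x with k distinct, uniformly chosen bits flipped."""
    y = list(x)
    for i in rng.sample(range(len(x)), k):
        y[i] ^= 1
    return y


def generalised_random_gradient(n, r, heuristics=None, rng=random):
    """Generalised random gradient: a heuristic chosen uniformly at random is
    kept for as long as every period of r iterations yields an improvement."""
    if heuristics is None:
        # Low-level heuristics in the spirit of the paper: 1-bit and 2-bit flips.
        heuristics = [lambda x, rng: flip_k_bits(x, 1, rng),
                      lambda x, rng: flip_k_bits(x, 2, rng)]

    x = [rng.randint(0, 1) for _ in range(n)]
    while leading_ones(x) < n:
        h = rng.choice(heuristics)           # random selection of a heuristic
        successful = True
        while successful:                    # exploit it while periods succeed
            successful = False
            for _ in range(r):               # success measured over r steps
                y = h(x, rng)
                if leading_ones(y) > leading_ones(x):
                    x = y
                    successful = True
                    break
    return x
```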
genetic and evolutionary computation conference | 2014
Andrei Lissovoi; Carsten Witt
We study the behavior of a population-based EA and the Max-Min Ant System (MMAS) on a family of deterministically-changing fitness functions, where, in order to find the global optimum, the algorithms have to find specific local optima within each of a series of phases. In particular, we prove that a (2+1) EA with genotype diversity is able to find the global optimum of the Maze function, previously considered by Kötzing and Molter (PPSN 2012, 113--122), in polynomial time. This is then generalized to a hierarchy result stating that for every μ, a (μ+1) EA with genotype diversity is able to track a Maze function extended over a finite alphabet of μ symbols, whereas population size μ-1 is not sufficient. Furthermore, we show that MMAS does not require additional modifications to track the optimum of the finite-alphabet Maze functions, and, using a novel drift statement to simplify the analysis, reduce the required phase length of the Maze function.
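The genotype-diversity mechanism can be sketched as a (μ+1) EA that simply rejects offspring identical to an existing population member, so distinct local optima of consecutive Maze phases can coexist in the population. The time-dependent fitness interface and parameter names below are illustrative assumptions, not the paper's formal definition of the Maze function.

```python
import random


def mu_plus_one_ea_with_diversity(mu, n, fitness, steps, rng=random):
    """(μ+1) EA with genotype diversity: no two identical genotypes may
    coexist, so the old and the new local optimum of a Maze-like phase can
    both be kept. fitness(x, t) may depend on the iteration t (dynamic
    function). Requires mu <= 2**n so that mu distinct genotypes exist.
    """
    population = []
    while len(population) < mu:
        x = tuple(rng.randint(0, 1) for _ in range(n))
        if x not in population:
            population.append(x)

    for t in range(steps):
        parent = rng.choice(population)
        child = tuple(b ^ 1 if rng.random() < 1.0 / n else b for b in parent)
        if child in population:
            continue                    # genotype diversity: reject duplicates
        population.append(child)        # add the child, then drop a worst one
        worst = min(population, key=lambda y: fitness(y, t))
        population.remove(worst)
    return population
```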
genetic and evolutionary computation conference | 2015
Andrei Lissovoi; Carsten Witt
A simple island model with λ islands and migration occurring after every τ iterations is studied on the dynamic fitness function Maze. This model is equivalent to a (1+λ) EA if τ = 1, i.e., migration occurs during every iteration. It is proved that even for an increased offspring population size up to λ = O(n^(1-ε)), the (1+λ) EA is still not able to track the optimum of Maze. If the migration interval is increased, the algorithm is able to track the optimum even for logarithmic λ. Finally, the relationship of τ, λ, and the ability of the island model to track the optimum is investigated more closely.
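A minimal Python sketch of such an island model follows, assuming λ (1+1) EA islands and a full exchange of the best individual every τ iterations, so that τ = 1 collapses to a (1+λ) EA once the islands are synchronised. The dynamic-fitness interface and parameter names are illustrative, not the paper's exact model.

```python
import random


def island_model(lam, tau, n, fitness, steps, rng=random):
    """λ islands, each running a (1+1) EA, with the best individual broadcast
    to every island after each block of tau iterations. fitness(x, t) may
    depend on the iteration t (dynamic function).
    """
    islands = [[rng.randint(0, 1) for _ in range(n)] for _ in range(lam)]
    for t in range(1, steps + 1):
        for i, x in enumerate(islands):
            y = [b ^ 1 if rng.random() < 1.0 / n else b for b in x]
            if fitness(y, t) >= fitness(x, t):
                islands[i] = y
        if t % tau == 0:
            # Migration: every island adopts the current overall best point.
            best = max(islands, key=lambda x: fitness(x, t))
            islands = [list(best) for _ in range(lam)]
    return islands
```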
genetic and evolutionary computation conference | 2018
Benjamin Doerr; Andrei Lissovoi; Pietro Simone Oliveto; John Alasdair Warwicker
Selection hyper-heuristics are randomised optimisation techniques that select from a set of low-level heuristics which one should be applied in the next step of the optimisation process. Recently it has been proven that a Random Gradient hyper-heuristic optimises the LeadingOnes benchmark function in the best runtime achievable with any combination of its low-level heuristics, up to lower order terms. To achieve this runtime, the learning period τ, used to evaluate the performance of the currently chosen heuristic, should be set appropriately, i.e., super-linear in the problem size but not excessively large. In this paper we automate the hyper-heuristic further by allowing it to self-adjust the learning period τ during the run. To achieve this we equip the algorithm with a simple self-adjusting mechanism, called the 1 - o(1) rule, inspired by the 1/5 rule traditionally used in continuous optimisation. We rigorously prove that the resulting hyper-heuristic solves LeadingOnes in optimal runtime by automatically adapting τ and achieving a 1 - o(1) ratio of the desired behaviour. Complementary experiments for realistic problem sizes show that τ adapts as desired and that the hyper-heuristic with an adaptive learning period outperforms hyper-heuristics with fixed learning periods.
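The sketch below shows one plausible shape of a hyper-heuristic with a self-adjusting learning period: τ grows multiplicatively after an unsuccessful period and shrinks slightly after a successful one, in the asymmetric spirit of the 1/5 rule. The update direction, the factors F and eps, and the default low-level heuristics are illustrative assumptions, not the exact 1 - o(1) rule proven optimal in the paper.

```python
import random


def leading_ones(x):
    """Length of the longest prefix of ones."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count


def flip_k_bits(x, k, rng):
    """Return a copy of x with k distinct, uniformly chosen bits flipped."""
    y = list(x)
    for i in rng.sample(range(len(x)), k):
        y[i] ^= 1
    return y


def self_adjusting_hyper_heuristic(n, tau_init=None, F=1.5, eps=0.01, rng=random):
    """Hyper-heuristic on LeadingOnes with a self-adjusting learning period.

    A randomly chosen low-level heuristic (1-bit or 2-bit flip) is evaluated
    over a period of tau iterations and kept while periods remain successful.
    After an unsuccessful period tau grows by the factor F; after a
    successful one it shrinks by the factor (1 - eps). Direction and factors
    of this update are illustrative assumptions.
    """
    heuristics = [lambda x: flip_k_bits(x, 1, rng),
                  lambda x: flip_k_bits(x, 2, rng)]
    x = [rng.randint(0, 1) for _ in range(n)]
    tau = float(tau_init) if tau_init is not None else float(n)

    while leading_ones(x) < n:
        h = rng.choice(heuristics)
        successful = True
        while successful:
            successful = False
            for _ in range(int(tau)):
                y = h(x)
                if leading_ones(y) > leading_ones(x):
                    x = y
                    successful = True
                    break
            tau = max(1.0, tau * (1 - eps)) if successful else tau * F
    return x
```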
genetic and evolutionary computation conference | 2017
Andrei Lissovoi; Dirk Sudholt; Markus Wagner; Christine Zarges
Bet-and-run initialisation strategies have been experimentally shown to be beneficial on classical NP-complete problems such as the travelling salesperson problem and minimum vertex cover. We analyse the performance of a bet-and-run restart strategy, where k independent islands run in parallel for t1 iterations, after which the optimisation process continues on only the best-performing island. We define a family of pseudo-Boolean functions, consisting of a plateau and a slope, as an abstraction of real fitness landscapes with promising and deceptive regions. The plateau has high fitness but does not allow further progress, whereas the slope has low fitness initially but leads to the global optimum. We show that bet-and-run strategies with non-trivial k and t1 are necessary to find the global optimum efficiently, and that the choice of t1 is linked to properties of the function. Finally, we provide a fixed-budget analysis to guide selection of the bet-and-run parameters so as to maximise expected fitness after t = k · t1 + t2 fitness evaluations.
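A bet-and-run restart strategy can be sketched in a few lines: k independent runs each receive t1 iterations, and the best of them then consumes the remaining budget t2. The (1+1) EA step used here and the static fitness interface are illustrative simplifications of the algorithms analysed in the paper.

```python
import random


def bet_and_run(k, t1, t2, n, fitness, rng=random):
    """Bet-and-run: k independent (1+1) EA runs each get t1 iterations, then
    only the best-performing run continues for the remaining t2 iterations,
    for a total budget of t = k * t1 + t2 fitness evaluations (counting one
    evaluation per iteration).
    """
    def step(x):
        y = [b ^ 1 if rng.random() < 1.0 / n else b for b in x]
        return y if fitness(y) >= fitness(x) else x

    runs = [[rng.randint(0, 1) for _ in range(n)] for _ in range(k)]
    for _ in range(t1):                   # phase 1: k parallel "bets"
        runs = [step(x) for x in runs]

    best = max(runs, key=fitness)         # keep only the most promising run
    for _ in range(t2):                   # phase 2: spend the rest on it
        best = step(best)
    return best
```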
genetic and evolutionary computation conference | 2016
Andrei Lissovoi; Carsten Witt
We introduce a simplified island model with behavior similar to λ (1+1) EA islands optimizing the Maze fitness function, and investigate the effects of the migration topology on the ability of the simplified island model to track the optimum of a dynamic fitness function. More specifically, we prove that there exist choices of model parameters for which using a unidirectional ring as the migration topology allows the model to track the oscillating optimum through n Maze-like phases with high probability, while using a complete graph as the migration topology results in the island model losing track of the optimum with overwhelming probability. Additionally, we prove that if migration occurs only rarely, denser migration topologies may be advantageous. This illustrates that while a less-dense migration topology may be useful when optimizing dynamic functions with oscillating behavior, and requires less problem-specific knowledge to determine when migration should be allowed to occur, care must be taken to ensure that a sufficient amount of migration occurs during the optimization process.
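The role of the migration topology can be made concrete with a small sketch of a topology-restricted migration step, in which each island adopts the best search point among itself and its in-neighbours. The dictionary-based topology encoding and the time-dependent fitness interface are illustrative assumptions, not the paper's model.

```python
def migrate(islands, topology, fitness, t):
    """Topology-restricted migration step: island i adopts the best search
    point among itself and its in-neighbours listed in topology[i].

    A unidirectional ring over m islands is topology = {i: [(i - 1) % m]};
    a complete graph lists all other islands as in-neighbours.
    fitness(x, t) may depend on the iteration t (dynamic function).
    """
    return [max([islands[i]] + [islands[j] for j in topology[i]],
                key=lambda y: fitness(y, t))
            for i in range(len(islands))]
```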
Algorithmica | 2016
Andrei Lissovoi; Carsten Witt
Algorithmica | 2017
Andrei Lissovoi; Carsten Witt