Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Marvin Künnemann is active.

Publication


Featured research published by Marvin Künnemann.


Theoretical Computer Science | 2015

Optimizing linear functions with the (1 + λ) evolutionary algorithm—Different asymptotic runtimes for different instances

Benjamin Doerr; Marvin Künnemann

We analyze how the (1 + λ) evolutionary algorithm (EA) optimizes linear pseudo-Boolean functions. We prove that it finds the optimum of any linear function within an expected number of O((1/λ) n log n + n) iterations. We also show that this bound is sharp for some linear functions, e.g., the binary value function. Since previous work shows an asymptotically smaller runtime for the special case of OneMax, it follows that for the (1 + λ) EA different linear functions may have runtimes of different asymptotic order. The proof of our upper bound heavily relies on a number of classic and recent drift analysis methods. In particular, we show how to analyze a process displaying different types of drift in different phases. Our work corrects a wrongfully claimed better asymptotic runtime in an earlier work [13]. We also use our methods to analyze the runtime of the (1 + λ) EA on the OneMax test function and obtain a new upper bound of O(n log log λ / log λ) for the case that λ is larger than O(log n · log log n / log log log n); this is the cut-off point where a linear speed-up ceases to exist. While our results are mostly spurred by a theory-driven interest, they also show that choosing the right size of the offspring population can be crucial. For both the binary value and the OneMax test functions we observe that once a linear speed-up ceases to exist, the speed-up from a larger λ in fact reduces to sub-logarithmic (still at the price of a linear increase in the cost of each generation).
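
To make the algorithm under analysis concrete, here is a minimal sketch, assuming the standard formulation: the (1 + λ) EA with standard bit mutation (rate 1/n) maximizing a linear function, here the binary value function mentioned in the abstract. Function and parameter names are my own; the stopping criterion and defaults are illustrative, not taken from the paper.

```python
# Illustrative sketch (not from the paper): the (1+lambda) EA maximizing a
# linear pseudo-Boolean function, here BinaryValue(x) = sum_i 2^(n-1-i) * x_i.
import random

def binary_value(x):
    """Linear function with weights 2^(n-1), ..., 2, 1."""
    n = len(x)
    return sum(bit << (n - 1 - i) for i, bit in enumerate(x))

def one_plus_lambda_ea(n, lam, f, max_iters=100_000):
    """Run the (1+lambda) EA with standard bit mutation (rate 1/n)."""
    parent = [random.randint(0, 1) for _ in range(n)]
    best = f(parent)
    optimum = f([1] * n)          # all-ones string is optimal for positive weights
    for iteration in range(1, max_iters + 1):
        # Generate lambda offspring by flipping each bit independently with prob. 1/n.
        offspring = []
        for _ in range(lam):
            child = [b ^ (random.random() < 1.0 / n) for b in parent]
            offspring.append(child)
        # Elitist selection: keep the best of parent and offspring.
        cand = max(offspring, key=f)
        if f(cand) >= best:
            parent, best = cand, f(cand)
        if best == optimum:
            return iteration       # number of generations until the optimum was found
    return max_iters

if __name__ == "__main__":
    random.seed(0)
    print(one_plus_lambda_ea(n=50, lam=8, f=binary_value))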


Genetic and Evolutionary Computation Conference | 2013

How the (1+λ) evolutionary algorithm optimizes linear functions

Benjamin Doerr; Marvin Künnemann

We analyze how the (1+λ) evolutionary algorithm (EA) optimizes linear pseudo-Boolean functions. We prove that it finds the optimum of any linear function within an expected number of O((1/λ) n log n + n) iterations. We also show that this bound is sharp for some functions, e.g., the binary value function. Hence, unlike for the (1+1) EA, for the (1+λ) EA different linear functions may have runtimes of different asymptotic order. The proof of our upper bound heavily relies on a number of classic and recent drift analysis methods. In particular, we show how to analyze a process displaying different types of drift in different phases. Our work corrects a wrongfully claimed better asymptotic runtime in an earlier work [He10].


ACM Journal of Experimental Algorithms | 2011

Quasirandom rumor spreading: An experimental analysis

Benjamin Doerr; Tobias Friedrich; Marvin Künnemann; Thomas Sauerwald

We empirically analyze two versions of the well-known “randomized rumor spreading” protocol to disseminate a piece of information in networks. In the classical model, in each round, each informed node informs a random neighbor. In the recently proposed quasirandom variant, each node has a (cyclic) list of its neighbors. Once informed, it starts at a random position of the list, but from then on informs its neighbors in the order of the list. While for sparse random graphs a better performance of the quasirandom model could be proven, all other results show that, independent of the structure of the lists, the same asymptotic performance guarantees hold as for the classical model. In this work, we compare the two models experimentally. Not only does this show that the quasirandom model generally is faster, but it also shows that the runtime is more concentrated around the mean. This is surprising given that much fewer random bits are used in the quasirandom process. These advantages are also observed in a lossy communication model, where each transmission does not reach its target with a certain probability, and in an asynchronous model, where nodes send at random times drawn from an exponential distribution. We also show that typically the particular structure of the lists has little influence on the efficiency.
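
As a rough illustration of the two protocols being compared, the following sketch simulates both the classical push model and the quasirandom variant with cyclic neighbor lists. It is my own toy simulation, not the experimental setup of the paper; the graph, seed, and helper names are arbitrary choices.

```python
# Minimal simulation sketch: classical push rumor spreading vs. the quasirandom
# variant in which each node walks through a (cyclic) neighbor list.
import random

def spread(adj, quasirandom=False, start=0):
    """Return the number of rounds until all nodes are informed."""
    n = len(adj)
    informed = {start}
    # For the quasirandom model, each node gets a random starting position in its list.
    pos = {v: random.randrange(len(adj[v])) for v in range(n) if adj[v]}
    rounds = 0
    while len(informed) < n:
        rounds += 1
        newly = set()
        for v in list(informed):
            if not adj[v]:
                continue
            if quasirandom:
                target = adj[v][pos[v] % len(adj[v])]
                pos[v] += 1                      # continue along the cyclic list
            else:
                target = random.choice(adj[v])   # fresh random neighbor each round
            newly.add(target)
        informed |= newly
    return rounds

if __name__ == "__main__":
    random.seed(1)
    n = 512
    complete = [[u for u in range(n) if u != v] for v in range(n)]
    print("classical  :", spread(complete, quasirandom=False))
    print("quasirandom:", spread(complete, quasirandom=True))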


Symposium on Experimental and Efficient Algorithms | 2010

Randomized rounding for routing and covering problems: experiments and improvements

Benjamin Doerr; Marvin Künnemann; Magnus Wahlström

We investigate how the different recently developed approaches to generating randomized roundings that satisfy disjoint cardinality constraints behave when used in two classical algorithmic problems, namely low-congestion routing in networks and max-coverage problems in hypergraphs. Based on our experiments, we also propose and investigate the following new ideas. For the low-congestion routing problems, we suggest solving a second LP, which yields the same congestion but aims at producing a solution that is easier to round. For the max-coverage instances, observing that the greedy heuristic also performs very well, we develop hybrid approaches, in the form of a strengthened method of derandomized rounding and a simple greedy/rounding hybrid that combines greedy and LP-based rounding elements. Experiments show that these ideas significantly reduce the rounding errors. For an important special case of max-coverage, namely unit-disk max-domination, we also develop a PTAS. However, experiments show it to be less competitive than the other approaches, except possibly for extremely high solution qualities.
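
For readers unfamiliar with randomized rounding under a cardinality constraint, the sketch below shows one simple, generic scheme (systematic sampling) applied to a toy max-coverage instance: it rounds a fractional solution to exactly k chosen sets while preserving each marginal. It is a stand-in for illustration only and is not one of the specific rounding algorithms compared in the paper; the instance and variable names are invented.

```python
# Sketch: systematic sampling, one simple way to round fractional values x
# (summing to an integer k) to a 0/1 vector with exactly k ones and
# P[x_i rounded to 1] = x_i. Not the paper's algorithms.
import random

def systematic_round(x):
    k = round(sum(x))
    u = random.random()
    chosen, acc = [0] * len(x), 0.0
    for i, xi in enumerate(x):
        before = acc
        acc += xi
        # Select item i if a point of the form u + j (integer j >= 0) falls in (before, acc].
        if int(acc - u + 1) > int(before - u + 1):
            chosen[i] = 1
    return chosen

def coverage(sets, chosen):
    """Number of elements covered by the chosen sets."""
    covered = set()
    for s, c in zip(sets, chosen):
        if c:
            covered |= s
    return len(covered)

if __name__ == "__main__":
    random.seed(2)
    sets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}, {1, 4}]
    x = [0.6, 0.4, 0.8, 0.7, 0.5]   # toy fractional solution with sum = 3
    chosen = systematic_round(x)
    print(chosen, "covers", coverage(sets, chosen), "elements")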


Analytic Algorithmics and Combinatorics | 2014

Tight analysis of randomized rumor spreading in complete graphs

Benjamin Doerr; Marvin Künnemann

We present a tight analysis of the basic randomized rumor spreading process in complete graphs introduced by Frieze and Grimmett (1985), where in each round of the process each node knowing the rumor gossips the rumor to a node chosen uniformly at random. The process starts with a single node knowing the rumor. We show that the number Sₙ of rounds required to spread a rumor in a complete graph with n nodes is very closely described by log₂ n plus 1/n times the completion time of the coupon collector process. This in particular gives very precise bounds for the expected runtime of the process, namely ⌊log₂ n⌋ + ln n − 1.116 ≤ E[Sₙ] ≤ ⌈log₂ n⌉ + ln n + 2.765 + o(1).
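
A quick way to see the log₂ n + ln n behavior is to simulate the push protocol and compare the empirical mean with that prediction. The sketch below is my own check, with an arbitrary choice of n, trial count, and seed; it assumes each informed node pushes to a uniformly random other node per round.

```python
# Quick empirical check (my own sketch) of push rumor spreading on the complete graph.
import math
import random

def push_rounds(n):
    """Rounds until all n nodes know the rumor; every informed node pushes to a
    uniformly random other node each round."""
    informed = {0}
    rounds = 0
    while len(informed) < n:
        rounds += 1
        newly = set()
        for v in informed:
            u = random.randrange(n - 1)
            newly.add(u if u < v else u + 1)   # uniform over the other n-1 nodes
        informed |= newly
    return rounds

if __name__ == "__main__":
    random.seed(3)
    n, trials = 1000, 200
    avg = sum(push_rounds(n) for _ in range(trials)) / trials
    print(f"empirical mean : {avg:.2f}")
    print(f"log2(n) + ln(n): {math.log2(n) + math.log(n):.2f}")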


Congress on Evolutionary Computation | 2013

Royal road functions and the (1 + λ) evolutionary algorithm: Almost no speed-up from larger offspring populations

Benjamin Doerr; Marvin Künnemann

We analyze the runtime of the (1 + λ) evolutionary algorithm (EA) on the classic royal road test function class. For a royal road function defined on bit-strings of length n having block size d ≥ log n + (c + 1 + ε) log d, we prove that the (1 + λ) EA with λ = Θ(n^c) finds the optimum in an expected number of O(2^d/d^c · n/d · log(n/d)) generations. Together with our lower bound of Ω(2^d/d^c), this shows that for royal road functions even very large offspring populations do not reduce the runtime significantly.
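
For concreteness, a royal road function in the usual textbook sense (only completely filled blocks of size d contribute) can be written as below; this follows the standard definition and may differ in details from the exact variant analyzed in the paper.

```python
# Sketch of the classic royal road test function: the bit string is split into
# blocks of size d, and only completely filled blocks contribute (d points each).
def royal_road(x, d):
    """Fitness = d * (number of blocks of length d that are all ones)."""
    assert len(x) % d == 0, "n must be a multiple of the block size d"
    score = 0
    for start in range(0, len(x), d):
        block = x[start:start + d]
        if all(block):
            score += d
    return score

if __name__ == "__main__":
    x = [1, 1, 1, 1,  1, 0, 1, 1,  1, 1, 1, 1]   # blocks of size d = 4
    print(royal_road(x, d=4))   # 8: two of the three blocks are complete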


International Colloquium on Automata, Languages and Programming | 2017

On the Fine-Grained Complexity of One-Dimensional Dynamic Programming

Marvin Künnemann; Ramamohan Paturi; Stefan Schneider

In this paper, we investigate the complexity of one-dimensional dynamic programming, or more specifically, of the Least-Weight Subsequence (LWS) problem: Given a sequence of n data items together with weights for every pair of the items, the task is to determine a subsequence S minimizing the total weight of the pairs adjacent in S. A large number of natural problems can be formulated as LWS problems, yielding obvious O(n^2)-time solutions. In many interesting instances, the O(n^2)-many weights can be succinctly represented. Yet except for near-linear time algorithms for some specific special cases, little is known about when an LWS instantiation admits a subquadratic-time algorithm and when it does not. In particular, no lower bounds for LWS instantiations have been known before. In an attempt to remedy this situation, we provide a general approach to study the fine-grained complexity of succinct instantiations of the LWS problem: Given an LWS instantiation we identify a highly parallel core problem that is subquadratically equivalent. This provides either an explanation for the apparent hardness of the problem or an avenue to find improved algorithms as the case may be. More specifically, we prove subquadratic equivalences between the following pairs (an LWS instantiation and the corresponding core problem) of problems: a low-rank version of LWS and minimum inner product, finding the longest chain of nested boxes and vector domination, and a coin change problem which is closely related to the knapsack problem and (min,+)-convolution. Using these equivalences and known SETH-hardness results for some of the core problems, we deduce tight conditional lower bounds for the corresponding LWS instantiations. We also establish the (min,+)-convolution-hardness of the knapsack problem. Furthermore, we revisit some of the LWS instantiations which are known to be solvable in near-linear time and explain their easiness in terms of the easiness of the corresponding core problems.
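
The "obvious O(n^2)-time solution" referred to above is the textbook LWS dynamic program. The sketch below spells it out for a toy weight function (a line-breaking-style quadratic penalty of my own choosing); the interface and names are illustrative.

```python
# Illustrative sketch of the textbook O(n^2) dynamic program for the Least-Weight
# Subsequence (LWS) problem: item 0 and item n must be in the subsequence, and
# w(i, j) is the weight of making item j follow item i.
def lws(n, w):
    """f[j] = minimum total weight of a subsequence from item 0 to item j."""
    INF = float("inf")
    f = [INF] * (n + 1)
    f[0] = 0
    choice = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):                 # try every possible predecessor
            if f[i] + w(i, j) < f[j]:
                f[j] = f[i] + w(i, j)
                choice[j] = i
    # Reconstruct the optimal subsequence by following predecessors from n.
    seq, j = [n], n
    while j != 0:
        j = choice[j]
        seq.append(j)
    return f[n], seq[::-1]

if __name__ == "__main__":
    lengths = [3, 2, 4, 1, 5, 2]           # toy data items
    prefix = [0]
    for L in lengths:
        prefix.append(prefix[-1] + L)
    width = 6
    # Quadratic penalty for how far a "segment" (i, j] deviates from the target width.
    w = lambda i, j: (width - (prefix[j] - prefix[i])) ** 2
    print(lws(len(lengths), w))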


International Symposium on Algorithms and Computation | 2015

Improved Approximation for Fréchet Distance on c-packed Curves Matching Conditional Lower Bounds

Karl Bringmann; Marvin Künnemann

The Fréchet distance is a well-studied and popular measure of similarity of two curves. The best known algorithms have quadratic time complexity, which has recently been shown to be optimal assuming the Strong Exponential Time Hypothesis (SETH) [Bringmann FOCS'14]. To overcome the worst-case quadratic time barrier, restricted classes of curves have been studied that attempt to capture realistic input curves. The most popular such class is that of c-packed curves, for which the Fréchet distance has a (1 + ε)-approximation in time O(cn/ε + cn log n) [Driemel et al. DCG'12]. In dimension d ≥ 5 this cannot be improved to O((cn/√ε)^(1−δ)) for any δ > 0 unless SETH fails [Bringmann FOCS'14]. In this paper, exploiting properties that prevent stronger lower bounds, we present an improved algorithm with time complexity O(cn/√ε + cn log n). This improves upon the algorithm by Driemel et al. for any ε ≪ 1/log n, and matches the conditional lower bound (up to lower order factors of the form n^o(1)).
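
To make the quadratic baseline concrete, the sketch below implements the classical O(nm) dynamic program for the discrete Fréchet distance between two point sequences. Note this is the simpler discrete variant, shown only for intuition; it is neither the continuous Fréchet distance algorithm of Driemel et al. nor the improved algorithm of this paper.

```python
# For intuition only: the classical quadratic-time dynamic program for the
# *discrete* Fréchet distance between two polygonal curves P and Q.
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between point sequences P and Q."""
    def d(p, q):
        return math.dist(p, q)

    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j) = discrete Fréchet distance between the prefixes P[:i+1], Q[:j+1].
        if i == 0 and j == 0:
            return d(P[0], Q[0])
        if i == 0:
            return max(c(0, j - 1), d(P[0], Q[j]))
        if j == 0:
            return max(c(i - 1, 0), d(P[i], Q[0]))
        return max(min(c(i - 1, j), c(i, j - 1), c(i - 1, j - 1)), d(P[i], Q[j]))

    return c(len(P) - 1, len(Q) - 1)

if __name__ == "__main__":
    P = [(0, 0), (1, 1), (2, 0), (3, 1)]
    Q = [(0, 1), (1, 0), (2, 1), (3, 0)]
    print(f"{discrete_frechet(P, Q):.3f}")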


international colloquium on automata, languages and programming | 2015

Towards Understanding the Smoothed Approximation Ratio of the 2-Opt Heuristic

Marvin Künnemann; Bodo Manthey

The 2-Opt heuristic is a very simple, easy-to-implement local search heuristic for the traveling salesman problem. While it usually provides good approximations to the optimal tour in experiments, its worst-case performance is poor. In an attempt to explain the approximation performance of 2-Opt, we analyze its smoothed approximation ratio. We obtain a bound of O(log(1/σ)) for the smoothed approximation ratio of 2-Opt. As a lower bound, we prove that the worst-case lower bound of Ω(log n / log log n) for the approximation ratio holds for σ = O(1/√n). Our main technical novelty is that, unlike existing smoothed analyses, we do not separately analyze objective values of the global and the local optimum on all inputs, but simultaneously bound them on the same input.
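
For reference, a plain 2-Opt local search for the Euclidean TSP looks as follows. This is the generic heuristic discussed above, not the smoothed-analysis model of the paper; the instance size, seed, and improvement threshold are arbitrary choices.

```python
# Minimal sketch of 2-Opt local search: repeatedly reverse a segment of the tour
# as long as this shortens it, until no improving move exists (a local optimum).
import math
import random

def tour_length(points, tour):
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(points):
    """Start from a random tour and apply improving 2-Opt moves until none exist."""
    n = len(points)
    tour = list(range(n))
    random.shuffle(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue                      # would just reverse the whole tour
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Change in length when edges (a,b), (c,d) are replaced by (a,c), (b,d).
                delta = (math.dist(points[a], points[c]) + math.dist(points[b], points[d])
                         - math.dist(points[a], points[b]) - math.dist(points[c], points[d]))
                if delta < -1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

if __name__ == "__main__":
    random.seed(4)
    pts = [(random.random(), random.random()) for _ in range(60)]
    t = two_opt(pts)
    print(f"2-Opt tour length: {tour_length(pts, t):.3f}")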


Algorithm Engineering and Experimentation | 2011

Dependent randomized rounding: the bipartite case

Benjamin Doerr; Marvin Künnemann; Magnus Wahlström

We analyze the two existing algorithms for generating dependent randomized roundings for the bipartite edge-weight rounding problem, together with several newly proposed variants of these algorithms. For both the edge-based approach of Gandhi, Khuller, Parthasarathy, and Srinivasan (FOCS 2002) and the bit-wise approach of Doerr (STACS 2006), we give a simple derandomization (guaranteeing the same rounding errors that the randomized versions achieve with positive probability). An experimental investigation on different types of random instances shows that, contrary to the randomized rounding problem with disjoint cardinality constraints, the bit-wise approach is faster than the edge-based one, while the latter still achieves the best rounding errors. We propose a hybrid approach that, in terms of running time, combines advantages of the two previous approaches; in terms of rounding errors it appears to be a fair compromise. In all cases, the derandomized versions yield much better rounding errors than the randomized ones. We also test how the algorithms compare when used to solve different broadcast scheduling problems (as suggested by Gandhi et al.). Since this requires more random decisions than the rounding process alone, we partially re-prove previous results and simplify the corresponding algorithms in order to derive derandomized versions. Again, the derandomized versions give significantly better approximations than the randomized versions. We tested the algorithms on data taken from the Wikipedia access log. For the maximum-throughput version of the problem, the derandomized algorithms compute solutions that are very close to the optimum of the linear relaxation. For the minimum average delay version, Gandhi et al. gave a (2, 1)-bicriteria algorithm, i.e., an algorithm which produces a 2-speed schedule whose average delay is, in expectation, no worse than that of the 1-speed optimum. For this problem variant, while the performance guarantee of the algorithms certainly holds, we find that a simple greedy heuristic generally produces superior solutions.
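
The sketch below illustrates only the core pair-rounding step that dependent rounding schemes are typically built from: two fractional values are shifted in opposite directions so that their sum is preserved exactly, at least one of them becomes integral, and each keeps its original value in expectation. It is my simplification for intuition, not the edge-based or bit-wise algorithms compared in the paper; applying it repeatedly rounds a single row while preserving the row sum.

```python
# Sketch of the core pair-rounding primitive behind dependent randomized rounding.
import random

def round_pair(x, y):
    """Return (x', y') with x'+y' = x+y, E[x'] = x, E[y'] = y, and at least one
    of x', y' in {0, 1} (assuming 0 < x, y < 1)."""
    alpha = min(1.0 - x, y)        # how much mass can move from y to x
    beta = min(x, 1.0 - y)         # how much mass can move from x to y
    if random.random() < beta / (alpha + beta):
        return x + alpha, y - alpha
    return x - beta, y + beta

def round_row(row):
    """Dependently round a row of fractional values; the row sum is preserved."""
    row = list(row)
    frac = [i for i, v in enumerate(row) if 1e-12 < v < 1 - 1e-12]
    while len(frac) >= 2:
        i, j = frac[0], frac[1]
        row[i], row[j] = round_pair(row[i], row[j])
        frac = [k for k in frac if 1e-12 < row[k] < 1 - 1e-12]
    return row

if __name__ == "__main__":
    random.seed(5)
    row = [0.3, 0.7, 0.5, 0.5, 0.25, 0.75]
    rounded = round_row(row)
    print(rounded, "row sum:", sum(rounded))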

Collaboration


Dive into Marvin Künnemann's collaborations.

Top Co-Authors

Ning Chen (Nanyang Technological University)
Chengyu Lin (The Chinese University of Hong Kong)
Peihan Miao (University of California)