Publication


Featured research published by Bodo Manthey.


foundations of computer science | 2009

k-Means Has Polynomial Smoothed Complexity

David Arthur; Bodo Manthey; Heiko Röglin

The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means method has been studied in the model of smoothed analysis. But even the smoothed analyses so far are unsatisfactory as the bounds are still super-polynomial in the number n of data points. In this paper, we settle the smoothed running time of the k-means method. We show that the smoothed number of iterations is bounded by a polynomial in n and 1/sigma, where sigma is the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set.


Journal of the ACM | 2011

Smoothed Analysis of the k-Means Method

David Arthur; Bodo Manthey; Heiko Röglin

The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means method has been studied in the model of smoothed analysis. But even the smoothed analyses so far are unsatisfactory as the bounds are still super-polynomial in the number n of data points. In this article, we settle the smoothed running time of the k-means method. We show that the smoothed number of iterations is bounded by a polynomial in n and 1/σ, where σ is the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set.
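To make the analysed algorithm concrete, here is a minimal pure-Python sketch of the k-means (Lloyd's) method that reports the number of iterations until convergence, together with the Gaussian perturbation model used in smoothed analysis. All function names and parameters are illustrative choices, not from the paper, and the implementation is unoptimized.

```python
import random

def kmeans_iterations(points, k, max_iter=1000, seed=0):
    """Run the classical k-means (Lloyd's) method; return the final
    clustering and the number of iterations until no center moves."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for it in range(1, max_iter + 1):
        # Assignment step: each point goes to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        # Update step: move each center to the mean of its cluster.
        new_centers = []
        for j, cl in enumerate(clusters):
            if cl:
                d = len(cl[0])
                new_centers.append(tuple(sum(p[i] for p in cl) / len(cl)
                                         for i in range(d)))
            else:
                new_centers.append(centers[j])  # keep an empty cluster's center
        if new_centers == centers:  # converged: this is the iteration count
            return clusters, it
        centers = new_centers
    return clusters, max_iter

def perturb(points, sigma, seed=1):
    """Add independent Gaussian noise of standard deviation sigma to every
    coordinate -- the perturbation model of smoothed analysis."""
    rng = random.Random(seed)
    return [tuple(x + rng.gauss(0.0, sigma) for x in p) for p in points]

# A small grid of points, randomly perturbed as in the smoothed model.
data = [(float(i % 5), float(i // 5)) for i in range(25)]
clusters, iters = kmeans_iterations(perturb(data, sigma=0.1), k=3)
```

The paper's result concerns exactly the quantity `iters`: its expectation over the random perturbation is polynomial in the number of points and 1/sigma.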


Algorithmica | 2005

Approximating Maximum Weight Cycle Covers in Directed Graphs with Weights Zero and One

Markus Bläser; Bodo Manthey

A cycle cover of a graph is a spanning subgraph, each node of which is part of exactly one simple cycle. A k-cycle cover is a cycle cover where each cycle has length at least k. Given a complete directed graph with edge weights zero and one, Max-k-DDC(0,1) is the problem of finding a k-cycle cover with maximum weight. We present a 2/3 approximation algorithm for Max-k-DDC(0,1) with running time O(n^{5/2}). This algorithm yields a 4/3 approximation algorithm for Max-k-DDC(1,2) as well. Instances of the latter problem are complete directed graphs with edge weights one and two. The goal is to find a k-cycle cover with minimum weight. We particularly obtain a 2/3 approximation algorithm for the asymmetric maximum traveling salesman problem with distances zero and one and a 4/3 approximation algorithm for the asymmetric minimum traveling salesman problem with distances one and two. As a lower bound, we prove that Max-k-DDC(0,1) for k ≥ 3 and Max-k-UCC(0,1) (finding maximum weight cycle covers in undirected graphs) for k ≥ 7 are APX-complete.
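A cycle cover of a complete digraph is exactly a permutation of the nodes (each node's successor is its unique outgoing cycle edge). The brute-force sketch below makes the Max-k-DDC(0,1) objective concrete; it is exponential and only illustrates the problem, not the paper's O(n^{5/2}) approximation algorithm, and all names here are ours.

```python
from itertools import permutations

def cycle_lengths(succ):
    """Cycle lengths of a permutation given as a successor sequence.
    A cycle cover of a complete digraph is such a permutation, with the
    permutation's cycles as the cover's simple cycles."""
    seen, lengths = set(), []
    for s in range(len(succ)):
        if s in seen:
            continue
        length, v = 0, s
        while v not in seen:
            seen.add(v)
            v = succ[v]
            length += 1
        lengths.append(length)
    return lengths

def max_k_cycle_cover(weights, k):
    """Brute-force Max-k-DDC: maximum-weight cycle cover in which every
    cycle has length at least k.  Tries all n! permutations."""
    n = len(weights)
    best = None
    for perm in permutations(range(n)):
        if any(length < k for length in cycle_lengths(perm)):
            continue  # some cycle is too short for a k-cycle cover
        w = sum(weights[v][perm[v]] for v in range(n))
        best = w if best is None else max(best, w)
    return best

# 0/1 edge weights on 4 nodes: weight 1 exactly on the ring 0->1->2->3->0.
w = [[1 if j == (i + 1) % 4 else 0 for j in range(4)] for i in range(4)]
```

With k = 3 the only feasible covers on 4 nodes are single 4-cycles, and the ring above attains the maximum weight 4.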


Information Technology | 2011

Smoothed analysis: analysis of algorithms beyond worst case

Bodo Manthey; Heiko Röglin

Many algorithms perform very well in practice, but have a poor worst-case performance. The reason for this discrepancy is that worst-case analysis is often a much too pessimistic measure of the performance of an algorithm. In order to provide a more realistic performance measure that can explain the practical performance of algorithms, smoothed analysis has been introduced. It is a hybrid of the classical worst-case analysis and average-case analysis, where the performance is measured on inputs that are subject to random noise. We give a gentle, not too formal introduction to smoothed analysis by means of two examples: the k-means method for clustering and the Nemhauser/Ullmann algorithm for the knapsack problem.
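The Nemhauser/Ullmann algorithm mentioned in the abstract maintains the set of Pareto-optimal (weight, profit) pairs and adds items one at a time; its running time is proportional to the total size of these Pareto sets, which smoothed analysis shows to be polynomial in expectation. A minimal sketch, with names of our choosing:

```python
def nemhauser_ullmann(items, capacity):
    """Nemhauser/Ullmann knapsack algorithm: incrementally maintain the
    Pareto-optimal (weight, profit) pairs over all subsets of the items
    seen so far, then pick the best pair within the capacity."""
    pareto = [(0, 0)]  # (weight, profit) pairs, sorted by weight
    for w, p in items:
        # Merge the old Pareto set with its copy shifted by the new item,
        # heaviest-equal-weight entries sorted by decreasing profit...
        merged = sorted(pareto + [(wi + w, pi + p) for wi, pi in pareto],
                        key=lambda t: (t[0], -t[1]))
        # ...and discard dominated pairs (no heavier pair survives unless
        # it is strictly more profitable than everything lighter).
        pruned, best = [], -1
        for wi, pi in merged:
            if pi > best:
                pruned.append((wi, pi))
                best = pi
        pareto = pruned
    return max(p for w, p in pareto if w <= capacity)
```

For example, with items [(2, 3), (3, 4), (4, 5), (5, 8)] (weight, profit) and capacity 9, the optimum packs the last two items for profit 13.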


Operations Research Letters | 2009

Average-case approximation ratio of the 2-opt algorithm for the TSP

Christian Engels; Bodo Manthey

We show that the 2-opt heuristic for the traveling salesman problem achieves an expected approximation ratio of roughly O(n) for instances with n nodes, where the edge weights are drawn uniformly and independently at random.
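The 2-opt heuristic analysed above repeatedly replaces two tour edges by two cheaper ones, reversing the segment in between, until no improving exchange exists. A self-contained sketch on the paper's input model (edge weights drawn uniformly and independently at random); the instance size and seed are arbitrary choices of ours:

```python
import random

def two_opt(dist, tour):
    """Plain 2-opt local search on a tour (a list of node indices),
    modified in place until it is 2-opt-optimal."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # would remove and re-add the same two edges
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Replace edges (a,b),(c,d) by (a,c),(b,d) if strictly cheaper.
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d] - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

def tour_length(dist, tour):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

# Random instance: symmetric edge weights drawn uniformly at random.
rng = random.Random(0)
n = 12
dist = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        dist[i][j] = dist[j][i] = rng.random()

start = list(range(n))
before = tour_length(dist, start[:])
after = tour_length(dist, two_opt(dist, start))
```

The paper's question is how far `after` is from the optimal tour length in expectation over the random weights.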


SIAM Journal on Computing | 2008

On Approximating Restricted Cycle Covers

Bodo Manthey



international symposium on algorithms and computation | 2009

Worst-Case and Smoothed Analysis of k-Means Clustering with Bregman Divergences

Bodo Manthey; Heiko Röglin

The k-means algorithm is the method of choice for clustering large-scale data sets and it performs exceedingly well in practice. Most of the theoretical work is restricted to the case that squared Euclidean distances are used as similarity measure. In many applications, however, data is to be clustered with respect to other measures like, e.g., relative entropy, which is commonly used to cluster web pages. In this paper, we analyze the running-time of the k-means method for Bregman divergences, a very general class of similarity measures including squared Euclidean distances and relative entropy. We show that the exponential lower bound known for the Euclidean case carries over to almost every Bregman divergence. To narrow the gap between theory and practice, we also study k-means in the semi-random input model of smoothed analysis. For the case that n data points in ℝ^d are perturbed by noise with standard deviation σ, we show that for almost arbitrary Bregman divergences the expected running-time is bounded by poly(n^{√k}, 1/σ) and k^{kd}·poly(n, 1/σ).


symposium on theoretical aspects of computer science | 2009

On Approximating Multi-Criteria TSP

Bodo Manthey

We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP), whose performances are independent of the number of criteria.


Lecture Notes in Computer Science | 2002

Two Approximation Algorithms for 3-Cycle Covers

Markus Bläser; Bodo Manthey


Algorithmica | 2013

Smoothed Analysis of Partitioning Algorithms for Euclidean Functionals

Markus Bläser; Bodo Manthey; B. V. Raghavendra Rao
