Richard Královič
ETH Zurich
Publications
Featured research published by Richard Královič.
International Symposium on Algorithms and Computation | 2009
Hans-Joachim Böckenhauer; Dennis Komm; Rastislav Královič; Richard Královič; Tobias Mömke
In this paper, we investigate to what extent the solution quality of online algorithms can be improved by allowing the algorithm to extract a given amount of information about the input. We consider the recently introduced notion of advice complexity where the algorithm, in addition to being fed the requests one by one, has access to a tape of advice bits that were computed by some oracle function from the complete input. The advice complexity is the number of advice bits read. We introduce an improved model of advice complexity and investigate the connections of advice complexity to the competitive ratio of both deterministic and randomized online algorithms using the paging problem, job shop scheduling, and the routing problem on a line as sample problems. We provide both upper and lower bounds on the advice complexity of all three problems. Our results for all of these problems show that very small advice (only three bits in the case of paging) already suffices to significantly improve over the best deterministic algorithm. Moreover, to achieve the same competitive ratio as any randomized online algorithm, a logarithmic number of advice bits is sufficient. On the other hand, to obtain optimality, much larger advice is necessary.
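To make the model concrete, here is a minimal sketch of the advice-tape setting described in this abstract. The names (AdviceTape, run_with_advice) are illustrative choices for this sketch, not taken from the paper; the point is only that the oracle sees the whole input, while the algorithm sees one request at a time and pays for every advice bit it reads.

```python
# Illustrative sketch of the advice-tape model of online computation with advice.
# Interface names are hypothetical; they are not taken from the paper.

from typing import Callable, List


class AdviceTape:
    """An advice tape; the algorithm pays for every bit it actually reads."""

    def __init__(self, bits: List[int]):
        self._bits = bits
        self.bits_read = 0              # advice complexity = number of bits read

    def read_bit(self) -> int:
        bit = self._bits[self.bits_read]
        self.bits_read += 1
        return bit


def run_with_advice(requests: List,
                    oracle: Callable[[List], List[int]],
                    make_algorithm: Callable[[AdviceTape], Callable]):
    """The oracle sees the complete input and writes the advice tape; the online
    algorithm is then fed the requests one by one and may read advice bits
    from the tape whenever it needs them."""
    tape = AdviceTape(oracle(requests))          # advice computed from the full input
    algorithm = make_algorithm(tape)             # the algorithm only sees the tape ...
    answers = [algorithm(request) for request in requests]  # ... and one request at a time
    return answers, tape.bits_read               # solution plus advice bits consumed
```

For paging, for instance, the oracle could use a handful of bits to name which of a few fixed eviction strategies to follow, which matches the flavour of the "three bits suffice" result above.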
Mathematical Foundations of Computer Science | 2010
Juraj Hromkovič; Rastislav Královič; Richard Královič
What is information? Frequently spoken about in many contexts, yet nobody has ever been able to define it with mathematical rigor. The best we are left with so far is the concept of entropy by Shannon, and the concept of information content of binary strings by Chaitin and Kolmogorov. While these are doubtlessly great research instruments, they are hardly helpful in measuring the amount of information contained in particular objects. In a pursuit to overcome these limitations, we propose the notion of information content of algorithmic problems. We discuss our approaches and their possible usefulness in understanding the basic concepts of informatics, namely the concept of algorithms and the concept of computational complexity.
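For reference, the two classical notions mentioned here are usually stated as follows (standard textbook definitions, not something introduced in this paper):

```latex
% Shannon entropy of a random variable X, and Kolmogorov (Chaitin) complexity
% of a binary string x with respect to a fixed universal machine U:
H(X) = -\sum_{x} \Pr[X = x]\,\log_2 \Pr[X = x],
\qquad
C(x) = \min\{\, |p| \;:\; U(p) = x \,\}.
```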
Theoretical Informatics and Applications | 2011
Dennis Komm; Richard Královič
Recently, a new measurement – the advice complexity – was introduced for measuring the information content of online problems. The aim is to measure the bitwise information that online algorithms lack, causing them to perform worse than offline algorithms. Among a large number of problems, a well-known scheduling problem, job shop scheduling with unit length tasks, and the paging problem were analyzed within this model. We observe some connections between advice complexity and randomization. Our special focus goes to barely random algorithms, i.e., randomized algorithms that use only a constant number of random bits, regardless of the input size. We adapt the results on advice complexity to obtain efficient barely random algorithms for both the job shop scheduling and the paging problem. Furthermore, it has not yet been investigated how well an online algorithm for job shop scheduling can perform when using only a very small (e.g., constant) number of advice bits. In this paper, we answer this question by giving both lower and upper bounds, and we also improve the best known upper bound for optimal algorithms.
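The following sketch only pins down what "barely random" means here: the algorithm spends a fixed number k of random bits, independent of the input length, to pick one of 2^k deterministic strategies and then runs it. The function name and strategy placeholders are hypothetical; the paper's actual barely random algorithms for job shop scheduling and paging come from a more careful adaptation of the advice-based ones.

```python
# Minimal sketch of a "barely random" algorithm: it uses a fixed number k of
# random bits, independent of the input size, to pick one of 2**k deterministic
# online strategies up front and then follows it. The strategies themselves
# are placeholders, not taken from the paper.

import random
from typing import Callable, Sequence


def barely_random(strategies: Sequence[Callable], k: int) -> Callable:
    """Given 2**k deterministic online strategies, return a randomized
    algorithm that spends exactly k random bits choosing among them."""
    assert len(strategies) == 2 ** k
    index = 0
    for _ in range(k):                      # draw k random bits ...
        index = (index << 1) | random.getrandbits(1)
    return strategies[index]                # ... and commit to one strategy


# Example shape of use (hypothetical strategies for some online problem):
# algo = barely_random([greedy_strategy, conservative_strategy], k=1)
# answers = [algo(request) for request in request_sequence]
```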
International Colloquium on Automata, Languages and Programming | 2011
Hans-Joachim Böckenhauer; Dennis Komm; Rastislav Královič; Richard Královič
Competitive analysis is the established tool for measuring the output quality of algorithms that work in an online environment. Recently, the model of advice complexity has been introduced as an alternative measurement which allows for a more fine-grained analysis of the hardness of online problems. In this model, one tries to measure the amount of information an online algorithm is lacking about the future parts of the input. This concept was investigated for a number of well-known online problems, including the k-server problem. In this paper, we first extend the analysis of the k-server problem by giving both a lower bound on the advice needed to obtain an optimal solution, and upper bounds on algorithms for the general k-server problem on metric graphs and for the special case of the Euclidean plane. In the general case, we improve the previously known results by an exponential factor; in the Euclidean case, we design an algorithm which achieves a constant competitive ratio for a very small (i.e., constant) number of advice bits per request. Furthermore, we investigate the relation between advice complexity and randomized online computations by showing how lower bounds on the advice complexity can be used for proving lower bounds on the competitive ratio of randomized online algorithms.
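As a point of reference for the bounds discussed here, the trivial upper bound for optimality on the k-server problem uses ⌈log₂ k⌉ advice bits per request: the oracle simply names the server that some fixed lazy optimal solution moves. The sketch below (with hypothetical names, and Euclidean distances chosen only for concreteness) implements this naive baseline; the paper's contribution is to improve on such bounds and to relate them to randomization.

```python
# Naive advice baseline for the k-server problem: with ceil(log2 k) advice
# bits per request, the oracle names the server that a fixed lazy optimal
# offline solution moves, and the online algorithm reproduces that solution.
# This only fixes intuition; it is not the authors' improved algorithm.

from math import ceil, log2
from typing import List, Tuple

Point = Tuple[float, float]   # servers in the Euclidean plane, for concreteness


def serve_with_advice(servers: List[Point],
                      requests: List[Point],
                      read_bits) -> float:
    """read_bits(b) returns the next b advice bits as an integer, here
    interpreted as the index of the server to move."""
    k = len(servers)
    bits_per_request = ceil(log2(k)) if k > 1 else 0
    total_cost = 0.0
    for request in requests:
        index = read_bits(bits_per_request) if bits_per_request else 0
        x, y = servers[index]
        total_cost += ((x - request[0]) ** 2 + (y - request[1]) ** 2) ** 0.5
        servers[index] = request          # move the indicated server to the request
    return total_cost
```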
Scandinavian Workshop on Algorithm Theory | 2008
Davide Bilò; Hans-Joachim Böckenhauer; Juraj Hromkovič; Richard Královič; Tobias Mömke; Peter Widmayer; Anna Zych
In this paper, we study the problem of finding a minimum Steiner tree given a minimum Steiner tree for a similar problem instance. We consider scenarios in which an instance is altered locally by changing the terminal set or the weight of an edge. For all modification scenarios, we provide approximation algorithms that improve upon the best currently known corresponding approximation ratios.
Theoretical Computer Science | 2009
Hans-Joachim Böckenhauer; Juraj Hromkovič; Richard Královič; Tobias Mömke; Peter Rossmanith
Given an instance of the Steiner tree problem together with an optimal solution, we consider the scenario where this instance is modified locally by adding one of the vertices to the terminal set or removing one vertex from it. In this paper, we investigate the problem of whether the knowledge of an optimal solution to the unaltered instance can help in solving the locally modified instance. Our results are as follows: (i) We prove that these reoptimization variants of the Steiner tree problem are NP-hard, even if edge costs are restricted to values from {1, 2}. (ii) We design 1.5-approximation algorithms for both variants of local modifications. This is an improvement over the currently best known approximation algorithm for the classical Steiner tree problem which achieves an approximation ratio of 1 + ln(3)/2 ≈ 1.55. (iii) We present a PTAS for the subproblem in which the edge costs are natural numbers {1, …, k} for some constant k.
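As an illustration of the terminal-addition scenario, one natural building block is to connect the new terminal to the given old tree by a cheapest path; the sketch below (hand-rolled Dijkstra over a made-up adjacency-list graph type) shows just this step. It is not the 1.5-approximation algorithm of the paper, which combines and analyzes such ingredients more carefully.

```python
# One natural ingredient of the reoptimization setting above: when a vertex t
# joins the terminal set, connect t to the old Steiner tree by a cheapest
# path. Illustrative sketch only; not the paper's 1.5-approximation algorithm.

import heapq
from typing import Dict, Hashable, List, Set, Tuple

Graph = Dict[Hashable, List[Tuple[Hashable, float]]]   # adjacency list with edge costs


def cheapest_path_to_tree(graph: Graph, t: Hashable, tree_vertices: Set[Hashable]):
    """Dijkstra from the new terminal t; stop at the first vertex of the old tree."""
    dist = {t: 0.0}
    parent = {t: None}
    heap = [(0.0, t)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        if u in tree_vertices:               # reached the old tree: reconstruct the path
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return d, list(reversed(path))   # path from t to the old tree
        for v, cost in graph.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []                  # t cannot reach the old tree
```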
Latin American Symposium on Theoretical Informatics | 2012
Hans-Joachim Böckenhauer; Dennis Komm; Richard Královič; Peter Rossmanith
We study the advice complexity and the random bit complexity of the online knapsack problem: Given a knapsack of unit capacity, and n items that arrive in successive time steps, an online algorithm has to decide for every item whether it gets packed into the knapsack or not. The goal is to maximize the value of the items in the knapsack without exceeding its capacity. In the model of advice complexity of online problems, one asks how many bits of advice about the unknown parts of the input are both necessary and sufficient to achieve a specific competitive ratio. It is well-known that even the unweighted online knapsack problem does not admit any competitive deterministic online algorithm. We show that a single bit of advice helps a deterministic algorithm to become 2-competitive, but that Ω(log n) advice bits are necessary to further improve the deterministic competitive ratio. This is the first time that such a phase transition for the number of advice bits has been observed for any problem. We also show that, surprisingly, instead of an advice bit, a single random bit allows for a competitive ratio of 2, and any further amount of randomness does not improve this. Moreover, we prove that, in a resource augmentation model, i.e., when allowing a little overpacking of the knapsack, a constant number of advice bits suffices to achieve a near-optimal competitive ratio. We also study the weighted version of the problem, proving that, with O(log n) bits of advice, we can get arbitrarily close to an optimal solution and, using asymptotically fewer bits, we are not competitive.
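To see how a single advice bit can already yield 2-competitiveness in the unweighted case (item value equals item size), consider the following construction: the bit tells the algorithm whether an item of size at least 1/2 will ever arrive. This is an illustrative construction consistent with the statement above, not necessarily the exact algorithm analyzed in the paper.

```python
# Sketch: one advice bit suffices for 2-competitiveness in the unweighted
# online knapsack (value = size, capacity 1). The bit says whether an item of
# size >= 1/2 will appear. Illustrative construction; the paper's algorithm
# may differ.

from typing import Iterable, List


def pack_with_one_advice_bit(items: Iterable[float], big_item_coming: bool) -> List[float]:
    packed: List[float] = []
    load = 0.0
    waiting_for_big = big_item_coming
    for size in items:                       # items are revealed one by one
        if waiting_for_big:
            if size >= 0.5:                  # the promised big item: take it
                packed.append(size)
                load += size
                waiting_for_big = False
            continue                         # otherwise keep waiting
        if load + size <= 1.0:               # greedy phase: pack whatever fits
            packed.append(size)
            load += size
    return packed


# Why this is 2-competitive: if a big item exists, the algorithm gains at
# least 1/2 >= OPT/2; if not, all items are < 1/2, so greedy either packs
# everything (gain = OPT) or, at the first rejection, already holds > 1/2.
```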
Computer Science Symposium in Russia | 2012
Dennis Komm; Richard Královič; Tobias Mömke
Recently, a new approach to gaining a deeper understanding of online computation has been introduced: the study of the advice complexity of online problems. The idea is to measure the information that online algorithms need to be supplied with to compute high-quality solutions and to overcome the drawback of not knowing future input requests. In this paper, we study the advice complexity of an online version of the well-known set cover problem introduced by Alon et al.: for a ground set of size n and a set family of m subsets of the ground set, we obtain bounds in both n and m. We prove that a linear number of advice bits is both sufficient and necessary to perform optimally. Furthermore, for any constant c, we prove that n − c bits are enough to construct a c-competitive online algorithm, and this bound is tight up to a constant factor (depending only on c). Moreover, we show that a linear number of advice bits is both necessary and sufficient to be optimal with respect to m as well. We further give lower and upper bounds for achieving c-competitiveness in terms of m.
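For intuition about why a linear number of advice bits (here, in m) suffices for optimality, consider the following baseline: the advice is the characteristic vector of a fixed optimal cover of the elements that are going to arrive, and the algorithm only ever buys sets of that cover. The names and the unweighted setting are choices made for this sketch; the paper's results are considerably tighter.

```python
# Illustrative baseline for the model above: with m advice bits (the
# characteristic vector of a fixed optimal cover of the elements that will
# arrive), an online algorithm is optimal, since it only ever buys sets of
# that cover. This sketch just fixes the model; the paper proves much tighter
# bounds in both n and m.

from typing import Iterable, List, Set


def online_set_cover_with_advice(sets: List[Set[int]],
                                 arriving_elements: Iterable[int],
                                 advice: List[bool]) -> List[int]:
    """advice[i] is True iff set i belongs to a fixed optimal cover of the
    elements that are going to arrive (the oracle knows the whole input)."""
    bought: List[int] = []
    covered: Set[int] = set()
    for element in arriving_elements:        # elements are revealed one by one
        if element in covered:
            continue
        # buy some set of the optimal cover containing the new element
        for i, s in enumerate(sets):
            if advice[i] and element in s:
                bought.append(i)
                covered |= s
                break
    return bought
```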
Theoretical Computer Science | 2014
Hans-Joachim Böckenhauer; Dennis Komm; Richard Královič; Peter Rossmanith
We study the advice complexity and the random bit complexity of the online knapsack problem. Given a knapsack of unit capacity, and n items that arrive in successive time steps, an online algorithm has to decide for every item whether it gets packed into the knapsack or not. The goal is to maximize the value of the items in the knapsack without exceeding its capacity. In the model of advice complexity of online problems, one asks how many bits of advice about the unknown parts of the input are both necessary and sufficient to achieve a specific competitive ratio. It is well-known that even the unweighted online knapsack problem does not admit any competitive deterministic online algorithm. For this problem, we show that a single bit of advice helps a deterministic online algorithm to become 2-competitive, but that Ω(log n) advice bits are necessary to further improve the deterministic competitive ratio. This is the first time that such a phase transition for the number of advice bits has been observed for any problem. Additionally, we show that, surprisingly, instead of an advice bit, a single random bit allows for a competitive ratio of 2, and any further amount of randomness does not improve this. Moreover, we prove that, in a resource augmentation model, i.e., when allowing the online algorithm to overpack the knapsack by some small amount, a constant number of advice bits suffices to achieve a near-optimal competitive ratio. We also study the weighted version of the problem, proving that, with O(log n) bits of advice, we can get arbitrarily close to an optimal solution and, using asymptotically fewer bits, we are not competitive. Furthermore, we show that an arbitrary number of random bits does not permit a constant competitive ratio.
Journal of Computer and System Sciences | 2012
Christos A. Kapoutsis; Richard Královič; Tobias Mömke
We examine the succinctness of one-way, rotating, sweeping, and two-way deterministic finite automata (1DFAs, RDFAs, SDFAs, 2DFAs) and their nondeterministic and randomized counterparts. Here, an SDFA is a 2DFA whose head can change direction only on the end-markers, and an RDFA is an SDFA whose head is reset to the left end of the input every time the right end-marker is read. We study the size complexity classes defined by these automata, i.e., the classes of problems solvable by small automata of a certain type. For any pair of classes of one-way, rotating, and sweeping deterministic (1D, RD, SD), self-verifying (1Δ, RΔ, SΔ), and nondeterministic (1N, RN, SN) automata, as well as for their complements and reversals, we show that they are equal, incomparable, or one is strictly included in the other. The resulting map of the complexity classes has interesting implications for the power of randomization for finite automata. Among other results, it implies that Las Vegas sweeping automata can be exponentially more succinct than SDFAs. We also introduce a list of language operators and study the corresponding closure properties of the size complexity classes defined by these automata. Our conclusions also reveal the logical structure of certain proofs of known separations among the complexity classes and allow us to systematically construct alternative witnesses of these separations.
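To make the sweeping restriction concrete, here is a small simulator under one common way of formalizing it: within a pass the head moves in a single direction, and on an end-marker the automaton accepts, rejects, or turns around in a new state (a rotating automaton would instead jump back to the left end and continue left to right). The exact formal definition used in the paper may differ in details; the names and end-marker symbols below are just for this sketch.

```python
# Small simulator for sweeping automata under one common formalization:
# within a pass the head moves in one direction applying the transition
# function; on an end-marker the automaton accepts, rejects, or turns around
# in a new state. A rotating automaton would restart from the left end
# instead of turning. Illustrative only; details of the formal model vary.

from typing import Callable, Dict, Hashable, Tuple

State = Hashable
Transition = Dict[Tuple[State, str], State]                  # (state, symbol) -> state
EndmarkerAction = Callable[[State, str], Tuple[str, State]]  # -> ('accept'|'reject'|'turn', state)


def run_sweeping_dfa(word: str,
                     start: State,
                     delta: Transition,
                     at_endmarker: EndmarkerAction,
                     max_sweeps: int = 1000) -> bool:
    state, direction = start, +1                   # +1: left-to-right, -1: right-to-left
    for _ in range(max_sweeps):                    # guard against endless sweeping
        symbols = word if direction == +1 else word[::-1]
        for symbol in symbols:                     # one full pass over the input
            state = delta[(state, symbol)]         # assumes a total transition function
        marker = '$' if direction == +1 else '^'   # right / left end-marker placeholders
        action, state = at_endmarker(state, marker)
        if action == 'accept':
            return True
        if action == 'reject':
            return False
        direction = -direction                     # 'turn': start the next sweep
    return False
```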