Antonios Antoniadis
Max Planck Society
Publications
Featured research published by Antonios Antoniadis.
acm symposium on parallel algorithms and architectures | 2011
Susanne Albers; Antonios Antoniadis; Gero Greiner
We investigate a very basic problem in dynamic speed scaling where a sequence of jobs, each specified by an arrival time, a deadline and a processing volume, has to be processed so as to minimize energy consumption. Previous work has focused mostly on the setting where a single variable-speed processor is available. In this paper we study multi-processor environments with m parallel variable-speed processors, assuming that job migration is allowed, i.e. whenever a job is preempted it may be moved to a different processor. We first study the offline problem and show that optimal schedules can be computed efficiently in polynomial time. In contrast to a previously known strategy, our algorithm does not resort to linear programming. We develop a fully combinatorial algorithm that relies on repeated maximum flow computations. The approach might be useful for solving other problems in dynamic speed scaling. For the online problem, we extend two algorithms, Optimal Available and Average Rate, proposed by Yao et al. [16] for the single processor setting. We prove that Optimal Available is α<sup>α</sup>-competitive, as in the single processor case. Here α>1 is the exponent of the power consumption function. While it is straightforward to extend Optimal Available to parallel processing environments, the competitive analysis becomes considerably more involved. For Average Rate we show a competitiveness of ((3α)<sup>α</sup>)/2 + 2<sup>α</sup>.
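The Average Rate rule extended in this abstract is easy to state for the single-processor case: each job runs at its "density" (volume spread evenly over its feasible window), and the processor speed at any time is the sum of the densities of the jobs alive then. A minimal sketch, with a hypothetical `Job` record standing in for the paper's job triples:

```python
from dataclasses import dataclass

@dataclass
class Job:
    release: float
    deadline: float
    volume: float

    @property
    def density(self) -> float:
        # Average Rate runs each job at its density:
        # its volume spread evenly over its feasible window.
        return self.volume / (self.deadline - self.release)

def avr_speed(jobs, t):
    """Processor speed at time t under Average Rate (AVR):
    the sum of densities of all jobs alive at t."""
    return sum(j.density for j in jobs if j.release <= t < j.deadline)

jobs = [Job(0, 4, 8), Job(1, 3, 4)]
print(avr_speed(jobs, 2))    # both jobs alive: 8/4 + 4/2 = 4.0
print(avr_speed(jobs, 3.5))  # only the first job alive: 2.0
```

The multi-processor analysis in the paper distributes this aggregate work over m machines with migration; the sketch only illustrates the speed rule itself.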
ACM Transactions on Algorithms | 2014
Susanne Albers; Antonios Antoniadis
We study an energy conservation problem where a variable-speed processor is equipped with a sleep state. Executing jobs at high speeds and then setting the processor asleep is an approach that can lead to further energy savings compared to standard dynamic speed scaling. We consider classical deadline-based scheduling, that is, each job is specified by a release time, a deadline and a processing volume. For general convex power functions, Irani et al. [2007] devised an offline 2-approximation algorithm. Roughly speaking, the algorithm schedules jobs at a critical speed <i>s</i><sub><i>crit</i></sub> that yields the smallest energy consumption while jobs are processed. For power functions <i>P</i>(<i>s</i>) = β<i>s</i><sup>α</sup> + γ, where <i>s</i> is the processor speed, Han et al. [2010] gave an (α<sup>α</sup> + 2)-competitive online algorithm. We investigate the offline setting of speed scaling with a sleep state. First, we prove NP-hardness of the optimization problem. Additionally, we develop lower bounds for general convex power functions: No algorithm that constructs <i>s</i><sub><i>crit</i></sub>-schedules, which execute jobs at speeds of at least <i>s</i><sub><i>crit</i></sub>, can achieve an approximation factor smaller than 2. Furthermore, no algorithm that minimizes the energy expended for processing jobs can attain an approximation ratio smaller than 2. We then present an algorithmic framework for designing good approximation algorithms. For general convex power functions, we derive an approximation factor of 4/3. For power functions <i>P</i>(<i>s</i>) = β<i>s</i><sup>α</sup> + γ, we obtain an approximation of 137/117 < 1.171. We finally show that our framework yields the best approximation guarantees for the class of <i>s</i><sub><i>crit</i></sub>-schedules. For general convex power functions, we give another 2-approximation algorithm.
For functions <i>P</i>(<i>s</i>) = β<i>s</i><sup>α</sup> + γ, we present tight upper and lower bounds on the best possible approximation factor. The ratio is exactly <i>eW</i><sub>−1</sub>(−<i>e</i><sup>−1−1/<i>e</i></sup>)/(<i>eW</i><sub>−1</sub>(−<i>e</i><sup>−1−1/<i>e</i></sup>)+1) > 1.211, where <i>W</i><sub>−1</sub> is the lower branch of the Lambert <i>W</i> function.
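The critical speed that this abstract refers to has a closed form for power functions P(s) = βs^α + γ: it minimizes the energy spent per unit of work, P(s)/s, and setting the derivative to zero gives β(α−1)s^α = γ. A small sketch under that assumption:

```python
def critical_speed(alpha: float, beta: float, gamma: float) -> float:
    """Speed minimizing energy per unit of work, P(s)/s, for
    P(s) = beta * s**alpha + gamma.  Setting d/ds [P(s)/s] = 0
    yields beta*(alpha-1)*s**alpha = gamma."""
    return (gamma / (beta * (alpha - 1))) ** (1.0 / alpha)

alpha, beta, gamma = 3.0, 1.0, 2.0
s = critical_speed(alpha, beta, gamma)  # (2 / (1*2))**(1/3) = 1.0

# Numerical sanity check: P(s)/s is no smaller at nearby speeds.
per_work = lambda v: (beta * v**alpha + gamma) / v
assert per_work(s) <= min(per_work(0.9 * s), per_work(1.1 * s))
print(s)  # 1.0
```

Running jobs slower than this speed wastes energy on the constant term γ, which is why s_crit-schedules never go below it.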
Journal of Scheduling | 2013
Antonios Antoniadis; Chien-Chung Huang
We consider the following offline variant of the speed scaling problem introduced by Yao et al. We are given a set of jobs and we have a variable-speed processor to process them. The higher the processor speed, the higher the energy consumption. Each job is associated with its own release time, deadline, and processing volume. The objective is to find a feasible schedule that minimizes the energy consumption. In contrast to Yao et al., no preemption of jobs is allowed. Unlike the preemptive version that is known to be in P, the non-preemptive version of speed scaling is strongly NP-hard. In this work, we present a constant factor approximation algorithm for it. The main technical idea is to transform the problem into the unrelated machine scheduling problem with the L_p-norm objective.
symposium on discrete algorithms | 2015
Antonios Antoniadis; Chien-Chung Huang; Sebastian Ott
latin american symposium on theoretical informatics | 2016
Antonios Antoniadis; Neal Barcelo; Michael Nugent; Kirk Pruhs; Kevin Schewior; Michele Scquizzato
workshop on approximation and online algorithms | 2014
Antonios Antoniadis; Neal Barcelo; Michael Nugent; Kirk Pruhs; Michele Scquizzato
symposium on theoretical aspects of computer science | 2014
Antonios Antoniadis; Neal Barcelo; Mario E. Consuegra; Peter Kling; Michael Nugent; Kirk Pruhs; Michele Scquizzato
mathematical foundations of computer science | 2013
Antonios Antoniadis; Chien-Chung Huang; Sebastian Ott; José Verschae
We study classical deadline-based preemptive scheduling of jobs in a computing environment equipped with both dynamic speed scaling and sleep state capabilities: Each job is specified by a release time, a deadline and a processing volume, and has to be scheduled on a single, speed-scalable processor that is supplied with a sleep state. In the sleep state, the processor consumes no energy, but a constant wake-up cost is required to transition back to the active state. In contrast to speed scaling alone, the addition of a sleep state makes it sometimes beneficial to accelerate the processing of jobs in order to transition the processor to the sleep state for longer periods of time and achieve further energy savings. The goal is to output a feasible schedule that minimizes the energy consumption. Since the introduction of the problem by Irani et al. [17], its exact computational complexity has been repeatedly posed as an open question (see e.g. [2,9,16]). The currently best known upper and lower bounds are a 4/3-approximation algorithm and NP-hardness due to [2] and [2, 18], respectively. We close the aforementioned gap between the upper and lower bound on the computational complexity of speed scaling with sleep state by presenting a fully polynomial-time approximation scheme for the problem. The scheme is based on a transformation to a non-preemptive variant of the problem, and a discretization that exploits a carefully defined lexicographical ordering among schedules.
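The core trade-off the sleep state introduces can be illustrated in a few lines. Staying awake through an idle gap of length Δ costs the idle power times Δ, while sleeping costs a fixed wake-up charge on return; the names `gamma` (idle power, the constant term of the power function) and `wakeup` below follow the notation of the abstracts above and are otherwise illustrative:

```python
def gap_cost(delta: float, gamma: float, wakeup: float) -> float:
    """Cheapest way to bridge an idle gap of length delta:
    stay awake at idle power gamma, or sleep (zero power)
    and pay the wake-up cost when the next job arrives."""
    return min(gamma * delta, wakeup)

gamma, wakeup = 2.0, 5.0
print(gap_cost(1.0, gamma, wakeup))   # 2.0: short gap, stay awake
print(gap_cost(10.0, gamma, wakeup))  # 5.0: long gap, sleep
```

The break-even gap length is wakeup/gamma; compressing jobs to create gaps longer than this is exactly what makes acceleration worthwhile.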
latin american symposium on theoretical informatics | 2014
Antonios Antoniadis; Neal Barcelo; Daniel G. Cole; Kyle Fox; Benjamin Moseley; Michael Nugent; Kirk Pruhs
We consider three related online problems: Online Convex Optimization, Convex Body Chasing, and Lazy Convex Body Chasing. In Online Convex Optimization the input is an online sequence of convex functions over some Euclidean space. In response to a function, the online algorithm can move to any destination point in the Euclidean space. The cost is the total distance moved plus the sum of the function costs at the destination points. Lazy Convex Body Chasing is a special case of Online Convex Optimization where the function is zero in some convex region, and grows linearly with the distance from this region. Convex Body Chasing, in turn, is a special case of Lazy Convex Body Chasing where the destination point has to be in the convex region. We show that these problems are equivalent in the sense that if any one of these problems has an O(1)-competitive algorithm, then all of them do. By leveraging these results we then obtain the first O(1)-competitive algorithm for Online Convex Optimization in two dimensions, and give the first O(1)-competitive algorithm for chasing linear subspaces. We also give a simple algorithm and O(1)-competitiveness analysis for chasing lines.
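To make the Convex Body Chasing setup concrete, here is a one-dimensional toy version where the bodies are intervals and the chaser greedily moves to the nearest feasible point. This greedy rule is only an illustration of the cost model (movement plus feasibility), not the paper's two-dimensional algorithm:

```python
def chase_intervals(start, intervals):
    """Greedy chaser in one dimension: on each request interval
    [lo, hi], move to the nearest feasible point.  Returns the
    total distance moved and the final position.  (Illustrative
    only; the paper's 2-D algorithm is more involved.)"""
    pos, moved = start, 0.0
    for lo, hi in intervals:
        target = min(max(pos, lo), hi)  # clamp pos into [lo, hi]
        moved += abs(target - pos)
        pos = target
    return moved, pos

print(chase_intervals(0.0, [(2.0, 5.0), (1.0, 3.0), (6.0, 8.0)]))
# (6.0, 6.0): move 0 -> 2, stay at 2 (already in [1, 3]), move 2 -> 6
```

In the Lazy variant the chaser could instead stop short of an interval and pay a function cost proportional to its distance from it.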
conference on current trends in theory and practice of informatics | 2009
Antonios Antoniadis; Andrzej Lingas
Online matching on a line involves matching an online stream of items of various sizes to stored items of various sizes, with the objective of minimizing the average discrepancy in size between matched items. The best previously known upper and lower bounds on the optimal deterministic competitive ratio are linear in the number of items, and constant, respectively. We show that online matching on a line is essentially equivalent to a particular search problem that we call k-lost cows. We then obtain the first deterministic sub-linearly competitive algorithm for online matching on a line by giving such an algorithm for the k-lost cows problem.
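The single-cow case of the search problem named here is the classic "lost cow" problem: a searcher on a line alternates directions, doubling the excursion length each turn, until it crosses the unknown target. The well-known doubling strategy walks at most a constant factor (9 in the classic analysis) times the direct distance; the sketch below is that textbook strategy, not the paper's k-cow generalization:

```python
def lost_cow_search(target: int, limit: int = 60) -> int:
    """Classic doubling ('lost cow') search on a line: alternate
    directions, doubling the excursion length each turn, until the
    excursion crosses the target.  Returns total distance walked."""
    walked, reach = 0, 1
    for i in range(limit):
        direction = 1 if i % 2 == 0 else -1
        turn_point = direction * reach
        # Does this excursion from the origin cross the target?
        if (0 < target <= turn_point) or (turn_point <= target < 0):
            return walked + abs(target)
        walked += 2 * abs(turn_point)  # walk out and back to the origin
        reach *= 2
    raise ValueError("target not found within excursion limit")

print(lost_cow_search(5))  # 35: excursions to +1, -2, +4, -8, then +16 finds it
```

Here the total of 35 is exactly 7 times the direct distance of 5, within the classic factor-9 guarantee.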