Alan Roytman
University of California, Los Angeles
Publications
Featured research published by Alan Roytman.
Measurement and Modeling of Computer Systems | 2013
Lachlan L. H. Andrew; Siddharth Barman; Katrina Ligett; Minghong Lin; Adam Meyerson; Alan Roytman; Adam Wierman
We consider algorithms for “smoothed online convex optimization” (SOCO) problems, which are a hybrid between online convex optimization (OCO) and metrical task system (MTS) problems. Historically, the performance metric for OCO was regret and that for MTS was competitive ratio (CR). There are algorithms with either sublinear regret or constant CR, but no known algorithm achieves both simultaneously. We show that this is a fundamental limitation: no algorithm (deterministic or randomized) can achieve sublinear regret and a constant CR, even when the objective functions are linear and the decision space is one-dimensional. However, we present an algorithm that, for the important one-dimensional case, provides sublinear regret and a CR that grows arbitrarily slowly.
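To make the two metrics concrete, here is a minimal Python sketch that measures an online algorithm's regret against the best fixed point and its competitive ratio against the dynamic offline optimum on a toy one-dimensional instance. The absolute-value hitting costs, movement costs, and naive greedy rule are all illustrative choices, not taken from the paper.

```python
import numpy as np

# Toy SOCO instance: decision space [0, 1] (discretized), convex hitting
# costs c_t(x) = |x - b_t|, and movement cost |x_t - x_{t-1}|.  All
# choices here are illustrative; this is not the paper's algorithm.
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 101)
b = rng.uniform(0.0, 1.0, size=50)          # per-step cost minimizers

# A naive online rule: greedily minimize the current hitting cost plus
# the movement cost from the previous point (start at x_0 = 0).
x_prev, alg_cost = 0.0, 0.0
for b_t in b:
    costs = np.abs(grid - b_t) + np.abs(grid - x_prev)
    j = int(np.argmin(costs))
    alg_cost += costs[j]
    x_prev = grid[j]

# Static benchmark (for regret): best fixed point in hindsight.
static_opt = min(np.abs(b - x).sum() for x in grid)

# Dynamic benchmark (for competitive ratio): offline optimum that may
# move, computed by dynamic programming over the grid.
move = np.abs(grid[:, None] - grid[None, :])    # move[i, j]: cost of j -> i
dp = np.abs(grid - b[0]) + grid                 # grid = movement from x_0 = 0
for b_t in b[1:]:
    dp = (dp[None, :] + move).min(axis=1) + np.abs(grid - b_t)

print("regret           :", alg_cost - static_opt)
print("competitive ratio:", alg_cost / dp.min())
```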
International Workshop on Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM) | 2013
Adam Meyerson; Alan Roytman; Brian Tagiku
Energy efficient algorithms are becoming critically important, as huge data centers and server farms have an increasing impact on monetary and environmental costs. Motivated by such issues, we study online load balancing from an energy perspective. Our framework extends recent work by Khuller, Li, and Saha (SODA 2010) to the online model. We are given m machines, each with some energy activation cost c_i and d dimensions (i.e., components). There are n jobs which arrive online and must be assigned to machines. Each job induces a load on its assigned machine along each dimension. We must select which machines to activate so that the total activation cost falls within a budget B, and assign jobs to active machines so that the largest load over all machines and dimensions (i.e., the makespan) is at most Λ.
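The model can be illustrated with a naive greedy baseline, sketched below under assumed parameters; this is not the paper's algorithm. Jobs arrive as d-dimensional load vectors, and each is placed on the machine that minimizes the resulting makespan, activating a machine only while the budget B permits.

```python
import numpy as np

# Naive greedy baseline for the model above (not the paper's algorithm):
# m machines with activation costs c[i] and d dimensions; jobs arrive
# online as d-dimensional load vectors and must be assigned to an active
# machine, keeping total activation cost within the budget B.
rng = np.random.default_rng(1)
m, d, B = 5, 3, 6.0
c = rng.uniform(1.0, 3.0, size=m)           # energy activation costs
loads = np.zeros((m, d))                    # current load per machine/dimension
active = np.zeros(m, dtype=bool)
spent = 0.0

def assign(job):
    """Place `job` on the machine minimizing the resulting makespan,
    activating an inactive machine only if the budget allows."""
    global spent
    best_i, best_ms = None, np.inf
    for i in range(m):
        if not active[i] and spent + c[i] > B:
            continue                        # activating i would exceed B
        trial = loads.copy()
        trial[i] += job
        ms = trial[active | (np.arange(m) == i)].max()
        if ms < best_ms:
            best_i, best_ms = i, ms
    if best_i is None:
        raise RuntimeError("no feasible machine within budget")
    if not active[best_i]:
        active[best_i] = True
        spent += c[best_i]
    loads[best_i] += job

for _ in range(20):
    assign(rng.uniform(0.0, 1.0, size=d))   # online arrivals

print("makespan:", loads[active].max(), "| activation cost spent:", spent)
```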
Electronic Commerce | 2009
Milan Bradonjić; Gunes Ercal-Ozkaya; Adam Meyerson; Alan Roytman
We study the relationship between the social cost of correlated equilibria and the social cost of Nash equilibria. In contrast to previous work focusing on the possible benefits of a benevolent mediator, we define and bound the Price of Mediation (PoM): the ratio of the cost of the worst correlated equilibrium to the cost of the worst Nash equilibrium. We observe that in practice, the heuristics used for mediation are frequently non-optimal, and from an economic perspective mediators may be inept or self-interested. Recent results on the computation of equilibria also motivate our work. We consider the Price of Mediation for general games with small numbers of players and pure strategies. For games with two players, each having two pure strategies, we prove a tight bound of two on the PoM. For larger games (either more players, or more pure strategies per player, or both) we show that the PoM can be unbounded. Most of our results focus on symmetric congestion games (also known as load balancing games). We show that for general convex cost functions, the PoM can grow exponentially in the number of players. We prove that the PoM is one for linear costs and at most a small constant (though it can be larger than one) for concave costs. For polynomial cost functions, we prove bounds on the PoM which are exponential in the degree.
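The definitions can be exercised on a tiny instance. The sketch below computes the PoM for a chicken-like 2x2 cost game (the matrices are illustrative, not from the paper): the worst Nash equilibrium is found by enumerating the pure equilibria and the fully mixed one, and the worst correlated equilibrium by maximizing expected social cost over the LP of CE incentive constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Worked 2x2 example of the Price of Mediation (PoM) on a chicken-like
# cost game (players minimize cost); the numbers are illustrative only.
# Actions: 0 = dare, 1 = chicken.
C1 = np.array([[10.0, 0.0], [4.0, 3.0]])    # row player's cost
C2 = C1.T                                   # symmetric game
social = C1 + C2

# Worst Nash: enumerate the pure equilibria and the fully mixed one
# (for a generic 2x2 game these are the only candidates).
worst_nash = -np.inf
for a in range(2):
    for b in range(2):
        if C1[a, b] <= C1[1 - a, b] and C2[a, b] <= C2[a, 1 - b]:
            worst_nash = max(worst_nash, social[a, b])
den_q = C1[0, 0] - C1[0, 1] - C1[1, 0] + C1[1, 1]
den_p = C2[0, 0] - C2[1, 0] - C2[0, 1] + C2[1, 1]
if den_q != 0 and den_p != 0:
    q = (C1[1, 1] - C1[0, 1]) / den_q       # P(column plays 0): row indifferent
    p = (C2[1, 1] - C2[1, 0]) / den_p       # P(row plays 0): column indifferent
    if 0 < p < 1 and 0 < q < 1:
        worst_nash = max(worst_nash,
                         (np.outer([p, 1 - p], [q, 1 - q]) * social).sum())

# Worst correlated equilibrium: an LP over distributions x[a, b] that
# maximizes expected social cost subject to the CE incentive constraints.
A_ub, b_ub = [], []
for a in range(2):                          # row player: following a beats 1-a
    row = np.zeros(4)
    for b in range(2):
        row[2 * a + b] = C1[a, b] - C1[1 - a, b]
    A_ub.append(row); b_ub.append(0.0)
for b in range(2):                          # column player
    row = np.zeros(4)
    for a in range(2):
        row[2 * a + b] = C2[a, b] - C2[a, 1 - b]
    A_ub.append(row); b_ub.append(0.0)
res = linprog(-social.flatten(),            # maximize => minimize negation
              A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=np.ones((1, 4)), b_eq=[1.0], bounds=[(0, 1)] * 4)

print("PoM =", -res.fun / worst_nash)       # 7.2 / (60/9) ~ 1.08 here, <= 2
```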
Symposium on Discrete Algorithms | 2017
Yossi Azar; Ilan Cohen; Alan Roytman
In this paper, we exploit linear programming duality in the online setting (i.e., where input arrives on the fly) from the unique perspective of designing lower bounds on the competitive ratio. In particular, we provide a general technique for obtaining online deterministic and randomized lower bounds (i.e., hardness results) on the competitive ratio for a wide variety of problems. We show the usefulness of our approach by providing new, tight lower bounds for three diverse online problems: the Vector Bin Packing problem, Ad-auctions (and various online matching problems), and the Capital Investment problem. Our methods are sufficiently general that they can also be used to reconstruct existing lower bounds. Our techniques are in stark contrast to previous works, which exploit linear programming duality to obtain positive results, often via the useful primal-dual scheme. We design a general recipe with the opposite aim of obtaining negative results via duality. The general idea behind our approach is to construct a primal linear program based on a collection of input sequences, where the objective function corresponds to optimizing the competitive ratio. We then obtain the corresponding dual linear program and provide a feasible solution, where the objective function yields a lower bound on the competitive ratio. Online lower bounds are often achieved by adapting the input sequence according to an online algorithm's behavior and doing an appropriate ad hoc case analysis. Using our unifying techniques, we simultaneously combine these cases into one linear program and achieve online lower bounds via a more robust analysis. We are confident that our framework can be successfully applied to produce many more lower bounds for a wide array of online problems.
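The recipe can be seen in miniature on the classic ski rental problem (chosen here for brevity; it is not one of the problems treated in the paper). The primal LP below has one variable per possible buying day plus the competitive ratio r, one constraint per adversary stopping day, and minimizes r; its optimum, which by duality equals that of a feasible dual solution, lower-bounds the competitive ratio of any randomized algorithm against this family of input sequences.

```python
import numpy as np
from scipy.optimize import linprog

# The duality recipe in miniature, applied to classic ski rental.
# Renting costs 1 per day, buying costs B; the adversary's input
# sequences are the possible stopping days T in {1, ..., N}.  A
# randomized algorithm is a distribution p[k] over "buy at the start of
# day k" (k = N + 1 means never buy).  The LP minimizes the competitive
# ratio r over all such distributions.
B, N = 10, 40
n_var = N + 2                               # p[1..N+1], then r

A_ub, b_ub = [], []
for T in range(1, N + 1):
    opt = min(T, B)                         # offline optimum for stopping day T
    row = np.zeros(n_var)
    for k in range(1, N + 2):
        # algorithm's cost if it buys on day k and the adversary stops at T
        row[k - 1] = (k - 1) + B if k <= T else T
    row[-1] = -opt                          # E[cost] - r * OPT(T) <= 0
    A_ub.append(row); b_ub.append(0.0)

A_eq = np.zeros((1, n_var)); A_eq[0, :-1] = 1.0   # p sums to 1
c = np.zeros(n_var); c[-1] = 1.0                  # minimize r
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * n_var)

# ~1.54 for B = 10; approaches e/(e-1) ~ 1.582 as B and N grow.
print("lower bound on competitive ratio:", res.fun)
```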
Economics and Computation | 2017
Michal Feldman; Amos Fiat; Alan Roytman
We consider job scheduling settings, with multiple machines, where jobs arrive online and choose a machine selfishly so as to minimize their cost. Our objective is the classic makespan minimization objective, which corresponds to the completion time of the last job to complete. The incentives of the selfish jobs may lead to poor performance. To reconcile the differing objectives, we introduce posted machine prices. The selfish job seeks to minimize the sum of its completion time on the machine and the posted price for the machine. Prices may be static (i.e., set once and for all before any arrival) or dynamic (i.e., change over time), but they are determined only by the past, assuming nothing about upcoming events. Obviously, such schemes are inherently truthful. We consider the competitive ratio: the ratio between the makespan achievable by the pricing scheme and that of the optimal algorithm. We give tight bounds on the competitive ratio for both dynamic and static pricing schemes for identical, restricted, related, and unrelated machine settings. Our main result is a dynamic pricing scheme for related machines that gives a constant competitive ratio, essentially matching the competitive ratio of online algorithms for this setting. In contrast, dynamic pricing gives poor performance for unrelated machines. This lower bound also exhibits a gap between what can be achieved by pricing versus what can be achieved by online algorithms.
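A minimal simulation of the mechanics looks as follows; the static pricing rule here is an arbitrary illustrative choice, not the paper's carefully constructed dynamic scheme. Each arriving job selfishly picks the machine minimizing its own completion time plus the posted price.

```python
import numpy as np

# Posted-price mechanics on related machines (machine i has speed s[i]).
# The static price vector below is an arbitrary illustrative choice.
rng = np.random.default_rng(2)
s = np.array([4.0, 2.0, 1.0])               # machine speeds
finish = np.zeros(len(s))                   # completion time per machine

def selfish_arrival(size, prices):
    """An arriving job selfishly picks the machine minimizing its own
    completion time plus the posted price of that machine."""
    i = int(np.argmin(finish + size / s + prices))
    finish[i] += size / s[i]

jobs = rng.uniform(0.5, 2.0, size=12)
for size in jobs:
    selfish_arrival(size, prices=1.0 / s)           # priced run
priced = finish.max()

finish[:] = 0.0
for size in jobs:
    selfish_arrival(size, prices=np.zeros(len(s)))  # unpriced run

print("makespan with prices   :", priced)
print("makespan without prices:", finish.max())
```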
Conference on Current Trends in Theory and Practice of Informatics | 2014
Ran Gelles; Rafail Ostrovsky; Alan Roytman
We consider the task of transmitting a data stream in the sliding window model, where communication takes place over an adversarial noisy channel with noise rate up to 1. For any noise level c < 1, we design a coding scheme that correctly decodes at least a (1 − c − ε)-prefix of the current window, for any constant ε > 0. Decoding more than a (1 − c)-prefix of the window is shown to be impossible in the worst case, which makes our scheme optimal in this sense. Our scheme runs in polylogarithmic time per element in the size of the window, causes constant communication overhead, and succeeds with overwhelming probability.
Algorithmic Game Theory | 2017
Yossi Azar; Michal Feldman; Nick Gravin; Alan Roytman
Incorporating budget constraints into the analysis of auctions has become increasingly important, as they model practical settings more accurately. The social welfare function, which is the standard measure of efficiency in auctions, is inadequate for settings with budgets, since there may be a large disconnect between the value a bidder derives from obtaining an item and what can be liquidated from her. The Liquid Welfare objective function has been suggested as a natural alternative for settings with budgets. Simple auctions, like simultaneous item auctions, are evaluated by their performance at equilibrium using the Price of Anarchy (PoA) measure -- the ratio of the objective function value of the optimal outcome to the worst equilibrium. Accordingly, we evaluate the performance of simultaneous item auctions in budgeted settings by the Liquid Price of Anarchy (LPoA) measure -- the ratio of the optimal Liquid Welfare to the Liquid Welfare obtained in the worst equilibrium. Our main result is that the LPoA for mixed Nash equilibria is bounded by a constant when bidders are additive and items can be divided into sufficiently many discrete parts. Our proofs are robust, and can be extended to achieve similar bounds for simultaneous second price auctions as well as Bayesian Nash equilibria. For pure Nash equilibria, we establish tight bounds on the LPoA for the larger class of fractionally-subadditive valuations. To derive our results, we develop a new technique in which some bidders deviate (surprisingly) toward a non-optimal solution. In particular, this technique does not fit into the smoothness framework.
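The Liquid Welfare objective itself is easy to state in code. The toy instance below (illustrative numbers only) shows why it differs from social welfare: the allocation with the highest total value contributes little liquid welfare once the winning bidder's budget caps what can be liquidated from her.

```python
import numpy as np

# Liquid welfare in miniature for additive bidders: each bidder's
# contribution is her value for her allocation, capped at her budget.
# The instance below is illustrative only.
values = np.array([[6.0, 1.0],               # bidder i's value per item j
                   [5.0, 4.0]])
budgets = np.array([3.0, 100.0])

def liquid_welfare(alloc):
    """alloc[i, j] = fraction of item j given to bidder i."""
    derived = (values * alloc).sum(axis=1)    # additive valuations
    return np.minimum(derived, budgets).sum()

# Giving bidder 0 both items maximizes value (7 vs. 9 for bidder 1 is
# false here, but her budget caps her contribution at 3 regardless).
print(liquid_welfare(np.array([[1.0, 1.0], [0.0, 0.0]])))  # min(7, 3)  = 3
print(liquid_welfare(np.array([[0.0, 0.0], [1.0, 1.0]])))  # min(9,100) = 9
```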
Symposium on the Theory of Computing | 2018
Mikkel Abrahamsen; Anna Adamaszek; Karl Bringmann; Vincent Cohen-Addad; Mehran Mehr; Eva Rotenberg; Alan Roytman; Mikkel Thorup
We consider very natural "fence enclosure" problems studied by Capoyleas, Rote, and Woeginger and by Arkin, Khuller, and Mitchell in the early 90s. Given a set S of n points in the plane, we aim at finding a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose n unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most k closed curves and pay no cost per curve. For the variant with at most k closed curves, we present an algorithm that is polynomial in both n and k. For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most k curves in n^{O(k)} time. Arkin, Khuller, and Mitchell used this to solve the unit cost per curve version in exponential time. At the time, they conjectured that the problem with k curves is NP-hard for general k. Our polynomial time algorithm refutes this conjecture, unless P equals NP.
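The objective of the k-curve variant can be checked by brute force on tiny instances (exponential in n, so nothing like the paper's polynomial algorithm): enclosing a group of points optimally costs the perimeter of its convex hull, and we minimize the total over all assignments of points to at most k groups.

```python
import itertools
import math

# Brute-force illustration of the fence enclosure objective: enclose all
# points with at most k closed curves, where the optimal curve for a
# group of points is its convex hull (a two-point group is a degenerate
# loop of length twice the distance).
def hull_perimeter(pts):
    pts = sorted(set(pts))
    if len(pts) <= 1:
        return 0.0
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def half(points):                      # one side of the monotone chain
        h = []
        for p in points:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    hull = half(pts)[:-1] + half(pts[::-1])[:-1]
    return sum(math.dist(hull[i], hull[(i + 1) % len(hull)])
               for i in range(len(hull)))

def min_fence(points, k):
    best = math.inf
    for labels in itertools.product(range(k), repeat=len(points)):
        groups = [[p for p, l in zip(points, labels) if l == g]
                  for g in range(k)]
        best = min(best, sum(hull_perimeter(g) for g in groups if g))
    return best

pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
for k in (1, 2):
    print(k, "curves:", round(min_fence(pts, k), 3))
```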
International Workshop on Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM) | 2016
Vladimir Braverman; Alan Roytman; Gregory Vorsanger
An important challenge in the streaming model is to maintain small-space approximations of entrywise functions performed on a matrix that is generated by the outer product of two vectors given as a stream. In other works, streams typically define matrices in a standard way via a sequence of updates, as in the work of Woodruff (2014) and others. We describe the matrix formed by the outer product, and other matrices that do not fall into this category, as implicit matrices. As such, we consider the general problem of computing over such implicit matrices with Hadamard functions, which are functions applied entrywise on a matrix. In this paper, we apply this generalization to provide new techniques for identifying independence between two vectors in the streaming model. The previous state-of-the-art algorithm of Braverman and Ostrovsky (2010) gave a …
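For intuition, the quantity such streaming algorithms approximate can be computed exactly at small scale: a stream of pairs defines an empirical joint distribution, the outer product of its marginals is the implicit matrix, and an entrywise (Hadamard) function of their difference, here the L1 distance, measures dependence. The brute force below materializes the matrix; avoiding exactly that is the point of the streaming work.

```python
import numpy as np

# Exact, small-scale version of the independence measure: the L1
# distance between the empirical joint distribution of a stream of
# pairs and the outer product of its marginals (the implicit matrix).
# The distance is zero iff the two coordinates are independent.
rng = np.random.default_rng(3)
n, dom = 10_000, 8
x = rng.integers(0, dom, size=n)
y = (x + rng.integers(0, 2, size=n)) % dom   # deliberately correlated with x

joint = np.zeros((dom, dom))
np.add.at(joint, (x, y), 1.0 / n)            # empirical joint distribution
outer = joint.sum(axis=1)[:, None] * joint.sum(axis=0)[None, :]

# Entrywise (Hadamard) function applied to the implicit difference matrix.
print("L1 distance from independence:", np.abs(joint - outer).sum())
```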
Symposium on Discrete Algorithms | 2011
Vladimir Braverman; Adam Meyerson; Rafail Ostrovsky; Alan Roytman; Michael Shindler; Brian Tagiku