Publication


Featured research published by Tracy Kimbrel.


Journal of the ACM | 2007

Speed scaling to manage energy and temperature

Nikhil Bansal; Tracy Kimbrel; Kirk Pruhs

Speed scaling is a power management technique that involves dynamically changing the speed of a processor. We study policies for setting the speed of the processor for both of the goals of minimizing the energy used and the maximum temperature attained. The theoretical study of speed scaling policies to manage energy was initiated in a seminal paper by Yao et al. [1995], and we adopt their setting. We assume that the power required to run at speed s is P(s) = s^α for some constant α > 1. We assume a collection of tasks, each with a release time, a deadline, and an arbitrary amount of work that must be done between the release time and the deadline. Yao et al. [1995] gave an offline greedy algorithm YDS to compute the minimum energy schedule. They further proposed two online algorithms, Average Rate (AVR) and Optimal Available (OA), and showed that AVR is 2^(α−1) α^α-competitive with respect to energy. We provide a tight α^α bound on the competitive ratio of OA with respect to energy. We initiate the study of speed scaling to manage temperature. We assume that the environment has a fixed ambient temperature and that the device cools according to Newton's law of cooling. We observe that the maximum temperature can be approximated within a factor of two by the maximum energy used over any interval of length 1/b, where b is the cooling parameter of the device. We define a speed scaling policy to be cooling-oblivious if it is simultaneously constant-competitive with respect to temperature for all cooling parameters. We then observe that cooling-oblivious algorithms are also constant-competitive with respect to energy, maximum speed and maximum power. We show that YDS is a cooling-oblivious algorithm. In contrast, we show that the online algorithms OA and AVR are not cooling-oblivious. We then propose a new online algorithm that we call BKP. We show that BKP is cooling-oblivious. We further show that BKP is e-competitive with respect to the maximum speed, and that no deterministic online algorithm can have a better competitive ratio. BKP also has a lower competitive ratio for energy than OA for α ≥ 5. Finally, we show that the optimal temperature schedule can be computed offline in polynomial time using the Ellipsoid algorithm.
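
To make the AVR policy concrete, here is a minimal Python sketch, assuming jobs are given as (release, deadline, work) triples. The speed rule and the power function P(s) = s^α are from the abstract; the numerical integration step and all names are our own illustrative choices.

```python
def avr_speed(jobs, t):
    """AVR: each job active at time t contributes its density w / (d - r)."""
    return sum(w / (d - r) for (r, d, w) in jobs if r <= t < d)

def avr_energy(jobs, alpha=3.0, dt=0.001):
    """Energy used by AVR, integrating P(s) = s**alpha numerically."""
    horizon = max(d for (_, d, _) in jobs)
    steps = int(horizon / dt)
    return sum(avr_speed(jobs, i * dt) ** alpha * dt for i in range(steps))

# Two unit-work jobs with overlapping windows: AVR sums their densities.
jobs = [(0.0, 2.0, 1.0), (1.0, 3.0, 1.0)]
print(avr_speed(jobs, 1.5))   # 0.5 + 0.5 = 1.0
print(avr_energy(jobs))
```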


International World Wide Web Conference | 2006

Dynamic placement for clustered web applications

Alexei Karve; Tracy Kimbrel; Giovanni Pacifici; Mike Spreitzer; Malgorzata Steinder; Maxim Sviridenko; Asser N. Tantawi

We introduce and evaluate a middleware clustering technology capable of allocating resources to web applications through dynamic application instance placement. We define application instance placement as the problem of placing application instances on a given set of server machines to adjust the amount of resources available to applications in response to varying resource demands of application clusters. The objective is to maximize the amount of demand that may be satisfied using a configured placement. To limit the disturbance to the system caused by starting and stopping application instances, the placement algorithm attempts to minimize the number of placement changes. It also strives to keep resource utilization balanced across all server machines. Two types of resources are managed, one load-dependent and one load-independent. When putting the chosen placement into effect, our controller schedules placement changes in a manner that limits the disruption to the system.
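
A naive greedy sketch in Python makes the two-resource setting concrete: each app declares a load-independent memory footprint per instance and a load-dependent CPU demand, and instances go to the server with the most free CPU. This is an illustration of the problem only, not the paper's placement algorithm, and all names and data shapes are assumptions.

```python
def greedy_place(apps, servers):
    """apps: {app: (mem_per_instance, cpu_demand)}   # load-independent, load-dependent
    servers: {srv: (mem_capacity, cpu_capacity)}"""
    free_mem = {s: m for s, (m, _) in servers.items()}
    free_cpu = {s: c for s, (_, c) in servers.items()}
    unmet = {a: d for a, (_, d) in apps.items()}
    placement = []                       # (app, server, cpu allocated to instance)
    while True:
        app = max(unmet, key=unmet.get)  # app with the most unsatisfied demand
        if unmet[app] <= 0:
            break
        mem = apps[app][0]
        fits = [s for s in servers if free_mem[s] >= mem and free_cpu[s] > 0]
        if not fits:
            break                        # remaining demand cannot be satisfied
        srv = max(fits, key=free_cpu.get)
        alloc = min(free_cpu[srv], unmet[app])
        placement.append((app, srv, alloc))
        free_mem[srv] -= mem
        free_cpu[srv] -= alloc
        unmet[app] -= alloc
    return placement

apps = {"shop": (512, 900), "wiki": (256, 300)}   # MB, CPU units (illustrative)
servers = {"s1": (1024, 600), "s2": (1024, 800)}
print(greedy_place(apps, servers))  # shop split across s2 and s1, wiki on s1
```

Unlike a real controller, this sketch may place several instances of one app on the same server and ignores the cost of placement changes.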


Foundations of Computer Science | 2004

Dynamic speed scaling to manage energy and temperature

Nikhil Bansal; Tracy Kimbrel; Kirk Pruhs

We first consider online speed scaling algorithms to minimize the energy used subject to the constraint that every job finishes by its deadline. We assume that the power required to run at speed s is P(s) = s^α. We provide a tight α^α bound on the competitive ratio of the previously proposed Optimal Available algorithm. This improves the best known competitive ratio by a factor of 2^α. We then introduce an online algorithm, and show that this algorithm's competitive ratio is at most 2(α/(α − 1))^α e^α. This competitive ratio is significantly better and is approximately 2e^(α+1) for large α. Our result is essentially tight for large α. In particular, as α approaches infinity, we show that any algorithm must have competitive ratio e^α (up to lower order terms). We then turn to the problem of dynamic speed scaling to minimize the maximum temperature that the device ever reaches, again subject to the constraint that all jobs finish by their deadlines. We assume that the device cools according to Fourier's law. We show how to solve this problem in polynomial time, within any error bound, using the ellipsoid algorithm.
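
For comparison with the AVR sketch above, the Optimal Available speed rule admits an equally short Python sketch; the representation of the remaining work is an assumption for illustration.

```python
def oa_speed(remaining, t):
    """Optimal Available: run at the speed an optimal schedule of exactly the
    remaining work would use now, i.e. the maximum density of any deadline
    prefix: max over deadlines d of (work due by d) / (d - t)."""
    best, work = 0.0, 0.0
    for d, w in sorted(remaining):   # remaining: (deadline, unfinished work), d > t
        work += w
        best = max(best, work / (d - t))
    return best

# One unit of work due at t=2 and another due at t=4, viewed from t=0:
print(oa_speed([(2.0, 1.0), (4.0, 1.0)], 0.0))  # max(1/2, 2/4) = 0.5
```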


Lecture Notes in Computer Science | 2005

Dynamic application placement under service and memory constraints

Tracy Kimbrel; Malgorzata Steinder; Maxim Sviridenko; Asser N. Tantawi

In this paper we consider an optimization problem which models the dynamic placement of applications on servers under two simultaneous resource requirements: one that is dependent on the loads placed on the applications and one that is independent. The demand (load) for applications changes over time, and the goal is to satisfy all the demand while changing the solution (the assignment of applications to servers) as little as possible. We describe the system environment where this problem arises, present a heuristic algorithm to solve it, and provide an experimental analysis comparing the algorithm to previously known algorithms. The experiments indicate that the new algorithm performs much better. Our algorithm is currently deployed in the IBM flagship product WebSphere.
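
The "changing the solution as little as possible" objective can be made precise by counting instance starts and stops between successive placements; the helper below is our illustration, not the paper's notation.

```python
def placement_changes(old, new):
    """Starts plus stops needed to move between two placements,
    each given as a set of (app, server) pairs."""
    return len(new - old) + len(old - new)

old = {("A", "s1"), ("B", "s1"), ("B", "s2")}
new = {("A", "s1"), ("B", "s2"), ("C", "s2")}
print(placement_changes(old, new))  # stop (B, s1), start (C, s2) -> 2
```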


SIAM Journal on Computing | 2000

Near-Optimal Parallel Prefetching and Caching

Tracy Kimbrel; Anna R. Karlin

Recently there has been a great deal of interest in the operating systems research community in prefetching and caching data from parallel disks, as a technique for enabling serial applications to improve input/output (I/O) performance. In this paper, algorithms are considered for integrated prefetching and caching in a model with a fixed-size cache and any number of backing storage devices (disks). The integration of caching and prefetching with a single disk was previously considered by Cao, Felten, Karlin, and Li. Here, it is shown that the natural extension of their aggressive algorithm to the parallel disk case is suboptimal by a factor near the number of disks in the worst case. The main result is a new algorithm, reverse aggressive, with near-optimal performance for integrated prefetching and caching in the presence of multiple disks.
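
The single-disk model can be sketched with a toy simulator implementing a simplified version of the aggressive rule of Cao, Felten, Karlin, and Li: prefetch the next missing block as early as possible, evicting the cached block referenced furthest in the future, unless that victim is needed sooner than the fetched block. This illustrates the model only; it is not the reverse aggressive algorithm, and the event timing is our simplification.

```python
def next_use(refs, i, block):
    """Index of the next reference to `block` at or after position i."""
    for j in range(i, len(refs)):
        if refs[j] == block:
            return j
    return float("inf")

def simulate_aggressive(refs, cache_size, fetch_time):
    cache, fetch = set(), None          # fetch = (block, completion time)
    time = i = stalls = 0
    while i < len(refs):
        if fetch and time >= fetch[1]:  # an in-flight fetch completes
            cache.add(fetch[0])
            fetch = None
        if fetch is None:               # disk idle: try to start the next prefetch
            missing = next((b for b in refs[i:] if b not in cache), None)
            if missing is not None:
                if len(cache) < cache_size:
                    fetch = (missing, time + fetch_time)
                else:
                    victim = max(cache, key=lambda b: next_use(refs, i, b))
                    if next_use(refs, i, victim) > next_use(refs, i, missing):
                        cache.remove(victim)   # "do no harm" eviction
                        fetch = (missing, time + fetch_time)
        if refs[i] in cache:            # serve one reference per time step
            i += 1
        else:
            stalls += 1                 # stall waiting for the disk
        time += 1
    return stalls

print(simulate_aggressive(list("abcabc"), cache_size=2, fetch_time=2))  # 7 stall steps
```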


International Colloquium on Automata, Languages and Programming | 2004

Further Improvements in Competitive Guarantees for QoS Buffering

Nikhil Bansal; Lisa Fleischer; Tracy Kimbrel; Mohammad Mahdian; Baruch Schieber; Maxim Sviridenko

We study the behavior of algorithms for buffering packets weighted by different levels of Quality of Service (QoS) guarantees in a single queue. Buffer space is limited, and packet loss occurs when the buffer overflows. We describe a modification of the previously proposed “preemptive greedy” algorithm for buffer management and give an analysis showing that this algorithm achieves a competitive ratio of at most 1.75. This improves upon recent work showing a 1.98 competitive ratio, and upon a previous result showing that a simple greedy algorithm has a competitive ratio of 2.
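
For intuition, here is the simple greedy policy with competitive ratio 2 mentioned above, in a Python sketch. The event format is our assumption; the paper's 1.75-competitive modification of preemptive greedy is more involved than this.

```python
from collections import deque

def greedy_fifo(events, B):
    """Simple greedy with preemption: accept if there is room; if the buffer
    (capacity B, FIFO order) is full, drop the cheapest packet, whether
    buffered or arriving. Returns total value transmitted."""
    buf, sent = deque(), 0.0
    for ev in events:
        if ev == "send":
            if buf:
                sent += buf.popleft()
        else:                        # ev = ("arrive", value)
            v = ev[1]
            if len(buf) < B:
                buf.append(v)
            elif v > min(buf):       # preempt the cheapest buffered packet
                buf.remove(min(buf))
                buf.append(v)
    return sent + sum(buf)           # packets still queued are eventually sent

events = [("arrive", 1.0), ("arrive", 5.0), ("arrive", 3.0), "send", "send"]
print(greedy_fifo(events, B=2))      # keeps values 5 and 3, drops 1: total 8.0
```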


Symposium on Discrete Algorithms | 2004

Minimizing migrations in fair multiprocessor scheduling of persistent tasks

Tracy Kimbrel; Baruch Schieber; Maxim Sviridenko

Suppose that we are given n persistent tasks (jobs) that need to be executed in an equitable way on m processors (machines). Each machine is capable of performing one unit of work in each integral time unit and each job may be executed on at most one machine at a time. The schedule needs to specify which job is to be executed on each machine in each time window. The goal is to find a schedule that minimizes job migrations between machines while guaranteeing a fair schedule. We measure the fairness by the drift d, defined as the maximum difference between the execution times accumulated by any two jobs. As jobs are persistent, we measure the quality of the schedule by the ratio of the number of migrations to time windows. We show a tradeoff between the drift and the number of migrations. Let n = qm + r with 0 < r < m (the problem is trivial for n ≤ m and for r = 0). For any d ≥ 1, we show a schedule that achieves a migration ratio less than r(m − r)/(n(q(d − 1) + 1)) + ε for any ε > 0; namely, it asymptotically requires r(m − r) job migrations every n(q(d − 1) + 1) time windows. We show how to implement the schedule efficiently. We prove that our algorithm is almost optimal by proving a lower bound of r(m − r)/(nqd) on the migration ratio. We also give a more complicated schedule that matches the lower bound for a special case when 2q ≤ d and m = 2r. Our algorithms can be extended to the dynamic case in which jobs enter and leave the system over time.
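
The drift/migration trade-off can be explored with a naive baseline scheduler, sketched below: each window run the m least-served jobs, keep a job on its previous machine when that machine is free, and count migrations and drift. This is an illustrative baseline of our own, not the paper's near-optimal construction.

```python
def naive_fair_schedule(n, m, T):
    """Run the m jobs with least accumulated work each window; reuse a job's
    previous machine when possible, otherwise migrate. Returns the observed
    migration ratio and drift."""
    work = [0] * n
    machine_of = [None] * n
    migrations = 0
    for _ in range(T):
        chosen = sorted(range(n), key=lambda j: (work[j], j))[:m]
        free = set(range(m))
        displaced = []
        for j in chosen:                 # keep jobs in place where possible
            if machine_of[j] in free:
                free.remove(machine_of[j])
            else:
                displaced.append(j)
        for j in displaced:              # everyone else moves to a free machine
            if machine_of[j] is not None:
                migrations += 1
            machine_of[j] = free.pop()
        for j in chosen:
            work[j] += 1
    drift = max(work) - min(work)
    return migrations / T, drift

print(naive_fair_schedule(n=5, m=3, T=300))
```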


International Workshop on Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques (APPROX/RANDOM) | 2009

On Hardness of Pricing Items for Single-Minded Bidders

Rohit Khandekar; Tracy Kimbrel; Konstantin Makarychev; Maxim Sviridenko

We consider the following item pricing problem, which has received much attention recently. A seller has an infinite number of copies of n items. There are m buyers, each with a budget and an intention to buy a fixed subset of items. Given prices on the items, each buyer buys his subset of items, at the given prices, provided the total price of the subset is at most his budget. The objective of the seller is to determine the prices such that her total profit is maximized. In this paper, we focus on the case where the buyers are interested in subsets of size at most two. This special case is known to be APX-hard (Guruswami et al. [1]). The best known approximation algorithm, by Balcan and Blum, gives a 4-approximation [2]. We show that there is indeed a gap of 4 for the combinatorial upper bound used in their analysis. We further show that a natural linear programming relaxation of this problem has an integrality gap of 4, even in this special case. Then we prove that the problem is NP-hard to approximate within a factor of 2 assuming the Unique Games Conjecture, and it is unconditionally NP-hard to approximate within a factor of 17/16. Finally, we extend the APX-hardness of the problem to the special case in which the graph formed by items as vertices and buyers as edges is bipartite. We hope that our techniques will be helpful for obtaining stronger hardness of approximation bounds for this problem.
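
The 4-approximation of Balcan and Blum referenced above rests on a simple random-partition idea, sketched here in Python under an assumed edge format (item_u, item_v, budget): give one side of a random split price 0, then price each item on the other side independently against the budgets of its incident buyers.

```python
import random

def random_partition_pricing(edges):
    """Sketch of the random-partition idea behind the known 4-approximation."""
    items = {x for u, v, _ in edges for x in (u, v)}
    side = {i for i in items if random.random() < 0.5}  # items getting real prices
    prices = dict.fromkeys(items, 0.0)
    for i in side:
        # budgets of buyers whose other endpoint is priced 0
        budgets = [b for u, v, b in edges
                   if (u == i and v not in side) or (v == i and u not in side)]
        # the best single price is always some buyer's budget
        prices[i] = max(budgets, default=0.0,
                        key=lambda p: p * sum(1 for b in budgets if b >= p))
    return prices

# Three buyers (edges) with budgets on a 3-item instance:
print(random_partition_pricing([("x", "y", 4.0), ("y", "z", 6.0), ("x", "z", 3.0)]))
```

The analysis only credits buyers whose two endpoints fall on opposite sides of the split, which is where the factor of 4 comes from in expectation.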


Mathematics of Operations Research | 2006

Job Shop Scheduling with Unit Processing Times

Nikhil Bansal; Tracy Kimbrel; Maxim Sviridenko

We consider randomized algorithms for the preemptive job shop problem, or equivalently, the case in which all operations have unit length. We give an α-approximation with α < 1.45 for the case of two machines, an improved approximation ratio of O(log m / log log m) for an arbitrary number m of machines, and the first (2 + ε)-approximation for a constant number of machines. The first result is via an approximation algorithm for a string matching problem, which is of independent interest.


Computing | 2001

Tighter Bounds on Preemptive Job Shop Scheduling with Two Machines

Eric J. Anderson; T. S. Jayram; Tracy Kimbrel

We consider the preemptive job shop scheduling problem with two machines, with the objective of minimizing the makespan. We present an algorithm that finds a schedule of length at most P_max/2 greater than the optimal schedule length, where P_max is the length of the longest job.
