Publication


Featured research published by Nikhil Bansal.


Machine Learning | 2004

Correlation Clustering

Nikhil Bansal; Avrim Blum; Shuchi Chawla

We consider the following clustering problem: we have a complete graph on n vertices (items), where each edge (u, v) is labeled either + or − depending on whether u and v have been deemed to be similar or different. The goal is to produce a partition of the vertices (a clustering) that agrees as much as possible with the edge labels. That is, we want a clustering that maximizes the number of + edges within clusters, plus the number of − edges between clusters (equivalently, minimizes the number of disagreements: the number of − edges inside clusters plus the number of + edges between clusters). This formulation is motivated by a document clustering problem in which one has a pairwise similarity function f learned from past data, and the goal is to partition the current set of documents in a way that correlates with f as much as possible; it can also be viewed as a kind of “agnostic learning” problem. An interesting feature of this clustering formulation is that one does not need to specify the number of clusters k as a separate parameter, as in measures such as k-median or min-sum or min-max clustering. Instead, in our formulation, the optimal number of clusters could be any value between 1 and n, depending on the edge labels. We look at approximation algorithms both for minimizing disagreements and for maximizing agreements. For minimizing disagreements, we give a constant factor approximation. For maximizing agreements we give a PTAS, building on ideas of Goldreich, Goldwasser, and Ron (1998) and de la Vega (1996). We also show how to extend some of these results to graphs with edge labels in [−1, +1], and give some results for the case of random noise.
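As a concrete illustration of the objective (not of the approximation algorithm), here is a minimal Python sketch, with illustrative names and data layout, that counts the disagreements of a given clustering with the edge labels:

```python
def disagreements(labels, clustering):
    """Count disagreements of a clustering with the +/- edge labels.

    labels: dict mapping an edge (u, v) to '+' or '-'
    clustering: dict mapping each vertex to a cluster id
    A '-' edge inside a cluster, or a '+' edge between clusters,
    counts as a disagreement.
    """
    bad = 0
    for (u, v), sign in labels.items():
        same = clustering[u] == clustering[v]
        if (sign == '-' and same) or (sign == '+' and not same):
            bad += 1
    return bad

# A +/+/- triangle: no clustering can satisfy all three labels.
labels = {('a', 'b'): '+', ('b', 'c'): '+', ('a', 'c'): '-'}
print(disagreements(labels, {'a': 0, 'b': 0, 'c': 0}))  # the '-' edge disagrees
print(disagreements(labels, {'a': 0, 'b': 0, 'c': 1}))  # the (b, c) '+' edge disagrees
```

The triangle shows why the optimum can be nonzero: with inconsistent labels, every partition (here, of any number of clusters between 1 and 3) incurs at least one disagreement.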


Journal of the ACM | 2007

Speed scaling to manage energy and temperature

Nikhil Bansal; Tracy Kimbrel; Kirk Pruhs

Speed scaling is a power management technique that involves dynamically changing the speed of a processor. We study policies for setting the speed of the processor for both of the goals of minimizing the energy used and the maximum temperature attained. The theoretical study of speed scaling policies to manage energy was initiated in a seminal paper by Yao et al. [1995], and we adopt their setting. We assume that the power required to run at speed s is P(s) = s^α for some constant α > 1. We assume a collection of tasks, each with a release time, a deadline, and an arbitrary amount of work that must be done between the release time and the deadline. Yao et al. [1995] gave an offline greedy algorithm YDS to compute the minimum energy schedule. They further proposed two online algorithms, Average Rate (AVR) and Optimal Available (OA), and showed that AVR is 2^(α−1) α^α-competitive with respect to energy. We provide a tight α^α bound on the competitive ratio of OA with respect to energy. We initiate the study of speed scaling to manage temperature. We assume that the environment has a fixed ambient temperature and that the device cools according to Newton's law of cooling. We observe that the maximum temperature can be approximated within a factor of two by the maximum energy used over any interval of length 1/b, where b is the cooling parameter of the device. We define a speed scaling policy to be cooling-oblivious if it is simultaneously constant-competitive with respect to temperature for all cooling parameters. We then observe that cooling-oblivious algorithms are also constant-competitive with respect to energy, maximum speed and maximum power. We show that YDS is a cooling-oblivious algorithm. In contrast, we show that the online algorithms OA and AVR are not cooling-oblivious. We then propose a new online algorithm that we call BKP, and show that BKP is cooling-oblivious.
We further show that BKP is e-competitive with respect to the maximum speed, and that no deterministic online algorithm can have a better competitive ratio. BKP also has a lower competitive ratio for energy than OA for α ≥ 5. Finally, we show that the optimal temperature schedule can be computed offline in polynomial time using the ellipsoid algorithm.
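The YDS greedy mentioned above can be paraphrased as: repeatedly find the densest interval (the total work that must be done inside it, divided by its length), run those jobs at exactly that speed, then remove them and collapse the interval. A rough Python sketch under that description, with jobs given as (release, deadline, work) triples; note that after the first round the reported intervals are in collapsed coordinates:

```python
def yds_speeds(jobs):
    """Offline YDS sketch. jobs: list of (release, deadline, work).

    Each round picks the interval [t1, t2] maximizing the density
    of jobs whose [release, deadline] window fits inside it, records
    that the processor runs at speed = density there, then removes
    those jobs and collapses the interval to a point.
    """
    jobs = list(jobs)
    schedule = []
    while jobs:
        times = sorted({t for r, d, _ in jobs for t in (r, d)})
        best = None
        for i, t1 in enumerate(times):
            for t2 in times[i + 1:]:
                work = sum(w for r, d, w in jobs if t1 <= r and d <= t2)
                dens = work / (t2 - t1)
                if best is None or dens > best[0]:
                    best = (dens, t1, t2)
        dens, t1, t2 = best
        schedule.append(((t1, t2), dens))
        rest = []
        for r, d, w in jobs:
            if t1 <= r and d <= t2:
                continue  # scheduled in this round
            shift = t2 - t1
            r2 = r - shift if r >= t2 else min(r, t1)
            d2 = d - shift if d >= t2 else min(d, t1)
            rest.append((r2, d2, w))
        jobs = rest
    return schedule
```

For example, a job of work 4 due at time 2 alongside a job of work 2 due at time 4 yields speeds 2 then 1: the tight early deadline forces the higher speed first.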


ACM Transactions on Computer Systems | 2003

Size-based scheduling to improve web performance

Mor Harchol-Balter; Bianca Schroeder; Nikhil Bansal; Mukesh Agrawal

Is it possible to reduce the expected response time of every request at a web server, simply by changing the order in which we schedule the requests? That is the question we ask in this paper. This paper proposes a method for improving the performance of web servers servicing static HTTP requests. The idea is to give preference to requests for small files, or requests with a short remaining file size, in accordance with the SRPT (Shortest Remaining Processing Time) scheduling policy. The implementation is at the kernel level and involves controlling the order in which socket buffers are drained into the network. Experiments are executed both in a LAN and a WAN environment, using the Linux operating system and the Apache and Flash web servers. Results indicate that SRPT-based scheduling of connections yields significant reductions in delay at the web server. These result in a substantial reduction in mean response time and mean slowdown for both the LAN and WAN environments. Significantly, and counter to intuition, the requests for large files are only negligibly penalized, or not penalized at all, as a result of SRPT-based scheduling.
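The SRPT policy itself is easy to state: always serve the request with the least remaining size. A small single-server, preemptive discrete-event sketch (illustrative names, not the paper's kernel implementation) that computes the mean response time:

```python
import heapq

def srpt_mean_response(requests):
    """Simulate preemptive SRPT on one server.

    requests: list of (arrival_time, size) pairs; unit service rate.
    Returns the mean response time (completion minus arrival).
    """
    reqs = sorted(requests)
    heap = []                # (remaining_size, arrival_time)
    t = 0.0
    total = 0.0
    i, n = 0, len(reqs)
    while i < n or heap:
        if not heap:         # idle: jump to the next arrival
            t = max(t, reqs[i][0])
        while i < n and reqs[i][0] <= t:
            heapq.heappush(heap, (reqs[i][1], reqs[i][0]))
            i += 1
        rem, arr = heapq.heappop(heap)
        # run the shortest remaining job until it finishes
        # or the next arrival preempts the decision
        horizon = reqs[i][0] if i < n else float('inf')
        run = min(rem, horizon - t)
        t += run
        rem -= run
        if rem > 1e-12:
            heapq.heappush(heap, (rem, arr))
        else:
            total += t - arr
    return total / n
```

On a size-3 request at time 0 and a size-1 request at time 1, SRPT preempts the large request, giving mean response time 2.5 versus 3.0 under first-come-first-served, while the large request still finishes at time 4 either way, which mirrors the paper's observation that large files are barely penalized.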


Foundations of Computer Science | 2004

Dynamic speed scaling to manage energy and temperature

Nikhil Bansal; Tracy Kimbrel; Kirk Pruhs

We first consider online speed scaling algorithms to minimize the energy used subject to the constraint that every job finishes by its deadline. We assume that the power required to run at speed s is P(s) = s^α. We provide a tight α^α bound on the competitive ratio of the previously proposed Optimal Available algorithm. This improves the best known competitive ratio by a factor of 2^α. We then introduce an online algorithm, and show that this algorithm's competitive ratio is at most 2(α/(α − 1))^α e^α. This competitive ratio is significantly better, and is approximately 2e^(α+1) for large α. Our result is essentially tight for large α. In particular, as α approaches infinity, we show that any algorithm must have competitive ratio e^α (up to lower order terms). We then turn to the problem of dynamic speed scaling to minimize the maximum temperature that the device ever reaches, again subject to the constraint that all jobs finish by their deadlines. We assume that the device cools according to Fourier's law. We show how to solve this problem in polynomial time, within any error bound, using the ellipsoid algorithm.
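To get a feel for these competitive ratios, one can evaluate them numerically; a quick sketch (function names are just labels for the two bounds above) for α = 3, the commonly assumed cube-root value, and for α = 10:

```python
from math import e

def oa_bound(alpha):
    """Tight competitive ratio of Optimal Available: alpha**alpha."""
    return alpha ** alpha

def new_bound(alpha):
    """The new algorithm's bound: 2 * (alpha/(alpha-1))**alpha * e**alpha."""
    return 2 * (alpha / (alpha - 1)) ** alpha * e ** alpha

for a in (3, 10):
    print(a, oa_bound(a), round(new_bound(a)))
# For large alpha, new_bound is roughly 2 * e**(alpha + 1),
# which grows far more slowly than alpha**alpha.
```

At α = 3 the two bounds are comparable (27 versus about 136), but already at α = 10 the new bound (about 1.3 × 10^5) is orders of magnitude below α^α = 10^10, which is the sense in which it is "significantly better" for large α.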


Symposium on the Theory of Computing | 2006

The Santa Claus problem

Nikhil Bansal; Maxim Sviridenko

We consider the following problem: Santa Claus has n presents that he wants to distribute among m kids. Each kid has an arbitrary value for each present. Let p<sub>ij</sub> be the value that kid i has for present j. Santa's goal is to distribute presents in such a way that the least lucky kid is as happy as possible, i.e., he tries to maximize min<sub>i=1,...,m</sub> sum<sub>j ∈ S<sub>i</sub></sub> p<sub>ij</sub>, where S<sub>i</sub> is the set of presents received by the i-th kid. Our main result is an O(log log m/log log log m) approximation algorithm for the restricted assignment case of the problem, when p<sub>ij</sub> ∈ {p<sub>j</sub>, 0} (i.e., when present j has either value p<sub>j</sub> or 0 for each kid). Our algorithm is based on rounding a certain natural, exponentially large linear programming relaxation usually referred to as the configuration LP. We also show that the configuration LP has an integrality gap of Ω(m<sup>1/2</sup>) in the general case, when the p<sub>ij</sub> can be arbitrary.
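The max-min objective is easy to state exactly on tiny instances; a brute-force sketch (exponential in the number of presents, purely illustrative and unrelated to the LP rounding above):

```python
from itertools import product

def max_min_allocation(p):
    """Brute-force the Santa Claus objective on a tiny instance.

    p[i][j] is kid i's value for present j. Every assignment of
    presents to kids is tried; returns the best achievable minimum
    happiness. Exponential in the number of presents.
    """
    m, n = len(p), len(p[0])
    best = 0
    for assign in product(range(m), repeat=n):
        happiness = [0] * m
        for j, i in enumerate(assign):
            happiness[i] += p[i][j]
        best = max(best, min(happiness))
    return best
```

With two kids who both value present 0 at 5 and present 1 at 3, giving each kid one present achieves a minimum happiness of 3; giving both presents to one kid leaves the other at 0, so the max-min value is 3.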


Symposium on Discrete Algorithms | 2009

Speed scaling with an arbitrary power function

Nikhil Bansal; Ho-Leung Chan; Kirk Pruhs

All of the theoretical speed scaling research to date has assumed that the power function, which expresses the power consumption P as a function of the processor speed s, is of the form P = s^α, where α > 1 is some constant. Motivated in part by technological advances, we initiate a study of speed scaling with arbitrary power functions. We consider the problem of minimizing the total flow time plus energy. Our main result is a (3 + ε)-competitive algorithm for this problem that holds for essentially any power function. We also give a (2 + ε)-competitive algorithm for the objective of fractional weighted flow time plus energy. Even for power functions of the form s^α, it was not previously known how to obtain competitiveness independent of α for these problems. We also introduce a model of allowable speeds that generalizes all known models in the literature.
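Once a schedule is fixed, the flow-time-plus-energy objective is simple bookkeeping; a hedged sketch (names and data layout are illustrative, and the competitive online algorithm itself is not shown) that evaluates it for an arbitrary power function:

```python
def flow_plus_energy(release, completion, segments, power):
    """Evaluate the flow-time-plus-energy objective of a fixed schedule.

    release, completion: dicts mapping job -> time
    segments: list of (duration, speed) pieces the processor ran
    power: any power function of speed, e.g. lambda s: s ** 3
    """
    flow = sum(completion[j] - r for j, r in release.items())
    energy = sum(dur * power(s) for dur, s in segments)
    return flow + energy
```

The point of the arbitrary `power` argument is exactly the paper's setting: nothing in the objective forces the s^α form, so a competitive guarantee independent of the power function is meaningful.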


Symposium on Discrete Algorithms | 2007

Speed scaling for weighted flow time

Nikhil Bansal; Kirk Pruhs; Clifford Stein

In addition to the traditional goal of efficiently managing time and space, many computers now need to efficiently manage power usage. For example, Intel's SpeedStep and AMD's PowerNOW technologies allow the Windows XP operating system to dynamically change the speed of the processor to prolong battery life. In this setting, the operating system must not only have a job selection policy to determine which job to run, but also a speed scaling policy to determine the speed at which the job will be run. These policies must be online since the operating system does not in general have knowledge of the future. In current CMOS based processors, the speed satisfies the well-known cube-root rule: the speed is approximately the cube root of the power [Mud01, BBS+00]. Thus, in this work, we make the standard generalization that the power is equal to the speed raised to some power α ≥ 1, where one should think of α as being approximately 3 [YDS95, BKP04]. Energy is power integrated over time. The operating system is faced with a dual objective optimization problem, as it both wants to conserve energy and to optimize some Quality of Service (QoS) measure of the resulting schedule.
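The cube-root rule makes the energy trade-off concrete: doubling the speed raises the power by a factor of eight, and even though the work finishes in half the time, the energy for the same work quadruples. A two-line check:

```python
def power(speed, alpha=3):
    """Cube-root rule: power grows like speed ** alpha with alpha ~ 3."""
    return speed ** alpha

# One unit of work at speed 2 vs speed 1:
energy_fast = power(2.0) * 0.5   # 8x the power for half the time
energy_slow = power(1.0) * 1.0
print(energy_fast / energy_slow)  # 4.0: running slower saves energy
```

This quadratic penalty for speed is exactly why the speed scaling policy matters: the scheduler wants to run as slowly as its QoS objective permits.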


Mathematics of Operations Research | 2006

Bin Packing in Multiple Dimensions: Inapproximability Results and Approximation Schemes

Nikhil Bansal; Claire Kenyon; Maxim Sviridenko

We study the following packing problem: given a collection of d-dimensional rectangles of specified sizes, pack them into the minimum number of unit cubes. We show that, unlike the one-dimensional case, the two-dimensional packing problem cannot have an asymptotic polynomial time approximation scheme (APTAS), unless P = NP. On the positive side, we give an APTAS for the special case of packing d-dimensional cubes into the minimum number of unit cubes. Second, we give a polynomial time algorithm for packing arbitrary two-dimensional rectangles into at most OPT square bins with sides of length 1 + ε, where OPT denotes the minimum number of unit bins required to pack these rectangles. Interestingly, this result has no additive constant term, i.e., it is not an asymptotic result. As a corollary, we obtain the first approximation scheme for the problem of placing a collection of rectangles in a minimum-area enclosing rectangle.
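As a point of contrast with these near-optimal schemes, even a simple first-fit shelf heuristic (a classical baseline, not the paper's algorithm, and with none of its guarantees) produces a feasible packing of two-dimensional rectangles into unit bins:

```python
def shelf_pack(rects):
    """First-fit shelf heuristic: pack (width, height) rectangles,
    each at most 1 x 1, into unit square bins.

    Rectangles are sorted tallest-first; each bin holds horizontal
    shelves, and a rectangle goes onto the first shelf (or bin)
    where it fits. Returns the number of bins used.
    """
    rects = sorted(rects, key=lambda r: r[1], reverse=True)
    bins = []  # each bin: {'shelves': [[used_width, height], ...], 'used_h': float}
    for w, h in rects:
        placed = False
        for b in bins:
            for shelf in b['shelves']:          # reuse an open shelf
                if shelf[0] + w <= 1 and h <= shelf[1]:
                    shelf[0] += w
                    placed = True
                    break
            if not placed and b['used_h'] + h <= 1:
                b['shelves'].append([w, h])     # open a new shelf
                b['used_h'] += h
                placed = True
            if placed:
                break
        if not placed:
            bins.append({'shelves': [[w, h]], 'used_h': h})
    return len(bins)
```

Four 0.5 × 0.5 squares fit into one bin, while two 0.6 × 0.6 squares force two bins; the hardness result above says that closing the remaining gap to (1 + ε)·OPT asymptotically is impossible in 2D unless P = NP, which is what makes the resource-augmented (1 + ε)-sized-bin result notable.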


International Colloquium on Automata, Languages and Programming | 2008

Scheduling for Speed Bounded Processors

Nikhil Bansal; Ho-Leung Chan; Tak Wah Lam; Lap-Kei Lee

We consider online scheduling algorithms in the dynamic speed scaling model, where a processor can scale its speed between 0 and some maximum speed T. The processor uses energy at rate s^α when run at speed s, where α > 1 is a constant. Most modern processors use dynamic speed scaling to manage their energy usage. This leads to the problem of designing execution strategies that are both energy efficient and yet have almost optimum performance. We consider two problems in this model and give essentially the best possible algorithms for them. In the first problem, jobs with arbitrary sizes and deadlines arrive online and the goal is to maximize the throughput, i.e. the total size of jobs completed successfully. We give an algorithm that is 4-competitive for throughput and O(1)-competitive for the energy used. This improves upon the 14-competitive (for throughput) algorithm of Chan et al. [10]. Our throughput guarantee is optimal, as any online algorithm must be at least 4-competitive even if the energy concern is ignored [7]. In the second problem, we consider optimizing the trade-off between the total flow time incurred and the energy consumed by the jobs. We give a 4-competitive algorithm to minimize total flow time plus energy for unweighted unit size jobs, and a (2 + o(1))α/ln α-competitive algorithm to minimize fractional weighted flow time plus energy. Prior to our work, these guarantees were known only when the processor speed was unbounded (T = ∞) [4].


foundations of computer science | 2010

Constructive Algorithms for Discrepancy Minimization

Nikhil Bansal

Given a set system

Collaboration


Nikhil Bansal's top co-authors and their affiliations.

Top Co-Authors

Kirk Pruhs
University of Pittsburgh

Anupam Gupta
Carnegie Mellon University

Joseph Naor
Technion – Israel Institute of Technology

Marek Eliáš
Eindhoven University of Technology

Deepak Rajan
Lawrence Livermore National Laboratory