Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Kunal Talwar is active.

Publication


Featured research published by Kunal Talwar.


foundations of computer science | 2007

Mechanism Design via Differential Privacy

Frank McSherry; Kunal Talwar

We study the role that privacy-preserving algorithms, which prevent the leakage of specific information about participants, can play in the design of mechanisms for strategic agents, which must encourage players to honestly report information. Specifically, we show that the recent notion of differential privacy, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie. More precisely, mechanisms with differential privacy are approximately dominant-strategy under arbitrary player utility functions, are automatically resilient to coalitions, and easily allow repeatability. We study several special cases of the unlimited supply auction problem, providing new results for digital goods auctions, attribute auctions, and auctions with arbitrary structural constraints on the prices. As an important prelude to developing a privacy-preserving auction mechanism, we introduce and study a generalization of previous privacy work that accommodates the high sensitivity of the auction setting, where a single participant may dramatically alter the optimal fixed price, and a slight change in the offered price may take the revenue from optimal to zero.
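This paper introduced what is now known as the exponential mechanism: sample an outcome with probability proportional to exp(ε·quality/2Δ), so no single participant can shift the output distribution by more than a factor of e^ε. A minimal sketch of that sampling step, with a toy digital-goods auction as the quality function (the function names and toy data below are illustrative, not from the paper):

```python
import math
import random

def exponential_mechanism(database, outcomes, quality, epsilon, sensitivity=1.0):
    """Sample an outcome with probability proportional to
    exp(epsilon * quality / (2 * sensitivity)); a single participant
    then has only a limited (e^epsilon) effect on the distribution."""
    scores = [quality(database, r) for r in outcomes]
    # Subtract the max score for numerical stability; this rescales
    # all weights by the same factor and leaves the distribution unchanged.
    m = max(scores)
    weights = [math.exp(epsilon * (s - m) / (2.0 * sensitivity)) for s in scores]
    total = sum(weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for outcome, w in zip(outcomes, weights):
        acc += w
        if acc >= r:
            return outcome
    return outcomes[-1]

# Toy digital-goods auction: choose a price; quality = revenue at that price.
bids = [1.0, 1.0, 2.0, 3.0]
prices = [1.0, 2.0, 3.0]
revenue = lambda db, p: p * sum(1 for b in db if b >= p)
price = exponential_mechanism(bids, prices, revenue, epsilon=1.0)
```

High-revenue prices are exponentially favored, yet even a price with zero revenue retains some probability, which is how the mechanism tolerates the auction setting's high sensitivity.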


symposium on operating systems principles | 2009

Quincy: fair scheduling for distributed computing clusters

Michael Isard; Vijayan Prabhakaran; Jon Currey; Udi Wieder; Kunal Talwar; Andrew V. Goldberg

This paper addresses the problem of scheduling concurrent jobs on clusters where application data is stored on the computing nodes. This setting, in which scheduling computations close to their data is crucial for performance, is increasingly common and arises in systems such as MapReduce, Hadoop, and Dryad as well as many grid-computing environments. We argue that data-intensive computation benefits from a fine-grain resource sharing model that differs from the coarser semi-static resource allocations implemented by most existing cluster computing architectures. The problem of scheduling with locality and fairness constraints has not previously been extensively studied under this resource-sharing model. We introduce a powerful and flexible new framework for scheduling concurrent distributed jobs with fine-grain resource sharing. The scheduling problem is mapped to a graph data structure, where edge weights and capacities encode the competing demands of data locality, fairness, and starvation-freedom, and a standard solver computes the optimal online schedule according to a global cost model. We evaluate our implementation of this framework, which we call Quincy, on a cluster of a few hundred computers using a varied workload of data- and CPU-intensive jobs. We evaluate Quincy against an existing queue-based algorithm and implement several policies for each scheduler, with and without fairness constraints. Quincy achieves better fairness when fairness is requested, while substantially improving data locality. The volume of data transferred across the cluster is reduced by up to a factor of 3.9 in our experiments, leading to a throughput increase of up to 40%.
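The graph mapping described above can be sketched as follows. This is a simplified assumption-laden skeleton (node labels, the cost table, and single-slot capacities are illustrative); the actual Quincy graph adds rack-level aggregator nodes and edges encoding fairness and preemption costs:

```python
def build_quincy_graph(tasks, computers, data_cost):
    """Return (src, dst, capacity, cost) edges of a min-cost-flow
    graph. Each task ships one unit of flow to the sink through some
    computer; edge costs encode data locality, and unit capacities on
    computers mean each runs at most one task at a time."""
    edges = []
    for t in tasks:
        for c in computers:
            # Cheap edges point at computers already holding the task's input.
            edges.append((("task", t), ("comp", c), 1, data_cost[t][c]))
    for c in computers:
        edges.append((("comp", c), "sink", 1, 0))
    return edges

tasks = ["t1", "t2"]
computers = ["c1", "c2"]
# Low cost where the task's input data already lives.
data_cost = {"t1": {"c1": 0, "c2": 5}, "t2": {"c1": 5, "c2": 0}}
graph = build_quincy_graph(tasks, computers, data_cost)
```

Running any standard min-cost-flow solver over these edges then yields the globally optimal assignment under the cost model, which is the "standard solver" step the abstract refers to.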


symposium on the theory of computing | 2004

The complexity of pure Nash equilibria

Alex Fabrikant; Christos H. Papadimitriou; Kunal Talwar

We investigate, from the computational viewpoint, multi-player games that are guaranteed to have pure Nash equilibria. We focus on congestion games, and show that a pure Nash equilibrium can be computed in polynomial time in the symmetric network case, while the problem is PLS-complete in general. We discuss implications for non-atomic congestion games, and we explore the scope of the potential function method for proving existence of pure Nash equilibria.
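The potential function method mentioned above can be illustrated with best-response dynamics: each improving move strictly decreases Rosenthal's potential, so the dynamics must terminate at a pure Nash equilibrium (though PLS-completeness means this can take exponentially many steps in general). A sketch with made-up toy delays, not an example from the paper:

```python
def best_response_dynamics(profile, strategy_sets, delay):
    """Repeatedly let some player switch to a cheapest strategy.
    Each strict improvement lowers Rosenthal's potential, so this
    terminates at a pure Nash equilibrium of the congestion game."""
    def load(prof, r):
        # Number of players currently using resource r.
        return sum(1 for s in prof if r in s)
    def cost(prof, i):
        return sum(delay[r](load(prof, r)) for r in prof[i])
    changed = True
    while changed:
        changed = False
        for i, options in enumerate(strategy_sets):
            best = min(options, key=lambda s: cost(profile[:i] + [s] + profile[i+1:], i))
            if cost(profile[:i] + [best] + profile[i+1:], i) < cost(profile, i):
                profile = profile[:i] + [best] + profile[i+1:]
                changed = True
    return profile

# Two players, two parallel links with linear delays (a toy congestion game).
delay = {0: lambda x: x, 1: lambda x: 2 * x}
strategy_sets = [[{0}, {1}], [{0}, {1}]]
eq = best_response_dynamics([{0}, {0}], strategy_sets, delay)
```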


symposium on the theory of computing | 2003

A tight bound on approximating arbitrary metrics by tree metrics

Jittat Fakcharoenphol; Satish Rao; Kunal Talwar

In this paper, we show that any n point metric space can be embedded into a distribution over dominating tree metrics such that the expected stretch of any edge is O(log n). This improves upon the result of Bartal, who gave a bound of O(log n log log n). Moreover, our result is existentially tight; there exist metric spaces where any tree embedding must have Ω(log n) distortion. This problem lies at the heart of numerous approximation and online algorithms, including ones for group Steiner tree, metric labeling, buy-at-bulk network design, and metrical task systems. Our result improves the performance guarantees for all of these problems.
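A toy check of the two notions involved (the helper and data below are illustrative, not from the paper): a dominating tree metric never shrinks a distance, and its stretch is the worst blow-up. Embedding the 4-point cycle into the spanning path 0-1-2-3 dominates every distance but stretches the cut edge (0,3) by a factor of 3; since some edge of any cycle is always stretched badly by a single tree, the O(log n) guarantee must hold in expectation over a distribution of trees:

```python
def stretch(metric, tree_metric, points):
    """Largest ratio tree_metric/metric over all pairs, after
    checking that the tree metric dominates (never shrinks)
    the original metric."""
    worst = 1.0
    for i in points:
        for j in points:
            if i < j:
                assert tree_metric(i, j) >= metric(i, j)  # dominating
                worst = max(worst, tree_metric(i, j) / metric(i, j))
    return worst

# 4-point cycle metric vs. its spanning path 0-1-2-3.
def d_cycle(i, j):
    k = abs(i - j)
    return min(k, 4 - k)

def d_path(i, j):
    return abs(i - j)

s = stretch(d_cycle, d_path, list(range(4)))  # the pair (0, 3) is stretched 3x
```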


symposium on the theory of computing | 2010

On the geometry of differential privacy

Moritz Hardt; Kunal Talwar

We consider the noise complexity of differentially private mechanisms in the setting where the user asks d linear queries f: R^n → R^d non-adaptively. Here, the database is represented by a vector in R^n and proximity between databases is measured in the l1-metric. We show that the noise complexity is determined by two geometric parameters associated with the set of queries. We use this connection to give tight upper and lower bounds on the noise complexity for any d ≤ n. We show that for d random linear queries of sensitivity 1, it is necessary and sufficient to add l2-error Θ(min{d√d/ε, d√(log(n/d))/ε}) to achieve ε-differential privacy. Assuming the truth of a deep conjecture from convex geometry, known as the Hyperplane conjecture, we can extend our results to arbitrary linear queries, giving nearly matching upper and lower bounds. Our bound translates to error O(min{d/ε, √(d log(n/d))/ε}) per answer. The best previous upper bound (Laplacian mechanism) gives a bound of O(min{d/ε, √n/ε}) per answer, while the best known lower bound was Ω(√d/ε). In contrast, our lower bound is strong enough to separate the concept of differential privacy from the notion of approximate differential privacy, where an upper bound of O(√d/ε) can be achieved.


symposium on the theory of computing | 2007

The price of privacy and the limits of LP decoding

Cynthia Dwork; Frank McSherry; Kunal Talwar

This work is at the intersection of two lines of research. One line, initiated by Dinur and Nissim, investigates the price, in accuracy, of protecting privacy in a statistical database. The second, growing from an extensive literature on compressed sensing (see in particular the work of Donoho and collaborators [4,7,13,11]) and explicitly connected to error-correcting codes by Candès and Tao ([4]; see also [5,3]), is the use of linear programming for error correction. Our principal result is the discovery of a sharp threshold ρ* ≈ 0.239, so that if ρ < ρ* and A is a random m × n encoding matrix of independently chosen standard Gaussians, where m = O(n), then with overwhelming probability over the choice of A, for all x ∈ R^n, LP decoding corrects ⌊ρm⌋ arbitrary errors in the encoding Ax, while decoding can be made to fail if the error rate exceeds ρ*. Our bound resolves an open question of Candès, Rudelson, Tao, and Vershynin [3] and (oddly, but explicably) refutes empirical conclusions of Donoho [11] and Candès et al. [3]. By scaling and rounding we can easily transform these results to obtain polynomial-time decodable random linear codes with polynomial-sized alphabets tolerating any ρ < ρ* ≈ 0.239 fraction of arbitrary errors. In the context of privacy-preserving data mining, our results say that any privacy mechanism, interactive or non-interactive, providing reasonably accurate answers to a 0.761 fraction of randomly generated weighted subset sum queries, and arbitrary answers on the remaining 0.239 fraction, is blatantly non-private.


symposium on the theory of computing | 2004

Bypassing the embedding: algorithms for low dimensional metrics

Kunal Talwar

The doubling dimension of a metric is the smallest k such that any ball of radius 2r can be covered using 2^k balls of radius r. This concept for abstract metrics has been proposed as a natural analog to the dimension of a Euclidean space. If we could embed metrics with low doubling dimension into low dimensional Euclidean spaces, they would inherit several algorithmic and structural properties of the Euclidean spaces. Unfortunately, however, such a restriction on dimension does not suffice to guarantee embeddability in a normed space. In this paper we explore the option of bypassing the embedding. In particular we show the following for low dimensional metrics: a quasi-polynomial time (1+ε)-approximation algorithm for various optimization problems such as TSP, k-median, and facility location; a (1+ε)-approximate distance labeling scheme with optimal label length; and a (1+ε)-stretch polylogarithmic storage routing scheme.


computer and communications security | 2016

Deep Learning with Differential Privacy

Martín Abadi; Andy Chu; Ian J. Goodfellow; H. Brendan McMahan; Ilya Mironov; Kunal Talwar; Li Zhang

Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.


foundations of computer science | 2003

Paths, trees, and minimum latency tours

Kamalika Chaudhuri; Brighten Godfrey; Satish Rao; Kunal Talwar

We give improved approximation algorithms for a variety of latency minimization problems. In particular, we give a 3.59-approximation to the minimum latency problem, improving on previous algorithms by a multiplicative factor of 2. Our techniques also give similar improvements for related problems like k-traveling repairmen and its multiple-depot variant. We also observe that standard techniques can be used to speed up the previous algorithm and ours by a factor of Õ(n).


workshop on internet and network economics | 2005

Click fraud resistant methods for learning click-through rates

Nicole Immorlica; Kamal Jain; Mohammad Mahdian; Kunal Talwar
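The training recipe of Deep Learning with Differential Privacy above (per-example gradient clipping plus calibrated Gaussian noise, now commonly called DP-SGD) can be sketched as a single simplified step; the function name, flat parameter lists, and toy numbers are illustrative, not the paper's implementation:

```python
import random

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr):
    """One simplified DP-SGD step: clip each example's gradient to
    l2 norm <= clip_norm (bounding any one example's influence),
    sum, add Gaussian noise scaled to the clipping norm, average,
    and take a gradient step."""
    n = len(per_example_grads)
    summed = [0.0] * len(params)
    for g in per_example_grads:
        norm = sum(x * x for x in g) ** 0.5
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for j, x in enumerate(g):
            summed[j] += x * scale
    sigma = noise_multiplier * clip_norm
    noisy = [(s + random.gauss(0.0, sigma)) / n for s in summed]
    return [p - lr * g for p, g in zip(params, noisy)]

# Toy check: with zero noise and a loose clip, this reduces to plain
# averaged SGD on the per-example gradients.
new_params = dp_sgd_step([0.0], [[2.0], [4.0]],
                         clip_norm=10.0, noise_multiplier=0.0, lr=0.1)
```

The privacy budget spent over many such steps is what the paper's refined "moments" accounting tracks.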

Collaboration


Dive into Kunal Talwar's collaborations.

Top Co-Authors

Anupam Gupta

Carnegie Mellon University

Satish Rao

University of California