Publication


Featured research published by Rishi Saket.


foundations of computer science | 2009

SDP Integrality Gaps with Local ℓ_1-Embeddability

Subhash Khot; Rishi Saket

We construct integrality gap instances for SDP relaxations of the Maximum Cut and the Sparsest Cut problems. It is well known that if the triangle inequality constraints are added to the SDP, then the SDP vectors naturally define an n-point negative type (or ℓ_2^2) metric, where n is the number of vertices in the problem instance. Our gap instances have the property that every sub-metric on t = O((log log log n)^{1/6}) points is isometrically embeddable into ℓ_1. The local ℓ_1-embeddability constraints are implied when the basic SDP relaxation is augmented with t rounds of the Sherali-Adams LP relaxation. For the Maximum Cut problem, we obtain an optimal gap of α_GW^{-1} − ε, where α_GW is the Goemans-Williamson constant and ε > 0 is an arbitrarily small constant. For the Sparsest Cut problem, we obtain a gap of Ω((log log log n)^{1/13}). The latter result can be rephrased as a construction of an n-point negative type metric such that every t-point sub-metric is isometrically ℓ_1-embeddable, but embedding the whole metric into ℓ_1 incurs distortion Ω((log log log n)^{1/13}).
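
As an aside on the constant in the Max Cut gap above: α_GW is the Goemans-Williamson ratio, min over 0 < θ ≤ π of (2θ/π)/(1 − cos θ). The short Python sketch below is my own illustration, not code from the paper (the helper gw_ratio is mine); it evaluates α_GW numerically along with the gap value α_GW^{-1}.

```python
import math

def gw_ratio(theta: float) -> float:
    """Ratio (2*theta/pi) / (1 - cos(theta)) whose minimum over (0, pi] is alpha_GW."""
    return (2.0 * theta / math.pi) / (1.0 - math.cos(theta))

# Coarse grid search over (0, pi]; the minimizer lies near theta ~ 2.33 radians.
alpha_gw = min(gw_ratio(i * math.pi / 100_000) for i in range(1, 100_001))
print(f"alpha_GW    ~= {alpha_gw:.5f}")        # about 0.87856
print(f"alpha_GW^-1 ~= {1.0 / alpha_gw:.5f}")  # about 1.13823, the Max Cut gap above
```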


symposium on discrete algorithms | 2016

Bypassing UGC from Some Optimal Geometric Inapproximability Results

Venkatesan Guruswami; Prasad Raghavendra; Rishi Saket; Yi Wu

The Unique Games Conjecture (UGC) has emerged in recent years as the starting point for several optimal inapproximability results. While for none of these results is a reverse reduction to Unique Games known, the assumption of bijective projections in the Label Cover instance nevertheless seems critical in these proofs. In this work, we bypass the need for the UGC assumption in inapproximability results for two geometric problems, obtaining a tight NP-hardness result in each case. The first problem, known as L_p Subspace Approximation, is a generalization of the classic least squares regression problem. Here, the input consists of a set of points X = {α_1, …, α_m} ⊆ R^n and a parameter k (possibly depending on n). The goal is to find a subspace H of R^n of dimension k that minimizes the ℓ_p norm of the Euclidean distances to the points in X. For p = 2, k = n − 1, this reduces to the least squares regression problem, while for p = ∞, k = 0 it reduces to the problem of finding a ball of minimum radius enclosing all the points. We show that for any fixed p ∈ (2, ∞) and for k = n − 1, it is NP-hard to approximate this problem to within a factor of γ_p − ε for constant ε > 0, where γ_p is the p-th norm of a standard Gaussian random variable. This matches the γ_p approximation algorithm obtained by Deshpande, Tulsiani, and Vishnoi, who also showed the same hardness result under the UGC. The second problem we study is the related L_p Quadratic Grothendieck Maximization Problem, considered by Kindler, Naor, and Schechtman. Here, the input is a multilinear quadratic form ∑_{i,j=1}^n a_ij x_i x_j and the goal is to maximize the quadratic form over the ℓ_p unit ball, namely all x with ∑_{i=1}^n |x_i|^p ≤ 1. The problem is polynomial time solvable for p = 2. We show that for any constant p ∈ (2, ∞), it is NP-hard to approximate the quadratic form to within a factor of γ_p^2 − ε for any ε > 0. The same hardness factor was shown under the UGC by Kindler et al. We also obtain a γ_p^2-approximation algorithm for the problem using the convex relaxation of the problem defined by Kindler et al. A γ_p^2 approximation algorithm has also been independently obtained by Naor and Schechtman. These are the first approximation thresholds, proven under P ≠ NP, that involve the Gaussian random variable in a fundamental way. Note that the problem statements themselves do not explicitly involve the Gaussian distribution.
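
For concreteness, γ_p above is the p-th norm of a standard Gaussian g ~ N(0,1), i.e. γ_p = (E|g|^p)^{1/p}, which has the closed form (2^{p/2} Γ((p+1)/2) / √π)^{1/p}. The sketch below is my own illustration (not from the paper): it evaluates the thresholds γ_p − ε and γ_p^2 − ε refer to, and cross-checks the formula by Monte Carlo.

```python
import math
import random

def gamma_p(p: float) -> float:
    """p-th norm of a standard Gaussian via E|g|^p = 2^(p/2) * Gamma((p+1)/2) / sqrt(pi)."""
    moment = (2.0 ** (p / 2.0)) * math.gamma((p + 1.0) / 2.0) / math.sqrt(math.pi)
    return moment ** (1.0 / p)

def gamma_p_monte_carlo(p: float, n: int = 100_000, seed: int = 0) -> float:
    """Empirical estimate of the same quantity from n Gaussian samples."""
    rng = random.Random(seed)
    total = sum(abs(rng.gauss(0.0, 1.0)) ** p for _ in range(n))
    return (total / n) ** (1.0 / p)

for p in (2.5, 3.0, 4.0):
    print(p, round(gamma_p(p), 4), round(gamma_p_monte_carlo(p), 4))
# gamma_2 = 1 exactly; the abstract's hardness thresholds are gamma_p - eps
# (subspace approximation) and gamma_p^2 - eps (quadratic Grothendieck maximization).
```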


SIAM Journal on Computing | 2010

Hardness of Reconstructing Multivariate Polynomials over Finite Fields

Parikshit Gopalan; Subhash Khot; Rishi Saket

We study the polynomial reconstruction problem for low-degree multivariate polynomials over F_2. In this problem, we are given a set of points x ∈ {0,1}^n and target values f(x) ∈ {0,1} for each of these points, with the promise that there is a polynomial over F_2 of degree at most d that agrees with f at a 1 − ε fraction of the points. Our goal is to find a degree-d polynomial that has good agreement with f. We show that it is NP-hard to find a polynomial that agrees with f on more than a 1 − 2^{−d} + δ fraction of the points for any ε, δ > 0. This holds even with the stronger promise that the polynomial that fits the data is in fact linear, whereas the algorithm is allowed to find a polynomial of degree d. Previously, the only known hardness of approximation (or even NP-completeness) was for the case d = 1, which follows from a celebrated result of Håstad. In the setting of computational learning, our result shows the hardness of (non-proper) agnostic learning of parities, where the learner is allowed to output a low-degree polynomial over F_2 as a hypothesis. This is the first non-proper hardness result for this central problem in computational learning. Our results extend to multivariate polynomial reconstruction over any finite field.
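
To make the reconstruction problem concrete, the brute-force sketch below (an illustration of the problem statement only, not the paper's reduction; all helper names are mine) enumerates every F_2 polynomial of degree at most d on a handful of variables and reports the best achievable agreement with a given labeling.

```python
from itertools import combinations, product

def monomials(n: int, d: int):
    """All monomials of degree <= d as tuples of variable indices (the empty tuple is 1)."""
    return [m for k in range(d + 1) for m in combinations(range(n), k)]

def eval_poly(coeffs, monos, x):
    """Evaluate an F_2 polynomial (given by its monomial coefficients) at x in {0,1}^n."""
    val = 0
    for c, m in zip(coeffs, monos):
        if c and all(x[i] for i in m):
            val ^= 1
    return val

def best_agreement(points, labels, d):
    """Best fraction of points on which some degree-<=d polynomial matches the labels."""
    n = len(points[0])
    monos = monomials(n, d)
    best = 0
    for coeffs in product((0, 1), repeat=len(monos)):  # 2^(# monomials) candidates
        agree = sum(eval_poly(coeffs, monos, x) == y for x, y in zip(points, labels))
        best = max(best, agree)
    return best / len(points)

# Toy instance: labels from x0*x1 (degree 2) with one flipped label; the best linear
# (degree-1) polynomial agrees on strictly fewer points here than the best degree-2 one.
pts = list(product((0, 1), repeat=3))
lbl = [x[0] & x[1] for x in pts]
lbl[0] ^= 1  # introduce noise
print("deg<=1 agreement:", best_agreement(pts, lbl, 1))
print("deg<=2 agreement:", best_agreement(pts, lbl, 2))
```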


symposium on the theory of computing | 2008

On hardness of learning intersection of two halfspaces

Subhash Khot; Rishi Saket

We show that unless NP = RP, it is hard to (even) weakly PAC-learn intersection of two halfspaces in R^n using a hypothesis which is a function of up to l linear threshold functions for any integer l. Specifically, we show that for every integer l and an arbitrarily small constant ε > 0, unless NP = RP, no polynomial time algorithm can distinguish whether there is an intersection of two halfspaces that correctly classifies a given set of labeled points in R^n, or whether any function of l linear threshold functions can correctly classify at most 1/2 + ε fraction of the points.
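
As a toy illustration of the learning setup (my own example, not the paper's construction), the snippet below labels random points in R^n by an intersection of two halfspaces and measures the accuracy of a single linear threshold function used as a hypothesis; the hardness result above says that in the worst case no function of few such hypotheses beats 1/2 + ε.

```python
import random

def label_intersection(x, w1, w2):
    """+1 iff x lies in both halfspaces <w1, x> >= 0 and <w2, x> >= 0, else -1."""
    in_both = (sum(a * b for a, b in zip(w1, x)) >= 0
               and sum(a * b for a, b in zip(w2, x)) >= 0)
    return 1 if in_both else -1

def ltf(x, w):
    """A single linear threshold function hypothesis."""
    return 1 if sum(a * b for a, b in zip(w, x)) >= 0 else -1

rng = random.Random(1)
n = 10
w1 = [rng.gauss(0, 1) for _ in range(n)]
w2 = [rng.gauss(0, 1) for _ in range(n)]
points = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(20_000)]
labels = [label_intersection(x, w1, w2) for x in points]

acc = sum(ltf(x, w1) == y for x, y in zip(points, labels)) / len(points)
print(f"accuracy of a single-halfspace hypothesis: {acc:.3f}")
```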


foundations of computer science | 2008

Hardness of Minimizing and Learning DNF Expressions

Subhash Khot; Rishi Saket

We study the problem of finding the minimum size DNF formula for a function f : {0,1}^d → {0,1} given its truth table. We show that unless NP ⊆ DTIME(n^{poly(log n)}), there is no polynomial time algorithm that approximates this problem to within factor d^{1−ε}, where ε > 0 is an arbitrarily small constant. Our result essentially matches the known O(d) approximation for the problem. We also study weak learnability of small size DNF formulas. We show that, assuming NP ⊄ RP, for arbitrarily small constant ε > 0 and any fixed positive integer t, a two term DNF cannot be PAC-learnt in polynomial time by a t term DNF to within 1/2 + ε accuracy. Under the same complexity assumption, we show that for arbitrarily small constants μ, ε > 0 and any fixed positive integer t, an AND function (i.e. a single term DNF) cannot be PAC-learnt in polynomial time under adversarial μ-noise by a t-CNF to within 1/2 + ε accuracy.
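
The input/output convention of the minimization problem can be made concrete with the exponential-time sketch below (my own illustration, not an approximation algorithm from the paper): given the truth table of f on d variables, it enumerates implicants and searches for a smallest set of terms covering the on-set. Only sensible for very small d.

```python
from itertools import product, combinations

def min_dnf(d, truth):
    """truth maps each x in {0,1}^d to 0/1. Returns a minimum list of terms,
    each term being a dict {variable index: required bit}."""
    points = list(product((0, 1), repeat=d))
    ones = [x for x in points if truth[x]]
    zeros = [x for x in points if not truth[x]]
    # A term assigns each variable one of: ignore (None), require 0, require 1.
    terms = []
    for spec in product((None, 0, 1), repeat=d):
        term = {i: b for i, b in enumerate(spec) if b is not None}
        covers = lambda x: all(x[i] == b for i, b in term.items())
        if any(covers(z) for z in zeros):
            continue  # covers a 0-point, so it is not an implicant of f
        terms.append((term, {x for x in ones if covers(x)}))
    if not ones:
        return []
    for size in range(1, len(ones) + 1):  # try the smallest cover first
        for subset in combinations(terms, size):
            if set().union(*(cov for _, cov in subset)) == set(ones):
                return [t for t, _ in subset]
    return None

# Example: f = x0 XOR x1 on 2 variables needs 2 terms.
truth = {x: x[0] ^ x[1] for x in product((0, 1), repeat=2)}
print(min_dnf(2, truth))  # e.g. [{0: 1, 1: 0}, {0: 0, 1: 1}]
```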


international colloquium on automata languages and programming | 2010

On the inapproximability of vertex cover on k-partite k-uniform hypergraphs

Venkatesan Guruswami; Rishi Saket

Computing a minimum vertex cover in graphs and hypergraphs is a well-studied optimization problem. While intractable in general, it is well known that on bipartite graphs, vertex cover is polynomial time solvable. In this work, we study the natural extension of bipartite vertex cover to hypergraphs, namely finding a small vertex cover in k-uniform k-partite hypergraphs, when the k-partition is given as input. For this problem Lovász [16] gave a k/2 factor LP rounding based approximation, and a matching (k/2 − o(1)) integrality gap instance was constructed by Aharoni et al. [1]. We prove the following results, which are the first strong hardness results for this problem (here ε > 0 is an arbitrary constant): NP-hardness of approximating within a factor of (k/4 − ε), and Unique Games-hardness of approximating within a factor of (k/2 − ε), showing optimality of Lovász's algorithm under the Unique Games Conjecture. The NP-hardness result is based on a reduction from minimum vertex cover in r-uniform hypergraphs, for which NP-hardness of approximating within r − 1 − ε was shown by Dinur et al. [5]. The Unique Games-hardness result is obtained by applying the recent results of Kumar et al. [15], with a slight modification, to the LP integrality gap due to Aharoni et al. [1]. The modification is to ensure that the reduction preserves the desired structural properties of the hypergraph.
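
For reference, the sketch below shows the standard vertex-cover LP relaxation for a k-uniform hypergraph together with the simple rounding that keeps every vertex whose LP value is at least 1/k. This is my own illustration and assumes scipy is available; it yields the generic factor-k guarantee (each hyperedge has k vertices, so one of them reaches 1/k), not Lovász's refined k/2 rounding that exploits the k-partition.

```python
from scipy.optimize import linprog

def lp_round_vertex_cover(num_vertices, hyperedges, k):
    """Minimize sum_v x_v s.t. sum_{v in e} x_v >= 1 for every hyperedge e, 0 <= x_v <= 1,
    then round up every vertex with LP value >= 1/k."""
    c = [1.0] * num_vertices
    A_ub, b_ub = [], []
    for e in hyperedges:
        row = [0.0] * num_vertices
        for v in e:
            row[v] = -1.0          # -sum_{v in e} x_v <= -1
        A_ub.append(row)
        b_ub.append(-1.0)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * num_vertices)
    cover = [v for v in range(num_vertices) if res.x[v] >= 1.0 / k - 1e-9]
    return res.fun, cover

# Tiny 3-uniform, 3-partite example with parts {0,1}, {2,3}, {4,5}.
edges = [(0, 2, 4), (0, 3, 5), (1, 2, 5), (1, 3, 4)]
lp_value, cover = lp_round_vertex_cover(6, edges, k=3)
print("LP optimum:", lp_value, "rounded cover:", cover)
```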


international workshop on approximation, randomization and combinatorial optimization: algorithms and techniques | 2010

Approximate Lasserre integrality gap for unique games

Subhash Khot; Preyas Popat; Rishi Saket

In this paper, we investigate whether a constant round Lasserre Semi-definite Programming (SDP) relaxation might give a good approximation to the Unique Games problem. We show that the answer is negative if the relaxation is insensitive to a sufficiently small perturbation of the constraints. Specifically, we construct an instance of Unique Games with k labels along with an approximate vector solution to t rounds of the Lasserre SDP relaxation. The SDP objective is at least 1 − ε whereas the integral optimum is at most γ, and all SDP constraints are satisfied up to an accuracy of δ > 0. Here ε, γ > 0 and t ∈ Z+ are arbitrary constants and k = k(ε, γ) ∈ Z+. The accuracy parameter δ can be made sufficiently small independent of the parameters ε, γ, t, k (but the size of the instance grows as δ gets smaller).
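
As background on the problem being relaxed (definitions only; this is my own toy code, not the paper's Lasserre construction): a Unique Games instance with k labels has constraints given by bijections on the label set, and the integral optimum is the best fraction of constraints satisfiable by any labeling. The brute-force sketch below computes it for a tiny instance.

```python
from itertools import product

def ug_value(constraints, assignment):
    """Fraction of constraints (u, v, pi) satisfied: pi[assignment[u]] == assignment[v]."""
    sat = sum(pi[assignment[u]] == assignment[v] for u, v, pi in constraints)
    return sat / len(constraints)

def ug_optimum(num_vars, k, constraints):
    """Integral optimum: best value over all k^num_vars labelings (brute force)."""
    return max(ug_value(constraints, a) for a in product(range(k), repeat=num_vars))

# 3 variables, k = 2 labels; each pi is a tuple giving the bijection on {0, 1}.
constraints = [(0, 1, (0, 1)), (1, 2, (0, 1)), (2, 0, (1, 0))]
print(ug_optimum(3, 2, constraints))  # 2/3: the three constraints cannot all be satisfied
```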


conference on computational complexity | 2013

Optimal Inapproximability for Scheduling Problems via Structural Hardness for Hypergraph Vertex Cover

Sushant Sachdeva; Rishi Saket

This work studies the inapproximability of two well-known scheduling problems: Concurrent Open Shop and the Assembly Line problem. For both these problems, Bansal and Khot [1] obtained tight (2 - ε)-factor inapproximability, assuming the Unique Games Conjecture (UGC). In this paper, we prove optimal (2 - ε)-factor NP-hardness of approximation for both these problems unconditionally, i.e., without assuming UGC. Our results for the scheduling problems follow from a structural hardness for Minimum Vertex Cover on hypergraphs - an unconditional but weaker analog of a similar result of Bansal and Khot [1] which, however, is based on UGC. Formally, we prove that for any ε > 0, and positive integers q, T > 1, the following is NP-hard: Given a qT-uniform hypergraph G(V, E), decide between the following cases, YES Case: There is a partition of V into V1, ⋯, Vq, X with |V1| = ⋯ = |Vq| ≥ (1 - ε/q) |V| such that the vertices of any hyperedge e can be partitioned into T blocks of q vertices each so that at least T - 1 of the blocks each contain at most one vertex from Vj, for any j ∈ [q]. Since T > 1, this implies that for any j ∈ [q], Vj ∪ X is a vertex cover of size at most (1/q + ε) |V|. NO Case: Every vertex cover in G has size at least (1 - ε)|V |. The above hardness result suffices to prove optimal inapproximability for the scheduling problems mentioned above, via black-box hardness reductions. Using this result, we also prove a super constant hardness factor for Two Stage Stochastic Vehicle Routing, for which a similar inapproximability was shown by Gørtz, Nagarajan, and Saket [2] assuming the UGC, and based on the result of [1].


Journal of Computer and System Sciences | 2011

On the hardness of learning intersections of two halfspaces

Subhash Khot; Rishi Saket

We show that unless NP = RP, it is hard to (even) weakly PAC-learn intersection of two halfspaces in R^n using a hypothesis which is a function of up to ℓ halfspaces (linear threshold functions) for any integer ℓ. Specifically, we show that for every integer ℓ and an arbitrarily small constant ε > 0, unless NP = RP, no polynomial time algorithm can distinguish whether there is an intersection of two halfspaces that correctly classifies a given set of labeled points in R^n, or whether any function of ℓ halfspaces can correctly classify at most 1/2 + ε fraction of the points.


international workshop on approximation, randomization and combinatorial optimization: algorithms and techniques | 2011

Nearly optimal NP-hardness of vertex cover on k-uniform k-partite hypergraphs

Sushant Sachdeva; Rishi Saket

We study the problem of computing the minimum vertex cover on k-uniform k-partite hypergraphs when the k-partition is given. On bipartite graphs (k = 2), the minimum vertex cover can be computed in polynomial time. For general k, the problem was studied by Lovász [23], who gave a k/2-approximation based on the standard LP relaxation. Subsequent work by Aharoni, Holzman and Krivelevich [1] showed a tight integrality gap of (k/2 − o(1)) for the LP relaxation. While this problem was known to be NP-hard for k ≥ 3, the first nontrivial NP-hardness of approximation factor of k/4 − ε was shown in a recent work by Guruswami and Saket [13]. They also showed that assuming Khot's Unique Games Conjecture yields a k/2 − ε inapproximability for this problem, implying the optimality of Lovász's result. In this work, we show that this problem is NP-hard to approximate within k/2 − 1 + 1/(2k) − ε. This hardness factor is off from the optimal by an additive constant of at most 1 for k ≥ 4. Our reduction relies on the Multi-Layered PCP of [8] and uses a gadget, based on biased Long Codes, adapted from the LP integrality gap of [1]. The nature of our reduction requires the analysis of several Long Codes with different biases, for which we prove structural properties of so-called cross-intersecting collections of set families, variants of which have been studied in extremal set theory.

Collaboration


Dive into Rishi Saket's collaboration.

Top Co-Authors


Suprovat Ghoshal

Indian Institute of Science
