
Publication


Featured research published by Ravindran Kannan.


Foundations of Computer Science | 2000

On clusterings: good, bad and spectral

Ravindran Kannan; Santosh Vempala; A. Vetta

We propose a new measure for assessing the quality of a clustering. A simple heuristic is shown to give worst-case guarantees under the new measure. We then present two results on the quality of the clustering found by a popular spectral algorithm: one gives worst-case guarantees, while the other shows that if a good clustering exists, the spectral algorithm will find one close to it.
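As a concrete illustration of the kind of spectral algorithm discussed here, the sketch below clusters points by the sign of the second eigenvector of a normalized similarity matrix. This is a minimal common variant, not the paper's exact algorithm; the Gaussian similarity and the two-cluster setting are simplifying assumptions.

```python
import numpy as np

def spectral_bipartition(points, sigma=1.0):
    """Split points into two clusters using the second eigenvector of a
    normalized similarity matrix (a common spectral heuristic)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2 * sigma ** 2))        # Gaussian similarities
    d = A.sum(axis=1)
    N = A / np.sqrt(np.outer(d, d))           # symmetric normalization
    _, vecs = np.linalg.eigh(N)               # eigenvalues in ascending order
    v2 = vecs[:, -2]                          # second-largest eigenvector
    return v2 >= 0                            # its sign pattern is the split

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (20, 2)),   # one tight cluster near the origin
                 rng.normal(5, 0.3, (20, 2))])  # another near (5, 5)
labels = spectral_bipartition(pts)              # boolean label per point
```

On two well-separated blobs, the sign of the second eigenvector recovers the planted partition.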


Symposium on the Theory of Computing | 2012

Computing a nonnegative matrix factorization -- provably

Sanjeev Arora; Rong Ge; Ravindran Kannan; Ankur Moitra

The Nonnegative Matrix Factorization (NMF) problem has a rich history spanning quantum mechanics, probability theory, data analysis, polyhedral combinatorics, communication complexity, demography, chemometrics, etc. In the past decade NMF has become enormously popular in machine learning, where the factorization is computed using a variety of local search heuristics. Vavasis recently proved that this problem is NP-complete. We initiate a study of when this problem is solvable in polynomial time. Consider a nonnegative m x n matrix M and a target inner-dimension r. Our results are the following. (1) We give a polynomial-time algorithm for exact and approximate NMF for every constant r. Indeed, NMF is most interesting in applications precisely when r is small. We complement this with a hardness result: if exact NMF can be solved in time (nm)^o(r), then 3-SAT has a sub-exponential-time algorithm, so substantial improvements to the above algorithm are unlikely. (2) We give an algorithm that runs in time polynomial in n, m and r under the separability condition identified by Donoho and Stodden in 2003. The algorithm may be practical, since it is simple and noise tolerant (under benign assumptions), and separability is believed to hold in many practical settings. To the best of our knowledge, this last result is the first polynomial-time algorithm that provably works under a non-trivial condition on the input matrix, and we believe this will be an interesting and important direction for future work.
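Under the separability condition of Donoho and Stodden that this paper's polynomial-time result relies on, every column of M is a convex combination of a few "anchor" columns that themselves appear in M, and the anchors can be found greedily. The successive-projection-style sketch below is a hedged illustration under that assumption, not the paper's exact algorithm.

```python
import numpy as np

def find_anchors(M, r):
    """Greedy anchor selection for separable NMF: repeatedly pick the
    column of largest norm, then project all columns off it."""
    R = M.astype(float).copy()
    anchors = []
    for _ in range(r):
        j = int(np.argmax((R ** 2).sum(axis=0)))  # farthest remaining column
        anchors.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)                # project off chosen direction
    return anchors

# Synthetic separable instance: columns of M are convex combinations of
# r anchor columns that themselves appear in M (here, columns 0, 1, 2).
rng = np.random.default_rng(1)
W = np.eye(3)
H = rng.dirichlet(np.ones(3), size=7).T   # 3 x 7 convex mixing weights
M = np.hstack([W, W @ H])                 # 3 x 10 separable matrix
anchors = sorted(find_anchors(M, 3))
```

Each round picks the remaining column of largest norm, which under separability must be an anchor vertex, then projects it out so it cannot be picked again.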


SIAM Journal on Computing | 2008

The Spectral Method for General Mixture Models

Ravindran Kannan; Hadi Salmasian; Santosh Vempala

We present an algorithm for learning a mixture of distributions based on spectral projection. We prove a general property of spectral projection for arbitrary mixtures and show that the resulting algorithm is efficient when the components of the mixture are logconcave distributions in R^n whose means are separated. The separation required grows with k, the number of components, and log n. This is the first result demonstrating the benefit of spectral projection for general Gaussians, and it widens the scope of this method. It improves substantially on previous results, which focus either on the special case of spherical Gaussians or require a separation that has a considerably larger dependence on n.
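The effect of spectral projection for mixture models can be seen on a toy instance: project high-dimensional samples from a two-component mixture onto the span of the top singular vectors, where the means stay separated while most of the ambient noise is discarded. A minimal sketch under simplifying assumptions (spherical Gaussian components, known number of components k):

```python
import numpy as np

rng = np.random.default_rng(2)
n, dim, k = 200, 50, 2
# Two spherical Gaussians in 50 dimensions; means differ by the all-ones
# vector, i.e. by sqrt(50) ~ 7 standard deviations in Euclidean distance.
X = np.vstack([rng.normal(0.0, 1.0, (n, dim)),
               rng.normal(1.0, 1.0, (n, dim))])

# Spectral projection: keep only the span of the top-k right singular vectors.
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
Y = X @ Vt[:k].T                     # samples reduced to k dimensions

# The projected class means remain far apart, and nearest-mean
# classification in the k-dimensional space separates the components.
m0, m1 = Y[:n].mean(axis=0), Y[n:].mean(axis=0)
sep = np.linalg.norm(m1 - m0)
pred = np.linalg.norm(Y - m1, axis=1) < np.linalg.norm(Y - m0, axis=1)
acc = (pred == np.repeat([False, True], n)).mean()
```

The top singular subspace of the centered data contains the direction between the means, so the inter-mean distance survives the projection almost intact.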


Foundations of Computer Science | 2010

Clustering with Spectral Norm and the k-Means Algorithm

Amit Kumar; Ravindran Kannan

There has been much progress on efficient algorithms for clustering data points generated by a mixture of k probability distributions under the assumption that the means of the distributions are well-separated, i.e., the distance between the means of any two distributions is at least Omega(k) standard deviations. These results generally make heavy use of the generative model and particular properties of the distributions. In this paper, we show that a simple clustering algorithm works without assuming any generative (probabilistic) model. Our only assumption is what we call a proximity condition: the projection of any data point onto the line joining its cluster center to any other cluster center is Omega(k) standard deviations closer to its own center than to the other center. Here the notion of standard deviations is based on the spectral norm of the matrix whose rows represent the difference between a point and the mean of the cluster to which it belongs. We show that in the generative models studied, our proximity condition is satisfied, and so we are able to derive most known results for generative models as corollaries of our main result. We also prove some new results for generative models: e.g., we can cluster all but a small fraction of points assuming only a bound on the variance. Our algorithm relies on the well-known k-means algorithm, and along the way we prove a result of independent interest: the k-means algorithm converges to the true centers even in the presence of spurious points, provided the initial (estimated) centers are close enough to the corresponding actual centers and all but a small fraction of the points satisfy the proximity condition. Finally, we present a new technique for boosting the ratio of inter-center separation to standard deviation. This allows us to prove results for learning certain mixtures of distributions under weaker separation conditions.
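The k-means algorithm named in the title is the classical Lloyd iteration; below is a textbook sketch, with initial centers chosen close to the true ones in the spirit of the convergence analysis (the data and starting centers here are purely illustrative).

```python
import numpy as np

def lloyd_kmeans(X, centers, iters=20):
    """Plain Lloyd iteration: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for j in range(len(centers)):
            pts = X[assign == j]
            if len(pts):                  # keep empty clusters where they are
                centers[j] = pts.mean(axis=0)
    return centers, assign

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-4, 0.5, (30, 2)),   # blob near (-4, -4)
               rng.normal(4, 0.5, (30, 2))])   # blob near (4, 4)
# Initial centers close to the true ones, as the analysis assumes.
centers, assign = lloyd_kmeans(X, np.array([[-3.0, 0.0], [3.0, 0.0]]))
```

Starting near the true centers, the iteration converges to them and assigns each blob to its own cluster.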


Mathematika | 2001

Deterministic and randomized polynomial-time approximation of radii

Andreas Brieden; Peter Gritzmann; Ravindran Kannan; Victor Klee; László Lovász; Miklós Simonovits

This paper is concerned with convex bodies in n-dimensional l_p spaces, where each body is accessible only by a weak separation or optimization oracle. It studies the asymptotic relative accuracy, as n → ∞, of polynomial-time approximation algorithms for the diameter, width, circumradius, and inradius of a body K, and also for the maximum of the norm over K. In the case of l_2 (Euclidean n-space), a 1987 result of Barany and Furedi severely limits the degree of relative accuracy that can be guaranteed in approximating K's volume by any deterministic polynomial-time algorithm. This led to a similarly severe limit on the relative accuracy of deterministic polynomial-time algorithms for computing K's diameter. However, these limitations on the accuracy of deterministic computation were soon followed by the work of Dyer, Frieze and Kannan showing that, for volume approximation, arbitrarily good accuracy can be attained with the aid of suitable randomization. It was therefore natural to wonder whether the same is true of the diameter. The first main result of this paper is that, in contrast to the situation for the volume, randomization does not help in approximating the diameter: the same limitation on accuracy that applies to deterministic polynomial-time computation still applies when randomization is permitted. This conclusion applies also to the width, circumradius, and inradius of a body, and to maximization of the norm over K. The second main result is that, for each of the five radius measurements just mentioned, the inapproximability results for deterministic polynomial-time approximation are optimal for width and inradius when 1 ≤ p ≤ 2, are optimal for diameter, circumradius, and norm-maximization when 2 ≤ p ≤ ∞, and in the remaining cases are within a logarithmic factor of being optimal. In particular, all are optimal when p = 2. The optimality is established by producing deterministic polynomial-time approximation algorithms whose accuracy is bounded below by a positive constant multiple (independent of the dimension n) of the upper bounds on accuracy. Since the bodies are assumed to be presented by a weak oracle, our approach belongs to the algorithmic theory of convex bodies initiated by Grotschel, Lovasz and Schrijver. In the deterministic case we sharpen and extend l_2 results due to these authors, and in the randomized case we refine some ideas presented earlier by Lovasz and Simonovits. The algorithms that establish lower bounds on accuracy use certain polytopal approximations of l_p unit balls.


Mathematics of Operations Research | 2012

Random Walks on Polytopes and an Affine Interior Point Method for Linear Programming

Ravindran Kannan; Hariharan Narayanan

Let K be a polytope in R^n defined by m linear inequalities. We give a new Markov chain algorithm to draw a nearly uniform sample from K. The underlying Markov chain is the first to have a mixing time that is strongly polynomial when started from a "central" point. We use this result to design an affine interior point algorithm that does a single random walk to solve linear programs approximately.
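The chain analyzed in this paper is the Dikin walk, whose ellipsoidal steps adapt to the defining inequalities; the plain ball walk below is a much simpler relative, shown only to illustrate the idea of sampling a polytope {x : Ax <= b} by a random walk with rejection.

```python
import numpy as np

def ball_walk(A, b, x0, steps=2000, radius=0.1, rng=None):
    """Sample from the polytope {x : Ax <= b}: propose a uniform step
    inside a small ball and reject proposals that leave the polytope."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        step = rng.normal(size=x.shape)
        # Rescale to a uniform sample in the ball of the given radius.
        step *= radius * rng.random() ** (1 / len(x)) / np.linalg.norm(step)
        y = x + step
        if np.all(A @ y <= b):        # stay put if the proposal leaves K
            x = y
    return x

# The unit square [0, 1]^2 written as a polytope Ax <= b.
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.array([1., 0., 1., 0.])
x = ball_walk(A, b, x0=[0.5, 0.5])
```

This naive walk mixes far more slowly than the strongly polynomial chain of the paper; it merely shows the sampling interface.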


Symposium on the Theory of Computing | 2009

Random walks on polytopes and an affine interior point method for linear programming

Ravindran Kannan; Hariharan Narayanan

Let K be a polytope in R^n defined by m linear inequalities. We give a new Markov chain algorithm to draw a nearly uniform sample from K. The underlying Markov chain is the first to have a mixing time that is strongly polynomial when started from a "central" point x_0. If s is the supremum, over all chords pq passing through x_0, of |p - x_0| / |q - x_0|, and ε is an upper bound on the desired total variation distance from the uniform distribution, it suffices to take O(mn(n log(sm) + log(1/ε))) steps of the random walk. We use this result to design an affine interior point algorithm that does a single random walk to solve linear programs approximately. More precisely, suppose Q = {z | Bz ≤ 1} contains a point z such that c^T z ≥ d, and let r := sup_{z ∈ Q} |Bz| + 1, where B is an m x n matrix. Then, after τ = O(mn(n ln(mr/ε) + ln(1/δ))) steps, the random walk is at a point x_τ for which c^T x_τ ≥ d(1 - ε) with probability greater than 1 - δ. The fact that this algorithm has a provably polynomial run-time is notable, since the analogous deterministic affine algorithm analyzed by Dikin has no known polynomial guarantees.


Symposium on the Theory of Computing | 2010

Spectral methods for matrices and tensors

Ravindran Kannan

While spectral methods have long been used for Principal Component Analysis, this survey focuses on work over the last 15 years with three salient features: (i) spectral methods are useful not only for numerical problems but also for discrete optimization problems (constraint satisfaction problems, CSPs) such as the maximum cut problem, and similar mathematical considerations underlie both areas; (ii) spectral methods can be extended to tensors. The theory and algorithms for tensors are not as simple and clean as for matrices, but the survey describes methods for low-rank approximation which extend to tensors; these tensor approximations help solve Max-


International Colloquium on Automata, Languages and Programming | 2012

Zero-one rounding of singular vectors

Amit Deshpande; Ravindran Kannan; Nikhil Srivastava



Acta Numerica | 2017

Randomized algorithms in numerical linear algebra

Ravindran Kannan; Santosh Vempala

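A recurring primitive in this line of randomized numerical linear algebra is length-squared sampling: picking columns with probability proportional to their squared norms gives an unbiased estimator of A·Aᵀ from a small sample of columns. A minimal sketch (the matrix and sample size below are illustrative):

```python
import numpy as np

def lengthsq_sample_product(A, s, rng):
    """Estimate A @ A.T from s columns sampled with probability
    proportional to their squared norms, with importance weights."""
    col_sq = (A ** 2).sum(axis=0)
    p = col_sq / col_sq.sum()                 # length-squared distribution
    idx = rng.choice(A.shape[1], size=s, p=p)
    C = A[:, idx] / np.sqrt(s * p[idx])       # rescaled sampled columns
    return C @ C.T                            # unbiased estimator of A A^T

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 2000))
est = lengthsq_sample_product(A, s=500, rng=rng)
exact = A @ A.T
rel_err = np.linalg.norm(est - exact) / np.linalg.norm(exact)
```

The trace of the estimator equals ||A||_F^2 exactly by construction, and the expected Frobenius error decays like ||A||_F^2 / sqrt(s), independent of the number of columns of A.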

Collaboration


Dive into Ravindran Kannan's collaboration.

Top Co-Authors

Santosh Vempala (Georgia Institute of Technology)

David P. Woodruff (Carnegie Mellon University)

Ankur Moitra (Massachusetts Institute of Technology)