Raghu Meka
University of Texas at Austin
Publications
Featured research published by Raghu Meka.
symposium on the theory of computing | 2010
Raghu Meka; David Zuckerman
We study the natural question of constructing pseudorandom generators (PRGs) for low-degree polynomial threshold functions (PTFs). We give a PRG with seed length log n/ε^{O(d)} fooling degree-d PTFs with error at most ε. Previously, no nontrivial constructions were known even for quadratic threshold functions and constant error ε. For the class of degree-1 threshold functions, or halfspaces, we construct PRGs with much better dependence on the error parameter ε and obtain the following results: a PRG with seed length O(log n log(1/ε)) for error ε ≥ 1/poly(n), and a PRG with seed length O(log n) for ε ≥ 1/poly(log n). Previously, only PRGs with seed length O(log n log^2(1/ε)/ε^2) were known for halfspaces. We also obtain PRGs with similar seed lengths for fooling halfspaces over the n-dimensional unit sphere. The main theme of our constructions and analysis is the use of invariance principles to construct pseudorandom generators. We also introduce the notion of monotone read-once branching programs, which is key to improving the dependence on the error rate ε for halfspaces. These techniques may be of independent interest.
international conference on machine learning | 2008
Raghu Meka; Prateek Jain; Constantine Caramanis; Inderjit S. Dhillon
Minimum rank problems arise frequently in machine learning applications and are notoriously difficult to solve due to the non-convex nature of the rank objective. In this paper, we present the first online learning approach for the problem of rank minimization of matrices over polyhedral sets. In particular, we present two online learning algorithms for rank minimization: our first algorithm is a multiplicative update method based on a generalized experts framework, while our second algorithm is a novel application of the online convex programming framework (Zinkevich, 2003). In the latter, we flip the role of the decision maker by making the decision maker search over the constraint space instead of feasible points, as is usually the case in online convex programming. A salient feature of our online learning approach is that it allows us to give provable approximation guarantees for the rank minimization problem over polyhedral sets. We demonstrate the effectiveness of our methods on synthetic examples, and on the real-life application of low-rank kernel learning.
symposium on the theory of computing | 2010
Ilias Diakonikolas; Prahladh Harsha; Adam R. Klivans; Raghu Meka; Prasad Raghavendra; Rocco A. Servedio; Li-Yang Tan
We give the first non-trivial upper bounds on the average sensitivity and noise sensitivity of degree-d polynomial threshold functions (PTFs). These bounds hold both for PTFs over the Boolean hypercube {-1,1}^n and for PTFs over R^n under the standard n-dimensional Gaussian distribution N(0, I_n). Our bound on the Boolean average sensitivity of PTFs represents progress towards the resolution of a conjecture of Gotsman and Linial [17], which states that the symmetric function slicing the middle d layers of the Boolean hypercube has the highest average sensitivity of all degree-d PTFs. Via the L_1 polynomial regression algorithm of Kalai et al. [22], our bounds on Gaussian and Boolean noise sensitivity yield polynomial-time agnostic learning algorithms for the broad class of constant-degree PTFs under these input distributions. The main ingredients used to obtain our bounds on both average and noise sensitivity of PTFs in the Gaussian setting are tail bounds and anti-concentration bounds on low-degree polynomials in Gaussian random variables [20, 7]. To obtain our bound on the Boolean average sensitivity of PTFs, we generalize the critical-index machinery of [37] (which in that work applies to halfspaces, i.e., degree-1 PTFs) to general PTFs. Together with the invariance principle of [30], this lets us extend our techniques from the Gaussian setting to the Boolean setting. Our bound on Boolean noise sensitivity is achieved via a simple reduction from upper bounds on average sensitivity of Boolean PTFs to corresponding bounds on noise sensitivity.
foundations of computer science | 2011
Parikshit Gopalan; Adam R. Klivans; Raghu Meka; Daniel Štefankovič; Santosh Vempala; Eric Vigoda
Given n elements with non-negative integer weights w_1,..., w_n and an integer capacity ...
symposium on the theory of computing | 2015
Raghu Meka; Aaron Potechin; Avi Wigderson
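The average sensitivity bounded in the Diakonikolas et al. abstract above (the expected number of coordinates whose flip changes the function, over a uniform point of the hypercube) can be computed exactly by brute force for small n. A minimal sketch for the majority function, a degree-1 PTF; the function names here are illustrative, not from the paper:

```python
from itertools import product

def average_sensitivity(f, n):
    """Exact average sensitivity of f over the uniform
    distribution on {-1,1}^n, by full enumeration."""
    total = 0
    for x in product((-1, 1), repeat=n):
        for i in range(n):
            y = list(x)
            y[i] = -y[i]  # flip coordinate i
            if f(x) != f(tuple(y)):
                total += 1
    return total / 2**n

def maj(x):
    """Majority: a degree-1 PTF (halfspace) for odd n."""
    return 1 if sum(x) > 0 else -1

print(average_sensitivity(maj, 3))  # 1.5
```

For majority on 3 bits, each coordinate is sensitive exactly when the other two bits disagree (probability 1/2), giving average sensitivity 3 · 1/2 = 1.5, matching the brute-force count.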
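To make the notion of "fooling with error ε" from the Meka–Zuckerman abstract concrete: a generator G fools a function f with error ε if the acceptance probability under G's output differs from the truly uniform one by at most ε. The toy generator below (repeat a short seed twice) only illustrates the definition; it is not the construction from the paper, and all names are illustrative:

```python
from itertools import product

def halfspace(x):
    # a fixed degree-1 PTF on {-1,1}^n: the sign of the unweighted sum
    return 1 if sum(x) >= 0 else -1

def fooling_error(n_seed):
    """Exact |Pr_uniform[f = 1] - Pr_G[f = 1]| for the toy
    generator G(s) = s concatenated with itself."""
    n = 2 * n_seed
    p_true = sum(halfspace(x) == 1 for x in product((-1, 1), repeat=n)) / 2**n
    p_gen = sum(halfspace(s + s) == 1 for s in product((-1, 1), repeat=n_seed)) / 2**n_seed
    return abs(p_true - p_gen)

print(fooling_error(4))  # 13/256 = 0.05078125
```

Here the seed is half as long as the output, yet the error is already noticeable; the point of the PRGs in the abstract is to get seed length logarithmic in n with error ε under control.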
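The ICML 2008 abstract above describes a multiplicative update method in a generalized experts framework. As a generic textbook sketch of that primitive only (not the paper's rank-specific algorithm; names and the loss model are illustrative):

```python
import math

def multiplicative_weights(losses, eta=0.5):
    """Generic experts algorithm: keep a weight per expert and
    decay it exponentially in that expert's loss each round.
    losses[t][i] is expert i's loss (in [0,1]) at round t."""
    n = len(losses[0])
    w = [1.0] * n
    total_loss = 0.0
    for round_losses in losses:
        z = sum(w)
        # play the weighted average of the experts this round
        total_loss += sum(wi * li for wi, li in zip(w, round_losses)) / z
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
    return total_loss, w

# expert 0 is always right, expert 1 always wrong:
loss, w = multiplicative_weights([[0.0, 1.0]] * 10)
```

After a few rounds almost all weight sits on the good expert, so the cumulative loss stays bounded rather than growing linearly in the number of rounds.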
Theory of Computing | 2009
Prahladh Harsha; Adam R. Klivans; Raghu Meka
symposium on the theory of computing | 2015
Mika Göös; Shachar Lovett; Raghu Meka; Thomas Watson; David Zuckerman
international workshop on approximation, randomization, and combinatorial optimization: algorithms and techniques | 2011
Daniel M. Kane; Raghu Meka; Jelani Nelson
international workshop on approximation, randomization, and combinatorial optimization: algorithms and techniques | 2009
Raghu Meka; David Zuckerman
symposium on the theory of computing | 2011
Parikshit Gopalan; Raghu Meka; Omer Reingold; David Zuckerman
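One abstract above begins "Given n elements with non-negative integer weights w_1,..., w_n and an integer capacity". Counting the subsets whose total weight fits under the capacity admits a classic exact pseudo-polynomial dynamic program; the sketch below is that baseline only (names are illustrative, and this is not the FPTAS developed in the literature):

```python
def count_knapsack(weights, capacity):
    """Count subsets of `weights` with total weight <= capacity.
    dp[c] = number of subsets of the items seen so far whose
    total weight is exactly c."""
    dp = [0] * (capacity + 1)
    dp[0] = 1  # the empty subset
    for w in weights:
        # iterate downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] += dp[c - w]
    return sum(dp)

print(count_knapsack([1, 2, 3], 3))  # 5: {}, {1}, {2}, {3}, {1,2}
```

The table has capacity + 1 entries and is updated once per item, so the running time is O(n · capacity), exponential in the bit length of the capacity, which is exactly why approximate counting schemes are interesting for this problem.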