
Publication


Featured research published by Aaditya Ramdas.


PLOS ONE | 2014

Simultaneously uncovering the patterns of brain regions involved in different story reading subprocesses

Leila Wehbe; Brian Murphy; Partha Pratim Talukdar; Alona Fyshe; Aaditya Ramdas; Tom M. Mitchell

Story understanding involves many perceptual and cognitive subprocesses, from perceiving individual words, to parsing sentences, to understanding the relationships among the story characters. We present an integrated computational model of reading that incorporates these and additional subprocesses, simultaneously discovering their fMRI signatures. Our model predicts the fMRI activity associated with reading arbitrary text passages, well enough to distinguish which of two story segments is being read with 74% accuracy. This approach is the first to simultaneously track diverse reading subprocesses during complex story processing and predict the detailed neural representation of diverse story features, ranging from visual word properties to the mention of different story characters and different actions they perform. We construct brain representation maps that replicate many results from a wide range of classical studies, each of which focuses on one aspect of language processing, and offer new insights into which types of information are processed by the different areas involved. Additionally, this approach is promising for studying individual differences: it can be used to create single-subject maps that may potentially be used to measure reading comprehension and diagnose reading disorders.
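As a rough illustration of the kind of encoding-model pipeline the abstract describes, the sketch below fits a per-voxel ridge regression from story features to fMRI activity and then performs the two-alternative segment classification. The arrays `features` and `fmri`, their alignment, and the ridge penalty are hypothetical stand-ins; the paper's time-lagged hemodynamic features and cross-validation are omitted.

```python
# A minimal sketch of an encoding-model pipeline of the kind described above:
# per-voxel ridge regression from story features to fMRI activity, followed by a
# two-alternative test of which candidate segment produced the observed activity.
# Everything here is a hypothetical stand-in (random arrays, arbitrary penalty).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
features = rng.standard_normal((300, 40))    # story features per time point (hypothetical)
fmri = rng.standard_normal((300, 1000))      # voxel activity per time point (hypothetical)

# One ridge regression per voxel; sklearn handles the multi-output case directly.
model = Ridge(alpha=10.0).fit(features[:200], fmri[:200])

def classify_segment(feat_a, feat_b, observed):
    """Decide which of two candidate segments produced the observed fMRI block."""
    pred_a, pred_b = model.predict(feat_a), model.predict(feat_b)
    corr = lambda pred: np.corrcoef(pred.ravel(), observed.ravel())[0, 1]
    return "a" if corr(pred_a) > corr(pred_b) else "b"

# Compare two held-out segments against the activity recorded for segment "a".
print(classify_segment(features[200:250], features[250:300], fmri[200:250]))
```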


Journal of Computational and Graphical Statistics | 2016

Fast and Flexible ADMM Algorithms for Trend Filtering

Aaditya Ramdas; Ryan J. Tibshirani

This article presents a fast and robust algorithm for trend filtering, a recently developed nonparametric regression tool. It has been shown that, for estimating functions whose derivatives are of bounded variation, trend filtering achieves the minimax optimal error rate, while other popular methods like smoothing splines and kernels do not. Standing in the way of a more widespread practical adoption, however, is a lack of scalable and numerically stable algorithms for fitting trend filtering estimates. This article presents a highly efficient, specialized alternating direction method of multipliers (ADMM) routine for trend filtering. Our algorithm is competitive with the specialized interior point methods that are currently in use, and yet is far more numerically robust. Furthermore, the proposed ADMM implementation is very simple, and, importantly, it is flexible enough to extend to many interesting related problems, such as sparse trend filtering and isotonic trend filtering. Software for our method is freely available, in both the C and R languages.
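To make the alternating updates concrete, here is a minimal generic ADMM sketch for trend filtering, with the auxiliary variable placed at the full difference operator and a soft-thresholding step for the l1 subproblem. This is not the specialized routine from the paper (which places the auxiliary variable at a lower-order difference and solves a fused-lasso subproblem); the `difference_matrix` helper and all parameter values are illustrative.

```python
# A minimal, generic ADMM sketch for k-th order trend filtering,
#     minimize_beta  0.5 * ||y - beta||_2^2 + lam * ||D beta||_1,
# where D is the (k+1)-st order discrete difference operator.
# NOTE: illustrative generic splitting only, not the paper's specialized routine.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def difference_matrix(n, order):
    """Return the (n - order) x n sparse discrete difference operator."""
    D = sp.eye(n, format="csc")
    for _ in range(order):
        m = D.shape[0]
        D1 = sp.diags([-np.ones(m - 1), np.ones(m - 1)], [0, 1], shape=(m - 1, m))
        D = D1 @ D
    return sp.csc_matrix(D)

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def trend_filter_admm(y, lam, k=1, rho=1.0, n_iter=500):
    n = len(y)
    D = difference_matrix(n, k + 1)
    # The beta-update solves (I + rho * D^T D) beta = y + rho * D^T (alpha - u);
    # the matrix is banded and never changes, so factor it once.
    lhs = splu(sp.csc_matrix(sp.eye(n) + rho * (D.T @ D)))
    alpha = np.zeros(D.shape[0])
    u = np.zeros(D.shape[0])
    beta = y.astype(float).copy()
    for _ in range(n_iter):
        beta = lhs.solve(y + rho * D.T @ (alpha - u))        # quadratic subproblem
        alpha = soft_threshold(D @ beta + u, lam / rho)      # l1 proximal step
        u = u + D @ beta - alpha                             # scaled dual update
    return beta

# Usage: denoise a noisy piecewise-linear signal (k = 1 penalizes second differences).
t = np.linspace(0, 1, 200)
y = np.abs(t - 0.5) + 0.05 * np.random.default_rng(0).standard_normal(200)
beta_hat = trend_filter_admm(y, lam=1.0, k=1)
```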


SIAM Journal on Matrix Analysis and Applications | 2015

Convergence Properties of the Randomized Extended Gauss--Seidel and Kaczmarz Methods

Anna Ma; Deanna Needell; Aaditya Ramdas

The Kaczmarz and Gauss-Seidel methods both solve a linear system $\mathbf{X}\boldsymbol{\beta} = \mathbf{y}$ by iteratively refining the solution estimate. Recent interest in these methods has been sparked by a proof of Strohmer and Vershynin which shows that the randomized Kaczmarz method converges linearly in expectation to the solution. Lewis and Leventhal then proved a similar result for the randomized Gauss-Seidel algorithm. However, the behavior of both methods depends heavily on whether the system is underdetermined or overdetermined, and whether it is consistent or not. Here we provide a unified theory of both methods and their variants for these different settings, and draw connections between the two approaches. In doing so, we also provide a proof that an extended version of randomized Gauss-Seidel converges linearly to the least-norm solution in the underdetermined case (where the usual randomized Gauss-Seidel fails to converge). We detail analytically and empirically the convergence properties of both methods and their extended variants in all possible system settings. With this result, a complete and rigorous theory of both methods is furnished.
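For reference, the sketch below implements the basic randomized Kaczmarz (row-action) and randomized Gauss-Seidel (column-action) iterations for $\mathbf{X}\boldsymbol{\beta} = \mathbf{y}$. These are the textbook variants only, not the extended versions analyzed in the paper, and the sampling probabilities and iteration counts are illustrative.

```python
# Textbook randomized Kaczmarz (row action) and randomized Gauss-Seidel
# (column action) iterations for X beta = y. Basic variants only; the paper's
# "extended" versions, which also handle inconsistent and underdetermined
# systems, add an extra projection sequence not shown here.
import numpy as np

def randomized_kaczmarz(X, y, n_iter=20_000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    row_norms = np.einsum("ij,ij->i", X, X)
    probs = row_norms / row_norms.sum()            # sample rows prop. to ||x_i||^2
    beta = np.zeros(d)
    for _ in range(n_iter):
        i = rng.choice(n, p=probs)
        # Project the iterate onto the hyperplane {b : x_i . b = y_i}.
        beta += (y[i] - X[i] @ beta) / row_norms[i] * X[i]
    return beta

def randomized_gauss_seidel(X, y, n_iter=20_000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    col_norms = np.einsum("ij,ij->j", X, X)
    probs = col_norms / col_norms.sum()            # sample columns prop. to ||X_j||^2
    beta = np.zeros(d)
    residual = y.astype(float).copy()              # maintains y - X @ beta
    for _ in range(n_iter):
        j = rng.choice(d, p=probs)
        step = X[:, j] @ residual / col_norms[j]   # exact minimization over coordinate j
        beta[j] += step
        residual -= step * X[:, j]
    return beta

# Usage on a consistent overdetermined system: both iterates approach beta_true.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 50))
beta_true = rng.standard_normal(50)
y = X @ beta_true
print(np.linalg.norm(randomized_kaczmarz(X, y) - beta_true),
      np.linalg.norm(randomized_gauss_seidel(X, y) - beta_true))
```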


algorithmic learning theory | 2013

Algorithmic Connections between Active Learning and Stochastic Convex Optimization

Aaditya Ramdas; Aarti Singh

Interesting theoretical associations have been established by recent papers between the fields of active learning and stochastic convex optimization due to the common role of feedback in sequential querying mechanisms. In this paper, we continue this thread in two parts by exploiting these relations for the first time to yield novel algorithms in both fields, further motivating the study of their intersection. First, inspired by a recent optimization algorithm that was adaptive to unknown uniform convexity parameters, we present a new active learning algorithm for one-dimensional thresholds that can yield minimax rates by adapting to unknown noise parameters. Next, we show that one can perform d-dimensional stochastic minimization of smooth uniformly convex functions when only granted oracle access to noisy gradient signs along any coordinate instead of real-valued gradients, by using a simple randomized coordinate descent procedure where each line search can be solved by 1-dimensional active learning, provably achieving the same error convergence rate as having the entire real-valued gradient. Combining these two parts yields an algorithm that solves stochastic convex optimization of uniformly convex and smooth functions using only noisy gradient signs by repeatedly performing active learning; it achieves optimal rates and is adaptive to all unknown convexity and smoothness parameters.
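The following toy sketch illustrates the high-level scheme in the abstract: randomized coordinate descent in which each one-dimensional line search is carried out by bisection on noisy gradient signs, with a majority vote over repeated queries. It is only an illustration under assumed noise levels and query budgets, not the paper's algorithm, whose query schedule is what yields the adaptive minimax rates.

```python
# A toy sketch of the scheme described above: randomized coordinate descent
# where each 1-d line search uses only noisy signs of the directional
# derivative, located by bisection with a majority vote over repeated queries.
# Purely illustrative: noise level, query budget, and search box are made up.
import numpy as np

def noisy_grad_sign(grad_fn, x, coord, rng, flip_prob=0.1):
    """Sign of the partial derivative along `coord`, flipped with prob < 1/2."""
    s = np.sign(grad_fn(x)[coord])
    return -s if rng.random() < flip_prob else s

def sign_line_search(grad_fn, x, coord, lo, hi, rng, rounds=25, queries=15):
    """Bisection for the 1-d minimizer along `coord`, using only sign queries."""
    for _ in range(rounds):
        mid = 0.5 * (lo + hi)
        x_mid = x.copy()
        x_mid[coord] = mid
        votes = sum(noisy_grad_sign(grad_fn, x_mid, coord, rng) for _ in range(queries))
        if votes > 0:      # derivative likely positive -> minimizer lies to the left
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def sign_coordinate_descent(grad_fn, x0, box=10.0, n_iter=150, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        j = rng.integers(len(x))                                # random coordinate
        x[j] = sign_line_search(grad_fn, x, j, -box, box, rng)  # 1-d "active learning" step
    return x

# Usage: a smooth, strongly convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.diag([1.0, 3.0, 0.5])
b = np.array([1.0, -2.0, 0.5])
x_hat = sign_coordinate_descent(lambda x: A @ x - b, np.zeros(3))
print(x_hat, np.linalg.solve(A, b))   # the two should be close
```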


SIAM Journal on Scientific Computing | 2017

Rows versus Columns: Randomized Kaczmarz or Gauss--Seidel for Ridge Regression

Ahmed Hefny; Deanna Needell; Aaditya Ramdas



international symposium on information theory | 2017

Decoding from pooled data: Phase transitions of message passing

Ahmed El Alaoui; Aaditya Ramdas; Florent Krzakala; Lenka Zdeborová; Michael I. Jordan



The Annals of Applied Statistics | 2015

Regularized brain reading with shrinkage and smoothing

Leila Wehbe; Aaditya Ramdas; Rebecca C. Steorts; Cosma Shalizi



SIAM Journal on Matrix Analysis and Applications | 2018

Iterative Methods for Solving Factorized Linear Systems

Anna Ma; Deanna Needell; Aaditya Ramdas



international symposium on information theory | 2016

Minimax lower bounds for linear independence testing

Aaditya Ramdas; David Isenberg; Aarti Singh; Larry Wasserman



Optimization Methods & Software | 2016

Towards a deeper geometric, analytic and algorithmic understanding of margins

Aaditya Ramdas; Javier Peña


Collaboration


Dive into Aaditya Ramdas's collaborations.

Top Co-Authors

Aarti Singh (Carnegie Mellon University)
Larry Wasserman (Carnegie Mellon University)
Barnabás Póczos (Carnegie Mellon University)
Sashank J. Reddi (Carnegie Mellon University)
Leila Wehbe (Carnegie Mellon University)
Deanna Needell (Claremont McKenna College)
Fanny Yang (University of California)