Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ryan J. Tibshirani is active.

Publication


Featured research published by Ryan J. Tibshirani.


Annals of Statistics | 2011

The solution path of the generalized lasso

Ryan J. Tibshirani; Jonathan Taylor

We present a path algorithm for the generalized lasso problem. This problem penalizes the ℓ₁ norm of a matrix D times the coefficient vector, and has a wide range of applications, dictated by the choice of D. Our algorithm is based on solving the dual of the generalized lasso, which greatly facilitates computation of the path. For D = I (the usual lasso), we draw a connection between our approach and the well-known LARS algorithm. For an arbitrary D, we derive an unbiased estimate of the degrees of freedom of the generalized lasso fit. This estimate turns out to be quite intuitive in many applications.


Annals of Statistics | 2014

A significance test for the lasso

Richard A. Lockhart; Jonathan Taylor; Ryan J. Tibshirani; Robert Tibshirani

In the sparse linear regression setting, we consider testing the significance of the predictor variable that enters the current lasso model, in the sequence of models visited along the lasso solution path. We propose a simple test statistic based on lasso fitted values, called the covariance test statistic, and show that when the true model is linear, this statistic has an Exp(1) asymptotic distribution under the null hypothesis (the null being that all truly active variables are contained in the current lasso model). Our proof of this result for the special case of the first predictor to enter the model (i.e., testing for a single significant predictor variable against the global null) requires only weak assumptions on the predictor matrix X. On the other hand, our proof for a general step in the lasso path places further technical assumptions on X and the generative model, but still allows for the important high-dimensional case p > n, and does not necessarily require that the current lasso model achieves perfect recovery of the truly active variables. Of course, for testing the significance of an additional variable between two nested linear models, one typically uses the chi-squared test, comparing the drop in residual sum of squares (RSS) to a σ²χ₁² distribution. But when this additional variable is not fixed, and has been chosen adaptively or greedily, this test is no longer appropriate: adaptivity makes the drop in RSS stochastically much larger than σ²χ₁² under the null hypothesis. Our analysis explicitly accounts for adaptivity, as it must, since the lasso builds an adaptive sequence of linear models as the tuning parameter λ decreases. In this analysis, shrinkage plays a key role: though additional variables are chosen adaptively, the coefficients of lasso active variables are shrunken due to the ℓ₁ penalty. Therefore, the test statistic (which is based on lasso fitted values) is in a sense balanced by these two opposing properties, adaptivity and shrinkage, and its null distribution is tractable and asymptotically Exp(1).


Electronic Journal of Statistics | 2013

The lasso problem and uniqueness

Ryan J. Tibshirani

The lasso is a popular tool for sparse linear regression, especially for problems in which the number of variables p exceeds the number of observations n. But when p > n, the lasso criterion is not strictly convex, and hence it may not have a unique minimum. An important question is: when is the lasso solution well-defined (unique)? We review results from the literature, which show that if the predictor variables are drawn from a continuous probability distribution, then there is a unique lasso solution with probability one, regardless of the sizes of n and p. We also show that this result extends easily to ℓ₁-penalized minimization problems over a wide range of loss functions. A second important question is: how can we manage the case of non-uniqueness in lasso solutions? In light of the aforementioned result, this case really only arises when some of the predictor variables are discrete, or when some post-processing has been performed on continuous predictor measurements. Though we certainly cannot claim to provide a complete answer to such a broad question, we do present progress towards understanding some aspects of non-uniqueness. First, we extend the LARS algorithm for computing the lasso solution path to cover the non-unique case, so that this path algorithm works for any predictor matrix. Next, we derive a simple method for computing the component-wise uncertainty in lasso solutions of any given problem instance, based on linear programming. Finally, we review results from the literature on some of the unifying properties of lasso solutions, and also point out particular forms of solutions that have distinctive properties.


Annals of Statistics | 2012

Degrees of freedom in lasso problems

Ryan J. Tibshirani; Jonathan Taylor

We derive the degrees of freedom of the lasso fit, placing no assumptions on the predictor matrix X. Like the well-known result of Zou, Hastie and Tibshirani [Ann. Statist. 35 (2007) 2173-2192], which gives the degrees of freedom of the lasso fit when …


The Annals of Applied Statistics | 2009

A bias correction for the minimum error rate in cross-validation

Ryan J. Tibshirani; Robert Tibshirani


Technometrics | 2011

Nearly-Isotonic Regression

Ryan J. Tibshirani; Holger Hoefling; Robert Tibshirani


PLOS Computational Biology | 2015

Flexible Modeling of Epidemics with an Empirical Bayes Framework

Logan Brooks; David C. Farrow; Sangwon Hyun; Ryan J. Tibshirani; Roni Rosenfeld


Journal of Computational and Graphical Statistics | 2016

Fast and Flexible ADMM Algorithms for Trend Filtering

Aaditya Ramdas; Ryan J. Tibshirani


Annals of Statistics | 2018

Uniform asymptotic inference and the bootstrap after model selection

Ryan J. Tibshirani; Alessandro Rinaldo; Robert Tibshirani; Larry Wasserman


Annals of Statistics | 2016

Nonparametric modal regression

Yen-Chi Chen; Christopher R. Genovese; Ryan J. Tibshirani; Larry Wasserman
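The 2011 path-algorithm abstract and the 2016 ADMM paper both concern problems of the form minimize ½‖y − Xβ‖² + λ‖Dβ‖₁. A plain ADMM sketch of that problem class is below; it is a generic illustration, not the authors' dual path algorithm or their specialized trend-filtering ADMM, and all function and variable names here are made up.

```python
import numpy as np

def soft_threshold(a, kappa):
    """Elementwise soft-thresholding: the proximal operator of kappa*||.||_1."""
    return np.sign(a) * np.maximum(np.abs(a) - kappa, 0.0)

def generalized_lasso_admm(y, X, D, lam, rho=1.0, n_iter=500):
    """Minimize 0.5*||y - X b||^2 + lam*||D b||_1 by ADMM on the split z = D b."""
    z = np.zeros(D.shape[0])
    u = np.zeros(D.shape[0])
    # The b-update is a ridge-like linear solve; the matrix is fixed across
    # iterations, so a serious implementation would factor it once up front.
    A = X.T @ X + rho * (D.T @ D)
    Xty = X.T @ y
    for _ in range(n_iter):
        b = np.linalg.solve(A, Xty + rho * (D.T @ (z - u)))  # b-update
        z = soft_threshold(D @ b + u, lam / rho)             # z-update (prox)
        u = u + D @ b - z                                    # dual update
    return b

# Sanity check: with X = I and D = I this is the ordinary lasso, whose
# solution for an identity design is soft-thresholding of y.
y = np.array([3.0, -2.0, 0.5, 0.0])
b = generalized_lasso_admm(y, np.eye(4), np.eye(4), lam=1.0)
print(np.round(b, 3))  # approximately [ 2. -1.  0.  0.]
```

Different choices of D recover different problems: D = I gives the lasso, while a first-difference matrix D gives the fused lasso / trend filtering settings mentioned in these papers.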
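The significance-test abstract argues that when the entering predictor is chosen adaptively, the drop in RSS is stochastically much larger than the σ²χ₁² benchmark. A small hand-rolled simulation with orthonormal predictors (my own illustration, not the covariance test itself) makes that inflation visible:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 100, 50, 2000

# Orthonormal predictors, so the RSS drop from fitting a single unit-norm
# column x_j is exactly (x_j' y)^2.
X, _ = np.linalg.qr(rng.standard_normal((n, p)))

fixed_drop = np.empty(reps)
adaptive_drop = np.empty(reps)
for r in range(reps):
    y = rng.standard_normal(n)      # global null: y is pure noise, sigma = 1
    drops = (X.T @ y) ** 2          # RSS drop for each candidate predictor
    fixed_drop[r] = drops[0]        # predictor fixed in advance
    adaptive_drop[r] = drops.max()  # predictor chosen adaptively (best of p)

print(fixed_drop.mean())     # near 1, the chi-squared(1) mean
print(adaptive_drop.mean())  # several times larger under the same null
```

The fixed-predictor drop behaves like χ₁², but the adaptively chosen one is the maximum of p such variables, which is why a naive chi-squared comparison is anti-conservative; the covariance test statistic corrects for this via the lasso's shrinkage.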

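The uniqueness abstract notes that non-unique lasso solutions essentially only arise with discrete or duplicated predictors. A minimal toy construction (my own, not from the paper) with two identical columns exhibits distinct coefficient vectors that the lasso criterion cannot tell apart:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 10, 0.1
x = rng.standard_normal(n)
X = np.column_stack([x, x])  # two identical predictor columns
y = rng.standard_normal(n)

def lasso_criterion(beta):
    return 0.5 * np.sum((y - X @ beta) ** 2) + lam * np.sum(np.abs(beta))

# Splitting a total weight c across the two copies (with the same sign)
# leaves both the fitted values and the l1 norm unchanged.
c = 1.3  # arbitrary illustrative weight
b1 = np.array([c, 0.0])
b2 = np.array([0.3 * c, 0.7 * c])

assert np.allclose(X @ b1, X @ b2)                           # identical fits
assert np.isclose(lasso_criterion(b1), lasso_criterion(b2))  # identical criterion
```

So if any lasso solution puts weight on such a pair of columns, every same-sign split of that weight is also a solution: the criterion is not strictly convex along that segment, which is the non-uniqueness the paper studies.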
Collaboration


Dive into Ryan J. Tibshirani's collaborations.

Top Co-Authors

James Sharpnack (Carnegie Mellon University)
Sangwon Hyun (Carnegie Mellon University)
Yu-Xiang Wang (Carnegie Mellon University)
Roni Rosenfeld (Carnegie Mellon University)
David C. Farrow (Carnegie Mellon University)
Logan Brooks (Carnegie Mellon University)