
Publication


Featured research published by Peter Radchenko.


Journal of the American Statistical Association | 2010

Variable Selection Using Adaptive Nonlinear Interaction Structures in High Dimensions

Peter Radchenko; Gareth M. James

Numerous penalization-based methods have been proposed for fitting a traditional linear regression model in which the number of predictors, p, is large relative to the number of observations, n. Most of these approaches assume sparsity in the underlying coefficients and perform some form of variable selection. Recently, some of this work has been extended to nonlinear additive regression models. However, in many contexts one wishes to allow for the possibility of interactions among the predictors. This poses serious statistical and computational difficulties when p is large, as the number of candidate interaction terms is of order p². We introduce a new approach, “Variable selection using Adaptive Nonlinear Interaction Structures in High dimensions” (VANISH), that is based on a penalized least squares criterion and is designed for high-dimensional nonlinear problems. Our criterion is convex and enforces the heredity constraint: if an interaction term is added to the model, then the corresponding main effects are automatically included. We provide theoretical conditions under which VANISH will select the correct main effects and interactions. These conditions suggest that VANISH should outperform certain natural competitors when the true interaction structure is sufficiently sparse. Detailed simulation results are also provided, demonstrating that VANISH is computationally efficient and can be applied to nonlinear models involving thousands of terms while producing superior predictive performance over other approaches.
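
The heredity constraint and the p² blow-up are easy to make concrete. The sketch below is a minimal illustration only, not the VANISH estimator itself (in VANISH the selection falls out of the convex criterion): it counts the candidate interactions and applies a strong-heredity filter to a hypothetical, hand-picked set of selected main effects.

```python
from itertools import combinations

def candidate_interactions(p):
    """All unordered predictor pairs: p*(p-1)/2 of them, i.e. order p^2."""
    return list(combinations(range(p), 2))

def heredity_filter(pairs, selected_mains):
    """Strong heredity: keep (j, k) only if both main effects are selected."""
    s = set(selected_mains)
    return [(j, k) for (j, k) in pairs if j in s and k in s]

p = 1000
pairs = candidate_interactions(p)
print(len(pairs))  # 499500 candidate interactions for p = 1000

# Hypothetical selected main effects (a stand-in for what a fit would return)
selected = [3, 17, 42, 256]
print(heredity_filter(pairs, selected))
# [(3, 17), (3, 42), (3, 256), (17, 42), (17, 256), (42, 256)]
```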


Journal of the American Statistical Association | 2008

Variable inclusion and shrinkage algorithms

Peter Radchenko; Gareth M. James

The Lasso is a popular and computationally efficient procedure for automatically performing both variable selection and coefficient shrinkage in linear regression models. One limitation of the Lasso is that the same tuning parameter is used for both variable selection and shrinkage. As a result, it typically ends up selecting a model with too many variables in order to prevent overshrinkage of the regression coefficients. We suggest an improved class of methods called variable inclusion and shrinkage algorithms (VISA). Our approach is capable of selecting sparse models while avoiding overshrinkage problems and, because it uses a path algorithm, is also computationally efficient. We show through extensive simulations that VISA significantly outperforms the Lasso and also provides improvements over more recent procedures, such as the Dantzig selector, relaxed Lasso, and adaptive Lasso. In addition, we provide theoretical justification for VISA in terms of nonasymptotic bounds on the estimation error, which suggest that it should perform well even for large numbers of predictors. Finally, we extend the VISA methodology, path algorithm, and theoretical bounds to the generalized linear model framework.
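
The selection/shrinkage tension described above can be seen in a relaxed-lasso-style two-stage fit, which decouples the two roles: a heavily penalized lasso picks the support, then a lightly penalized refit on that support undoes the overshrinkage. This is a sketch of the general idea on simulated data, not the VISA path algorithm; the alpha values and problem sizes are arbitrary assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 200
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 3.0                          # sparse truth
y = X @ beta + rng.standard_normal(n)

# Stage 1: heavy penalty does the variable selection
sel = Lasso(alpha=1.0).fit(X, y)
support = np.flatnonzero(sel.coef_)

# Stage 2: light penalty on the selected variables controls the shrinkage
refit = Lasso(alpha=0.01).fit(X[:, support], y)

print(support)               # indices of the selected predictors
print(sel.coef_[support])    # stage-1 coefficients, visibly shrunk below 3
print(refit.coef_)           # stage-2 coefficients, much closer to 3.0
```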


Annals of Statistics | 2015

Functional additive regression

Yingying Fan; Gareth M. James; Peter Radchenko

We suggest a new method, called Functional Additive Regression, or FAR, for efficiently performing high-dimensional functional regression. FAR extends the usual linear regression model involving a functional predictor, X(t), and a scalar response, Y, in two key respects. First, FAR uses a penalized least squares optimization approach to efficiently deal with high-dimensional problems involving a large number of functional predictors. Second, FAR extends beyond the standard linear regression setting to fit general nonlinear additive models. We demonstrate that FAR can be implemented with a wide range of penalty functions using a highly efficient coordinate descent algorithm. Theoretical results are developed which provide motivation for the FAR optimization criterion. Finally, we show through simulations and two real data sets that FAR can significantly outperform competing methods.
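
A common first step in this setting is to represent each curve X(t) by a short vector of basis coefficients and then run a penalized regression on those coefficients. The sketch below does exactly that with a Fourier basis and a plain lasso, purely as an illustration of the high-dimensional functional setup on simulated curves; it is not the FAR criterion, which fits penalized nonlinear additive components.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, grid, K = 150, 50, 5
t = np.linspace(0.0, 1.0, grid)

# Fourier-type basis evaluated on the observation grid (K columns)
B = np.column_stack([np.ones(grid)] +
                    [np.sin(2 * np.pi * k * t) for k in range(1, K)])

# Simulated curves X_i(t); the response is driven by one basis score
scores = rng.standard_normal((n, K))
curves = scores @ B.T + 0.1 * rng.standard_normal((n, grid))
y = 2.0 * scores[:, 1] + 0.2 * rng.standard_normal(n)

# Project each curve onto the basis by least squares, then penalize the scores
coef_hat, *_ = np.linalg.lstsq(B, curves.T, rcond=None)  # shape (K, n)
fit = Lasso(alpha=0.05).fit(coef_hat.T, y)
print(fit.coef_)  # mass should concentrate on the second basis coefficient
```

A group penalty across each predictor's whole block of basis coefficients would be closer in spirit to treating functional predictors as units.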


Journal of Multivariate Analysis | 2015

High dimensional single index models

Peter Radchenko

This paper addresses the problem of fitting nonlinear regression models in high-dimensional situations, where the number of predictors, p, is large relative to the number of observations, n. Most of the research in this area has been conducted under the assumption that the regression function has a simple additive structure. This paper focuses instead on single index models, which are becoming increasingly popular in many scientific fields including biostatistics, economics, and financial econometrics. Novel methodology is presented for estimating high-dimensional single index models and simultaneously performing variable selection. A computationally efficient algorithm is provided for constructing a solution path. Asymptotic theory is developed for the proposed estimates of the regression function and the index coefficients in the high-dimensional setting. An investigation on both simulated and real data demonstrates the strong empirical performance of the proposed approach.
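
A single index model posits y = g(Xβ) + ε, with both the link g and the sparse index vector β unknown. As a rough illustration of why the problem is tractable (and emphatically not the estimator proposed in the paper), for a Gaussian design and a monotone link an ordinary lasso fit already tends to recover the direction of β up to scale; the simulation below, with arbitrary sizes and penalty, checks this.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p = 500, 1000
X = rng.standard_normal((n, p))             # Gaussian design
beta = np.zeros(p)
beta[[0, 1, 2]] = [1.0, -0.75, 0.5]         # sparse index vector

y = np.tanh(X @ beta) + 0.1 * rng.standard_normal(n)  # monotone link g = tanh

b = Lasso(alpha=0.1).fit(X, y).coef_

# Direction recovery: cosine between the estimate and the true index vector
cos = b @ beta / (np.linalg.norm(b) * np.linalg.norm(beta))
print(np.flatnonzero(b))   # support should concentrate on {0, 1, 2}
print(round(cos, 2))       # close to 1: direction recovered up to scale
```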


Annals of Statistics | 2008

Mixed-rates asymptotics

Peter Radchenko

A general method is presented for deriving the limiting behavior of estimators that are defined as the values of parameters optimizing an empirical criterion function. The asymptotic behavior of such estimators is typically deduced from uniform limit theorems for rescaled and reparametrized criterion functions. The new method can handle cases where the standard approach does not yield the complete limiting behavior of the estimator. The asymptotic analysis depends on a decomposition of criterion functions into sums of components with different rescalings. The method is illustrated by examples from Lasso-type estimation, k-means clustering, shorth estimation, and partial linear models.
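
For readers who want the setup in symbols, the display below states the standard M-estimation recipe the abstract alludes to. It is textbook background under simplifying assumptions, not the paper's own decomposition or notation.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
An M-estimator optimizes an empirical criterion,
\[
  \hat{\theta}_n = \operatorname*{arg\,min}_{\theta} M_n(\theta),
  \qquad
  M_n(\theta) = \frac{1}{n}\sum_{i=1}^{n} m(\theta; X_i),
\]
and with a single rate $r_n$ (e.g.\ $r_n = \sqrt{n}$ in smooth models) its
limit is read off the rescaled, reparametrized local process
\[
  G_n(h) = r_n^{2}\,\bigl( M_n(\theta_0 + h/r_n) - M_n(\theta_0) \bigr).
\]
In a mixed-rates problem the parameter splits as $\theta = (\alpha, \beta)$
and no single rescaling works: the components must be localized at different
rates,
\[
  \theta_0 + \bigl( h/r_n,\; g/s_n \bigr), \qquad r_n \neq s_n,
\]
so the criterion is decomposed into a sum of components, each normalized at
its own rate, before taking limits.
\end{document}
```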


Journal of the American Statistical Association | 2015

Index models for sparsely sampled functional data

Peter Radchenko; Xinghao Qiao; Gareth M. James

The regression problem involving functional predictors has many important applications, and a number of functional regression methods have been developed. However, a common complication in functional data analysis is sparsely observed curves, that is, predictors that are observed, with error, on a small subset of the possible time points. Such sparsely observed data induce an errors-in-variables model, where one must account for measurement error in the functional predictors. Faced with sparsely observed data, most current functional regression methods simply estimate the unobserved predictors and treat them as fully observed, failing to account for the extra uncertainty introduced by the measurement error. We propose a new functional errors-in-variables approach, sparse index model functional estimation (SIMFE), which uses a functional index model formulation to deal with sparsely observed predictors. SIMFE has several advantages over more traditional methods. First, the index model implements a nonlinear regression and uses an accurate supervised method to estimate the lower-dimensional space onto which the predictors should be projected. Second, SIMFE can be applied to both scalar and functional responses and to multiple predictors. Finally, SIMFE uses a mixed effects model to deal effectively with very sparsely observed functional predictors and to correctly model the measurement error.
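
The cost of treating noisily estimated predictors as fully observed is the classical errors-in-variables attenuation effect, which a short simulation makes visible. This is generic measurement-error background, not SIMFE: regressing on a noisy surrogate W = X + U shrinks the fitted slope by the reliability ratio var(X)/(var(X)+var(U)).

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 100_000, 2.0
x = rng.standard_normal(n)              # true but unobserved predictor
u = rng.standard_normal(n)              # measurement error, var(U) = 1
w = x + u                               # the observed, error-prone surrogate
y = beta * x + 0.5 * rng.standard_normal(n)

slope_oracle = np.polyfit(x, y, 1)[0]   # ~ 2.0: regression on the truth
slope_naive = np.polyfit(w, y, 1)[0]    # ~ 2.0 * 1/(1 + 1) = 1.0: attenuated
print(round(slope_oracle, 2), round(slope_naive, 2))
```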


Journal of Multivariate Analysis | 2017

Feature screening in large scale cluster analysis

Trambak Banerjee; Gourab Mukherjee; Peter Radchenko



Archive | 1999

On Homogeneity of Two Semi-Markov Samples

Larisa Afanasyeva; Peter Radchenko



Journal of the Royal Statistical Society: Series B (Statistical Methodology) | 2009

DASSO: connections between the Dantzig selector and lasso

Gareth M. James; Peter Radchenko; Jinchi Lv



Biometrika | 2009

A generalized Dantzig selector with shrinkage tuning

Gareth M. James; Peter Radchenko


Collaboration


Dive into Peter Radchenko's collaborations.

Top Co-Authors

Gareth M. James | University of Southern California
Rahul Mazumder | Massachusetts Institute of Technology
Antoine Dedieu | Massachusetts Institute of Technology
Jinchi Lv | University of Southern California
Trambak Banerjee | University of Southern California
Yingying Fan | University of Southern California
Xinghao Qiao | London School of Economics and Political Science