Network


Latest external collaborations at the country level.

Hotspot


Research topics where Po-Ling Loh is active.

Publications


Featured research published by Po-Ling Loh.


neural information processing systems | 2011

High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity

Po-Ling Loh; Martin J. Wainwright

Although the standard formulations of prediction problems involve fully-observed and noiseless data drawn in an i.i.d. manner, many applications involve noisy and/or missing data, possibly involving dependence, as well. We study these issues in the context of high-dimensional sparse linear regression, and propose novel estimators for the cases of noisy, missing and/or dependent data. Many standard approaches to noisy or missing data, such as those using the EM algorithm, lead to optimization problems that are inherently nonconvex, and it is difficult to establish theoretical guarantees on practical algorithms. While our approach also involves optimizing nonconvex programs, we are able to both analyze the statistical error associated with any global optimum, and more surprisingly, to prove that a simple algorithm based on projected gradient descent will converge in polynomial time to a small neighborhood of the set of all global minimizers. On the statistical side, we provide nonasymptotic bounds that hold with high probability for the cases of noisy, missing and/or dependent data. On the computational side, we prove that under the same types of conditions required for statistical consistency, the projected gradient descent algorithm is guaranteed to converge at a geometric rate to a near-global minimizer. We illustrate these theoretical predictions with simulations, showing close agreement with the predicted scalings.
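For concreteness, here is a minimal sketch of the kind of projected gradient routine this abstract refers to, written for the additive-noise case where the observed covariates are Z = X + W with known noise covariance Sigma_w, so that Gamma = Z'Z/n - Sigma_w and gamma = Z'y/n are unbiased (possibly indefinite) surrogates for the clean covariance quantities. The function names, the l1-ball radius, the step size, and the iteration count below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the l1 ball of the given radius
    (the standard sorting-based algorithm of Duchi et al., 2008)."""
    if np.sum(np.abs(v)) <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def corrected_lasso_pgd(Z, y, Sigma_w, radius, step=None, n_iter=500):
    """Projected gradient descent on the corrected quadratic objective
    0.5 * b' Gamma b - gamma' b over the constraint ||b||_1 <= radius.
    Gamma may be indefinite, which is the nonconvex case the paper studies."""
    n, p = Z.shape
    Gamma = Z.T @ Z / n - Sigma_w   # additive-noise correction (illustrative setting)
    gamma = Z.T @ y / n
    if step is None:
        step = 1.0 / np.linalg.norm(Gamma, 2)   # conservative step size
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = Gamma @ beta - gamma
        beta = project_l1_ball(beta - step * grad, radius)
    return beta
```

The projection keeps every iterate inside the l1 ball, which is what allows the iterates to remain well behaved even though the corrected quadratic objective need not be convex.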


neural information processing systems | 2012

Structure estimation for discrete graphical models: Generalized covariance matrices and their inverses

Po-Ling Loh; Martin J. Wainwright

We investigate the relationship between the structure of a discrete graphical model and the support of the inverse of a generalized covariance matrix. We show that for certain graph structures, the support of the inverse covariance matrix of indicator variables on the vertices of a graph reflects the conditional independence structure of the graph. Our work extends results that have previously been established only in the context of multivariate Gaussian graphical models, thereby addressing an open question about the significance of the inverse covariance matrix of a non-Gaussian distribution. The proof exploits a combination of ideas from the geometry of exponential families, junction tree theory and convex analysis. These population-level results have various consequences for graph selection methods, both known and novel, including a novel method for structure estimation for missing or corrupted observations. We provide nonasymptotic guarantees for such methods and illustrate the sharpness of these predictions via simulations.
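As a rough illustration of the population-level idea, the sketch below encodes each discrete variable with indicator (one-hot) features, inverts the empirical covariance of those indicators, and reads edges off the large entries of the corresponding blocks. This is a hypothetical sketch, not the authors' procedure: the paper's guarantees apply to particular graph classes (and to augmented sets of sufficient statistics), and the threshold here is an arbitrary illustrative tuning parameter. It assumes every variable takes integer values 0, ..., n_levels - 1.

```python
import numpy as np
from itertools import combinations

def indicator_features(X, n_levels):
    """One-hot encode each discrete variable, dropping the last level of each
    variable so the covariance of the indicators can remain non-singular."""
    n, p = X.shape
    blocks, index = [], []          # index maps each indicator column to its variable
    for s in range(p):
        for level in range(n_levels - 1):
            blocks.append((X[:, s] == level).astype(float))
            index.append(s)
    return np.column_stack(blocks), np.array(index)

def graph_from_generalized_covariance(X, n_levels, threshold=0.1):
    """Estimate graph structure by thresholding the inverse of the covariance
    matrix of vertex indicator variables (a sketch of the population-level idea;
    assumes the empirical covariance is invertible)."""
    Phi, index = indicator_features(X, n_levels)
    precision = np.linalg.inv(np.cov(Phi, rowvar=False))
    p = X.shape[1]
    edges = set()
    for s, t in combinations(range(p), 2):
        block = precision[np.ix_(index == s, index == t)]
        if np.max(np.abs(block)) > threshold:
            edges.add((s, t))
    return edges
```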


Annals of Statistics | 2017

Statistical consistency and asymptotic normality for high-dimensional robust M-estimators

Po-Ling Loh

We study theoretical properties of regularized robust M-estimators, applicable when data are drawn from a sparse high-dimensional linear model and contaminated by heavy-tailed distributions and/or outliers in the additive errors and covariates. We first establish a form of local statistical consistency for the penalized regression estimators under fairly mild conditions on the error distribution: When the derivative of the loss function is bounded and satisfies a local restricted curvature condition, all stationary points within a constant radius of the true regression vector converge at the minimax rate enjoyed by the Lasso with sub-Gaussian errors. When an appropriate nonconvex regularizer is used in place of an l_1-penalty, we show that such stationary points are in fact unique and equal to the local oracle solution with the correct support---hence, results on asymptotic normality in the low-dimensional case carry over immediately to the high-dimensional setting. This has important implications for the efficiency of regularized nonconvex M-estimators when the errors are heavy-tailed. Our analysis of the local curvature of the loss function also has useful consequences for optimization when the robust regression function and/or regularizer is nonconvex and the objective function possesses stationary points outside the local region. We show that as long as a composite gradient descent algorithm is initialized within a constant radius of the true regression vector, successive iterates will converge at a linear rate to a stationary point within the local region. Furthermore, the global optimum of a convex regularized robust regression function may be used to obtain a suitable initialization. The result is a novel two-step procedure that uses a convex M-estimator to achieve consistency and a nonconvex M-estimator to increase efficiency.
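A minimal sketch of the two-step procedure described above, assuming a Huber loss with clipping parameter delta and an MCP penalty with parameter gamma as the nonconvex regularizer; both choices, and all tuning values, are illustrative assumptions rather than the paper's prescription, which covers a general class of bounded-derivative losses and amenable regularizers. The composite gradient step alternates a gradient step on the loss with a proximal step on the penalty.

```python
import numpy as np

def huber_grad(residual, delta):
    """Derivative of the Huber loss with respect to the residual: the clipped residual."""
    return np.clip(residual, -delta, delta)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def mcp_prox(v, lam, gamma, step):
    """Proximal step for the MCP penalty (standard closed form, assumes gamma > step)."""
    return np.where(np.abs(v) <= gamma * lam,
                    gamma / (gamma - step) * soft_threshold(v, step * lam),
                    v)

def composite_gd(X, y, lam, delta=1.345, step=None, nonconvex=False, gamma=3.0,
                 beta0=None, n_iter=500):
    """Composite gradient descent for Huber-loss regression with an l1 (convex)
    or MCP (nonconvex) regularizer."""
    n, p = X.shape
    if step is None:
        step = n / np.linalg.norm(X, 2) ** 2   # 1 / smoothness constant of the Huber loss
    beta = np.zeros(p) if beta0 is None else beta0.copy()
    for _ in range(n_iter):
        grad = -X.T @ huber_grad(y - X @ beta, delta) / n
        z = beta - step * grad
        beta = mcp_prox(z, lam, gamma, step) if nonconvex else soft_threshold(z, step * lam)
    return beta

# Two-step procedure sketched in the abstract: a convex M-estimator for consistency,
# then a nonconvex refinement initialized at the convex solution for efficiency.
# beta_init = composite_gd(X, y, lam)                                    # Huber + l1
# beta_hat  = composite_gd(X, y, lam, nonconvex=True, beta0=beta_init)   # Huber + MCP
```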


Annals of Statistics | 2017

Support recovery without incoherence: A case for nonconvex regularization

Po-Ling Loh; Martin J. Wainwright

We demonstrate that the primal-dual witness proof method may be used to establish variable selection consistency and \ell_\infty-bounds for sparse regression problems, even when the loss function and/or regularizer are nonconvex. Using this method, we derive two theorems concerning support recovery and \ell_\infty-guarantees for the regression estimator in a general setting. Our results provide rigorous theoretical justification for the use of nonconvex regularization: For certain nonconvex regularizers with vanishing derivative away from the origin, support recovery consistency may be guaranteed without requiring the typical incoherence conditions present in \ell_1-based methods. We then derive several corollaries that illustrate the wide applicability of our method to analyzing composite objective functions involving losses such as least squares, nonconvex modified least squares for errors-in-variables linear regression, the negative log likelihood for generalized linear models, and the graphical Lasso. We conclude with empirical studies to corroborate our theoretical predictions.


international symposium on information theory | 2012

Corrupted and missing predictors: Minimax bounds for high-dimensional linear regression

Po-Ling Loh; Martin J. Wainwright


algorithmic learning theory | 2013

Faster Hoeffding Racing: Bernstein Races via Jackknife Estimates

Po-Ling Loh; Sebastian Nowozin


international symposium on information theory | 2009

The robustness of stochastic switching networks

Po-Ling Loh; Hongchao Zhou; Jehoshua Bruck


IEEE Transactions on Network Science and Engineering | 2017

Confidence Sets for the Source of a Diffusion in Regular Trees

Justin Khim; Po-Ling Loh


Random Structures and Algorithms | 2018

Varun Jog; Po-Ling Loh


Electronic Journal of Statistics | 2018

Po-Ling Loh; Xin Lu Tan

Collaboration


Dive into Po-Ling Loh's collaborations.

Top Co-Authors

Varun Jog, University of Wisconsin-Madison
Justin Khim, University of Pennsylvania
Andre Wibisono, University of California
Hongchao Zhou, California Institute of Technology
Jehoshua Bruck, California Institute of Technology
Anand Sriramulu, University of Pennsylvania
Boon Thau Loo, University of Pennsylvania
Brian Litt, University of Pennsylvania
David Issadore, University of Pennsylvania