Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zhaoran Wang is active.

Publication


Featured research published by Zhaoran Wang.


Annals of Statistics | 2014

Optimal Computational and Statistical Rates of Convergence for Sparse Nonconvex Learning Problems

Zhaoran Wang; Han Liu; Tong Zhang

We provide a theoretical analysis of the statistical and computational properties of penalized M-estimators that can be formulated as the solution to a possibly nonconvex optimization problem. Many important estimators fall into this category, including least squares regression with nonconvex regularization, generalized linear models with nonconvex regularization, and sparse elliptical random design regression. For these problems, it is intractable to calculate the global solution due to the nonconvex formulation. In this paper, we propose an approximate regularization path-following method for solving a variety of learning problems with nonconvex objective functions. Under a unified analytic framework, we simultaneously provide explicit statistical and computational rates of convergence for any local solution attained by the algorithm. Computationally, our algorithm attains a global geometric rate of convergence for calculating the full regularization path, which is optimal among all first-order algorithms. Unlike most existing methods, which only attain geometric rates of convergence for a single regularization parameter, our algorithm calculates the full regularization path with the same iteration complexity. In particular, we provide a refined iteration complexity bound to sharply characterize the performance of each stage along the regularization path. Statistically, we provide sharp sample complexity analysis for all the approximate local solutions along the regularization path. In particular, our analysis improves upon existing results by providing a more refined sample complexity bound as well as an exact support recovery result for the final estimator. These results show that the final estimator attains an oracle statistical property due to the use of the nonconvex penalty.
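The path-following scheme described above can be pictured as warm-started proximal gradient descent over a geometrically decreasing grid of regularization parameters. The following is a minimal sketch under assumed choices (least squares loss, an MCP penalty with a closed-form proximal step, a geometric lambda grid); the function names and defaults are illustrative, not the paper's algorithm or code.

```python
import numpy as np

def mcp_prox(z, lam, gamma, step):
    """Closed-form proximal map of the MCP penalty (valid when gamma > step)."""
    shrunk = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0) / (1.0 - step / gamma)
    return np.where(np.abs(z) > gamma * lam, z, shrunk)

def path_following(X, y, n_stages=10, lam_min_ratio=0.1, gamma=3.0, iters=100):
    """Warm-started proximal gradient over a geometric grid of lambdas."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2    # 1 / Lipschitz constant of the loss gradient
    lam_max = np.max(np.abs(X.T @ y)) / n   # smallest lambda giving an all-zero solution
    lams = np.geomspace(lam_max, lam_max * lam_min_ratio, n_stages)
    beta, path = np.zeros(p), []
    for lam in lams:                        # each stage warm-starts from the last solution
        for _ in range(iters):
            grad = X.T @ (X @ beta - y) / n
            beta = mcp_prox(beta - step * grad, lam, gamma, step)
        path.append(beta.copy())
    return lams, path
```

Warm-starting each stage from the previous stage's solution is what lets the full path be traced at a cost comparable to a single fit, mirroring the abstract's claim that the whole path is computed with the same iteration complexity.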


SIAM Journal on Optimization | 2014

A Strictly Contractive Peaceman-Rachford Splitting Method for Convex Programming

Bingsheng He; Han Liu; Zhaoran Wang; Xiaoming Yuan

In this paper, we focus on the application of the Peaceman-Rachford splitting method (PRSM) to a convex minimization model with linear constraints and a separable objective function. Compared to the Douglas-Rachford splitting method (DRSM), another splitting method from which the alternating direction method of multipliers originates, PRSM requires more restrictive assumptions to ensure its convergence, while it is always faster whenever it is convergent. We first illustrate that the reason for this difference is that the iterative sequence generated by DRSM is strictly contractive, while that generated by PRSM is only contractive with respect to the solution set of the model. With only the convexity assumption on the objective function of the model under consideration, the convergence of PRSM is not guaranteed. But for this case, we show that the first t iterations of PRSM still enable us to find an approximate solution with an accuracy of O(1/t). A worst-case O(1/t) convergence rate of PRSM in the ergodic sense is thus established under mild assumptions. After that, we suggest attaching an underdetermined relaxation factor to PRSM to guarantee the strict contraction of its iterative sequence, and we thus propose a strictly contractive PRSM. A worst-case O(1/t) convergence rate of this strictly contractive PRSM in a nonergodic sense is also established. We demonstrate the numerical efficiency of the strictly contractive PRSM through applications in statistical learning and image processing.
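As a concrete illustration, the sketch below applies the strictly contractive PRSM updates, with two dual updates per iteration, each damped by a relaxation factor alpha in (0, 1), to a simple lasso-type splitting. The problem instance, parameter defaults, and function names are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sc_prsm_lasso(C, d, tau, beta=1.0, alpha=0.8, iters=200):
    """Strictly contractive PRSM on min 0.5||Cx - d||^2 + tau||z||_1 s.t. x = z.

    alpha in (0, 1) is the relaxation factor; alpha = 1 recovers plain PRSM.
    """
    n, p = C.shape
    x, z, lam = np.zeros(p), np.zeros(p), np.zeros(p)
    M = C.T @ C + beta * np.eye(p)         # system matrix for the x-subproblem
    Ctd = C.T @ d
    for _ in range(iters):
        x = np.linalg.solve(M, Ctd + lam + beta * z)
        lam = lam - alpha * beta * (x - z)  # first (half) dual update
        z = soft(x - lam / beta, tau / beta)
        lam = lam - alpha * beta * (x - z)  # second dual update
    return x
```

Setting alpha = 1 gives plain PRSM, whose iterates are only contractive; any alpha strictly inside (0, 1) yields the strict contraction that underpins the nonergodic rate.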


Knowledge Discovery and Data Mining | 2016

A Truth Discovery Approach with Theoretical Guarantee

Houping Xiao; Jing Gao; Zhaoran Wang; Shiyu Wang; Lu Su; Han Liu

In the information age, people can easily collect information about the same set of entities from multiple sources, among which conflicts are inevitable. This leads to an important task, truth discovery, i.e., identifying true facts (truths) by iteratively updating truths and source reliability. However, convergence to the truths is never discussed in existing work, and thus there is no theoretical guarantee on the results of these truth discovery approaches. In contrast, in this paper we propose a truth discovery approach with a theoretical guarantee. We propose a randomized Gaussian mixture model (RGMM) to represent multi-source data, where the truths are model parameters. We incorporate source bias, which captures each source's reliability degree, into the RGMM formulation. The truth discovery task is then modeled as seeking the maximum likelihood estimate (MLE) of the truths. Based on expectation-maximization (EM) techniques, we propose population-based (i.e., in the limit of infinite data) and sample-based (i.e., based on a finite set of samples) solutions for the MLE. Theoretically, we prove that both solutions are contractive to an ε-ball around the MLE under certain conditions. Experimentally, we evaluate our method on both simulated and real-world datasets. Experimental results show that our method achieves high accuracy in identifying truths, with a convergence guarantee.
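The iterative core of such an approach, alternating between re-estimating the truths and re-weighting the sources, can be sketched as follows. This is a generic alternating scheme under an assumed per-source Gaussian noise model, not the paper's RGMM or its EM derivation; all names and defaults are illustrative.

```python
import numpy as np

def truth_discovery(obs, iters=50, eps=1e-8):
    """Alternate between updating truths and source reliabilities.

    obs[s, e] is source s's claim about entity e (np.nan if missing);
    assumes every entity is observed by at least one source.
    """
    mask = ~np.isnan(obs)
    truth = np.nanmean(obs, axis=0)                 # initialize truths by plain averaging
    for _ in range(iters):
        resid = obs - truth
        var = np.nanmean(resid ** 2, axis=1) + eps  # per-source noise variance (reliability)
        w = 1.0 / var                               # reliable sources get larger weight
        num = np.nansum(w[:, None] * obs, axis=0)
        den = (w[:, None] * mask).sum(axis=0)
        truth = num / den                           # precision-weighted truth estimate
    return truth, var
```

The fixed point of this loop plays the role the MLE plays in the abstract: sources with small estimated variance dominate the weighted average, so repeated updates pull the truth estimates toward the claims of trustworthy sources.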


Biometrika | 2018

A convex formulation for high-dimensional sparse sliced inverse regression

Kean Ming Tan; Zhaoran Wang; Tong Zhang; Han Liu; R. Dennis Cook

Sliced inverse regression is a popular tool for sufficient dimension reduction, which replaces the covariates with a minimal set of their linear combinations without loss of information on the conditional distribution of the response given the covariates. The estimated linear combinations include all covariates, making the results difficult to interpret and perhaps unnecessarily variable, particularly when the number of covariates is large. In this paper, we propose a convex formulation for fitting sparse sliced inverse regression in high dimensions. Our proposal estimates the subspace of the linear combinations of the covariates directly and performs variable selection simultaneously. We solve the resulting convex optimization problem via the linearized alternating direction method of multipliers algorithm, and establish an upper bound on the subspace distance between the estimated and the true subspaces. Through numerical studies, we show that our proposal is able to identify the correct covariates in the high-dimensional setting.
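For orientation, the sketch below computes classical, non-sparse sliced inverse regression; its eigendecomposition step is what the paper's convex formulation replaces with a penalized convex program solved by linearized ADMM. The slicing scheme, names, and defaults are assumptions for illustration.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Classical (non-sparse) sliced inverse regression.

    Assumes the sample covariance of X is invertible; returns the top
    n_dirs estimated dimension reduction directions.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    Sigma = Xc.T @ Xc / n
    slices = np.array_split(np.argsort(y), n_slices)  # slice observations by response
    M = np.zeros((p, p))
    for idx in slices:
        m = Xc[idx].mean(axis=0)                      # slice mean of centered covariates
        M += (len(idx) / n) * np.outer(m, m)          # between-slice covariance
    # whiten, eigendecompose, map back: solves M v = lam * Sigma * v
    w, V = np.linalg.eigh(Sigma)
    S_inv_half = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    evals, evecs = np.linalg.eigh(S_inv_half @ M @ S_inv_half)
    return S_inv_half @ evecs[:, np.argsort(evals)[::-1][:n_dirs]]
```

The returned directions span the estimated dimension reduction subspace; in the paper's sparse formulation, a sparsity-inducing penalty on the subspace estimate additionally zeroes out irrelevant covariates, which is how variable selection happens simultaneously.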


Neural Information Processing Systems | 2015

A nonconvex optimization framework for low rank matrix estimation

Tuo Zhao; Zhaoran Wang; Han Liu


arXiv: Machine Learning | 2014

Nonconvex Statistical Optimization: Minimax-Optimal Sparse PCA in Polynomial Time

Zhaoran Wang; Huanran Lu; Han Liu


Information Theory and Applications | 2018

Symmetry, Saddle Points, and Global Optimization Landscape of Nonconvex Matrix Factorization

Xingguo Li; Jarvis D. Haupt; Junwei Lu; Zhaoran Wang; Raman Arora; Han Liu; Tuo Zhao


arXiv: Machine Learning | 2015

Sparse Nonlinear Regression: Parameter Estimation and Asymptotic Inference

Zhuoran Yang; Zhaoran Wang; Han Liu; Yonina C. Eldar; Tong Zhang


Neural Information Processing Systems | 2016

NESTT: A Nonconvex Primal-Dual Splitting Method for Distributed and Stochastic Optimization

Davood Hajinezhad; Mingyi Hong; Tuo Zhao; Zhaoran Wang


Neural Information Processing Systems | 2015

Optimal linear estimation under unknown nonlinear transform

Xinyang Yi; Zhaoran Wang; Constantine Caramanis; Han Liu

Collaboration


Dive into Zhaoran Wang's collaboration.

Top Co-Authors

Han Liu
Princeton University

Tuo Zhao
Johns Hopkins University

Quanquan Gu
University of Virginia

Xingguo Li
University of Minnesota

Fang Han
Johns Hopkins University

Constantine Caramanis
University of Texas at Austin