Publication


Featured research published by Yen-Huan Li.


IEEE Journal of Selected Topics in Signal Processing | 2016

Learning-Based Compressive Subsampling

Luca Baldassarre; Yen-Huan Li; Jonathan Scarlett; Baran Gözcü; Ilija Bogunovic; Volkan Cevher

The problem of recovering a structured signal x ∈ C^p from a set of dimensionality-reduced linear measurements b = Ax arises in a variety of applications, such as medical imaging, spectroscopy, Fourier optics, and computerized tomography. Due to computational and storage complexity or physical constraints imposed by the problem, the measurement matrix A ∈ C^{n×p} is often of the form A = P_Ω Ψ for some orthonormal basis matrix Ψ ∈ C^{p×p} and subsampling operator P_Ω : C^p → C^n that selects the rows indexed by Ω. This raises the fundamental question of how best to choose the index set Ω in order to optimize the recovery performance. Previous approaches to addressing this question rely on nonuniform random subsampling using application-specific knowledge of the structure of x. In this paper, we instead take a principled learning-based approach in which a fixed index set is chosen based on a set of training signals x_1, ..., x_m. We formulate combinatorial optimization problems seeking to maximize the energy captured in these signals in an average-case or worst-case sense, and we show that these can be efficiently solved either exactly or approximately via the identification of modularity and submodularity structures. We provide both deterministic and statistical theoretical guarantees showing how the resulting measurement matrices perform on signals differing from the training signals, and we provide numerical examples showing our approach to be effective on a variety of data sets.
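
The average-case variant of this index-selection problem has a particularly simple structure: since the captured energy decomposes across rows of Ψ, picking the indices with the largest average energy over the training signals solves it exactly. Below is a minimal NumPy sketch of that selection rule; the function name, interface, and the dense representation of Ψ are illustrative rather than the paper's implementation.

    import numpy as np

    def learn_subsampling_pattern(training_signals, Psi, n):
        """Choose the n rows of Psi that capture the most energy on
        average over the training signals x_1, ..., x_m (average-case
        objective; its modular structure makes top-n selection exact)."""
        # Transform coefficients c_i = Psi @ x_i for each training signal.
        coeffs = np.stack([Psi @ x for x in training_signals])   # shape (m, p)
        # Average captured energy per row of Psi.
        avg_energy = np.mean(np.abs(coeffs) ** 2, axis=0)
        # Keep the n rows with the largest average energy.
        Omega = np.sort(np.argsort(avg_energy)[::-1][:n])
        return Omega

Here Ψ would typically be an orthonormal transform such as a DFT or wavelet basis, and the learned Ω is then reused as a fixed subsampling pattern at acquisition time.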


international conference on acoustics, speech, and signal processing | 2016

Frank-Wolfe works for non-Lipschitz continuous gradient objectives: Scalable Poisson phase retrieval

Gergely Ódor; Yen-Huan Li; Alp Yurtsever; Ya-Ping Hsieh; Quoc Tran-Dinh; Marwa El Halabi; Volkan Cevher

We study a phase retrieval problem in the Poisson noise model. Motivated by the PhaseLift approach, we approximate the maximum-likelihood estimator by solving a convex program with a nuclear norm constraint. While the Frank-Wolfe algorithm, together with the Lanczos method, can efficiently deal with nuclear norm constraints, our objective function does not have a Lipschitz continuous gradient, and hence existing convergence guarantees for the Frank-Wolfe algorithm do not apply. In this paper, we show that the Frank-Wolfe algorithm works for the Poisson phase retrieval problem, and has a global convergence rate of O(1/t), where t is the iteration counter. We provide a rigorous theoretical guarantee and illustrative numerical results.
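
To make the algorithmic recipe concrete, here is a minimal sketch of a Frank-Wolfe loop over a PhaseLift-style feasible set {X ⪰ 0, tr(X) ≤ κ} (for PSD matrices the nuclear norm equals the trace), in which the linear minimization oracle only needs a leading eigenpair computed by a Lanczos-type solver. The gradient callback, the radius κ, and the standard 2/(t+2) step size are illustrative assumptions; the paper's analysis concerns the Poisson likelihood, whose gradient is not Lipschitz continuous.

    import numpy as np
    from scipy.sparse.linalg import eigsh

    def frank_wolfe_trace_ball(grad, X0, kappa, iters=100):
        """Frank-Wolfe over {X PSD, trace(X) <= kappa}. Each iteration
        needs only one leading eigenpair (Lanczos via eigsh), so no full
        eigendecomposition or projection is ever computed."""
        X = X0
        for t in range(iters):
            G = grad(X)
            # Linear minimization oracle: leading eigenpair of -G.
            w, v = eigsh(-G, k=1, which='LA')
            if w[0] > 0:
                S = kappa * np.outer(v[:, 0], v[:, 0].conj())  # best rank-one atom
            else:
                S = np.zeros_like(X)                           # the zero matrix is optimal
            gamma = 2.0 / (t + 2.0)   # standard Frank-Wolfe step size (illustrative)
            X = (1.0 - gamma) * X + gamma * S
        return X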


international conference on acoustics, speech, and signal processing | 2015

Consistency of ℓ1-regularized maximum-likelihood for compressive Poisson regression

Yen-Huan Li; Volkan Cevher

We consider Poisson regression with the canonical link function. This regression model is widely used in regression analysis involving count data; one important application in electrical engineering is transmission tomography. In this paper, we establish the variable selection consistency and estimation consistency of the ℓ1-regularized maximum-likelihood estimator in this regression model, and characterize the asymptotic sample complexity that ensures consistency even under the compressive sensing setting (or the n ≪ p setting in high-dimensional statistics).
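
For reference, with the canonical (log) link the model posits y_i ~ Poisson(exp(a_i^T x)), and the estimator studied above can be written, in LaTeX notation, as follows; the symbols a_i, y_i, and λ are illustrative, and the paper's scaling of the regularization parameter may differ.

    \hat{x} \;\in\; \arg\min_{x \in \mathbb{R}^p}\;
      \frac{1}{n} \sum_{i=1}^{n} \Bigl( \exp(a_i^\top x) - y_i\, a_i^\top x \Bigr)
      \;+\; \lambda \, \|x\|_1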


international conference on acoustics, speech, and signal processing | 2014

Barrier smoothing for nonsmooth convex minimization

Quoc Tran-Dinh; Yen-Huan Li; Volkan Cevher

This paper proposes a smoothing technique for nonsmooth convex minimization using self-concordant barriers. To illustrate the main ideas, we compare our technique with the proximity smoothing approach [1] via the classical gradient method, in both theoretical and numerical terms. While the barrier smoothing approach maintains the sublinear convergence rate, it affords a new analytic step size, which significantly enhances the practical convergence of the gradient method compared to proximity smoothing.
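
As a rough illustration of the idea (an assumption based on the abstract, not the paper's exact construction): Nesterov-style proximity smoothing builds a smooth surrogate of a nonsmooth function g, with conjugate g^* and dual domain U, by adding a strongly convex proximity term in the dual; a barrier-smoothing variant instead penalizes the dual variable with a self-concordant barrier b of U,

    g_{\mu}(x) \;=\; \max_{u \in \mathcal{U}} \;
      \bigl\{ \langle x, u \rangle - g^{*}(u) - \mu\, b(u) \bigr\},
      \qquad \mu > 0 .

The smoothed problem is then amenable to the classical gradient method discussed in the abstract.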


Archive | 2018

Learning without Smoothness and Strong Convexity

Yen-Huan Li

Recent advances in statistical learning and convex optimization have inspired many successful practices. Standard theories assume smoothness (bounded gradient, Hessian, etc.) and strong convexity of the loss function. Unfortunately, such conditions may not hold in important real-world applications, and sometimes fulfilling them incurs unnecessary performance degradation. Below are three examples.

1. The standard theory for variable selection via ℓ1-penalization only considers the linear regression model, as the corresponding quadratic loss function has a constant Hessian and allows for an exact second-order Taylor series expansion. In practice, however, nonlinear regression models are often chosen to match data characteristics.

2. The standard theory for convex optimization considers almost exclusively smooth functions. Important applications such as portfolio selection and quantum state estimation, however, correspond to loss functions that violate the smoothness assumption; existing convergence guarantees for optimization algorithms hence do not apply.

3. The standard theory for compressive magnetic resonance imaging (MRI) guarantees the restricted isometry property (RIP), a smoothness and strong convexity condition on the quadratic loss restricted to the set of sparse vectors, via uniform random sampling. The uniform random sampling strategy, however, yields unsatisfactory signal reconstruction performance empirically, in comparison with heuristic sampling approaches.

In this thesis, we provide rigorous solutions to the three examples above and other related problems. For the first two problems, our key idea is to consider weaker, localized versions of the smoothness condition. For the third, our solution is to propose a new theoretical framework for compressive MRI: we pose compressive MRI as a statistical learning problem and solve it by empirical risk minimization. Interestingly, the RIP is not required in this framework.


international conference on acoustics, speech, and signal processing | 2016

Learning data triage: Linear decoding works for compressive MRI

Yen-Huan Li; Volkan Cevher

The standard approach to compressive sampling considers recovering an unknown deterministic signal with certain known structure, and designing the sub-sampling pattern and recovery algorithm based on the known structure. This approach requires looking for a good representation that reveals the signal structure, and solving a non-smooth convex minimization problem (e.g., basis pursuit). In this paper, another approach is considered: we learn a good sub-sampling pattern based on available training signals, without knowing the signal structure in advance, and reconstruct the accordingly sub-sampled signal by a computationally much cheaper linear reconstruction. We provide a theoretical guarantee on the recovery error, and show via experiments on real-world MRI data the effectiveness of the proposed compressive MRI scheme.
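
One linear decoder consistent with this description is the zero-filled adjoint: place the observed transform coefficients back at their indices and apply the inverse transform. The sketch below assumes an orthonormal Ψ, a learned index set Ω, and measurements b = P_Ω Ψ x; whether this is exactly the decoder used in the paper is an assumption, but it illustrates why the reconstruction is far cheaper than basis pursuit.

    import numpy as np

    def linear_decode(b, Omega, Psi):
        """Zero-filled adjoint reconstruction x_hat = Psi^H P_Omega^T b.
        No iterative solver is involved: one scatter and one inverse
        (adjoint) transform."""
        p = Psi.shape[0]
        c = np.zeros(p, dtype=complex)
        c[Omega] = b                    # zero-fill the unobserved coefficients
        return Psi.conj().T @ c         # apply the adjoint of the orthonormal basis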


allerton conference on communication, control, and computing | 2016

Estimation error of the constrained lasso

Nissim Zerbib; Yen-Huan Li; Ya-Ping Hsieh; Volkan Cevher

This paper presents a non-asymptotic upper bound for the estimation error of the constrained lasso, under the high-dimensional (n ≪ p) setting. In contrast to existing results, the error bound in this paper is sharp, is valid when the parameter to be estimated is not exactly sparse (e.g., when it is weakly sparse), and shows explicitly the effect of overestimating the ℓ1-norm of the parameter to be estimated on the estimation performance. The results of this paper show that the constrained lasso is minimax optimal for estimating a parameter with bounded ℓ1-norm, and also for estimating a weakly sparse parameter if its ℓ1-norm is accessible.
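
For concreteness, the constrained (Ivanov-form) lasso referred to above is, in LaTeX notation,

    \hat{x} \;\in\; \arg\min_{\|x\|_1 \le \rho} \; \tfrac{1}{2}\, \| A x - b \|_2^2 ,

where ρ is the assumed bound on the ℓ1-norm of the parameter; the gap between ρ and the true ℓ1-norm is the "overestimate" whose effect the error bound makes explicit. (The exact normalization of the quadratic term is illustrative.)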


international conference on artificial intelligence and statistics | 2015

Sparsistency of ℓ1-Regularized M-Estimators

Yen-Huan Li; Jonathan Scarlett; Pradeep Ravikumar; Volkan Cevher


modelling computation and optimization in information systems and management sciences | 2015

Composite convex minimization involving self-concordant-like cost functions

Quoc Tran-Dinh; Yen-Huan Li; Volkan Cevher


arXiv: Statistics Theory | 2015

A Geometric View on Constrained M-Estimators

Yen-Huan Li; Ya-Ping Hsieh; Nissim Zerbib; Volkan Cevher

Collaboration


Dive into Yen-Huan Li's collaborations.

Top Co-Authors

Volkan Cevher, École Polytechnique Fédérale de Lausanne
Jonathan Scarlett, École Polytechnique Fédérale de Lausanne
Quoc Tran-Dinh, École Polytechnique Fédérale de Lausanne
Ya-Ping Hsieh, École Polytechnique Fédérale de Lausanne
Nissim Zerbib, École Normale Supérieure
Baran Gözcü, École Polytechnique Fédérale de Lausanne
Alp Yurtsever, École Polytechnique Fédérale de Lausanne
Ilija Bogunovic, École Polytechnique Fédérale de Lausanne
Luca Baldassarre, École Polytechnique Fédérale de Lausanne
Marwa El Halabi, École Polytechnique Fédérale de Lausanne