Publications


Featured research published by Jingwei Liang.


Mathematical Programming | 2016

Convergence rates with inexact non-expansive operators

Jingwei Liang; Jalal M. Fadili; Gabriel Peyré

In this paper, we present a convergence rate analysis for the inexact Krasnosel’skiĭ–Mann iteration built from non-expansive operators. The presented results include two main parts: we first establish global pointwise and ergodic iteration-complexity bounds; then, under a metric sub-regularity assumption, we establish local linear convergence of the distance of the iterates to the set of fixed points. The obtained results can be applied to analyze the convergence rate of various monotone operator splitting methods in the literature, including Forward–Backward splitting, the Generalized Forward–Backward, Douglas–Rachford splitting, the alternating direction method of multipliers (ADMM) and Primal–Dual splitting methods. For these methods, we also develop easily verifiable termination criteria for finding an approximate solution, which can be seen as a generalization of the termination criterion for the classical gradient descent method. We finally develop a parallel analysis for the non-stationary Krasnosel’skiĭ–Mann iteration.
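To fix ideas, the iteration in question and the residual-based stopping rule can be sketched in a few lines (an illustrative Python snippet, not code from the paper; the operator T, the averaging parameter lam and the tolerance are placeholders):

import numpy as np

def krasnoselskii_mann(T, x0, lam=0.5, tol=1e-8, max_iter=10000):
    # Averaged iteration x_{k+1} = (1 - lam) * x_k + lam * T(x_k)
    # for a non-expansive operator T. The fixed-point residual
    # ||x_k - T(x_k)|| is the easily verifiable termination criterion.
    x = x0
    for _ in range(max_iter):
        Tx = T(x)
        if np.linalg.norm(x - Tx) <= tol:
            break
        x = (1 - lam) * x + lam * Tx
    return x

# Example: T is the (firmly non-expansive) projection onto the unit ball.
T = lambda x: x / max(1.0, np.linalg.norm(x))
x_fixed = krasnoselskii_mann(T, x0=np.array([3.0, -4.0]))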


SIAM Journal on Optimization | 2017

Activity Identification and Local Linear Convergence of Forward–Backward-type Methods

Jingwei Liang; Jalal M. Fadili; Gabriel Peyré

In this paper, we consider a class of Forward–Backward (FB) splitting methods that includes several variants (e.g. inertial schemes, FISTA) for minimizing the sum of two proper convex and lower semi-continuous functions, one of which has a Lipschitz continuous gradient and the other of which is partly smooth relative to a smooth active manifold $\mathcal{M}$. We propose a unified framework under which we show that this class of FB-type algorithms (i) correctly identifies the active manifolds in a finite number of iterations (finite activity identification), and (ii) then enters a local linear convergence regime, which we characterize precisely in terms of the structure of the underlying active manifolds. For simpler problems involving polyhedral functions, we show finite termination. We also establish and explain why FISTA (with convergent sequences) locally oscillates and can be slower than FB. These results may have numerous applications including in signal/image processing, sparse recovery and machine learning. Indeed, the obtained results explain the typical behaviour that has been observed numerically for many problems in these fields, such as the Lasso, the group Lasso, the fused Lasso and nuclear norm regularization, to name only a few.
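To illustrate the setting (a hedged sketch, not the authors' implementation; the data A, b and the weight mu are fabricated for demonstration): for the Lasso, min_x 0.5*||Ax - b||^2 + mu*||x||_1, the forward step uses the Lipschitz-continuous gradient A^T(Ax - b) and the backward step is the l1 proximity operator (soft-thresholding). The support of the iterates freezes after finitely many steps, which is the finite activity identification described above.

import numpy as np

def forward_backward_lasso(A, b, mu, n_iter=500):
    # Plain FB (ISTA): gradient step on the smooth term, then the
    # l1 prox (soft-thresholding), with step size 1/L where
    # L = ||A||_2^2 is the Lipschitz constant of the gradient.
    L = np.linalg.norm(A, 2) ** 2
    gamma = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - gamma * (A.T @ (A @ x - b))
        x = np.sign(z) * np.maximum(np.abs(z) - gamma * mu, 0.0)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = A[:, 0]                      # plant a 1-sparse ground truth
x_hat = forward_backward_lasso(A, b, mu=0.1)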


International Conference on Scale Space and Variational Methods in Computer Vision | 2015

Activity Identification and Local Linear Convergence of Douglas–Rachford/ADMM under Partial Smoothness

Jingwei Liang; Jalal M. Fadili; Gabriel Peyré; Russell Luke

The Douglas–Rachford and alternating direction method of multipliers are two proximal splitting algorithms designed to minimize the sum of two proper lower semi-continuous convex functions whose proximity operators are easy to compute. The goal of this work is to understand the local linear convergence behaviour of Douglas–Rachford (resp. alternating direction method of multipliers) when the involved functions (resp. their Legendre–Fenchel conjugates) are moreover partly smooth. More precisely, when the two functions (resp. their conjugates) are partly smooth relative to their respective smooth submanifolds, we show that Douglas–Rachford (resp. alternating direction method of multipliers) (i) identifies these manifolds in finite time; (ii) enters a local linear convergence regime. When both functions are locally polyhedral, we show that the optimal convergence radius is given in terms of the cosine of the Friedrichs angle between the tangent spaces of the identified submanifolds. Under polyhedrality of both functions, we also provide conditions sufficient for finite convergence. The obtained results are illustrated by several concrete examples and supported by numerical experiments.
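For concreteness, a bare-bones Douglas–Rachford loop (an assumed sketch, not the paper's code; prox_f and prox_g stand for the proximity operators of gamma*f and gamma*g and are supplied by the caller):

import numpy as np

def douglas_rachford(prox_f, prox_g, z0, n_iter=200):
    # DR splitting for min_x f(x) + g(x):
    #   x = prox_f(z); y = prox_g(2x - z); z <- z + y - x.
    # The shadow sequence x = prox_f(z) converges to a minimizer.
    z = z0
    for _ in range(n_iter):
        x = prox_f(z)
        y = prox_g(2 * x - z)
        z = z + y - x
    return prox_f(z)

# Example with two locally polyhedral (affine) sets, the case where the
# Friedrichs-angle rate above applies: find the intersection of the
# x-axis and the line y = x in R^2 (both proxes are projections).
proj_axis = lambda z: np.array([z[0], 0.0])
proj_diag = lambda z: np.full(2, (z[0] + z[1]) / 2)
pt = douglas_rachford(proj_axis, proj_diag, z0=np.array([5.0, 3.0]))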


Journal of Optimization Theory and Applications | 2017

Local Convergence Properties of Douglas–Rachford and Alternating Direction Method of Multipliers

Jingwei Liang; Jalal M. Fadili; Gabriel Peyré

Convex optimization has become ubiquitous in most quantitative disciplines of science, including variational image processing. Proximal splitting algorithms are becoming popular to solve such structured convex optimization problems. Within this class of algorithms, Douglas–Rachford (DR) and ADMM are designed to minimize the sum of two proper lower semi-continuous convex functions whose proximity operators are easy to compute. The goal of this work is to understand the local convergence behaviour of DR (resp. ADMM) when the involved functions (resp. their Legendre–Fenchel conjugates) are moreover partly smooth. More precisely, when both functions (resp. their conjugates) are partly smooth relative to their respective manifolds, we show that DR (resp. ADMM) identifies these manifolds in finite time. Moreover, when these manifolds are affine or linear, we prove that DR/ADMM is locally linearly convergent with a rate given in terms of the cosine of the Friedrichs angle between the tangent spaces of the identified manifolds. This is illustrated by several concrete examples and supported by numerical experiments.
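In standard notation (a sketch of the usual definition and rate statement, not a quotation from the paper): if $T_1, T_2$ are the tangent spaces of the identified manifolds and $z^\star$ is a fixed point, then

$$\cos\theta_F = \max\bigl\{\langle u, v\rangle : u \in T_1 \cap (T_1 \cap T_2)^{\perp},\ v \in T_2 \cap (T_1 \cap T_2)^{\perp},\ \|u\| = \|v\| = 1\bigr\},$$

and in the affine case the DR fixed-point sequence locally satisfies $\|z_{k+1} - z^\star\| \le \cos\theta_F \, \|z_k - z^\star\|$.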


Optimization | 2018

Local linear convergence analysis of Primal–Dual splitting methods

Jingwei Liang; Jalal M. Fadili; Gabriel Peyré

In this paper, we study the local linear convergence properties of a versatile class of Primal–Dual splitting methods for minimizing composite non-smooth convex optimization problems. Under the assumption that the non-smooth components of the problem are partly smooth relative to smooth manifolds, we present a unified local convergence analysis framework for these methods. More precisely, in our framework we first show that (i) the sequences generated by Primal–Dual splitting methods identify a pair of primal and dual smooth manifolds in a finite number of iterations, and then (ii) enter a local linear convergence regime, which is characterized in terms of the structure of the underlying active smooth manifolds. We also show how our results for Primal–Dual splitting specialize to cover existing ones on Forward–Backward splitting and Douglas–Rachford splitting/ADMM (alternating direction method of multipliers). Moreover, based on the obtained local convergence analysis, several practical acceleration techniques are discussed. To exemplify the usefulness of the obtained results, we consider several concrete numerical experiments arising from fields including signal/image processing, inverse problems and machine learning. The experiments not only verify the local linear convergence behaviour of Primal–Dual splitting methods, but also confirm the insights on how to accelerate them in practice.
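A representative member of this class is the update below, sketched in its common Chambolle–Pock form under the step-size condition tau*sigma*||K||^2 < 1 (the operators prox_tauf, prox_sigmags and the matrix K are placeholders, not the paper's specific method):

def primal_dual(prox_tauf, prox_sigmags, K, x0, y0, tau, sigma, n_iter=300):
    # Primal-Dual splitting for min_x f(x) + g(Kx), given the prox of
    # tau*f and the prox of sigma*g^* (the Legendre-Fenchel conjugate
    # of g). The pair (x, y) is the primal-dual sequence whose active
    # manifolds are identified in finitely many iterations.
    x, y, x_bar = x0, y0, x0.copy()
    for _ in range(n_iter):
        y = prox_sigmags(y + sigma * (K @ x_bar))   # dual step
        x_new = prox_tauf(x - tau * (K.T @ y))      # primal step
        x_bar = 2 * x_new - x                       # extrapolation
        x = x_new
    return x, y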


International Conference on Image Processing | 2014

On the convergence rates of proximal splitting algorithms

Jingwei Liang; Jalal M. Fadili; Gabriel Peyré

In this work, we first provide iteration-complexity bounds (pointwise and ergodic) for the inexact Krasnosel’skiĭ–Mann iteration built from non-expansive operators. Moreover, under an appropriate regularity assumption on the fixed-point operator, a local linear convergence rate is also established. These results are then applied to analyze the convergence rate of various proximal splitting methods in the literature, including the Forward–Backward, generalized Forward–Backward, Douglas–Rachford, ADMM and some Primal–Dual splitting methods. For these algorithms, we develop easily verifiable termination criteria for finding an approximate solution, which generalize the termination criterion for the classical gradient descent method. We illustrate the usefulness of our results on a large class of problems in signal and image processing.
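The claimed generalization can be seen directly (a standard observation, not quoted from the paper): gradient descent with step $\gamma$ on a smooth $f$ is the fixed-point iteration of $T(x) = x - \gamma \nabla f(x)$, so the fixed-point residual reduces to the classical gradient test:

$$\|x_k - T(x_k)\| = \gamma \, \|\nabla f(x_k)\| \le \varepsilon \iff \|\nabla f(x_k)\| \le \varepsilon / \gamma.$$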


Neural Information Processing Systems | 2014

Local Linear Convergence of Forward–Backward under Partial Smoothness

Jingwei Liang; Jalal M. Fadili; Gabriel Peyré



Neural Information Processing Systems | 2016

A Multi-step Inertial Forward-Backward Splitting Method for Non-convex Optimization

Jingwei Liang; Jalal M. Fadili; Gabriel Peyré



arXiv: Optimization and Control | 2015

Activity Identification and Local Linear Convergence of Inertial Forward-Backward Splitting

Jingwei Liang; Jalal M. Fadili; Gabriel Peyré


International Conference on Machine Learning | 2018

Local Convergence Properties of SAGA/Prox-SVRG and Acceleration

Clarice Poon; Jingwei Liang; Carola-Bibiane Schönlieb

Collaboration


Dive into Jingwei Liang's collaborations.

Top Co-Authors

Gabriel Peyré, Paris Dauphine University
Clarice Poon, University of Cambridge
Russell Luke, University of Göttingen