Samuel Vaiter
Paris Dauphine University
Publications
Featured research published by Samuel Vaiter.
IEEE Transactions on Information Theory | 2013
Samuel Vaiter; Gabriel Peyré; Charles Dossal; Jalal M. Fadili
This paper investigates the theoretical guarantees of l1-analysis regularization when solving linear inverse problems. Most previous works in the literature have focused on the sparse synthesis prior, where sparsity is measured as the l1 norm of the coefficients that synthesize the signal from a given dictionary. In contrast, the more general analysis regularization minimizes the l1 norm of the correlations between the signal and the atoms in the dictionary, where these correlations define the analysis support. The corresponding variational problem encompasses several well-known regularizations such as the discrete total variation and the fused Lasso. Our main contributions consist in deriving sufficient conditions that guarantee exact or partial analysis support recovery of the true signal in the presence of noise. More precisely, we give a sufficient condition ensuring that a signal is the unique solution of the l1-analysis regularization in the noiseless case. The same condition also guarantees exact analysis support recovery and l2-robustness of the l1-analysis minimizer with respect to a small enough noise in the measurements. This condition turns out to be sharp for the robustness of the sign pattern. To show partial support recovery and l2-robustness to an arbitrary bounded noise, we introduce a stronger sufficient condition. When specialized to l1-synthesis regularization, our results recover some corresponding recovery and robustness guarantees previously known in the literature; from this perspective, our work is a generalization of these results. We finally illustrate these theoretical findings on several examples to study the robustness of the 1-D total variation, shift-invariant Haar dictionary, and fused Lasso regularizations.
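As an illustration of the variational problem discussed in this abstract, here is a minimal numerical sketch (not taken from the paper) that solves the l1-analysis problem for the 1-D total variation with a generic Chambolle-Pock primal-dual scheme; the dimensions, noise level and regularization parameter are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Piecewise-constant ground truth and random measurements y = Phi x0 + w.
n, m = 100, 60
x0 = np.concatenate([np.zeros(40), 2 * np.ones(30), -np.ones(30)])
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x0 + 0.05 * rng.standard_normal(m)

# Analysis operator: 1-D finite differences (discrete total variation).
D = np.diff(np.eye(n), axis=0)          # shape (n-1, n)
lam = 0.1                               # regularization parameter

# Chambolle-Pock primal-dual iterations for
#   min_x 0.5 * ||Phi x - y||^2 + lam * ||D x||_1
L = np.linalg.norm(D, 2)
tau = sigma = 0.99 / L
A = tau * Phi.T @ Phi + np.eye(n)       # system for the data-fidelity prox
x = np.zeros(n); x_bar = x.copy(); z = np.zeros(n - 1)
for _ in range(2000):
    # Dual step: prox of the conjugate of lam * ||.||_1 is a simple clip.
    z = np.clip(z + sigma * D @ x_bar, -lam, lam)
    # Primal step: prox of 0.5 * ||Phi . - y||^2 is a linear solve.
    x_new = np.linalg.solve(A, x - tau * D.T @ z + tau * Phi.T @ y)
    x_bar = 2 * x_new - x
    x = x_new

print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```

Swapping the construction of D for another analysis operator (e.g. a shift-invariant Haar dictionary, or the operator underlying the fused Lasso) leaves the rest of the sketch unchanged.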
SIAM Journal on Imaging Sciences | 2014
Charles-Alban Deledalle; Samuel Vaiter; Jalal M. Fadili; Gabriel Peyré
Algorithms to solve variational regularization of ill-posed inverse problems usually involve operators that depend on a collection of continuous parameters. When these operators enjoy some (local) regularity, these parameters can be selected using the so-called Stein Unbiased Risk Estimate (SURE). While this selection is usually performed by exhaustive search, we address in this work the problem of using the SURE to efficiently optimize over a collection of continuous parameters of the model. When considering non-smooth regularizers, such as the popular l1-norm corresponding to the soft-thresholding mapping, the SURE is a discontinuous function of the parameters, which prevents the use of gradient-descent optimization techniques. Instead, we focus on an approximation of the SURE based on finite differences, as proposed by Ramani et al. (2008). Under mild assumptions on the estimation mapping, we show that this approximation is a weakly differentiable function of the parameters and that its weak gradient, coined the Stein Unbiased GrAdient estimator of the Risk (SUGAR), provides an asymptotically (with respect to the data dimension) unbiased estimate of the gradient of the risk. Moreover, in the particular case of soft-thresholding, the SUGAR is also proved to be a consistent estimator. The SUGAR can then be used as a basis for a quasi-Newton optimization. Its computation relies on the closed-form (weak) differentiation of the non-smooth function; we provide this expression for a large class of iterative proximal splitting methods and apply our strategy to regularizations involving non-smooth convex structured penalties. Illustrations on various image restoration and matrix completion problems are given.
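The finite-difference approximation of the SURE mentioned above can be sketched as follows in the simple case of soft-thresholding denoising; the problem size, noise level and threshold grid are illustrative assumptions, and the paper's SUGAR goes further by providing the (weak) gradient of this quantity so that the grid search below can be replaced by a quasi-Newton scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Denoising setup: y = x0 + w with known noise level sigma.
n, sigma = 500, 0.5
x0 = np.zeros(n); x0[rng.choice(n, 25, replace=False)] = 5 * rng.standard_normal(25)
y = x0 + sigma * rng.standard_normal(n)

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

delta = rng.standard_normal(n)           # fixed random probe vector

def sure_fd(lam, eps=1e-4):
    """Finite-difference SURE (in the spirit of Ramani et al.) for soft-thresholding."""
    h = soft(y, lam)
    # Monte Carlo estimate of the divergence of y -> h(y) via a finite difference.
    div = delta @ (soft(y + eps * delta, lam) - h) / eps
    return np.sum((y - h) ** 2) - n * sigma**2 + 2 * sigma**2 * div

lams = np.linspace(0.1, 3.0, 30)
risks = [sure_fd(lam) for lam in lams]
lam_star = lams[int(np.argmin(risks))]
print("selected lambda:", lam_star)
print("true risk at lambda*:", np.sum((soft(y, lam_star) - x0) ** 2))
```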
Applied and Computational Harmonic Analysis | 2013
Samuel Vaiter; Charles-Alban Deledalle; Gabriel Peyré; Charles Dossal; Jalal M. Fadili
In this paper, we aim at recovering an unknown signal x0 from noisy measurements y = Phi x0 + w, where Phi is an ill-conditioned or singular linear operator and w accounts for some noise. To regularize such an ill-posed inverse problem, we impose an analysis sparsity prior. More precisely, the recovery is cast as a convex optimization program whose objective is the sum of a quadratic data-fidelity term and a regularization term formed of the L1-norm of the correlations between the sought-after signal and the atoms of a given (generally overcomplete) dictionary. The L1 analysis sparsity prior is weighted by a regularization parameter lambda > 0. We prove that any minimizer of this problem is a piecewise-affine function of the observations y and of the regularization parameter lambda. As a byproduct, we exploit these properties to obtain an objectively guided choice of lambda. In particular, we develop an extension of the Generalized Stein Unbiased Risk Estimator (GSURE) and show that it is an unbiased and reliable estimator of an appropriately defined risk. The latter encompasses special cases such as the prediction risk, the projection risk and the estimation risk. We apply these risk estimators to the special case of L1 analysis sparsity regularization. We also discuss implementation issues and propose fast algorithms to solve the L1 analysis minimization problem and to compute the associated GSURE. We finally illustrate the applicability of our framework to parameter selection on several imaging problems.
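A hypothetical sketch of how such an unbiased risk estimate can be evaluated in the simplest setting, namely 1-D total-variation denoising with Phi = Id: the degrees-of-freedom term is taken as the nullity of the analysis operator restricted to the cosupport of the solution, in the spirit of the closed-form expressions derived for this class of problems; the solver, dimensions and parameter values below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D total-variation denoising: y = x0 + w, Phi = Id.
n, sigma = 80, 0.3
x0 = np.concatenate([np.zeros(30), np.ones(25), -0.5 * np.ones(25)])
y = x0 + sigma * rng.standard_normal(n)
D = np.diff(np.eye(n), axis=0)
lam = 0.4

# Solve min_x 0.5 * ||x - y||^2 + lam * ||D x||_1 (Chambolle-Pock, as above).
L = np.linalg.norm(D, 2); tau = sig = 0.99 / L
x = np.zeros(n); x_bar = x.copy(); z = np.zeros(n - 1)
for _ in range(3000):
    z = np.clip(z + sig * D @ x_bar, -lam, lam)
    x_new = (x - tau * D.T @ z + tau * y) / (1 + tau)
    x_bar, x = 2 * x_new - x, x_new

# Degrees of freedom: nullity of D restricted to the cosupport
# (rows where D x_hat vanishes), plugged into the SURE formula.
cosupport = np.abs(D @ x) < 1e-6
D_cos = D[cosupport]
dof = n - np.linalg.matrix_rank(D_cos) if D_cos.size else n
sure = np.sum((y - x) ** 2) - n * sigma**2 + 2 * sigma**2 * dof
print("estimated risk (SURE):", sure)
print("true squared error   :", np.sum((x - x0) ** 2))
```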
IEEE Transactions on Information Theory | 2018
Samuel Vaiter; Gabriel Peyré; Jalal M. Fadili
This paper studies least-squares regression penalized with partly smooth convex regularizers. This class of penalty functions is very large and versatile, and makes it possible to promote solutions conforming to some notion of low complexity. Indeed, such penalties/regularizers force the corresponding solutions to belong to a low-dimensional manifold (the so-called model), which remains stable when the argument of the penalty function undergoes small perturbations. Such a good sensitivity property is crucial to make the underlying low-complexity (manifold) model robust to small noise. In a deterministic setting, we show that a generalized “irrepresentable condition” implies stable model selection under small noise perturbations in the observations and the design matrix, when the regularization parameter is tuned proportionally to the noise level. We also prove that this condition is almost necessary for stable model recovery. We then turn to the random setting, where the design matrix and the noise are random and the number of observations grows large. We show that under our generalized “irrepresentable condition” and a proper scaling of the regularization parameter, the regularized estimator is model consistent. In plain words, with a probability tending to one as the number of measurements tends to infinity, the regularized estimator belongs to the correct low-dimensional model manifold. This paper unifies and generalizes a large body of literature, where model consistency was known to hold, for instance, for the Lasso, group Lasso, total variation (fused Lasso), and nuclear/trace norm regularizers. As an algorithmic implication, we show that under the deterministic model selection conditions, the forward–backward proximal splitting algorithm used to solve the penalized least-squares regression problem is guaranteed to identify the model manifold after a finite number of iterations. Finally, we detail how our results extend from the quadratic loss to an arbitrary smooth and strictly convex loss function. We illustrate the usefulness of our results on the problem of low-rank matrix recovery from random measurements using nuclear norm minimization.
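The finite-iteration model identification described above can be observed numerically. The following sketch runs forward-backward (ISTA) on a small Lasso instance, for which the model manifold is simply the support, and reports when the support of the iterates stabilizes; all dimensions and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse regression: for the l1 penalty the "model manifold" is the support.
n, p, s = 100, 40, 5
Phi = rng.standard_normal((n, p)) / np.sqrt(n)
x0 = np.zeros(p); x0[:s] = 2.0 * rng.choice([-1.0, 1.0], s)
y = Phi @ x0 + 0.05 * rng.standard_normal(n)
lam = 0.05

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Forward-backward (ISTA) for min_x 0.5 * ||Phi x - y||^2 + lam * ||x||_1.
step = 1.0 / np.linalg.norm(Phi, 2) ** 2
x = np.zeros(p)
prev_support = None
for k in range(5000):
    x = soft(x - step * Phi.T @ (Phi @ x - y), step * lam)
    support = tuple(map(int, np.flatnonzero(np.abs(x) > 1e-10)))
    if support != prev_support:
        print(f"iteration {k}: support changed to {support}")
        prev_support = support
print("true support:", tuple(range(s)))
```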
International Conference on Image Processing | 2012
Charles-Alban Deledalle; Samuel Vaiter; Gabriel Peyré; Jalal M. Fadili; Charles Dossal
In this paper, we propose a rigorous derivation of the expression of the projected Generalized Stein Unbiased Risk Estimator (GSURE) for the estimation of the (projected) risk associated with regularized ill-posed linear inverse problems using a sparsity-promoting ℓ1 penalty. The projected GSURE is an unbiased estimator of the recovery risk on the vector projected onto the orthogonal complement of the degradation operator's kernel. Our framework can handle many well-known regularizations, including sparse synthesis- (e.g. wavelet) and analysis-type priors (e.g. total variation). A distinctive novelty of this work is that, unlike previously proposed ℓ1 risk estimators, we have a closed-form expression that can be implemented efficiently once the solution of the inverse problem is computed. To support our claims, numerical examples on ill-posed inverse problems with analysis and synthesis regularizations are reported, where our GSURE estimates are used to tune the regularization parameter.
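To give the flavour of a closed-form risk estimate that becomes available once the solution is computed, here is a simplified sketch in the denoising case (Phi = Id) with a synthesis ℓ1 prior, where the divergence term reduces to the size of the support; the paper's projected GSURE handles the general ill-posed case, which this toy example does not cover.

```python
import numpy as np

rng = np.random.default_rng(0)

# Denoising with a synthesis l1 prior: the solution is a soft-thresholding of y,
# and the divergence term of the SURE is simply the number of nonzero entries.
n, sigma, lam = 400, 1.0, 2.0
x0 = np.zeros(n); x0[:20] = 8.0
y = x0 + sigma * rng.standard_normal(n)

x_hat = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)   # l1 solution
dof = np.count_nonzero(x_hat)                           # closed-form divergence
sure = np.sum((y - x_hat) ** 2) - n * sigma**2 + 2 * sigma**2 * dof
print("SURE estimate :", sure)
print("true sq. error:", np.sum((x_hat - x0) ** 2))
```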
Journal of Physics: Conference Series | 2012
Charles-Alban Deledalle; Samuel Vaiter; Gabriel Peyré; Jalal M. Fadili; Charles Dossal
This paper develops a novel framework to compute a projected Generalized Stein Unbiased Risk Estimator (GSURE) for a wide class of sparsely regularized solutions of inverse problems. This class includes arbitrary convex data fidelities with both analysis and synthesis mixed L1-L2 norms. The GSURE requires computing the (weak) derivative of a solution with respect to the observations. However, as the solution is not available in analytical form but rather through iterative schemes such as proximal splitting, we propose to iteratively compute the GSURE by differentiating the sequence of iterates. This provides us with a sequence of differential mappings which, under suitable conditions, converges to the desired derivative and allows the GSURE to be computed. We illustrate this approach on total variation regularization with Gaussian noise and on sparse regularization with Poisson noise, to automatically select the regularization parameter.
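A minimal sketch of the iterative differentiation idea, assuming a quadratic data fidelity with synthesis ℓ1 regularization: the forward-backward iterates are differentiated with respect to the observations along a random direction, and the resulting directional derivative feeds a single-probe Monte Carlo estimate of the divergence entering the (G)SURE. The Poisson fidelity and the projected estimator treated in the paper are not covered by this toy example, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Compressed-sensing setup with Gaussian noise.
n, p, sigma, lam = 60, 120, 0.1, 0.05
Phi = rng.standard_normal((n, p)) / np.sqrt(n)
x0 = np.zeros(p); x0[:8] = 1.0
y = Phi @ x0 + sigma * rng.standard_normal(n)

# Forward-backward iterations, differentiated w.r.t. y along a random
# direction delta: each iterate x carries its directional derivative dx.
step = 1.0 / np.linalg.norm(Phi, 2) ** 2
delta = rng.standard_normal(n)
x, dx = np.zeros(p), np.zeros(p)
for _ in range(3000):
    u = x - step * Phi.T @ (Phi @ x - y)
    du = dx - step * Phi.T @ (Phi @ dx - delta)
    active = np.abs(u) > step * lam            # where soft-thresholding is smooth
    x = np.sign(u) * np.maximum(np.abs(u) - step * lam, 0.0)
    dx = du * active                           # chain rule through the threshold

# Single-probe Monte Carlo estimate of the divergence of y -> Phi x_hat(y).
div_estimate = delta @ (Phi @ dx)
print("divergence (single-probe estimate):", div_estimate)
```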
Archive | 2015
Samuel Vaiter; Gabriel Peyré; Jalal M. Fadili
Inverse problems and regularization theory are a central theme in imaging sciences, statistics, and machine learning. The goal is to reconstruct an unknown vector from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown vector is to solve a convex optimization problem that enforces some prior knowledge about its structure. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. Popular examples of such priors include sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum including (i) recovery guarantees and stability to noise, both in terms of l2-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solve the corresponding large-scale regularized optimization problem.
Electronic Journal of Statistics | 2017
Pierre C. Bellec; Joseph Salmon; Samuel Vaiter
Following recent successes in the analysis of the Slope estimator, we provide a sharp oracle inequality in terms of prediction error for Graph-Slope, a generalization of Slope to signals observed over a graph. In addition to improving upon the best results obtained so far for the total variation denoiser (also referred to as Graph-Lasso or Generalized Lasso), we propose an efficient algorithm to compute Graph-Slope. The proposed algorithm is obtained by applying the forward-backward method to the dual formulation of the Graph-Slope optimization problem. We also provide experiments illustrating the practical interest of the method.
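A minimal sketch of the dual forward-backward strategy, specialized to the Graph-Lasso case on a path graph where the required projection is a simple clipping; for the actual Graph-Slope penalty, this clipping would be replaced by the projection onto the dual ball of the sorted-ℓ1 norm. The graph, noise level and parameter below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Signal on a path graph; D is the edge-vertex incidence (finite differences).
n = 100
x0 = np.concatenate([np.zeros(50), 3 * np.ones(50)])
y = x0 + 0.5 * rng.standard_normal(n)
D = np.diff(np.eye(n), axis=0)           # incidence matrix of the path graph
lam = 1.0

# Forward-backward (projected gradient) on the dual of
#   min_x 0.5 * ||y - x||^2 + lam * ||D x||_1   (Graph-Lasso special case).
t = 1.0 / np.linalg.norm(D, 2) ** 2
z = np.zeros(n - 1)
for _ in range(2000):
    grad = D @ (D.T @ z - y)             # gradient of the smooth dual term
    z = np.clip(z - t * grad, -lam, lam) # projection onto the dual ball
x_hat = y - D.T @ z                      # primal solution recovered from the dual
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```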
arXiv: Optimization and Control | 2015
Samuel Vaiter; Mohammad Golbabaee; Jalal M. Fadili; Gabriel Peyré
arXiv: Information Theory | 2013
Jalal M. Fadili; Gabriel Peyré; Samuel Vaiter; Charles-Alban Deledalle; Joseph Salmon