Sangnam Nam
French Institute for Research in Computer Science and Automation
Publications
Featured research published by Sangnam Nam.
Applied and Computational Harmonic Analysis | 2013
Sangnam Nam; Mike E. Davies; Michael Elad; Rémi Gribonval
After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations and appealing applications. Alongside this approach there is an analysis counterpart model, which, despite its similarity to the synthesis alternative, is markedly different. Surprisingly, the analysis model has not received similar attention, and its understanding today is shallow and partial. In this paper we take a closer look at the analysis approach, better define it as a generative model for signals, and contrast it with the synthesis one. This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance. We demonstrate the effectiveness of the analysis model in several experiments, and provide a detailed study of the model associated with the 2D finite difference analysis operator, a close cousin of the TV norm.
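To make the analysis model concrete, here is a minimal NumPy sketch (not the paper's code) that builds the 2D finite-difference analysis operator mentioned in the abstract and counts the cosparsity (number of zero analysis coefficients) of a piecewise-constant image, which this operator is well adapted to:

```python
import numpy as np

def finite_diff_operator_1d(n):
    """First-order finite-difference matrix of shape (n-1, n)."""
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i] = -1.0
        D[i, i + 1] = 1.0
    return D

def finite_diff_operator_2d(h, w):
    """Stack horizontal and vertical differences of an h-by-w image
    (flattened row-major) into one analysis operator."""
    Dh = np.kron(np.eye(h), finite_diff_operator_1d(w))   # differences within rows
    Dv = np.kron(finite_diff_operator_1d(h), np.eye(w))   # differences across rows
    return np.vstack([Dh, Dv])

# A piecewise-constant image is highly cosparse under this operator:
img = np.zeros((8, 8))
img[2:5, 3:7] = 1.0
Omega = finite_diff_operator_2d(8, 8)
coeffs = Omega @ img.ravel()
cosparsity = np.sum(np.abs(coeffs) < 1e-12)  # count of zero analysis coefficients
print(cosparsity, "of", Omega.shape[0], "analysis coefficients are zero")
```

In the synthesis view a signal is built from a few dictionary columns; in the analysis view, as above, the signal is characterized by the many analysis coefficients that vanish.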
Linear Algebra and its Applications | 2014
Raja Giryes; Sangnam Nam; Michael Elad; Rémi Gribonval; Mike E. Davies
The cosparse analysis model has been introduced recently as an interesting alternative to the standard sparse synthesis approach. A prominent question brought up by this new construction is the analysis pursuit problem – the need to find a signal belonging to this model, given a set of corrupted measurements of it. Several pursuit methods have already been proposed based on l1 relaxation and a greedy approach. In this work we pursue this question further, and propose a new family of pursuit algorithms for the cosparse analysis model, mimicking the greedy-like methods – compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), iterative hard thresholding (IHT) and hard thresholding pursuit (HTP). Assuming the availability of a near optimal projection scheme that finds the nearest cosparse subspace to any vector, we provide performance guarantees for these algorithms. Our theoretical study relies on a restricted isometry property adapted to the context of the cosparse analysis model. We explore empirically the performance of these algorithms by adopting a plain thresholding projection, demonstrating their good performance.
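The greedy-like algorithms above all alternate a data-fidelity step with a projection onto the cosparse model. A minimal sketch of that ingredient and of an analysis-IHT iteration, using the plain thresholding projection the abstract mentions (an illustrative reimplementation, with all names our own, not the authors' code):

```python
import numpy as np

def cosparse_project(g, Omega, ell):
    """Plain thresholding projection: keep the ell rows of Omega with the
    smallest |Omega g| as the cosupport, then project g onto their null space."""
    Lam = np.argsort(np.abs(Omega @ g))[:ell]   # estimated cosupport
    O = Omega[Lam]
    # orthogonal projector onto null(O): g - pinv(O) O g
    return g - np.linalg.pinv(O) @ (O @ g)

def analysis_iht(y, M, Omega, ell, step=None, n_iter=200):
    """Analysis-IHT sketch: gradient step on ||y - Mx||^2, then cosparse projection."""
    if step is None:
        step = 1.0 / np.linalg.norm(M, 2) ** 2  # 1 / spectral norm squared
    x = np.zeros(M.shape[1])
    for _ in range(n_iter):
        x = cosparse_project(x + step * M.T @ (y - M @ x), Omega, ell)
    return x

# demo: the selected cosupport rows are annihilated exactly after projection
rng = np.random.default_rng(0)
Omega = np.diff(np.eye(10), axis=0)        # 1D finite-difference operator (9 x 10)
g = rng.standard_normal(10)
p_hat = cosparse_project(g, Omega, ell=6)
Lam = np.argsort(np.abs(Omega @ g))[:6]
print(np.max(np.abs(Omega[Lam] @ p_hat)))  # numerically zero
```

The performance guarantees in the paper assume a near-optimal projection; the plain thresholding above is the cheap heuristic used in the empirical study.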
international conference on acoustics, speech, and signal processing | 2011
Sangnam Nam; Mike E. Davies; Michael Elad; Rémi Gribonval
In the past decade there has been great interest in a synthesis-based model for signals, based on sparse and redundant representations. Such a model assumes that the signal of interest can be composed as a linear combination of a few columns from a given matrix (the dictionary). An alternative analysis-based model can be envisioned, where an analysis operator multiplies the signal, leading to a cosparse outcome. In this paper, we consider this analysis model in the context of a generic missing data problem (e.g., compressed sensing, inpainting, source separation, etc.). Our work proposes a uniqueness result for the solution of this problem, based on properties of the analysis operator and the measurement matrix. This paper also considers two pursuit algorithms for solving the missing data problem: an L1-based method and a new greedy method. Our simulations demonstrate the appeal of the analysis model, and the success of the pursuit techniques presented.
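The greedy method referred to above (GAP, Greedy Analysis Pursuit) works backwards compared with synthesis greedy algorithms: it starts from the full cosupport and discards the analysis rows that are least consistent with zero. A hedged sketch under these assumptions (a soft-constrained least-squares inner solve instead of the paper's exact formulation):

```python
import numpy as np

def gap(y, M, Omega, target_cosparsity, weight=1e4):
    """Greedy Analysis Pursuit sketch: start with the full cosupport and
    iteratively discard the analysis row least consistent with zero."""
    p, n = Omega.shape
    Lam = list(range(p))
    while True:
        # min ||Omega_Lam x||^2 + weight * ||M x - y||^2  (soft version of Mx = y)
        A = np.vstack([Omega[Lam], np.sqrt(weight) * M])
        b = np.concatenate([np.zeros(len(Lam)), np.sqrt(weight) * y])
        x = np.linalg.lstsq(A, b, rcond=None)[0]
        if len(Lam) <= target_cosparsity:
            return x, Lam
        worst = max(Lam, key=lambda i: abs(Omega[i] @ x))  # least-zero row
        Lam.remove(worst)

# demo: a piecewise-constant signal observed through 8 random measurements
rng = np.random.default_rng(1)
n = 12
x_true = np.concatenate([np.ones(5), 3 * np.ones(7)])
Omega1d = np.diff(np.eye(n), axis=0)   # 11 analysis rows; x_true has cosparsity 10
M = rng.standard_normal((8, n))
y = M @ x_true
x_hat, Lam = gap(y, M, Omega1d, target_cosparsity=10)
print(len(Lam), np.linalg.norm(M @ x_hat - y))
```

With enough measurements relative to the nullity of the cosupport, the final least-squares solve pins the signal down; the sketch makes no claim about the exact conditions the paper's uniqueness result requires.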
IEEE Transactions on Signal Processing | 2013
Mehrdad Yaghoobi; Sangnam Nam; Rémi Gribonval; Mike E. Davies
We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary to provide good approximations of the training samples using sparse synthesis coefficients. This famous sparse model has a less well known counterpart, in analysis form, called the cosparse analysis model. In this new model, signals are characterized by their parsimony in a transformed domain using an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimization framework based on L1 optimization. The reason for introducing a constraint in the optimization framework is to exclude trivial solutions. Although there is no definitive answer as to which constraint is the most relevant, we investigate some conventional constraints from the model adaptation field and use the uniformly normalized tight frame (UNTF) for this purpose. We then derive a practical learning algorithm, based on projected subgradients and the Douglas-Rachford splitting technique, and demonstrate its ability to robustly recover a ground truth analysis operator when provided with a clean training set of sufficient size. We also find an analysis operator for images using some noisy cosparse signals, which is indeed a more realistic experiment. As the derived optimization problem is not a convex program, we often find a local minimum using such variational methods. For two different settings, we provide preliminary theoretical support for the well-posedness of the learning problem, which can be practically used to test the local identifiability conditions of learnt operators.
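The overall shape of such a projected-subgradient learner can be sketched as follows. This is only an illustration of the idea, not the paper's algorithm: the UNTF projection below is approximated by alternating between the nearest tight frame (via the polar factor of an SVD) and row renormalization, and every function name is our own:

```python
import numpy as np

def project_untf_approx(Omega, n_alt=10):
    """Approximate projection onto uniformly normalized tight frames by
    alternating between the tight-frame set and unit-norm rows."""
    p, n = Omega.shape
    for _ in range(n_alt):
        U, _, Vt = np.linalg.svd(Omega, full_matrices=False)
        Omega = np.sqrt(p / n) * U @ Vt                         # nearest tight frame
        Omega = Omega / np.linalg.norm(Omega, axis=1, keepdims=True)  # unit rows
    return Omega

def learn_operator(X, p, step=0.1, n_iter=100, seed=0):
    """Projected-subgradient sketch for min_Omega sum_i ||Omega x_i||_1
    over (approximately) UNTF operators, given training columns X (n x N)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Omega = project_untf_approx(rng.standard_normal((p, n)))
    for _ in range(n_iter):
        G = np.sign(Omega @ X) @ X.T / X.shape[1]   # subgradient of the l1 cost
        Omega = project_untf_approx(Omega - step * G)
    return Omega

# demo on synthetic training data (random here; real use would supply cosparse signals)
X = np.random.default_rng(1).standard_normal((6, 40))
Omega_learned = learn_operator(X, p=8, n_iter=20)
print(Omega_learned.shape, np.linalg.norm(Omega_learned, axis=1).round(3))
```

The constraint is what keeps the subgradient descent from collapsing to the trivial zero operator, which would otherwise minimize the L1 cost.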
international conference on acoustics, speech, and signal processing | 2012
Mehrdad Yaghoobi; Sangnam Nam; Rémi Gribonval; Mike E. Davies
This paper investigates analysis operator learning for the recently introduced cosparse signal model, which is a natural analysis complement to the more traditional sparse signal model. Previous work on such analysis operator learning has relied on access to a set of clean training samples. Here we introduce a new learning framework which can use training data that is corrupted by noise and/or is only approximately cosparse. The new model assumes that a p-cosparse signal exists in an epsilon neighborhood of each data point. The operator is assumed to be a uniformly normalized tight frame (UNTF) to exclude some trivial operators. In this setting, an alternating optimization algorithm is introduced to learn a suitable analysis operator.
international conference on acoustics, speech, and signal processing | 2012
Sangnam Nam; Rémi Gribonval
Cosparse modeling is a recent alternative to sparse modeling, where the notion of dictionary is replaced by that of an analysis operator. When a known analysis operator is well adapted to describe the signals of interest, the model and associated algorithms can be used to solve inverse problems. Here we show how to derive an operator to model certain classes of signals that satisfy physical laws, such as the heat equation or the wave equation. We illustrate the approach on an acoustic inverse problem with a toy model of wave propagation and discuss its potential extensions and the challenges it raises.
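The idea of deriving an operator from a physical law can be illustrated with the heat equation: a discretized evolution rule gives one analysis row per interior grid point and time step, and any field obeying the rule is annihilated by the operator, i.e. is maximally cosparse. A minimal sketch under an explicit-Euler discretization (our choice for illustration, not necessarily the paper's):

```python
import numpy as np

def heat_operator(nx, nt, c):
    """Analysis operator whose rows encode the explicit-Euler heat dynamics
    u[x, t+1] = u[x, t] + c * (u[x+1, t] - 2*u[x, t] + u[x-1, t]).
    The field is flattened as u.ravel() for u of shape (nx, nt): index x*nt + t."""
    rows = []
    for x in range(1, nx - 1):
        for t in range(nt - 1):
            r = np.zeros(nx * nt)
            r[x * nt + (t + 1)] = 1.0
            r[x * nt + t] = -(1.0 - 2.0 * c)
            r[(x + 1) * nt + t] = -c
            r[(x - 1) * nt + t] = -c
            rows.append(r)
    return np.array(rows)

# a field simulated with the same scheme is exactly cosparse under the operator
nx, nt, c = 12, 10, 0.2
u = np.zeros((nx, nt))
u[:, 0] = np.sin(np.linspace(0, np.pi, nx))   # initial temperature profile
for t in range(nt - 1):
    u[1:-1, t + 1] = u[1:-1, t] + c * (u[2:, t] - 2 * u[1:-1, t] + u[:-2, t])
Omega = heat_operator(nx, nt, c)
print(np.max(np.abs(Omega @ u.ravel())))      # numerically zero: physics rows annihilate the field
```

An inverse problem (e.g. localizing sources from few sensors, as in the paper's acoustic toy model) can then be attacked with the cosparse pursuit machinery using this physics-derived operator in place of a learned or generic one.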
ieee international workshop on computational advances in multi sensor adaptive processing | 2011
Sangnam Nam; Mike E. Davies; Michael Elad; Rémi Gribonval
The sparse synthesis signal model has enjoyed much success and popularity in the recent decade. Much progress, ranging from clear theoretical foundations to appealing applications, has been made in this field. Alongside the synthesis approach, an analysis counterpart has been used over the years. Despite the similarity, the two approaches have been observed to be markedly different in nature. In a recent work, the analysis model was formally formulated and its nature discussed extensively. Furthermore, a new greedy algorithm (GAP) for recovering signals satisfying the model was proposed and its effectiveness demonstrated. While the understanding of the analysis model and the new algorithm has broadened, the stability of the model and the robustness of the algorithm against noise have been mostly left out. In this work, we adapt the GAP algorithm and propose a new variant that deals with the presence of noise. Empirical evidence for the algorithm is also provided.
european signal processing conference | 2011
Mehrdad Yaghoobi; Sangnam Nam; Rémi Gribonval; Mike E. Davies
european signal processing conference | 2011
Raja Giryes; Sangnam Nam; Rémi Gribonval; Mike E. Davies
Archive | 2011
Raja Giryes; Sangnam Nam; Rémi Gribonval; Michael Davies