Novel Algorithm for Sparse Solutions to Linear Inverse Problems with Multiple Measurements
Lianlin Li, Fang Li
Institute of Electronics, Chinese Academy of Sciences, Beijing, China
ABSTRACT: In this report, a novel efficient algorithm for the recovery of jointly sparse signals (a sparse matrix) from multiple incomplete measurements is presented, namely a NESTA-based MMV optimization method. In a nutshell, jointly sparse recovery is clearly superior to applying standard sparse reconstruction methods to each channel individually. Moreover, several efforts have been made to improve the NESTA-based MMV algorithm, in particular: (1) a NESTA-based MMV algorithm for partially known support, which greatly improves the convergence rate; (2) the detection of partial (or all) locations of the unknown jointly sparse signals by means of the so-called MUSIC algorithm; (3) an iterative NESTA-based algorithm combined with a hard-thresholding technique to decrease the number of measurements. It is shown that the proposed approach can recover the unknown sparse matrix X at the sparsity level, and from the number of measurements, predicted in terms of Spark(A) in Ref. [1], where the measurement matrix A satisfies the so-called restricted isometry property (RIP). Under a very mild condition on the sparsity of X and the characteristics of A, the iterative hard thresholding (IHT)-based MMV method is shown to be a very good candidate as well.
INDEX TERMS: compressive sensing, SMV (single measurement vector), MMV (multiple measurement vector), Nesterov's method, iterative hard thresholding algorithm, MUSIC, restricted isometry property (RIP)
I.
INTRODUCTION
Recovery of sparse signals from a small number of measurements is a fundamental problem in many practical applications such as medical imaging, seismic exploration, communication, image denoising, analog-to-digital conversion, and so on. The well-known theory of compressed sensing, developed by Candes, Tao, Donoho et al., studies both information acquisition methods and efficient computational algorithms. By exploiting the rich results developed within the framework of compressive sensing, one can reconstruct a sparse vector x by solving the highly underdetermined linear system y = Ax under a minimal l1-norm constraint, provided the measurement matrix A satisfies certain properties such as the restricted isometry property (RIP), the null-space property (NSP), and so on. Although determining the sparsest vector x consistent with the data y = Ax is in general an NP-hard problem, many suboptimal algorithms have been formulated to attack it, for example greedy algorithms, basis pursuit (BP), Bayesian algorithms, and so on. The single-measurement sparse-solution problem has been extensively studied in the past. In many practical applications such as dynamic medical imaging, the neuromagnetic inverse problem, beamforming, electromagnetic inverse source problems, communication, and so on, the recovery of jointly sparse signals, known as the MMV problem, a variation of the compressive sensing or sparse linear inverse problem, is an important topic: one must compute sparse solutions when multiple measurement vectors (MMV) are available and the solutions are assumed to share a common sparsity profile, i.e., to be jointly sparse. The most widely studied approaches to the MMV problem are based on solving the convex optimization problem

min_X ||X||_{p,q}  subject to  A X = B,  A in R^{N x n}, X in R^{n x L}, B in R^{N x L},   (1)

where the mixed l_{p,q} norm of X is defined as ||X||_{p,q} = ||X~||_p, with
X~ = [ ||X(1,:)||_q, ..., ||X(n,:)||_q ]^T, where X(j,:) denotes the j-th row of X. Up to now, many efforts have been made to attack this problem. Cotter et al. considered the minimization of an l_p-type diversity measure of X subject to A X = B [2]. Chen and Huo established uniqueness for p = 0 via the spark of the measurement matrix A, as well as the equivalence between the minimization problems with p = 1 and p = 0 [1]. Furthermore, an orthogonal matching pursuit (OMP) algorithm for MMV has also been developed [1]. Tropp dealt with (1) for p = 2 and q = infinity. Mishali and Eldar proposed the ReMBo algorithm, which reduces the MMV problem to a series of SMV problems. Eldar and Rauhut proposed an OMP algorithm with a hard-thresholding technique and analyzed the average case for jointly sparse signal recovery [4]. Berg and Friedlander studied the performance of different mixed norms for different structures of the sparse X [5]. In this report, we consider in depth the extension of a class of algorithms, the NESTA algorithm, to the case where multiple measurement vectors are available and solutions with a common sparsity structure must be computed; in particular, a NESTA-based MMV algorithm. Inspired by recent breakthroughs in the development of first-order methods in convex optimization, cost functions appropriate to the NESTA-based MMV setting are developed, and algorithms are derived based on their minimization. Furthermore, several approaches to improve the NESTA-based MMV algorithm, decreasing the number of measurements and increasing the convergence rate, are proposed. This report demonstrates that the approach is ideally suited for solving large-scale MMV reconstruction problems.
II. ALGORITHMS
In this section, the basic idea of the NESTA-based MMV algorithm is presented, and several approaches to improve it are discussed. We refer the reader to [7] for detailed discussions of the proposed approaches.
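For concreteness, the mixed l_{p,q} norm appearing in (1), i.e., the l_p norm of the vector of row-wise l_q norms, can be sketched in a few lines of NumPy (the function name is illustrative, not part of the original formulation):

```python
import numpy as np

def mixed_norm(X, p=1, q=2):
    """l_{p,q} norm of X: the l_p norm of the vector of row-wise l_q norms."""
    row_norms = np.linalg.norm(X, ord=q, axis=1)  # ||X(j,:)||_q for each row j
    return np.linalg.norm(row_norms, ord=p)

# A jointly sparse matrix: only rows 0 and 3 are nonzero across all L = 3 channels
X = np.zeros((5, 3))
X[0] = [3.0, 0.0, 4.0]   # row l2 norm = 5
X[3] = [1.0, 2.0, 2.0]   # row l2 norm = 3
print(mixed_norm(X, p=1, q=2))  # l_{1,2} norm: 5 + 3 = 8.0
```

Row sparsity of X makes the vector of row norms sparse, which is why penalizing it with a small p promotes a common sparsity profile across channels.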
II.1 NESTA-based MMV Algorithm
Similarly to Nesterov's method for the single-measurement problem [9][10], the NESTA-based MMV algorithm minimizes a smooth convex function f over the convex set Q_p,

min_{X in Q_p} f(X),   (2)

where the primal feasible set is Q_p := { X : ||A X - B|| <= epsilon }. To fully exploit the structure of the unknown matrix X, we introduce X = Psi alpha with a sparsifying transform Psi. The smoothed version of the convex function f(X) in (2) is

f_mu(alpha) = max_{U in Q_d} <U, alpha> - mu p_d(U),   (3)

where Q_d := { U : ||U||_inf <= 1 } is the dual feasible set. To control the inherent structure of X more flexibly, the smoothed convex function (3) can be rewritten as

f~_mu(alpha) = max_{u in Q_d} <u, alpha~> - mu p_d(u),   (4)

where alpha~ = [ m(alpha(1,:)), ..., m(alpha(n,:)) ]^T, m(alpha(j,:)) is a homogeneous function of the j-th row alpha(j,:), and p_d(u) is a prox-function for the dual feasible set Q_d := { u : ||u||_inf <= 1 }. As in the standard NESTA method for the recovery of single-measurement sparse signals, one obtains the NESTA-based MMV procedure shown in Table 1. From Table 1, note that (1) the gradient grad f_mu(alpha) can be computed in closed form; (2) the proposed algorithm is a first-order method for constrained optimization; (3) if the rows of A are orthogonal, which is often the case in compressed-sensing applications [9], the computational cost is very low; in particular, each iteration is extremely fast. To decrease the number of measurements and increase the convergence rate, the following approaches are carried out: (1) as done in [9], the homotopy (continuation) technique can be exploited for acceleration; (2)
It has been empirically shown that if part of the support of the unknown sparse matrix alpha is known, the convergence speed improves; moreover, the number of measurements can be greatly decreased. If the partial common support of alpha, denoted by T, is known, the function (3) is modified as

f_mu(alpha) = max_{U in Q_d} <U, alpha_{T^c}> - mu p_d(U),   (3m)

and, likewise, (4) can be modified as

f~_mu(alpha) = max_{u in Q_d} <u, alpha~_{T^c}> - mu p_d(u),   (4m)

where T^c denotes the complement of T. Note that the size of u in (4m) is smaller than that in (4). (3) To estimate the partial support of the jointly sparse matrix alpha, the so-called MUSIC algorithm can be exploited; as is well known, the larger the column rank of alpha, the more of its support can be recovered. (4) To decrease the number of measurements, an iterative NESTA-based MMV algorithm combined with a hard-thresholding technique is carried out; see Table 2.

TABLE 1. The procedure of the NESTA-based MMV algorithm
Initialize alpha_0. For k >= 0:
  Step 1. Compute grad f_mu(alpha_k).
  Step 2. Compute y_k = argmin_{alpha in Q_p} (L/2) ||alpha - alpha_k||^2 + <grad f_mu(alpha_k), alpha - alpha_k>.
  Step 3. Compute z_k = argmin_{alpha in Q_p} (L/sigma_p) p_p(alpha) + sum_{i <= k} tau_i <grad f_mu(alpha_i), alpha - alpha_i>.
  Step 4. Update alpha_{k+1} = tau_k z_k + (1 - tau_k) y_k, with tau_k = 2/(k+3).
Stop when a given criterion is valid.

Table 2. The iterative NESTA-based MMV algorithm
Initialize alpha_0 and the known support T (empty at the start).
  Step 1. Carry out the NESTA-based MMV algorithm with partially known support T, as provided in Table 1 with (3m)/(4m).
  Step 2. Choose the support T of Eq. (3m) or Eq. (4m) via hard thresholding.
  Step 3. If a given criterion is valid, stop; else go to Step 1.

II.2 IHT-based Algorithm
As a matter of fact, the iterative hard thresholding algorithm proposed by Blumensath and Davies can easily be generalized to deal with the MMV problem (see Table 3). Furthermore, the theoretical analysis of its performance guarantee can be carried out along the same lines as in [8].

Table 3. The iterative hard-threshold-based MMV algorithm
Initialize alpha_0 = 0 and Phi = A Psi.
  Step 1. Compute the residual R_k = B - Phi alpha_k.
  Step 2. Update alpha_{k+1} = H_K( alpha_k + mu Phi^T R_k ), where H_K keeps the K rows of largest norm and sets the others to zero.
Stop when a given criterion is valid.
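As an illustration of the scheme in Table 3, the following NumPy sketch implements the row-wise hard-thresholding MMV iteration under the simplifying assumptions Psi = I (so Phi = A) and a fixed step size mu <= 1/||A||^2; all function names and the toy problem sizes are illustrative, not taken from the report:

```python
import numpy as np

def hard_threshold_rows(X, K):
    """H_K: keep the K rows of X with largest l2 norm, zero out the rest."""
    row_norms = np.linalg.norm(X, axis=1)
    keep = np.argsort(row_norms)[-K:]
    out = np.zeros_like(X)
    out[keep] = X[keep]
    return out

def iht_mmv(A, B, K, mu=None, n_iter=200):
    """IHT for the MMV problem B = A X with X jointly K-row-sparse
    (Table 3, assuming Psi = I so Phi = A)."""
    n = A.shape[1]
    X = np.zeros((n, B.shape[1]))
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2  # step size <= 1/||A||_2^2
    for _ in range(n_iter):
        R = B - A @ X                         # Step 1: residual
        X = hard_threshold_rows(X + mu * A.T @ R, K)  # Step 2: gradient step + H_K
    return X

# Toy example: a 2-row-sparse X with L = 3 channels, N = 10 measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 20)) / np.sqrt(10)
X_true = np.zeros((20, 3))
X_true[[4, 11]] = rng.standard_normal((2, 3))
B = A @ X_true
X_hat = iht_mmv(A, B, K=2, n_iter=500)
print(np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```

With mu <= 1/||A||_2^2 each iteration does not increase the residual ||B - A X||, and the thresholding step enforces the common row-sparsity profile across all channels, which is exactly what distinguishes the MMV variant from running scalar IHT per column.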