Recovery of compressively sensed ultrasound images with structured Sparse Bayesian Learning
Richard Porter, Vladislav Tadic, and Alin Achim
Abstract—In this paper, we consider the problem of recovering compressively sensed ultrasound images. We build on prior work and evaluate a number of existing approaches that we consider to be the state of the art. The methods we consider take advantage of a number of assumptions on the signals, including those of temporal and spatial correlation, block structure, prior knowledge of the support, and non-Gaussianity. We conduct a series of intensive tests to quantify the performance of these methods. We find that by altering the parameters of the structured Sparse Bayesian Learning approaches considered, we can significantly improve the objective quality of the reconstructed images. The results we achieve are a significant improvement upon previously proposed reconstruction techniques. In addition, we show that by careful choice of parameters, we can obtain near-optimal results whilst requiring only a small fraction of the computational time needed for the best reconstruction quality.
Index Terms—ultrasound, compressed sensing, Sparse Bayesian Learning
I. INTRODUCTION
Ultrasound imaging is possibly the most commonly used cross-sectional medical imaging modality. It has a number of advantages over alternatives: it is relatively cheap, can easily be made portable, is non-invasive, and does not make use of ionising radiation. Ultrasound can also produce “real-time” images, and it is generally considered to be safe [1]. In general, ultrasound images are formed by the transmission of short ultrasound pulses from an array of transducers (most commonly piezoelectric transducers) towards the object of interest [2]. The returning (reflected) echoes are analysed and processed in order to construct an image of the object being scanned. As with all imaging modalities, ultrasound imaging generates a significant amount of data. Therefore image compression is needed to reduce the volume of data and hence the bit rate, and ideally this compression should not lead to any loss in perceptual image quality. The need for storage space and transmission bandwidth, particularly that caused by the diversification of ultrasound applications and telemedicine, places significant demands on existing systems in digital radiology departments [2]. The development of new technologies, allowing for the acquisition of ever greater amounts of data, places even greater demands on data processing, transmission and storage capabilities, giving rise to a need for more efficient compression techniques. In the
Richard Porter is a PhD graduate of the Department of Electrical & Electronic Engineering, University of Bristol, Bristol, United Kingdom, e-mail: [email protected]. Vladislav Tadic is with the School of Mathematics, University of Bristol, Bristol, United Kingdom, e-mail: [email protected]. Alin Achim is with the Department of Electrical & Electronic Engineering, University of Bristol, Bristol, United Kingdom, e-mail: [email protected].
field of medical ultrasound, there have been several recent developments that have significantly increased the amount of data generated. These developments include scanners with the ability to produce real-time 3D (RT3D) [3] or 4D [4] image data sets. One issue with these techniques is that of low frame rates, with most scanners capable of generating only a few images per second. Whilst this is fast enough to view fetal facial expressions, it is not fast enough to view the operation of the fetal heart in detail. Although several techniques have been proposed to increase the frame rates of these methods, such as multiline transmit imaging, plane-/diverging-wave imaging, and retrospective gating, acquiring data at these higher frame rates results in a loss of image quality [5], [6]. The phenomenon of growth in the amount of data being generated outstripping the growth of data processing and storage capabilities is not unique to the field of ultrasound, or to medical imaging in general [7]. One approach that has been proposed to deal with this growth in data is that of compressed sensing. The field of compressed sensing has grown from work by Candès, Romberg, Tao [8], and Donoho [9] on the single measurement vector (SMV) model. Later work has shown that with the shared sparsity assumption, performance can be increased in the multiple measurement vector (MMV) model [10]. Compressed sensing leverages the concept of sparsity, which is fundamental to much of modern signal processing.
The idea underlying this is that many natural signals can be represented with less data than the number of samples that would be implied by the Nyquist sampling theorem [8]. This concept is used in transform coding, for example in JPEG [11] for image coding and MPEG [12] for video coding. In transform coding approaches, the signal must first be acquired at the Nyquist rate and then compressed, effectively wasting much of the acquired data. The method of compressed sensing allows us to reduce the rate at which we sample signals, thus avoiding the need to first sample at the Nyquist rate, by combining the acquisition step with the compression step. This is achieved due to two significant differences between compressed sensing and classical sampling [13]. Firstly, rather than sampling at specific points in time as with classical sensing, compressed sensing typically consists of taking inner products between the signal and general sampling kernels. Secondly, signal reconstruction in the Shannon-Nyquist framework is done by sinc interpolation, which takes very little computation, whereas compressed sensing signal recovery methods are typically computationally intensive. In addition, with traditional transform coding approaches, the quality of the resulting image is determined primarily by the encoder at the time of encoding, whereas with compressed sensing the development of improved recovery algorithms may improve the quality of the final image. The ability of compressed sensing techniques to allow signals to be acquired at rates below the Nyquist rate may allow for an increase in the frame rate of ultrasound imaging techniques by reducing the amount of data acquired. In medical imaging, compressed sensing techniques have been successfully applied to MRI in order to reduce scan time [14].
MRI is particularly amenable to compressed sensing techniques, as the images are already acquired in the Fourier domain (k-space); the primary difficulty therefore lies in the design of appropriate sampling patterns, and no new hardware needs to be developed. MRI is also of interest as it is the other commonly used cross-sectional medical imaging modality that does not make use of ionising radiation. Although MRI is capable of greater detail than ultrasound, it is significantly more expensive, not portable, and has much slower scan times. Typically, compressed sensing approaches make no assumption on the signals being acquired other than that of sparsity. However, it is often the case that we may have knowledge of the signal properties, and the use of this knowledge can improve the ability to reconstruct compressively sensed signals. For example, it may be the case that we expect the signals to be temporally correlated, and a method was proposed in [15] to reduce the negative effect of temporal correlation on the recovery of compressively sensed signals. In addition, we might expect the non-zero elements of a signal to cluster together, and this leads to the assumption of block structure, used in [16] and combined with the assumption of temporal correlation in [17]. Another approach is to take into account the expected statistical properties of the signals. Although a common assumption, justified by the central limit theorem, is that signals are Gaussian, in recent years it has become known that some natural signals do not obey this assumption. As the primary assumption needed for the central limit theorem to be applicable is that of finite variance, it is not surprising that we find that these signals can often be modeled with α-stable distributions, as is implied by the generalised central limit theorem [18].
These distributions have found applications in financial modeling [19] (indeed, it has been argued that the 2007-2008 financial crisis can be partially attributed to the model error caused by the assumption of Gaussianity [20]), and it has also been shown that ultrasound images can be better modeled by a symmetric α-stable (SαS) distribution than by a Gaussian distribution [21]. There has been significant work done on applying compressed sensing techniques to ultrasound imaging. In 2012, [22] produced a review of these attempts, which suggested that they fall into four categories. The first category consists of methods that model the object being scanned as a sparse collection of scatterers. This is perhaps the easiest form to implement with existing ultrasound hardware. [23] demonstrated an implementation of this idea, although they did note some difficulties in dealing with the sensing matrix (estimated to be 458GB for a typical problem size), which they addressed by using a powerful GPU and recomputing the entries of the sensing matrix instead of storing them. It is also worth noting that they used a discrete cosine basis, as is used in this paper (although they used a 2D DCT). Work since includes that of [24], who modelled the acquired signals as a convolution of a point spread function and a tissue reflectivity function, and improved on the reconstruction quality of previous work. The earlier work of [25] combined deconvolution and CS ideas. The second category consists of methods that take advantage of the sparsity of the raw RF data, e.g. [26] and [27]. More recent work includes that of [28], who introduced the idea of compressed beamforming in the context of the Xampling framework, and that of [29], which extended this work to the idea of beamforming in the frequency domain. The third category consists of methods that take advantage of the sparsity of ultrasound images in the 2D Fourier transform domain.
Several of these, such as [30] and [31], adopt a Bayesian approach for the reconstruction of ultrasound images. [32] introduced a framework for the compressed sensing of medical ultrasound based on modelling data with a SαS distribution, and proposed an approach using the iteratively reweighted least squares (IRLS) algorithm for ℓp pseudonorm minimisation, with p related to the characteristic exponent of the distribution of the underlying data. This approach was further extended in [33], where it was shown that performance could be improved by taking into account knowledge of the support. Another approach using a line-by-line strategy is found in [34], which made use of the correlations between each ultrasound line. It is these approaches that are most closely related to the work presented in this paper. Another 1D approach can be found in the work of [35], where an FRI-based approach was used. The final category relates to Doppler imaging, which is a problem with a somewhat different nature [36], [37]. We have previously shown that it is possible to improve the reconstruction performance by taking advantage of the non-Gaussianity, temporal correlation and block structure of the ultrasound data [38], [39], building on the work in [34], which was the first to apply the T-MSBL method (compensating for the negative effect of temporal correlation) to the recovery of compressively sensed ultrasound images. The acquisition of medical ultrasound data in a manner suitable for compressed sensing techniques has been examined in other works, e.g. [28], and general methods using compressed sensing for sub-Nyquist analog-to-digital converters have also been developed, e.g. [40], but this is beyond the scope of this paper. Here, we build on previous work, and show that by careful selection of parameters for selected structured Sparse Bayesian Learning methods, we can significantly improve the resulting reconstruction quality.
We also compare these methods to a number of existing approaches, and show that we obtain significant improvements. The rest of this paper is organised as follows. We first introduce some technical background in section II, while in section III we introduce the methods we will be comparing. In section IV we describe the datasets used, in section V we present our results on these datasets, and in section VI we conclude the paper.

II. BACKGROUND
In this section, we provide a brief overview of the models and theory used for compressed sensing, as well as introducing the notation used in the paper.
A. Notation
• ‖x‖_0, ‖x‖_1 and ‖x‖_2 denote the ℓ0 pseudo-norm, and the ℓ1 and ℓ2 norms of the vector x.
• A_i. denotes the i-th row of the matrix A, and A_.j denotes the j-th column of the matrix A.
• A ⊗ B denotes the Kronecker product of the matrices A and B.

B. Models
The SMV model of compressed sensing is given by

y = Ax + v (1)

Here, y ∈ R^{N×1} represents the observed measurements, A ∈ R^{N×M} is the measurement matrix, v ∈ R^{N×1} is a noise vector, and x ∈ R^{M} is the source vector we want to recover. In the context of ultrasound imaging, we can consider x to correspond to a line of the ultrasound image, with each line corresponding to a single transducer element. The elements of x therefore correspond to equally spaced time-domain samples of the reflected echoes. The MMV model is given by

Y = AX + V (2)

Here, Y ∈ R^{N×L} represents the observed measurements, A ∈ R^{N×M} is the measurement (or sensing) matrix, V ∈ R^{N×L} is a noise matrix, and X ∈ R^{M×L} is the source matrix we want to recover, with each row corresponding to a possible source. In the ultrasound context, X is now the entire ultrasound image, with each column of X corresponding to a single line of the ultrasound image. The columns of X are arranged such that they correspond to the spatial positions of the individual transducers. Compressed sensing relies upon the idea of sparsity. We say that x is k-sparse if at most k components of x are non-zero, and similarly we will say that X is k-sparse if at most k rows of X are non-zero. In the absence of noise, in the SMV case, it has been shown that under certain assumptions (which are satisfied with probability 1 if the entries of A are drawn independently from a continuous probability distribution), 2k measurements (i.e.
N = 2k) are sufficient to guarantee the exact recovery of x by finding the x with the minimal number of non-zero elements such that Ax = y [13]. Under similar assumptions, along with the assumption that X is of maximal column rank and has a sufficient number of columns, k + 1 measurements are sufficient to recover X exactly by finding the X with the minimal number of non-zero rows such that AX = Y [10]. However, this method of recovery is computationally expensive, as it requires searching over the possible sets of non-zero elements or rows. If we assume that there are at most k non-zero elements (or rows), and assume that we start searching from the smallest possible set, then we would need to check O(M^k) sets, as shown in equation (3):

Σ_{j=0}^{k} (M choose j) = O(M^k) (3)

It is clear that this quickly becomes unfeasible for larger values of M and k, and hence faster approximate methods such as ℓ1 norm minimisation are used. If we assume that the entries of the sensing matrix are drawn independently from a Gaussian distribution with zero mean and variance 1/N [41], then approximately Ck log(M/N) measurements are needed to ensure that the recovery of a k-sparse vector will be exact (i.e. the recovery based on ℓ1 norm minimisation will coincide with that based on ℓ0 pseudonorm minimisation) with high probability. Similarly, if a k-sparse vector provides a good approximation of x, Ck log(M/N) measurements are needed to ensure that the estimate of x recovered with ℓ1 minimisation can also be expected to provide a good approximation for x.

III. RECONSTRUCTION METHODS
This section describes several methods that can be used for the reconstruction of compressively sensed signals. The methods include techniques for both the SMV and MMV cases, and take into consideration assumptions on the signals including those of temporal and spatial correlation, block structure, prior knowledge of the support, and non-Gaussianity.
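As a concrete reference for the models these methods operate on, the SMV model (1) and MMV model (2) of section II can be simulated in a few lines. This is a minimal numpy sketch; the sizes, sparsity level and noise level are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, L, k = 172, 512, 8, 20  # illustrative sizes, not the exact experimental setup

# SMV model: y = Ax + v, with x k-sparse
A = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, M))  # random Gaussian sensing matrix
x = np.zeros(M)
support = rng.choice(M, size=k, replace=False)      # k non-zero positions
x[support] = rng.normal(size=k)
y = A @ x + 0.01 * rng.normal(size=N)

# MMV model: Y = AX + V, with the L columns sharing one row support
X = np.zeros((M, L))
X[support, :] = rng.normal(size=(k, L))
Y = A @ X + 0.01 * rng.normal(size=(N, L))
```

In the ultrasound setting of this paper, each column of X would be (the DCT of) one line of the image, and the shared-support assumption corresponds to neighbouring lines having similar sparsity structure.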
A. Temporal-Multiple Sparse Bayesian Learning & Temporal-Multiple Sparse Bayesian Learning-Mixture of Gaussians-a
The T-MSBL and T-MSBL-MoG-a algorithms for the MMV model are as described in the work of [15] and [39]. The core idea is to learn the correlation structure between the measurement vectors and compensate for it.
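The compensation step can be illustrated with a toy whitening operation: if the temporal correlation matrix B of the non-zero rows of X were known, right-multiplying by its inverse square root would decorrelate the measurement vectors. This sketch conveys only the intuition behind T-MSBL, not its actual learning rules, which estimate B jointly with the other model parameters.

```python
import numpy as np

def whiten_rows(Y, B):
    # inverse matrix square root of B via its eigendecomposition;
    # right-multiplying by B^{-1/2} removes correlation along the time axis
    w, V = np.linalg.eigh(B)
    B_inv_half = V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T
    return Y @ B_inv_half

# toy example: AR(1)-style temporal correlation with coefficient r
rng = np.random.default_rng(1)
L, r = 6, 0.9
B = r ** np.abs(np.subtract.outer(np.arange(L), np.arange(L)))
Y = rng.normal(size=(1000, L)) @ np.linalg.cholesky(B).T  # rows have covariance B
Yw = whiten_rows(Y, B)
# the sample covariance of the whitened rows is approximately the identity
```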
B. Block Sparse Bayesian Learning-Bound Optimization
Another technique derived in a similar way to T-MSBL is the method of Block Sparse Bayesian Learning (BSBL) proposed by [16]. This method can be used to exploit the fact that the non-zero components of each sample in time (column of the image) tend to occur in clusters. This technique works on each column individually, as a technique for the SMV model. The block structure model for x is given by equation (4):

x = [x_1, ..., x_{d_1}, ..., x_{d_{g-1}+1}, ..., x_{d_g}]^T (4)

where the first d_1 elements form the block x^T_1, and the final elements form the block x^T_g. The assumption used is that each block is independent, and distributed according to a zero-mean Gaussian distribution. This gives the prior shown by equations (5) and (6):

p(x; γ_i, B_i, ∀i) ∼ N(0, Σ_0) (5)

Σ_0 = diag(γ_1 B_1, ..., γ_g B_g) (6)

Here, B_i represents the correlation structure within a block, and γ_i is an unknown nonnegative scalar parameter that determines the sparsity level of the i-th block. To avoid overfitting, it is assumed that B_i = B_j = B ∀i, j. In this paper, it will be assumed that the blocks all have equal length. The learning rules are obtained by following an Expectation-Maximisation method (for details, see the work of [16]). In this paper, the BSBL-Bound Optimisation (BSBL-BO) algorithm, which is significantly faster than the Expectation-Maximisation based BSBL algorithm [42], is used. This changes only the learning rule for γ_i; the other learning rules remain the same.

C. Spatio-Temporal Sparse Bayesian Learning
The ST-SBL method, proposed by [17], is a combination of the block sparsity idea of the BSBL method and the correction for temporal correlation of the T-MSBL method [17], and it works on the MMV model. The assumption ST-SBL makes on the structure of X is that it has block structure, as given by equation (7):

X = [X^T_{[1].}, X^T_{[2].}, ..., X^T_{[g].}]^T (7)

where X_{[i].} ∈ R^{d_i × L} is the i-th block of rows of X and Σ_{i=1}^{g} d_i = M, and it is assumed that only a few of the blocks X_{[i].} are non-zero. As with the BSBL-BO method, in this paper it will be assumed that the blocks are all of equal size, and therefore d_i = d ∀i. For each block, it is assumed that entries in the same row of X_{[i].} are correlated and that entries in the same column of X_{[i].} are correlated, and therefore that each block has spatiotemporal correlation. Similarly to the other SBL-based algorithms, it is assumed that each block has a Gaussian distribution:

p(vec(X^T_{[i].}); γ_i, B, C_i) = N(0, (γ_i C_i) ⊗ B) (8)

Here, B ∈ R^{L×L} is an unknown positive definite matrix that captures the correlation structure in each row of X_{[i].}, C_i ∈ R^{d×d} is an unknown positive definite matrix that captures the correlation structure in each column of X_{[i].}, and γ_i is an unknown nonnegative scalar parameter that determines the sparsity level of the i-th block. The distribution of X (assuming independence of the blocks) can be written as shown by equation (9):

p(vec(X^T); B, {C_i, γ_i}_i) = N(0, Π ⊗ B) (9)

where Π is a block diagonal matrix given by

Π ≜ diag(γ_1 C_1, ..., γ_g C_g) (10)

The relationship to the BSBL model is clear, and indeed when L = 1 the ST-SBL model reduces to the BSBL model. As with BSBL and T-MSBL, the learning rules are found by following an Expectation-Maximisation algorithm. For details, see the work of [17]. Note that the implementations of both the ST-SBL and BSBL-BO algorithms use a rule to remove blocks (i.e.
set them to zero) if the corresponding γ_i is below a certain threshold. This is done by removing the corresponding columns of A and rows of X to produce Ã and X̃, which are then used for the remainder of the process. In this paper, this threshold will be referred to as ¯Γ.

D. Iterative Reweighted Least Squares with Dual Prior
It was shown by [21] that ultrasound RF echoes can be modeled using a power-law shot noise model, and it was shown by [43] that this model is related to α-stable distributions. The IRLS approach for ℓp pseudonorm minimisation has been used to attempt to take advantage of this, with p related to α [32]. This has been further extended to use knowledge of the support by [33], using the modified IRLS algorithm from the work of [44], and it is that algorithm that will be used in this paper. p is related to the characteristic exponent of the underlying distribution by setting p slightly below α (see [32] for the exact relationship).
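A minimal sketch of the IRLS idea for ℓp pseudonorm minimisation, with an optional support prior folded into the weights, might look as follows. This is only an illustration of the approach of [32], [33], not the published algorithms: the weight update, the smoothing schedule, and the way support knowledge enters (here, a simple `support_scale` boost) are simplified assumptions.

```python
import numpy as np

def irls_lp(A, y, p=0.8, iters=30, eps=1.0, support=None, support_scale=10.0):
    """IRLS sketch for lp pseudonorm minimisation subject to Ax = y.
    `support` is an optional set of indices believed to be non-zero;
    their weights are boosted so they are penalised less."""
    # start from the minimum-l2-norm solution of Ax = y
    x = A.T @ np.linalg.solve(A @ A.T, y)
    for _ in range(iters):
        w = (x ** 2 + eps) ** (1.0 - p / 2.0)   # IRLS weights ~ |x|^(2-p)
        if support is not None:
            w[list(support)] *= support_scale   # trust the prior support more
        AW = A * w                              # A @ diag(w), computed cheaply
        x = w * (A.T @ np.linalg.solve(AW @ A.T, y))
        eps = max(eps / 10.0, 1e-9)             # anneal the smoothing term
    return x
```

Each iterate is the weighted minimum-norm solution x = W Aᵀ(A W Aᵀ)⁻¹y, so the constraint Ax = y holds (up to numerics) at every step while the annealed weights progressively concentrate the energy onto a sparse support.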
1) Block IRLS:
The BIRLS algorithm used in this paper is an adaptation of the IRLS algorithm, with the weights calculated by summing across each block, and no prior support information is used.

IV. COMPRESSIVE ULTRASOUND SIMULATIONS
A. Thyroid image data set

Figure 1. DCTs of images (a) 1, (b) 2 and (c) 3 from image set 1

Figure 2. Ultrasound images (a) 1, (b) 2, (c) 3, (d) 4, (e) 5 and (f) 6 from set 1
The first set of images we use to test these methods comes from the same dataset as those used in [33]. The data corresponds to in vivo healthy thyroid glands. The images were acquired using a Siemens Sonoline Elegra scanner with a 7.5 MHz linear probe and a sampling frequency of 50 MHz. Each of the 7 images we use for testing was acquired by cropping patches of size 512 × 256 from the original images. That is, the images consist of 256 lines, each of length 512. We use a Discrete Cosine Transform (DCT) rather than a Discrete Fourier Transform (DFT) to avoid mapping the original real data to complex data, which would essentially double the amount of data: we would then have effectively mapped the data from R^M to C^M before applying the sensing matrix to it, which is equivalent to a mapping from R^M to R^{2M}. It is possible to take this into account by halving the sampling rate and then exploiting the conjugate symmetry of the DFT of real data whilst recovering the original signal, but this adds a layer of unnecessary complexity. It is for similar reasons that the DCT is used in preference to the DFT in several compression standards such as JPEG. Although it is common to use 2D DCT or wavelet transforms for compressed sensing of images, this relates to the use of a CCD to capture the images; as a CCD is a 2D array of sensors, that approach is useful there. However, for ultrasound, each ultrasound line corresponds to a single transducer, and so a line-by-line approach is more practical. Using a line-by-line approach also has the advantage that the reconstruction of each line can be handled independently, and so parallelisation is trivial.
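The triviality of parallelising a line-by-line scheme can be sketched as follows. The per-line solver here is a regularised least-squares placeholder standing in for the SBL/IRLS methods of section III, and threads are used purely for simplicity of illustration.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def recover_line(A, y):
    # placeholder per-line solver: regularised least squares stands in
    # for the per-column SBL/IRLS reconstruction methods
    M = A.shape[1]
    return np.linalg.solve(A.T @ A + 1e-3 * np.eye(M), A.T @ y)

def recover_image(A, Y, workers=4):
    # each column of Y is one compressively sensed line; since the lines
    # are reconstructed independently, they can be dispatched in parallel
    with ThreadPoolExecutor(max_workers=workers) as ex:
        cols = ex.map(lambda j: recover_line(A, Y[:, j]), range(Y.shape[1]))
        return np.column_stack(list(cols))
```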
In addition, if we wish to compressively sense an M × M image, and we assume that the number of measurements we need is a constant fraction of the number of pixels in the image, then compressively sensing the entire image at once requires a sensing matrix with O(M^4) entries, whereas the line-by-line approach requires a sensing matrix with only O(M^2) entries. Figure 1 shows the DCTs of images 1, 2, and 3 with logarithmic compression to better highlight the locations of the non-zero elements. This shows us that the assumption of sparsity in the frequency (DCT) domain is reasonable and is therefore likely to lead to good results. To simulate a compressive sampling system, we take the DCT of each line of the image, and then project this onto a random Gaussian basis at two levels, 33% subsampling and 50% subsampling, corresponding to A ∈ R^{172×512} and A ∈ R^{256×512}. This simulation of the sensing system is as follows:
1) The DCT of each line (prior to envelope detection/logarithmic compression being applied) of the original ultrasound data is calculated
2) Each of these DCTs is multiplied by the sensing matrix A
3) The CS reconstruction methods are applied to recover these DCTs
4) The inverse DCT is then applied to each of the recovered lines
5) Envelope detection/logarithmic compression is then applied
Note that the signals we sense are the raw ultrasound data and not the displayed image. The displayed (B-mode) images are the images we use to calculate the PSNR. These are created by taking the Hilbert transform of the original data, adding this to the original data as the complex component, taking the absolute value, logarithmically compressing this, and finally rescaling such that the smallest value corresponds to black (zero) and the largest value corresponds to white (one).
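The simulation steps above, together with the B-mode conversion and the PSNR computation used later, can be sketched end-to-end. This is a hedged illustration: the recovery step is a plain pseudoinverse placeholder rather than one of the CS methods of section III, random data stands in for the RF lines, and log1p stands in for the exact logarithmic compression used.

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.signal import hilbert

def psnr(ref, img):
    # PSNR = 20 log10(MAX) - 10 log10(MSE), cf. equation (11)
    mse = np.mean((ref - img) ** 2)
    return 20 * np.log10(ref.max()) - 10 * np.log10(mse)

def bmode(rf):
    # envelope via the analytic signal (original data + j * Hilbert transform),
    # then log compression (log1p as a stand-in) and rescaling to [0, 1]
    env = np.abs(hilbert(rf, axis=0))
    logimg = np.log1p(env)
    return (logimg - logimg.min()) / (logimg.max() - logimg.min())

rng = np.random.default_rng(0)
rf = rng.normal(size=(512, 256))                 # stand-in for the raw RF data
A = rng.normal(size=(256, 512)) / np.sqrt(256)   # 50% subsampling matrix

coeffs = dct(rf, axis=0, norm='ortho')           # steps 1-2: per-line DCT,
Ymeas = A @ coeffs                               # then random projection
coeffs_hat = np.linalg.pinv(A) @ Ymeas           # step 3: placeholder recovery
rf_hat = idct(coeffs_hat, axis=0, norm='ortho')  # step 4: inverse DCT
quality = psnr(bmode(rf), bmode(rf_hat))         # step 5 + evaluation
```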
B. Brno dataset
The final set of tests is on a set of 84 images from the Signal Processing Laboratory of the Brno University of Technology. The description of the dataset is reproduced for reference below.

    The database contains images of common carotid artery (CCA) of ten volunteers (mean age 27.5 ± 3.5 years) with different weight (mean weight 76.5 ± 9.7 kg). Images (usually eight images per volunteer) were acquired with a Sonix OP ultrasound scanner with different set-up of depth, gain, time gain compensation (TGC) curve and different linear array transducers. The image database contains 84 B-mode ultrasound images of CCA in longitudinal section. The resolution of images is approximately 390x330px. The exact resolution depends on the set-up of the ultrasound scanner. Two different linear array transducers with different frequencies (10MHz and 14MHz) were used. These frequencies were chosen because of their suitability for superficial organs imaging. All images were taken by the specialists with five year experience with scanning of arteries. Images were captured in accordance to the standard protocol with patients lying in the supine position and with the neck rotated to the left side while the right CCA was examined.
It should be noted that these images, unlike those in the first data set, are provided after envelope detection and logarithmic compression, and hence any differences in the results when compared to those of the previous data set must be examined with this in mind. Before being used to test the algorithms, the images were cropped such that the height was a multiple of 32 and the width a multiple of 4. The simulated sensing system works as it did in the previous section, except that the envelope detection and logarithmic compression steps are skipped.

V. RESULTS
In this section, we conduct intensive tests in order to quantify the performance of the various methods described in section III. In order to evaluate the performance of the algorithms, we calculate the PSNR of the recovered images. The PSNR is given by equation (11):

PSNR(Î, I) = 20 log10(MAX_I) − 10 log10(MSE(Î, I)) (11)

Here, MAX_I is the maximum possible pixel value in the image. In this section, ST-SBL x/y refers to ST-SBL using a block size of y and processing columns in blocks of size x, and BSBL-BO x refers to BSBL-BO using a block size of x.

A. Thyroid dataset
We first consider the effect of block size on the performance of the BSBL-BO method, fixing the pruning parameter ¯Γ to 10^−. The block size can be thought of as the size we expect clusters of non-zero elements in the DCT of each line of the ultrasound image to be. We consider only the case where all block sizes are equal, and therefore all block sizes we consider are powers of 2. In fact, we tested all such block sizes, but present only the most relevant results. In this case, we can see from Table I that at a 33% subsampling rate, a block size of 32 provides optimal recovery for all images, whereas for a 50% subsampling rate we can see from Table II that although a block size of 32 is still optimal for most of the images, another block size now provides better results for some of the images. This is slightly surprising, as we would expect the block structure to be a property of the image being reconstructed, and not of the sampling method. We now consider the effect of block size for ST-SBL, along with considering the effect of the number of columns processed at a time. As with block size, we only consider processing the columns in blocks of equal size, and therefore all column block sizes we consider are powers of 2. As with BSBL-BO, we tested all such block sizes and column block sizes, but present only the most relevant results. ST-SBL works on the assumption of shared sparsity between columns, and we can therefore think of the column block size as representing how fast we expect the sparsity structure to change as we move between the DCTs of each line of the ultrasound image, with smaller column block sizes corresponding to faster changes. In this case, we can see from Table III that at a 33% subsampling rate, the optimal block size is typically 32, which is consistent with the results we obtained for BSBL-BO, and the optimal column block size is typically 1. For the images that deviate from this, these parameters would still be close to optimum, with only a small maximum loss in terms of PSNR.
Moving to a 50% subsampling rate, these parameters become optimal for all images. Note that with L = 1, ST-SBL reduces to the BSBL case, and so the small difference observed between these methods is due to BSBL being implemented with a bound optimization method and ST-SBL with an expectation-maximization method, although there may also be other slight differences between the implementations. We now consider the effect of the pruning parameter ¯Γ. This parameter controls when blocks are pruned, that is, at what level blocks are assumed to be zero. It can be thought of as relating to how small (in terms of the sum of squares of the block) we expect a block to be before it no longer has a significant effect on the quality of the recovered image. For BSBL-BO, we can see from Table VII that decreasing ¯Γ to 2.×10^− results, at the 33% subsampling level, in a slight increase in performance, but does not change the optimal block size. The pattern is repeated at the 50% subsampling level, with Table V showing a greater increase in performance than at the 33% subsampling level, but no change in optimal block size. For ST-SBL, decreasing ¯Γ to 2.×10^− results in no change in performance at the 33% subsampling level (results are not shown as they are identical to those in Table III). At a 50% subsampling rate, Table VI shows an increase in performance, but no change in optimal block size or column block size. These results are consistent with the results seen for BSBL-BO.

Table I. Results for BSBL-BO (PSNR) at a 33% subsampling rate (¯Γ = 10^−)

Table II. Results for BSBL-BO (PSNR) at a 50% subsampling rate (¯Γ = 10^−)

Table III. Results for ST-SBL (PSNR) at a 33% subsampling rate (¯Γ = 10^−)

Table IV. Results for ST-SBL (PSNR) at a 50% subsampling rate (¯Γ = 10^−)

Table V. Results for BSBL-BO (PSNR) at a 50% subsampling rate (¯Γ = 2.×10^−)

Table VI. Results for ST-SBL (PSNR) at a 50% subsampling rate (¯Γ = 2.×10^−)

Table VII. Results for BSBL-BO (PSNR) at a 33% subsampling rate (¯Γ = 2.×10^−)
SUBSAMPLING RATE ( ¯Γ = 2 . × − ) : We now compare the results obtained with ST-SBL andBSBL-BO to the other methods we described in sectionIII. Table VIII shows comparisons with a number of othermethods for recovery of compressively sensed signals at a33% subsampling rate, and Table IX for a 50% subsamplingrate. Of the methods tested, IRLS dual prior consistently hasthe worst performance. At a subsampling rate of 33%, T-MSBL-MoG-4 outperforms T-MSBL in 6 out of 7 cases,whereas when we move to a 50% subsampling rate, T-MSBLconsistently outperforms T-MSBL-MoG-4. In addition to thesemethods Tables VIII and IX also show the PSNR that would beachieved by taking the 86 and 128 largest elements (in absolutevalue) of the DCT of each ultrasound line respectively. 86and 128 were chosen to be half the measurements taken, asall optimal methods were SMV methods, and in this case, ifthe vector we wish to recover is k -sparse, a minimum of k measurements are required. Interestingly, both ST-SBL 1/32( ¯Γ = 2 . × − ) and BSBL-BO ( ¯Γ = 2 . × − )performed significantly better than this method, suggesting thatmethods seeking to approximate the k -sparse approximationmay not be ideal. MethodImage ST-SBL 1/32( ¯Γ =2 . × − ) BSBL-BO 32( ¯Γ =2 . × − ) IRLS - Dualprior1 44.03 43.98 30.012 36.62 36.49 26.123 35.35 35.97 27.544 39.94 39.93 28.745 37.49 37.50 28.716 39.34 39.24 30.097 43.52 43.48 25.34MethodImage T-MSBL T-MSBL-MoG-4 86-spase1 31.11 31.53 38.112 26.92 26.94 34.563 28.34 29.78 35.914 29.78 29.31 36.105 29.92 30.75 34.586 31.66 32.28 36.727 29.05 27.64 36.58Table VIIIC OMPARISON OF RECOVERY RESULTS AT
SUBSAMPLING LEVEL (PSNR)
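The 86- and 128-sparse baselines in Tables VIII and IX keep only the k largest-magnitude DCT coefficients of each ultrasound line. A minimal sketch of such a baseline follows; this is an illustration using SciPy's orthonormal DCT, not the exact transform length or normalisation used to produce the tables:

```python
import numpy as np
from scipy.fft import dct, idct

def best_k_term_dct(line, k):
    """Keep only the k largest-magnitude DCT coefficients of a 1-D signal."""
    coeffs = dct(line, norm="ortho")
    keep = np.argsort(np.abs(coeffs))[-k:]   # indices of the k largest coefficients
    sparse = np.zeros_like(coeffs)
    sparse[keep] = coeffs[keep]
    return idct(sparse, norm="ortho")
```

A signal that is exactly k-sparse in the DCT domain is reproduced exactly; real RF lines are only approximately sparse, so the discarded coefficients bound the PSNR this baseline can reach.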
        ST-SBL 1/32      BSBL-BO 32       IRLS -
Image   (Γ̄ = 2.×10^-)    (Γ̄ = 2.×10^-)    Dual prior   T-MSBL   T-MSBL-MoG-4   128-sparse
1       53.27            45.69            33.43        37.44    37.31          41.80
2       46.07            45.19            29.64        32.62    30.37          38.15
3       44.90            31.56            31.29        35.25    34.94          39.26
4       50.04            50.01            32.78        34.02    34.01          39.78
5       47.89            48.05            32.23        35.87    35.84          38.69
6       49.80            49.45            33.13        38.10    37.70          40.82
7       53.09            50.49            32.49        35.27    35.12          39.73

Table IX. Comparison of recovery results at the 50% subsampling level (PSNR).

Figure 3. Image 1, subsampled at 50% and recovered with (a) ST-SBL 1/32 (Γ̄ = 2.×10^-), (b) BSBL-BO 32 (Γ̄ = 2.×10^-), (c) IRLS - Dual prior, (d) T-MSBL, (e) T-MSBL-MoG-4, (f) 128-sparse.

Figure 4. Image 2, subsampled at 33% and recovered with (a) ST-SBL 1/32 (Γ̄ = 2.×10^-), (b) BSBL-BO 32 (Γ̄ = 2.×10^-), (c) IRLS - Dual prior, (d) T-MSBL, (e) T-MSBL-MoG-4, (f) 86-sparse.

Figure 5. Time vs Performance for a 33% subsampling rate.

Figure 3 shows Image 1 after being subsampled at 50% and then recovered with the various algorithms; Figure 4 shows the same for Image 2 after it was subsampled at 33%. Whilst we are primarily concerned with the quality of the reconstruction, the time taken by the various reconstruction methods is of some practical interest. Figures 5 and 6 show plots of average time against the average PSNR achieved. For these timings, the algorithms were run using MATLAB 2014a on a computer equipped with an i7-3770 processor and 8 GB of RAM. Although processing more than a single column at a time with ST-SBL does not typically lead to improved recovery performance, the drop in performance is minimal, and, as seen in Figures 5 and 6, processing multiple columns at once yields a significant reduction in the computational time required.
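Reading the "c/b" labels as "c image columns recovered jointly, with block size b" (an interpretation inferred from the column block sizes reported in the tables, not notation defined here), the column grouping behind the single-column vs multi-column trade-off can be sketched as:

```python
import numpy as np

def column_groups(image, cols_per_group):
    """Split an image's columns into groups to be recovered jointly.
    cols_per_group = 1 recovers each column on its own (slowest, best quality);
    larger groups amortise the per-solve cost across several columns."""
    n_rows, n_cols = image.shape
    return [image[:, i:i + cols_per_group] for i in range(0, n_cols, cols_per_group)]
```

Each group is then a multiple-measurement-vector problem solved in one call, which is where the timing gains in Figures 5 and 6 come from.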
Figure 6. Time vs Performance for a 50% subsampling rate
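All quantitative comparisons in this section are reported in PSNR. For reference, a minimal sketch of the metric, assuming 8-bit images so that the peak value is 255:

```python
import numpy as np

def psnr(reference, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(reconstruction, float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```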
Overall, the best recovery performance was achieved with ST-SBL 1/32 (Γ̄ = 2.×10^-). However, once we take the computational time into account, we would consider ST-SBL 4/32 (Γ̄ = 10^-) to be the best practical choice, as it offers only a minimal decrease in performance and a very significant reduction in the computational time required.

A number of other methods were also tested on this set of images. The settings used for these methods are described below.
• CoSaMP, FBMP and Subspace Pursuit were fed the "true" number of non-zero elements, chosen as the size of the support used in the IRLS Dual Prior method.
• The value of p used to calculate the weights in the BIRLS method was chosen in the same way as for IRLS.
• BIRLS and Block OMP (BOMP) were tested with block sizes of 16 and 32.
• MFOCUSS, MSBL, FOCUSS, FBMP and ℓ1-magic used a value of … for the noise variance.

Method                  1      2      3      4      5      6      7
HTP [45]                28.52  25.13  25.74  26.70  23.97  27.62  25.99
EM-SBL [46]             28.65  25.26  26.53  28.51  27.77  29.51  26.69
FOCUSS [47]             27.75  24.62  25.98  27.43  25.38  27.65  26.43
CoSaMP [48]             16.62  15.21  15.37  14.83  13.97  13.83  16.78
SL0 [49]                30.57  26.70  27.89  29.46  29.33  31.88  27.78
FBMP [50]               29.14  25.49  26.26  27.80  28.03  30.81  26.17
Subspace Pursuit [51]   28.14  24.92  24.73  24.85  22.68  28.52  25.09
ExCoV [52]              25.17  22.72  23.80  22.64  22.91  22.51  24.35
AMP [53]                29.06  25.84  27.02  28.66  27.03  29.67  27.32
BCS [54]                28.36  24.91  25.71  27.26  27.40  30.20  25.72
l1-magic [55]           29.04  25.82  27.00  28.68  27.19  29.67  27.31
BIRLS 16                38.17  34.97  35.60  36.64  38.25  40.18  36.14
BOMP 16 [56]            18.12  16.88  15.97  16.56  14.26  13.45  18.20
BIRLS 32                …      …      …      …      …      …      …
BOMP 32                 17.76  16.24  16.48  15.81  14.44  14.49  17.47
MSBL [57]               33.12  29.13  29.67  30.66  32.77  34.65  30.01
MFOCUSS [10]            33.30  29.99  31.04  32.30  34.11  35.77  30.57

Table X. Reconstruction quality of the image set after the DCT was subsampled at the 33% level (A ∈ R^(…×…)). Results given in terms of PSNR.

Method                  1      2      3      4      5      6      7
HTP                     21.63  19.17  19.29  19.91  18.83  20.21  20.46
EM-SBL                  22.67  FAIL   FAIL   FAIL   FAIL   FAIL   FAIL
FOCUSS                  22.20  20.00  20.87  21.07  20.07  20.69  21.91
CoSaMP                  17.00  15.54  15.81  15.48  13.75  14.69  17.54
SL0                     …      …      …      …      …      …      …
…
BOMP 32                 14.59  13.80  13.12  13.25  14.48  14.09  14.60
MSBL                    29.82  26.03  27.21  28.51  28.90  31.03  28.28
MFOCUSS                 30.98  27.15  28.87  29.86  30.37  32.07  28.66

Table XI. Reconstruction quality of the image set after the DCT was subsampled at the 50% level (A ∈ R^(…×…)). Results given in terms of PSNR. FAIL indicates that the method returned only zeros.

The results of these tests can be seen in Table XI, which shows the results when the subsampling rate was 50%, and Table X, which shows the results when the subsampling rate was 33%. The only method that performed sufficiently well to be of interest was BIRLS, which was the only method to outperform any of the previously tested methods. However, it was still outperformed by the previously tested methods that took advantage of block structure, which suggests that the block structure assumption provides significant performance gains. It is also notable that at the 50% subsampling level, some of the methods returned a vector consisting only of zeros when attempting to recover some images; in these cases the methods are considered to have failed entirely.

However, block sparsity is clearly not a sufficient assumption in and of itself, as can be seen from the dreadful performance of the BOMP method. The easiest explanation for this is that, due to the nature of the method, a number of elements of the estimated vector are guaranteed to be zero; the effect this has on performance can also be seen in the poor performance of the CoSaMP method.

Also of interest is the fact that both MFOCUSS and MSBL outperformed their SMV model counterparts. Unlike ST-SBL, these methods do not make any use of block structure. This suggests that attempting to impose both block structure and joint sparsity on the signal effectively forces "too much" structure on the signal, leading to worse performance. This is in line with the poor results of BOMP, which forces some blocks to be exactly zero, whereas ST-SBL, BSBL-BO and BIRLS all allow for solutions that are only approximately sparse.
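To make concrete why a greedy block method forces exact zeros, here is a minimal, generic Block-OMP sketch (our own illustration, not the implementation of [56]): one block of columns is selected per iteration, and every coefficient outside the selected blocks stays identically zero.

```python
import numpy as np

def block_omp(A, y, block_size, n_blocks_to_pick):
    """Minimal Block-OMP: greedily select column blocks of A, then
    least-squares fit y on the selected columns only. Coefficients
    outside the selected blocks remain exactly zero."""
    n = A.shape[1]
    blocks = [np.arange(i, min(i + block_size, n)) for i in range(0, n, block_size)]
    residual = y.copy()
    chosen = []
    x = np.zeros(n)
    for _ in range(n_blocks_to_pick):
        # pick the block whose columns correlate most strongly with the residual
        scores = [np.linalg.norm(A[:, b].T @ residual) for b in blocks]
        best = int(np.argmax(scores))
        if best not in chosen:
            chosen.append(best)
        support = np.concatenate([blocks[b] for b in chosen])
        x_sup, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = x_sup
        residual = y - A @ x
    return x
```

For a signal that is only approximately block sparse, the hard-zeroed blocks discard real energy, which is consistent with the poor BOMP figures above.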
B. Brno dataset
Due to the large size of this test set, only a limited number of algorithms were tested. The algorithms chosen for testing were ST-SBL 1/32 (Γ̄ = 2.×10^-), ST-SBL 4/32 (Γ̄ = 10^-), BIRLS and ℓ…

Method                       Average PSNR   Number of times best method
ST-SBL 1/32 (Γ̄ = 2.×10^-)    24.90          20
ST-SBL 4/32 (Γ̄ = 10^-)       …              …
BIRLS 32                     23.46          5
ℓ…                           …              …

Table XII. Reconstruction quality of images from the set after downsampling at the … level.

Method                       Average PSNR   Number of times best method
ST-SBL 1/32 (Γ̄ = 2.×10^-)    24.96          20
ST-SBL 4/32 (Γ̄ = 10^-)       …              …
BIRLS 32                     23.44          2
ℓ…                           …              …

Table XIII. Reconstruction quality of images from the set after downsampling at the … level.
The results are mostly as would be expected from the previous results. The only unexpected item to note is that ST-SBL 4/32 (Γ̄ = 10^-) outperforms ST-SBL 1/32 (Γ̄ = 2.×10^-) on this data set. This could be due to the signals being downsampled after envelope detection and logarithmic correction, or it could be because the supports of the DCTs of neighbouring ultrasound lines are more closely related in this set of images.

VI. Conclusions
In this paper, we have thoroughly investigated what we consider to be the state-of-the-art methods for the reconstruction of compressively sensed medical ultrasound images. We have shown that by varying the parameters of structured Sparse Bayesian Learning methods, we can achieve significant improvements in the recovery of compressively sensed ultrasound images. On the other hand, if we are willing to accept a slight decrease in recovery performance, we can significantly reduce the computational time required for recovery. The advantage of the structured Sparse Bayesian Learning methods is very significant, and it is worth considering why this is the case. The IRLS approach is an attempt to improve upon ℓ1 norm minimisation by more closely mimicking ℓ0 pseudonorm minimisation. However, if we examine the signals we are sensing, we see that in fact they have very few zero elements, and hence ℓ0 pseudonorm minimisation is unlikely to be the ideal approach. Therefore, we can conclude that this advantage comes from the previously noted ability of these methods to recover non-sparse signals [17]. Hence, although it may be of some theoretical interest, the development of methods which more closely approximate ℓ0 pseudonorm minimisation is unlikely to provide significant practical advantages, and efforts to improve the reconstruction of compressively sensed signals should instead focus on more accurately modelling the structure and statistical properties of the signals.

Our current work focuses on taking advantage of the statistical properties of ultrasound images in the MMV case, building on previous work [32], [33].

VII. Acknowledgements
The authors would like to thank Dr Zhilin Zhang from Samsung Research America - Dallas for making his code for T-MSBL and BSBL available online, and for providing us with the code for the ST-SBL method. We would also like to thank Dr Adrian Basarab from the IRIT Laboratory, Toulouse, France, for providing the thyroid ultrasound images used in this study.

This work was supported in part by the Engineering and Physical Sciences Research Council (EP/I028153/1) and the University of Bristol.

References

[1] C. Merritt, "Ultrasound safety: what are the issues?," Radiology, vol. 173, no. 2, pp. 304–306, 1989.
[2] T. L. Szabo, Diagnostic Ultrasound Imaging: Inside Out. Academic Press, 2004.
[3] G. D. Stetten, T. Ota, C. J. Ohazama, C. Fleishman, J. Castellucci, J. Oxaal, T. Ryan, J. Kisslo, and O. v. Ramm, "Real-time 3D ultrasound: A new look at the heart," Journal of Cardiovascular Diagnosis and Procedures, vol. 15, no. 2, pp. 73–84, 1998.
[4] S. Yagel, S. Cohen, I. Shapiro, and D. Valsky, "3D and 4D ultrasound in fetal cardiac scanning: a new look at the fetal heart," Ultrasound in Obstetrics & Gynecology, vol. 29, no. 1, pp. 81–95, 2007.
[5] M. Cikes, L. Tong, G. R. Sutherland, and J. Dhooge, "Ultrafast cardiac ultrasound imaging: technical principles, applications, and clinical benefits," JACC: Cardiovascular Imaging, vol. 7, no. 8, pp. 812–823, 2014.
[6] M. Tanter and M. Fink, "Ultrafast imaging in biomedical ultrasound," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 61, no. 1, pp. 102–119, 2014.
[7] R. G. Baraniuk, "More is less: signal processing and the data deluge," Science, vol. 331, no. 6018, pp. 717–719, 2011.
[8] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[9] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[10] S. F. Cotter, B. D. Rao, K. Engan, and K. Kreutz-Delgado, "Sparse solutions to linear inverse problems with multiple measurement vectors," IEEE Transactions on Signal Processing, vol. 53, no. 7, pp. 2477–2488, 2005.
[11] G. K. Wallace, "The JPEG still picture compression standard," Communications of the ACM, vol. 34, no. 4, pp. 30–44, 1991.
[12] D. Le Gall, "MPEG: A video compression standard for multimedia applications," Communications of the ACM, vol. 34, no. 4, pp. 46–58, 1991.
[13] R. Baraniuk, M. A. Davenport, M. F. Duarte, and C. Hegde, "An introduction to compressive sensing," Connexions e-textbook, 2011.
[14] S. Vasanawala, M. Murphy, M. T. Alley, P. Lai, K. Keutzer, J. M. Pauly, and M. Lustig, "Practical parallel imaging compressed sensing MRI: Summary of two years of experience in accelerating body MRI of pediatric patients," in 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 1039–1043, IEEE, 2011.
[15] Z. Zhang and B. D. Rao, "Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning," IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 5, pp. 912–926, 2011.
[16] Z. Zhang and B. D. Rao, "Recovery of block sparse signals using the framework of block sparse Bayesian learning," in 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3345–3348, IEEE, 2012.
[17] Z. Zhang, T.-P. Jung, S. Makeig, Z. Pi, and B. Rao, "Spatiotemporal sparse Bayesian learning with applications to compressed sensing of multichannel physiological signals," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 22, no. 6, pp. 1186–1197, 2014.
[18] B. Gnedenko and A. Kolmogorov, "Limit distributions for sums of independent random variables," Amer. J. Math., vol. 105, pp. 28–35, 1954.
[19] J. Voit, The Statistical Mechanics of Financial Markets. Springer Science & Business Media, 2013.
[20] T. Marsh and P. Pfleiderer, ""Black Swans" and the financial crisis," Review of Pacific Basin Financial Markets and Policies, vol. 15, no. 02, p. 1250008, 2012.
[21] M. A. Kutay, A. P. Petropulu, and C. W. Piccoli, "On modeling biomedical ultrasound RF echoes using a power-law shot-noise model," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 48, no. 4, pp. 953–968, 2001.
[22] H. Liebgott, A. Basarab, D. Kouame, O. Bernard, and D. Friboulet, "Compressive sensing in medical ultrasound," in 2012 IEEE International Ultrasonics Symposium (IUS), pp. 1–6, IEEE, 2012.
[23] M. F. Schiffner and G. Schmitz, "Fast pulse-echo ultrasound imaging employing compressive sensing," in …, pp. 688–691, IEEE, 2011.
[24] Z. Chen, A. Basarab, and D. Kouamé, "Reconstruction of enhanced ultrasound images from compressed measurements using simultaneous direction method of multipliers," arXiv preprint arXiv:1512.05586, 2015.
[25] Z. Chen, A. Basarab, and D. Kouamé, "Compressive deconvolution in medical ultrasound imaging," IEEE Transactions on Medical Imaging, vol. 35, no. 3, pp. 728–737, 2016.
[26] L. Demanet and L. Ying, "Wave atoms and sparsity of oscillatory patterns," Applied and Computational Harmonic Analysis, vol. 23, no. 3, pp. 368–387, 2007.
[27] H. Liebgott, R. Prost, and D. Friboulet, "Pre-beamformed RF signal reconstruction in medical ultrasound using compressive sensing," Ultrasonics, vol. 53, no. 2, pp. 525–533, 2013.
[28] N. Wagner, Y. C. Eldar, and Z. Friedman, "Compressed beamforming in ultrasound imaging," IEEE Transactions on Signal Processing, vol. 60, no. 9, pp. 4643–4657, 2012.
[29] T. Chernyakova and Y. C. Eldar, "Fourier-domain beamforming: the path to compressed ultrasound imaging," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 61, no. 8, pp. 1252–1267, 2014.
[30] C. Quinsac, N. Dobigeon, A. Basarab, D. Kouamé, and J.-Y. Tourneret, "Bayesian compressed sensing in ultrasound imaging," in 2011 4th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), pp. 101–104, IEEE, 2011.
[31] N. Dobigeon, A. Basarab, D. Kouamé, and J.-Y. Tourneret, "Regularized Bayesian compressed sensing in ultrasound imaging," in Proceedings of the 20th European Signal Processing Conference (EUSIPCO), pp. 2600–2604, IEEE, 2012.
[32] A. Achim, B. Buxton, G. Tzagkarakis, and P. Tsakalides, "Compressive sensing for ultrasound RF echoes using a-stable distributions," in 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 4304–4307, IEEE, 2010.
[33] A. Achim, A. Basarab, G. Tzagkarakis, P. Tsakalides, and D. Kouamé, "Reconstruction of compressively sampled ultrasound images using dual prior information," in 2014 IEEE International Conference on Image Processing (ICIP), pp. 1283–1286, IEEE, 2014.
[34] G. Tzagkarakis, A. Achim, P. Tsakalides, and J.-L. Starck, "Joint reconstruction of compressively sensed ultrasound RF echoes by exploiting temporal correlations," in 2013 IEEE 10th International Symposium on Biomedical Imaging (ISBI), pp. 632–635, IEEE, 2013.
[35] R. Tur, Y. C. Eldar, and Z. Friedman, "Innovation rate sampling of pulse streams with application to ultrasound imaging," IEEE Transactions on Signal Processing, vol. 59, no. 4, pp. 1827–1842, 2011.
[36] O. Lorintiu, H. Liebgott, and D. Friboulet, "Compressed sensing Doppler ultrasound reconstruction using block sparse Bayesian learning," IEEE Transactions on Medical Imaging, vol. 35, no. 4, pp. 978–987, 2016.
[37] S. M. Zobly and Y. M. Kakah, "Compressed sensing: Doppler ultrasound signal recovery by using non-uniform sampling & random sampling," in 2011 28th National Radio Science Conference (NRSC), pp. 1–9, IEEE, 2011.
[38] R. Porter, V. B. Tadic, and A. M. Achim, "Reconstruction of compressively sensed ultrasound RF echoes by exploiting non-Gaussianity and temporal structure," in …, (Quebec, Canada), Sept. 2015.
[39] R. Porter, V. Tadic, and A. Achim, "Sparse Bayesian learning for non-Gaussian sources," Digital Signal Processing, vol. 45, pp. 2–12, 2015.
[40] M. Mishali, Y. C. Eldar, O. Dounaevsky, and E. Shoshan, "Xampling: Analog to digital at sub-Nyquist rates," IET Circuits, Devices and Systems, vol. 5, no. 1, pp. 8–20, 2011.
[41] E. J. Candes, J. K. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Communications on Pure and Applied Mathematics, vol. 59, no. 8, pp. 1207–1223, 2006.
[42] Z. Zhang and B. D. Rao, "Extension of SBL algorithms for the recovery of block sparse signals with intra-block correlation," IEEE Transactions on Signal Processing, vol. 61, no. 8, pp. 2009–2015, 2013.
[43] A. P. Petropulu and J.-C. Pesquet, "Power-law shot noise and its relationship to long-memory α-stable processes," IEEE Transactions on Signal Processing, vol. 48, no. 7, pp. 1883–1892, 2000.
[44] C. J. Miosso, R. Von Borries, M. Argàez, L. Velázquez, C. Quintero, and C. Potes, "Compressive sensing reconstruction with prior information by iteratively reweighted least-squares," IEEE Transactions on Signal Processing, vol. 57, no. 6, pp. 2424–2431, 2009.
[45] S. Foucart, "Hard thresholding pursuit: an algorithm for compressive sensing," SIAM Journal on Numerical Analysis, vol. 49, no. 6, pp. 2543–2563, 2011.
[46] D. P. Wipf and B. D. Rao, "Sparse Bayesian learning for basis selection," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2153–2164, 2004.
[47] I. F. Gorodnitsky and B. D. Rao, "Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm," IEEE Transactions on Signal Processing, vol. 45, no. 3, pp. 600–616, 1997.
[48] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 301–321, 2009.
[49] G. H. Mohimani, M. Babaie-Zadeh, and C. Jutten, "Fast sparse representation based on smoothed l0 norm," in International Conference on Independent Component Analysis and Signal Separation, pp. 389–396, Springer, 2007.
[50] P. Schniter, L. C. Potter, and J. Ziniel, "Fast Bayesian matching pursuit," in 2008 Information Theory and Applications Workshop, pp. 326–333, IEEE, 2008.
[51] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing: Closing the gap between performance and complexity," tech. rep., DTIC Document, 2008.
[52] K. Qiu and A. Dogandzic, "Variance-component based sparse signal reconstruction and model selection," IEEE Transactions on Signal Processing, vol. 58, no. 6, pp. 2935–2952, 2010.
[53] D. L. Donoho, A. Maleki, and A. Montanari, "Message passing algorithms for compressed sensing: I. Motivation and construction," in 2010 IEEE Information Theory Workshop (ITW 2010, Cairo), pp. 1–5, IEEE, 2010.
[54] S. Ji, Y. Xue, and L. Carin, "Bayesian compressive sensing," IEEE Transactions on Signal Processing, vol. 56, no. 6, pp. 2346–2356, 2008.
[55] E. Candes and J. Romberg, "l1-magic: Recovery of sparse signals via convex programming," …, vol. 4, p. 14, 2005.
[56] Y. C. Eldar, P. Kuppinger, and H. Bolcskei, "Block-sparse signals: Uncertainty relations and efficient recovery," IEEE Transactions on Signal Processing, vol. 58, no. 6, pp. 3042–3054, 2010.
[57] D. P. Wipf and B. D. Rao, "An empirical Bayesian strategy for solving the simultaneous sparse approximation problem,"