EXIT Chart Analysis of Turbo Compressed Sensing Using Message Passing De-Quantization
Amin Movahed, Student Member, IEEE, Mark C. Reed, Senior Member, IEEE, and Shahriar Etemadi Tajbakhsh
Abstract—We propose an iterative decoding method, which we call turbo-CS, for the reception of concatenated source-channel encoded sparse signals transmitted over an AWGN channel. The turbo-CS encoder applies 1-bit compressed sensing as a source encoder concatenated serially with a convolutional channel encoder. At the turbo-CS decoder, an iterative joint source-channel decoding method is proposed for signal reconstruction. We analyze, for the first time, the convergence of the turbo-CS decoder by determining an EXIT chart of the constituent decoders. We modify the soft-outputs of the decoder to improve the signal reconstruction performance of the turbo-CS decoder. For a fixed signal reconstruction performance RSNR of dB, we achieve more than dB of improvement in the channel SNR after iterations of turbo-CS. Alternatively, for a fixed SNR of − dB, we achieve a dB improvement in RSNR.

Index Terms—turbo-coding, 1-bit compressed sensing, joint source-channel decoding, message passing de-quantization
I. INTRODUCTION
Sparse signals are widely observed in applications ranging from astronomical and medical imaging to data collected by groups of sensors [1]–[3]. The sparsity in such signals is effectively exploited in compressed sensing (CS) to retrieve a high-dimensional sparse signal from a smaller number of measurements and to encode it into an arguably smaller number of values [4], [5]. In other words, compressed sensing can be categorized as a source coding method for sparse signals. In many applications these sparse signals must be transmitted to a receiver over a communication channel which is typically prone to errors. Therefore, channel coding is also needed to protect the transmitted bits against channel errors. It has been proven that, under idealistic asymptotic conditions where both source and channel codes are assumed to be of infinite length and complexity, optimal performance is achievable via separate source and channel coding. In practice, however, source coding usually leaves some redundancy in the representation of the signal. Joint source-channel decoding is provably optimal under such circumstances, where the residual redundancies from the source coding can be used jointly with the redundancies introduced by the channel code to correct channel errors [6], [7].

In this work, we consider a single-transmitter single-receiver communication system for the transmission of sparse signals over an AWGN channel. We propose an iterative (turbo) method which we refer to as turbo-CS. The turbo-CS encoder consists of a 1-bit CS encoder [8], [9] as a source encoder concatenated serially with a convolutional channel encoder. In other words, the inner encoder in a serial concatenated convolutional
Amin Movahed, Mark C. Reed and Shahriar Etemadi Tajbakhsh are with the School of Engineering and Information Technology, University of New South Wales, Canberra, Australia (e-mails: {a.movahed@student., mark.reed@, s.etemadiajbakhsh@}unsw.edu.au).

code is replaced with a 1-bit CS encoder. We introduce an iterative source-channel decoder where the process of decompression is performed within the loop of a turbo-decoder cycle. It should be noted that, apart from the high performance of the proposed decoder due to the aforementioned reasons, the computational complexity vs. performance of our system is expected to be significantly better than that of a separate system of a concatenated CS decoder and a turbo decoder.

The term 'turbo' refers to the approach of turbo-codes, where two decoders at the receiver exchange a posteriori / a priori information through a number of iterations to improve the decoding process [10]–[12]. The turbo-CS decoder applies an a posteriori probability (APP) decoder as the channel decoder and a soft-in/soft-out (SISO) 1-bit CS decoder as the source decoder.

This paper is a substantial extension of our previous work in [13], where we introduced the basic model of the turbo-CS decoding system. In the SISO 1-bit CS decoder of [13], a heuristically chosen linear mapping function provides a priori information for the APP decoder to be used in the next iteration of the turbo-CS decoder. Since that mapping method is based on a heuristic conversion function, its output bit probabilities do not have the proper distribution to be fed to the APP decoder as input. Consequently, despite the relatively good performance of that system, it is still considerably sub-optimal. Moreover, [13] lacks a solid analysis of the performance of the system, again because of the heuristic mapping function used.

The key contribution of this work is to introduce a new SISO 1-bit decoder which is based on the message passing de-quantization (MPDQ) method of [14].
MPDQ applies a Gaussian approximation to solve a loopy belief propagation problem. The input of MPDQ is a quantized version of noisy CS measurements. This method reconstructs the sparse signal by estimating the mean and variance of the signal elements, from which the bit probabilities we are looking for can be derived. In the special case where the quantizer has two levels, the MPDQ algorithm reduces to a 1-bit CS reconstruction method using a Gaussian approximation. We propose a SISO 1-bit MPDQ algorithm whose input and output are soft-values. SISO 1-bit MPDQ is a tractable algorithm, and its output bit probabilities are matched to the bit probabilities required by the APP decoder. Therefore, we can analyse the performance and convergence of the turbo-CS decoder using an extrinsic information transfer (EXIT) chart. Moreover, we consider a more general class of Bernoulli-Gaussian signals in this work (in comparison to the model in [13]), where the sparsity level of the signal is also stochastic and unknown to the decoder.

This paper is organized as follows: a summary of related background is provided in Section II. In Section III, we explain turbo-CS and the channel transmission model. In Section IV, we introduce the turbo-CS decoder for the system model of Section III. The EXIT chart analysis of the turbo-CS decoder is presented in Section V. The performance of turbo-CS is investigated through numerical experiments in Section VI. The paper is concluded in Section VII.

II. RELATED WORKS
In this work, compressed sensing is applied in the context of joint source-channel decoding. CS was introduced in [4], [15] and has been extensively studied [16]. The number of CS measurements can be reduced in the presence of a statistical characterization of the signal, where Bayesian inference offers the potential for more precise estimation of the signal [17], [18]. In [19], a belief propagation (BP) decoder was developed to accelerate CS encoding and decoding under the Bayesian framework. To overcome the computational complexity of BP, a Gaussian approximation is applied to CS decoding in [20].

In practical applications, the CS measurements are quantized to finite symbols for storage and transmission purposes [21], [22]. The Gaussian approximation of BP, referred to as MPDQ, has also been effectively applied in quantized compressed sensing [14]. The SISO source decoding method introduced in our paper is inspired by MPDQ. However, we have modified MPDQ to accept and provide soft-values (probabilities) as input and output.

A wide range of joint source-channel decoding schemes has been proposed in the literature. Here, we list a few works most relevant to ours (see [7] for an extended survey). In [23], EXIT chart analysis was used to design an iterative source-channel decoder. Statistical characteristics of hidden Markov sources have been used to modify a turbo decoder for joint source-channel decoding [24]. An optimal vector quantizer has been incorporated as a channel encoder jointly with a CS encoder for sparse signal transmission over an AWGN channel [25]. In a paper by Schniter, BP was applied in an iterative approach for the reconstruction of structured-sparse signals [26].

III. SYSTEM MODEL
In this section, we explain the turbo-CS encoding model. We apply a concatenated coding scheme consisting of the combination of two constituent encoders. The turbo-CS encoder includes a 1-bit CS encoder (as a source encoder) and a channel encoder, forming a serial concatenated source-channel encoder. The input of the turbo-CS encoder is a sparse signal.
A. Sparse signal model
We denote a block of the sparse signal by x ∈ R^N. Signal x is called K-sparse when K elements out of N are non-zero. In this work, we consider the case where K is stochastic and the probability of an element being non-zero is known and fixed to ρ. We assume that the elements of x are independent and identically distributed (i.i.d.) and generated from the following Bernoulli-Gaussian distribution:

    x_i ∼ N(0, 1/ρ) with probability ρ, and x_i = 0 with probability 1 − ρ,    (1)

where x_i denotes the i-th element of vector x. Therefore, the probability density function (PDF) of x_i is

    P_{x_i}(x) = ρ √(ρ/(2π)) e^{−ρx²/2} + (1 − ρ) δ(x).    (2)

The PDF of a signal block is obtained from

    P_x(x) = ∏_{i=1}^{N} P_{x_i}(x_i).    (3)

B. 1-bit compressed sensing
In classic CS, the sparse signal x is measured through a small number of linear measurements. The measurement vector y is obtained by multiplying the signal vector x by a matrix Φ ∈ R^{M×N}. The elements of Φ are i.i.d. zero-mean Gaussian random variables with variance 1/M, and the output of CS is

    y = Φx.    (4)

It has been shown that the matrix Φ satisfies the restricted isometry property (RIP), which guarantees signal reconstruction with high probability [27]. In quantized CS, for further storage and transmission purposes, the measurements are quantized to a finite alphabet. In addition, the measurement/quantization noise is modelled by additive Gaussian noise [21]. We denote the measurement/quantization noise by the vector e, where e_i ∼ N(0, σ_e²) for all i. We denote a scalar quantization function by Q_L: R → F, which maps a real-valued CS measurement to the discrete alphabet F with |F| = L.

1-bit CS is the special case of quantized CS where L = 2 and F = {−1, +1}. In fact, the 1-bit quantizer is a simple sign function. We denote the block of 1-bit CS measurements by b ∈ {−1, +1}^M and the 1-bit quantizer by Q_2; hence, we have

    Q_2(y + e) = sign(y + e) = b.    (5)

Since in CS M < N, it can generally be considered a compression method. In particular, we are interested in 1-bit CS because the outputs are in the form of bits and can therefore be interfaced with conventional channel encoders.
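As a concrete illustration of (1)–(5), the encoding chain can be sketched in a few lines of Python. This is an illustrative sketch only; the function name and default dimensions are ours, not taken from the paper.

```python
import numpy as np

def one_bit_cs_encode(N=1000, M=500, rho=0.05, sigma_e=0.0, seed=0):
    """Sketch of the 1-bit CS encoder: Bernoulli-Gaussian source (1),
    Gaussian measurement matrix with variance 1/M, and sign quantizer (5)."""
    rng = np.random.default_rng(seed)
    # Bernoulli-Gaussian signal: non-zero with probability rho,
    # non-zero values drawn from N(0, 1/rho)
    support = rng.random(N) < rho
    x = np.zeros(N)
    x[support] = rng.normal(0.0, np.sqrt(1.0 / rho), support.sum())
    # i.i.d. zero-mean Gaussian measurement matrix, variance 1/M
    Phi = rng.normal(0.0, np.sqrt(1.0 / M), (M, N))
    # measurement/quantization noise e (sigma_e = 0 gives the noiseless case)
    e = rng.normal(0.0, sigma_e, M) if sigma_e > 0 else np.zeros(M)
    b = np.sign(Phi @ x + e)   # 1-bit measurements, eq. (5)
    b[b == 0] = 1              # break the measure-zero tie sign(0) = 0
    return x, Phi, b

x, Phi, b = one_bit_cs_encode()
```

Note the compression in the sense of the text: N real values are reduced to M < N bits.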
C. Concatenated source-channel encoder
We consider the serial concatenated source/channel encoding transmission scheme of Figure 1. The signal x is compressed to b by a 1-bit CS encoder with compression rate N/M. We denote the bit energy by E_b as a measure of transmission power. In the next step, a channel code of rate M/P expands the block of 1-bit CS measurements b to a sequence d of size P.

In classic turbo encoder system models, where two channel encoders are concatenated, an interleaver is used between the two encoders at the transmitter due to the correlation between the two encoding processes [11]. However, since in our scenario there is no correlation between source and channel coding, we confirmed through experiment that utilization of an interleaver does not improve the performance of the turbo system. In this work, we restrict our consideration to convolutional codes. The individual code bits are BPSK-modulated with bit energy E_b M/P and transmitted over a memoryless AWGN channel with noise spectral density N_0 = 2σ_c².

[Fig. 1. Turbo-CS encoding: serial concatenation of a 1-bit CS encoder and a convolutional encoder. Coded bits are transmitted over an AWGN channel.]

For simplicity, we assume that the measurement and quantization process in 1-bit CS is error-free and, therefore, there is no measurement/quantization noise in (5) (σ_e = 0). As illustrated in Figure 1, the only source of noise in the turbo-CS setup is the channel noise.

The remainder of the paper is devoted to the design and analysis of an iterative joint source-channel decoder for the above transmission model. In the next section, we explain a turbo-decoder that reconstructs the signal from noisy turbo-CS encoded bits.

IV. TURBO-CS DECODER
In this section, we propose an iterative joint source-channel decoder, which we refer to as the turbo-CS decoder, for the transmission system described in Section III. The turbo-CS decoder consists of two main parts: an APP decoder as the inner component decoder and a 1-bit CS decoder as the outer component decoder.
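The received signal z that forms the channel-facing input of the APP decoder can be simulated as follows. The LLR expression 2z/σ_c² is the textbook BPSK-over-AWGN result for equiprobable bits, included here as general background rather than a formula quoted from this paper.

```python
import numpy as np

def awgn_channel_llrs(d, sigma_c, seed=0):
    """Pass BPSK symbols d in {-1,+1} through AWGN with noise variance
    sigma_c^2 and return the received samples z and their channel LLRs.
    For equiprobable bits, log(P(d=+1|z)/P(d=-1|z)) = 2 z / sigma_c^2."""
    rng = np.random.default_rng(seed)
    d = np.asarray(d, dtype=float)
    z = d + rng.normal(0.0, sigma_c, d.shape)
    return z, 2.0 * z / sigma_c**2

z, llr = awgn_channel_llrs([1.0, -1.0, 1.0, -1.0], sigma_c=0.1)
```

At low noise levels the LLR signs agree with the transmitted bits, and their large magnitudes reflect high reliability.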
A. A posteriori probability decoder
An APP decoder is a soft-in/soft-out (SISO) decoder [28]. The APP decoder accepts two inputs: 1) the received signal z from the noisy channel and 2) the a priori probabilities of the bits. The latter is provided by the outer constituent decoder (in our case, the 1-bit CS decoder) in the form of soft-values. The soft-value is a convenient way of providing information about the sign and reliability of a bit: the sign of a soft-value indicates the most likely sign of the bit, and the magnitude gives the probability that it is correct. Soft-values can be in the form of log-likelihood ratios (LLRs) [29]. Given the probability of a bit b ∈ {−1, +1}, the LLR of b is defined by

    Λ := log( P_b(+1) / (1 − P_b(+1)) ).    (6)

The logarithm in (6) is the natural logarithm. Conversely, the probability of b given Λ is calculated by

    P_b(+1) = ( tanh(Λ/2) + 1 ) / 2.    (7)

The APP decoder applies the BCJR algorithm, which generates the APP of the states and transitions of a discrete-time finite-state Markov process [30]. The output of the APP decoder is in the form of extrinsic LLR values. We denote the vector of extrinsic LLR values at the output of the APP decoder by Λ[ext]_CD. In addition, Λ[apri]_CD refers to the a priori LLR values at the input of the APP decoder (channel decoder).

The aim in the turbo-CS decoder is to apply a SISO inner component decoder that updates the extrinsic LLR values Λ[ext]_CD from the APP decoder. The updated LLR values are then passed to the APP decoder as a priori LLR values Λ[apri]_CD for the next decoding iteration. Thus, the inner decoding component should be in the form of a SISO 1-bit CS decoder. In the following section, we propose a SISO 1-bit CS decoder as the inner decoder component of the turbo-CS decoder.

B. Bayesian formulation of the soft-in/soft-out 1-bit CS decoding problem
In this section, we derive the Bayesian formulation of the SISO 1-bit CS decoding problem for turbo-CS decoding. In the first step, we focus on a transmission case where there is no noise in the channel. Therefore, the convolutional encoder in Figure 1 can be removed and we have z = b, where b is obtained from (5). As mentioned in the previous section, we assume that the measurement/quantization process is error-free and σ_e = 0. Hence, the conditional PDF of the signal x given the received b is

    P_{x|b}(x|b) ∝ P_{b|y}(b|y) P_x(x)    (8)
               ∝ ∏_{i=1}^{M} P_{b_i|y_i}(b_i|y_i) ∏_{i=1}^{N} P_{x_i}(x_i),    (9)

where ∝ denotes equality after normalization to unity. Since there is no measurement/quantization noise, we have

    P_{b_i|y_i}(b_i|y_i) = 1 if y_i ∈ Q⁻¹(b_i), and 0 if y_i ∉ Q⁻¹(b_i),    (10)

where

    Q⁻¹(b) = [0, +∞) if b = +1, and (−∞, 0) if b = −1.    (11)

The distribution in (9) describes the complete statistical characterization of the decoding problem. The minimum mean square error (MMSE) estimator of x is obtained from

    x̂_MMSE(b) = E(x | b).    (12)

As shown in Figure 1, due to the channel noise there is uncertainty in the received signal z, and after channel decoding the APP decoder provides bit probabilities instead of a fixed b. We have

    P[ext]_b(b) = ∏_{i=1}^{M} P[ext]_{b_i}(b_i),    (13)

where P[ext]_{b_i}(b_i) for each element of b is obtained from (7).
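For intuition, the MMSE estimator (12) under the noiseless model (5) can be approximated by brute force on toy dimensions: draw samples from the Bernoulli-Gaussian prior and average those whose sign pattern matches b. This is our own illustrative sketch (impractical beyond very small problems), not the decoder proposed in the paper.

```python
import numpy as np

def mmse_estimate_mc(Phi, b, rho, n_samples=200_000, seed=0):
    """Monte-Carlo approximation of E(x | b) in (12): keep only prior
    samples consistent with b, i.e. with y_i = (Phi x)_i in Q^{-1}(b_i)."""
    rng = np.random.default_rng(seed)
    M, N = Phi.shape
    support = rng.random((n_samples, N)) < rho
    gauss = rng.normal(0.0, np.sqrt(1.0 / rho), (n_samples, N))
    xs = np.where(support, gauss, 0.0)           # Bernoulli-Gaussian samples
    consistent = np.all(np.sign(xs @ Phi.T) == b, axis=1)
    if not consistent.any():
        return np.zeros(N)                       # no match; need more samples
    return xs[consistent].mean(axis=0)

# toy example: identity "measurements", so b simply gives the sign of each x_i
est = mmse_estimate_mc(np.eye(2), np.array([1.0, -1.0]), rho=0.5)
```

Because the consistency region {x : sign(Φx) = b} is an intersection of half-spaces and hence convex, the returned mean itself satisfies the sign constraints.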
Therefore, the decoding problem for the SISO 1-bit CS decoder is the estimation of x given the bit probabilities in (13). From the law of total expectation we have

    E( x̂_MMSE(b) ) = Σ_{all b} E(x | b) P[ext]_b(b) = E(x) = x̂,    (14)

where x̂ denotes the estimate of the original signal x via 1-bit SISO decoding. In other words, in the turbo-CS decoder, the output LLR values of the APP decoder (6) are input as soft-values to the SISO 1-bit CS decoder. As the next step, the SISO 1-bit CS decoder estimates the original signal x from (14) through an estimation method.
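The conversions (6) and (7) that shuttle soft-values between the two decoders are straightforward; a minimal sketch follows (function names are ours).

```python
import numpy as np

def prob_to_llr(p_plus, eps=1e-12):
    """LLR of b in {-1,+1} from P_b(+1), as in (6); eps guards log(0)."""
    p = np.clip(np.asarray(p_plus, dtype=float), eps, 1.0 - eps)
    return np.log(p / (1.0 - p))

def llr_to_prob(llr):
    """Inverse mapping (7): P_b(+1) = (tanh(Lambda/2) + 1) / 2."""
    return (np.tanh(np.asarray(llr, dtype=float) / 2.0) + 1.0) / 2.0
```

A zero LLR maps to P_b(+1) = 1/2, i.e. no a priori knowledge, which matches the all-zero initialization of Λ[apri]_CD used in the first turbo-iteration.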
A popular iterative computational method to approximate the MMSE estimator x̂_MMSE is loopy belief propagation (LBP) [31]. However, the implementation of LBP for de-quantization problems where the matrix Φ is not sparse is computationally highly complex: LBP requires the computation of high-dimensional distributions at the measurement nodes, which makes its application impractical in de-quantization problems. In the next section, we explain a SISO 1-bit CS decoding algorithm which solves (14) using the Gaussian approximate message passing technique [20], [32], [33].

C. Soft-in/soft-out 1-bit message passing de-quantization
As discussed in Section IV-B, due to its high complexity, implementation of the MMSE estimator in (12) using belief propagation techniques is not feasible. Recently, Kamilov et al. introduced the message passing de-quantization (MPDQ) algorithm to solve (12) [14]. MPDQ is based on the Gaussian approximate message-passing algorithm referred to as generalized approximate message passing (G-AMP) [20].

In the following, we modify MPDQ to estimate the original signal x by x̂ in (14). The proposed SISO 1-bit CS decoder (as the inner component of the turbo-CS decoder) applies the modified version of MPDQ to estimate the original signal x jointly with the APP decoder through turbo-iterations. We have modified two parts of MPDQ:

• We change the non-linear factor update functions and adapt them to accept the probabilities of the bits as input. The input bit probabilities P[ext]_b(b) are obtained from the a priori LLR values given to the SISO 1-bit CS decoder, denoted by the vector Λ[apri]_SD (a priori LLR values for the source decoder).

• We extend the algorithm to provide bit probabilities at the output of MPDQ (for the a priori input of the APP decoder in the next iteration). We denote the related LLR values at the output of MPDQ by the vector Λ[ext]_SD.

Here, we explain the MPDQ algorithm including the modified steps:
1) Initialization: The initial mean and variance vectors are set with respect to the prior distribution of x in (2). Therefore, x̂(0) = E(x) = 0 and v^x(0) = var(x) = 1, where the number inside the parentheses denotes the iteration index, and 0 and 1 denote the all-zero and all-one vectors, respectively. We set the initial value of the intermediate variable ŝ(0) = 0.
2) Linear factor update functions:

    v^p(t) = (Φ • Φ) v^x(t−1),    (15)
    p̂(t) = Φ x̂(t−1) − v^p(t) • ŝ(t−1).    (16)
3) Non-linear factor update functions:

    ŝ_i(t) = E_F( P[ext]_{b_i}(b_i), p̂_i(t), v^p_i(t) ),    (17)
    v^s_i(t) = V_F( P[ext]_{b_i}(b_i), p̂_i(t), v^p_i(t) ),    (18)

where • denotes element-wise multiplication and E_F and V_F are functions over scalar values:

    E_F( P_b(b), p̂, v^p ) = (1/v^p) ( E(z) − p̂ ),    (19)
    V_F( P_b(b), p̂, v^p ) = (1/v^p) ( 1 − var(z)/v^p ),    (20)

where z has the a priori distribution z ∼ N(p̂, v^p). The non-linear functions E_F and V_F are modified versions of the factor update functions in [14]; the modified functions allow a more general input model where all quantization symbols for each factor are possible inputs with different probabilities. The expectation and variance in (19) and (20) are calculated as follows: from the law of total expectation we have

    E(z) = Σ_{b=−1,+1} P_b(b) E( z | z ∈ Q⁻¹(b) ).    (21)

In addition, from the law of total variance we obtain

    var(z) = E_b( var(z|b) ) + var_b( E(z|b) )
           = E_b( var(z|b) ) + E_b( E(z|b)² ) − E(z)²,    (22)

where

    E_b( var(z|b) ) = Σ_{b=−1,+1} P_b(b) var( z | z ∈ Q⁻¹(b) ),    (23)
    E_b( E(z|b)² ) = Σ_{b=−1,+1} P_b(b) E( z | z ∈ Q⁻¹(b) )².    (24)

We refer the reader to [34] for the calculation of the expectation and variance of the truncated Gaussian distributions in (21), (23) and (24).
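The mixture in (21)–(24) can be sketched in scalar form as follows. This is our own transcription of (17)–(24) using textbook formulas for the moments of one-sided truncated Gaussians, not code from [14], and should be checked against that reference.

```python
import math

def _phi(t):
    """Standard normal PDF."""
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def _Phi(t):
    """Standard normal CDF."""
    return 0.5 * math.erfc(-t / math.sqrt(2.0))

def truncated_moments(m, v, b):
    """Mean and variance of z ~ N(m, v) conditioned on z in Q^{-1}(b):
    [0, inf) for b = +1 and (-inf, 0) for b = -1 (textbook formulas)."""
    s = math.sqrt(v)
    a = -m / s                                  # truncation point, standardized
    if b == +1:                                 # z truncated to [0, inf)
        lam = _phi(a) / max(1.0 - _Phi(a), 1e-300)
        return m + s * lam, v * (1.0 + a * lam - lam * lam)
    lam = _phi(a) / max(_Phi(a), 1e-300)        # z truncated to (-inf, 0)
    return m - s * lam, v * (1.0 - a * lam - lam * lam)

def factor_update(p_plus, p_hat, v_p):
    """Scalar version of (17)-(24): mixture over b = +/-1 weighted by the
    input bit probabilities, with prior z ~ N(p_hat, v_p)."""
    probs = {+1: p_plus, -1: 1.0 - p_plus}
    mom = {b: truncated_moments(p_hat, v_p, b) for b in (+1, -1)}
    Ez = sum(probs[b] * mom[b][0] for b in (+1, -1))            # (21)
    Evar = sum(probs[b] * mom[b][1] for b in (+1, -1))          # (23)
    Ez2 = sum(probs[b] * mom[b][0] ** 2 for b in (+1, -1))      # (24)
    var_z = Evar + Ez2 - Ez ** 2                                # (22)
    s_hat = (Ez - p_hat) / v_p                                  # (19)
    v_s = (1.0 - var_z / v_p) / v_p                             # (20)
    return s_hat, v_s
```

For an uninformative input (P_b(+1) = 1/2) and a symmetric prior (p̂ = 0), the update returns ŝ = 0 and v^s = 0: an equiprobable bit carries no information about the sign of the corresponding measurement.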
4) Linear variable update functions:

    v^r(t) = ( (Φ • Φ)^T v^s(t) )^{−1},    (25)
    r̂(t) = x̂(t−1) + v^r(t) • ( Φ^T ŝ(t) ).    (26)
5) Non-linear variable update functions:

    x̂_i(t) = E_V( r̂_i(t), v^r_i(t) ),    (27)
    v^x_i(t) = V_V( r̂_i(t), v^r_i(t) ),    (28)

where E_V and V_V are functions over scalar values and we have

    E_V(r̂, v^r) = E(x | r̂)    (29)

and

    V_V(r̂, v^r) = var(x | r̂).    (30)

The above expected value and variance are with respect to r̂ = x + w, where w ∼ N(0, v^r).
6) Termination of iterations: Then t ← t + 1, and we proceed to step 2) until a convergence criterion is satisfied or t reaches the maximum number of iterations. Through the above update steps, MPDQ produces the estimate x̂ of the original signal x through inner iterations.
7) Soft-output: The term 'soft-output' refers to the updated bit probabilities at the output of the SISO 1-bit CS decoder. To update the bit probabilities, we need the distribution of the estimate of y. We approximate the distribution of the estimated elements of the original signal x using the mean and variance of x in (29) and (30). Since y is a linear combination of x, the estimate of the original y is obtained from

    ŷ = E(y) = Φ x̂    (31)

and

    v^y = var(y) = (Φ • Φ) v^x,    (32)

where each element of the estimated y has an approximately Gaussian distribution with the mean and variance in (31) and (32). Hence, we can update the probability of each corresponding bit at the output of the SISO 1-bit CS decoder, denoted by P[apri]_b. The updated bit probabilities are calculated from

    P[apri]_{b_i}(b_i) = Q( −b_i ŷ_i / √(v^y_i) ),    (33)

where Q(x) = (1/√(2π)) ∫_x^∞ exp(−u²/2) du. The updated bit probabilities are then converted to LLR values via (6) and form the vector Λ[ext]_SD (extrinsic LLR values of the source decoder). We refer to this algorithm as SISO 1-bit MPDQ.

D. Combination of APP decoding and soft-in/soft-out 1-bit message passing de-quantization
In turbo-CS decoding, the APP decoder and SISO 1-bit MPDQ are applied as the inner and outer decoder components, respectively. In each iteration of the turbo-CS decoder, the APP decoder decodes each block of the received signal z to provide extrinsic LLR values Λ[ext]_CD. These extrinsic values are given to SISO 1-bit MPDQ as Λ[apri]_SD. SISO 1-bit MPDQ, through a number of inner iterations, estimates the signal block x̂ and updates the bit probabilities from (33). The updated bit probabilities P[apri]_b(b)
are converted to Λ[ext]_SD and given to the APP decoder as Λ[apri]_CD for the next turbo-iteration. The turbo-CS decoder estimates the original signal x with x̂ through several turbo-iterations (Figure 2). In the first iteration of turbo-CS, no a priori information is available at the input of the APP decoder and the a priori bits are equiprobable; hence, Λ[apri]_CD is initialized with the all-zero vector.

[Fig. 2. Turbo-CS decoding: the APP decoder and the SISO 1-bit MPDQ decoder iteratively estimate the signal x̂ by exchanging LLR values.]

V. ANALYSIS OF TURBO-CS DECODER
A. EXIT chart for iterative processes
Extrinsic information transfer (EXIT) chart is a popular andpowerful tool for analysis and prediction of the convergencebehaviour of iterative techniques [35]–[37]. In this section, webriefly review the EXIT chart approach. We refer the readerto [35] for comprehensive explanation of EXIT chart.In an EXIT chart, each constituent decoder of the turboprocess is presented by an EXIT function. The EXIT functiondescribes the relation between input/output of each individualdecoder by applying mutual information measure I ( · ; · ) from(34). For constituent decoders of turbo-CS scheme we have: • APP decoder: I [ apri ] CD = I % b ;Λ [ ext ] SD & →I [ ext ] CD = I % b ;Λ [ ext ] CD & • SISO 1-bit MPDQ decoder: I [ apri ] SD = I % b ;Λ [ ext ] CD & →I [ ext ] SD = I % b ;Λ [ ext ] SD & The EXIT functions for 1-bit MPDQ and APP decoderare illustrated in Figure 3. Here, we apply a correspondingAPP decoder of a rate / recursive systematic convolutionalchannel encoder . The code polynomials are G = ’ , , ( .Note that the EXIT function for APP decoder depends onthe quality of channel because P z | d ( z | d ) is considered indetermination of Λ [ ext ] CD [37]. We define signal to noise ratio(SNR) as the channel noise measure bySNR = E b N = 1 σ c (34) I ( b ;Λ) = 12 ) b = − , +1 P Λ | b (Λ | b ) log * P Λ | b (Λ | b ) P Λ | b (Λ |−
Fig. 3. EXIT chart of the turbo-CS receiver at SNR = − dB. (Legend: SISO 1-bit MPDQ; RSC(1, (37, 33)/23); decoding trajectory. Axes: I^[apri]_CD, I^[ext]_SD [bit] vs. I^[apri]_SD, I^[ext]_CD [bit].)

We set SNR = − dB and bit block size M = 500. The EXIT functions are measured individually, while the decoding trajectory is measured in the actual turbo-CS configuration. In Figure 3, the decoding trajectory is shown by the circled line. The decoding tunnel between the two EXIT functions indicates that the turbo-CS decoder might converge after a number of iterations. As can be seen, the decoding trajectory undershoots the APP EXIT curve. In a classic EXIT chart, the undershooting (overshooting) effect can result from a too small signal block size [35]. However, this is not the case in the turbo-CS scheme, as there is no correlation between the inner and outer component coding systems. Hence, no interleaver is applied in turbo-CS, and the signal block length does not affect the turbo-decoding performance. In our case, the undershooting effect is due to the mismatch between the output LLR values of the SISO 1-bit MPDQ, Λ^[ext]_SD, and the input LLR values of the APP decoder, Λ^[apri]_CD. In fact, the EXIT functions in Figure 3 are obtained through the individual classic configuration, where Λ^[apri]_CD and Λ^[apri]_SD are modelled with the following random process:

Λ = μ_Λ b + n_Λ,  (35)

where n_Λ ∼ N(0, σ_Λ²) and μ_Λ = σ_Λ²/2 [35]. However, the Λ^[ext]_SD obtained in the actual configuration (trajectory) does not have the same Gaussian characteristic as in (35). The reason behind this mismatch is the approximation applied in the generation of the soft-output values in the SISO 1-bit MPDQ in (31), (32) and (33). In the next section, a training-based modification method is introduced which maps Λ^[ext]_SD to true LLR values.
Fig. 4. Input probability vs. modified probability at the output, comparing the genie-aided results with our proposed transform function (M = 500, T = 5). (Legend: genie-aided modification; transform function. Axes: P^[apri]_b(b) vs. P^[mod]_b(b).)
Fig. 5. Input probability vs. modified probability at the output, comparing the genie-aided results with our proposed transform function (M = 500, T = 50). (Legend: genie-aided modification; transform function. Axes: P^[apri]_b(b) vs. P^[mod]_b(b).)
B. Modification of a priori LLR values
To obtain true LLR values from Λ^[ext]_SD, we consider the following characteristic of LLR values generated from (35):

Λ = log ( P_{Λ|b}(Λ | +1) / P_{Λ|b}(Λ | −1) ).  (36)

This property of LLR values gives us a benchmark for investigating the accuracy of the probabilities obtained from (33). In other words, if the corresponding bit probabilities, P^[apri]_b(b), were true, then Λ^[ext]_SD would satisfy the property in (36). Accordingly, we calculate the right-hand side of (33) and denote the resulting LLR values (left-hand side) by Λ^[mod]_SD.
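The consistency property (36) can be checked in closed form under the Gaussian model (35): log[N(λ; μ_Λ, σ_Λ²) / N(λ; −μ_Λ, σ_Λ²)] = 2μ_Λλ/σ_Λ², which equals λ exactly when μ_Λ = σ_Λ²/2. A short numerical verification of this identity (our own sketch, not from the paper):

```python
import numpy as np

def gauss_pdf(x, mean, sigma):
    """Gaussian density N(x; mean, sigma^2)."""
    return np.exp(-(x - mean) ** 2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Consistency check of (36) under model (35): with mu = sigma^2/2,
# the log-density ratio log p(L|+1)/p(L|-1) reduces to L itself.
sigma = 2.0
mu = sigma**2 / 2.0
lam = np.linspace(-8.0, 8.0, 1001)
lhs = lam
rhs = np.log(gauss_pdf(lam, +mu, sigma) / gauss_pdf(lam, -mu, sigma))
```

An LLR stream that violates this identity, as Λ^[ext]_SD does in the actual configuration, is exactly what the modification step is designed to repair.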
The obtained values of Λ^[mod]_SD are true LLR values that need to be given to the APP decoder as Λ^[apri]_CD. Determining the conditional probability values in (36) requires knowledge of the original output bits of the 1-bit CS encoder, b, at the receiver. Here, we apply T blocks of training bit sequences which are predefined at both the transmitter and the receiver. For simplicity, we derive a modification function for bit probabilities rather than LLR values, as probabilities have a bounded magnitude. The flow of the modification process is as follows:

P^[apri]_b(b) →(1) Λ^[ext]_SD →(2) Λ^[mod]_SD →(3) P^[mod]_b(b)

(1) The bit probabilities obtained in (33) are mapped to Λ^[ext]_SD from (6).
(2) Λ^[ext]_SD are modified to Λ^[mod]_SD by using the training bit sequences in (36). The conditional probabilities in (36) are calculated from the samples of Λ^[ext]_SD and the training bits. In this fashion, the Λ^[ext]_SD are grouped in a number of histogram bins.
Therefore, the precision of the conditional probabilities depends on the number of training blocks T (Figures 4 and 5).
(3) The modified LLRs are mapped to modified bit probabilities from (7).

The modification scatter diagram of bit probabilities is illustrated in Figures 4 and 5 for the first iteration of the turbo-CS setup in Section V-A, for T = 5 and T = 50, respectively. Comparison of these two figures indicates that the accuracy of the numerical calculation of (36) depends on the number of samples (training bit blocks). According to Figures 4 and 5, the relation between P^[apri]_b(b) and P^[mod]_b(b) can be described by a shifted hyperbolic tangent function. The hyperbolic tangent function has also been applied as a tentative device for soft-decision making in iterative multi-user code-division multiple access decoders [38]. We consider the following heuristic model as a probability transform function:

P^[mod]_b(+1) = [ tanh( α ( P^[apri]_b(+1) − 1/2 ) ) + 1 ] / 2,  (37)

where α is a tuning factor defining the slope of the function. The tuning factor needs to be optimized through the training sequence for each turbo-CS iteration number. A numerical non-linear regression method can be applied to find α for each turbo-CS iteration with the input training bit sequence. In this work, we use the Levenberg-Marquardt nonlinear least squares algorithm as the curve-fitting method [39]. We obtain the optimized α for 3 turbo-iterations through T = 50 blocks of training bit sequences with block size M = 500. The decoding trajectory of the turbo-CS decoder with the refined SISO 1-bit MPDQ is depicted in Figure 6. With this probability refinement, the decoding trajectory matches the EXIT functions.
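The probability transform (37) and the fitting of its tuning factor α can be sketched as follows. We use SciPy's `curve_fit`, which defaults to Levenberg-Marquardt for unconstrained problems; the synthetic "genie-aided" target pairs below stand in for the training-based measurements, so the data-generation step (a known α plus noise) is purely our assumption.

```python
import numpy as np
from scipy.optimize import curve_fit

def transform(p_apriori, alpha):
    """Heuristic probability transform (37)."""
    return (np.tanh(alpha * (p_apriori - 0.5)) + 1.0) / 2.0

# Synthetic stand-in for the genie-aided (input, modified) probability
# pairs obtained from the training blocks; in the paper these come from
# the histogram-based LLR modification, not from a known alpha.
rng = np.random.default_rng(1)
p_in = rng.uniform(0.0, 1.0, 2000)
alpha_true = 4.0
p_target = transform(p_in, alpha_true) + 0.01 * rng.standard_normal(p_in.size)

# Levenberg-Marquardt nonlinear least squares recovers the tuning factor.
popt, _ = curve_fit(transform, p_in, p_target, p0=1.0)
alpha_hat = float(popt[0])
```

In the actual scheme one such fit is performed per turbo-CS iteration, since the LLR mismatch changes as the iterations proceed.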
Fig. 6. EXIT chart of the turbo-CS receiver with modified output probabilities at SNR = − dB. To obtain the EXIT trajectory, Λ^[mod]_SD is used instead of Λ^[ext]_SD.

VI. PERFORMANCE OF TURBO-CS DECODER

In this section, we briefly investigate the performance of the turbo-CS decoder. We assume that the signal x ∈ R^N is generated based on the distribution in (1), where N = 1000 and ρ = 0. . The measuring matrix is Φ ∈ R^{M×N} with i.i.d. zero-mean Gaussian entries of variance 1/M, where M = 500. Note that Φ is known by the turbo-CS decoder. We apply a recursive systematic rate-1/3 convolutional encoder with polynomial coefficients G = (1, 37/23, 33/23), i.e., RSC(1, (37, 33)/23), as the channel encoder. The coded bits then pass through an AWGN channel. For turbo-CS tuning purposes, a sequence of T = 50 predefined bit blocks is input to the channel decoder. For each SNR, the tuning factor α in (37) is derived for 6 turbo-CS iterations.

Fig. 7. Performance of the turbo-CS decoder for 6 iterations and for different SNR scenarios (N = 1000, M = 500, ρ = 0. ).
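The simulation setup described above might be generated as follows. We read the distribution in (1), which is not visible in this excerpt, as a Bernoulli-Gaussian sparse model; the sparsity ratio ρ = 0.1 below is a placeholder, since the paper's exact value is not recoverable here.

```python
import numpy as np

# Sketch of the experiment setup: Bernoulli-Gaussian sparse signal
# (our reading of distribution (1)), Gaussian measurement matrix with
# i.i.d. N(0, 1/M) entries, and 1-bit CS measurements b = sign(Phi x).
rng = np.random.default_rng(2)
N, M, rho = 1000, 500, 0.1   # rho is a placeholder value

support = rng.random(N) < rho                    # active coefficients
x = np.where(support, rng.standard_normal(N), 0.0)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # entries with variance 1/M
b = np.sign(Phi @ x)                             # 1-bit CS encoder output
b[b == 0] = 1.0                                  # fix sign(0) convention
```

The bit vector b is then channel-encoded by the RSC code and sent over the AWGN channel.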
Then the turbo-CS receiver is applied to reconstruct the signal through 6 turbo-iterations. The maximum number of inner iterations is set to . We measure the performance of the signal reconstruction by the received signal-to-noise ratio (RSNR), defined by

RSNR = E(‖x‖²) / E(‖x̂ − x‖²),  (38)

where ‖x‖ denotes the Euclidean norm of the vector x. Figure 7 illustrates the performance of the turbo-CS receiver for 6 iterations over 4000 Monte Carlo realizations for different SNRs. As depicted in the figure, the turbo-CS decoder improves the quality of the reconstructed signal by more than dB at an SNR of − dB in comparison to the non-iterative decoder (first iteration). Considering a fixed RSNR, we improve the SNR range by more than 5 dB when the turbo-CS decoder has converged (above an RSNR of dB). We also observe the typical turbo-decoder effects: first, the performance improvement diminishes over the iterations; second, the slope of the curve is much steeper than in our previous paper [13], creating a turbo-cliff performance where the system either converges or it does not.
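The RSNR in (38) averages signal energy over reconstruction-error energy across Monte Carlo realizations; reporting it in dB, as in Figure 7, gives the following sketch (the dB conversion and the array layout are our choices):

```python
import numpy as np

def rsnr_db(x_true, x_hat):
    """RSNR per (38): E(||x||^2) / E(||x_hat - x||^2), reported in dB.
    x_true, x_hat: arrays of shape (realizations, N)."""
    num = np.mean(np.sum(x_true**2, axis=1))
    den = np.mean(np.sum((x_hat - x_true) ** 2, axis=1))
    return 10.0 * np.log10(num / den)

# Toy check: an estimate whose error energy is 1% of the signal energy
# corresponds to an RSNR of about 20 dB.
rng = np.random.default_rng(3)
x = rng.standard_normal((100, 1000))
x_hat = x + 0.1 * rng.standard_normal((100, 1000))
```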
VII. CONCLUSION

In this paper, we introduced a serial concatenated encoding scheme with an iterative source-channel decoding method. The system model, which we refer to as the turbo-CS encoder, is designed for the transmission of sparse signals over an AWGN channel. We proposed SISO MPDQ as a soft-in/soft-out source decoder that can be concatenated with an APP decoder to decode sparse signals in an iterative fashion, which we refer to as the turbo-CS decoder. We have shown, for the first time, an EXIT chart analysis of the turbo-CS decoder, and we modified the soft outputs of the SISO MPDQ to be of the correct distribution required by the APP decoder. We considered the case where the sparsity level of the transmitted signal is stochastic and unknown to the turbo-CS decoder. The numerical experiments show that more than dB of improvement in the signal reconstruction performance (RSNR) can be achieved (at a channel SNR of − dB) over the non-iterative (first-iteration) decoder. Alternatively, more than dB of channel SNR can be gained for a fixed signal reconstruction performance of RSNR = 10 dB.

ACKNOWLEDGEMENT
The authors would like to thank Sundeep Rangan and Ulugbek S. Kamilov for providing us with their MPDQ software and answers to our questions on their method, which greatly helped us in determining this result. In addition, thanks to Ingmar Land for useful discussions and sage advice.

REFERENCES

[1] J. Bobin, J.-L. Starck, and R. Ottensamer, "Compressed sensing in astronomy," IEEE J. Sel. Topics Signal Process., vol. 2, no. 5, pp. 718–726, 2008.
[2] M. Lustig, D. Donoho, and J. M. Pauly, "Sparse MRI: The application of compressed sensing for rapid MR imaging," Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182–1195, 2007.
[3] J. Haupt, W. U. Bajwa, M. Rabbat, and R. Nowak, "Compressed sensing for networked data," IEEE Signal Process. Mag., vol. 25, no. 2, pp. 92–101, 2008.
[4] D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.
[5] E. J. Candès and M. Wakin, "An introduction to compressive sampling," IEEE Signal Process. Mag., vol. 25, no. 2, pp. 21–30, Mar. 2008.
[6] K. Sayood and J. C. Borkenhagen, "Use of residual redundancy in the design of joint source/channel coders," IEEE Trans. Commun., vol. 39, no. 6, pp. 838–846, 1991.
[7] C. Guillemot and P. Siohan, "Joint source-channel decoding of variable-length codes with soft information: A survey," EURASIP J. Appl. Signal Process., vol. 2005, pp. 906–927, 2005.
[8] P. T. Boufounos and R. G. Baraniuk, "1-bit compressive sensing," in Proc. Annual Conf. Inf. Sciences Syst. (CISS), Princeton, NJ, Mar. 2008, pp. 16–21.
[9] L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk, "Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors," IEEE Trans. Inf. Theory, vol. 59, no. 4, pp. 2082–2102, Apr. 2013.
[10] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo-codes. 1," in Proc. IEEE Int. Conf. Commun. (ICC), vol. 2, Geneva, Switzerland, May 1993, pp. 1064–1070.
[11] C. Berrou and A. Glavieux, "Near optimum error correcting coding and decoding: Turbo-codes," IEEE Trans. Commun., vol. 44, no. 10, pp. 1261–1271, 1996.
[12] J. Hagenauer, "Source-controlled channel decoding," IEEE Trans. Commun., vol. 43, no. 9, pp. 2449–2457, Sep. 1995.
[13] A. Movahed and M. C. Reed, "Iterative detection for compressive sensing: Turbo CS," in Proc. IEEE Int. Conf. Commun. (ICC), Sydney, Australia, Jun. 2014, pp. 4518–4523.
[14] U. S. Kamilov, V. K. Goyal, and S. Rangan, "Message-passing de-quantization with applications to compressed sensing," IEEE Trans. Signal Process., vol. 60, no. 12, pp. 6270–6281, 2012.
[15] E. J. Candès and T. Tao, "Decoding by linear programming," IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
[16] F. Zhang and H. D. Pfister, "Compressed sensing and linear codes over real numbers," in Proc. IEEE Inf. Theory Applicat. Workshop, Jan. 2008, pp. 558–561.
[17] S. Ji, Y. Xue, and L. Carin, "Bayesian compressive sensing," IEEE Trans. Signal Process., vol. 56, no. 6, pp. 2346–2356, Jun. 2008.
[18] M. W. Seeger and H. Nickisch, "Compressed sensing and Bayesian experimental design," in Proc. 25th Int. Conf. Mach. Learning (ACM), 2008, pp. 912–919.
[19] D. Baron, S. Sarvotham, and R. G. Baraniuk, "Bayesian compressive sensing via belief propagation," IEEE Trans. Signal Process., vol. 58, no. 1, pp. 269–280, 2010.
[20] S. Rangan, "Generalized approximate message passing for estimation with random linear mixing," arXiv preprint arXiv:1010.5141, 2010.
[21] J. N. Laska and R. G. Baraniuk, "Regime change: Bit-depth versus measurement-rate in compressive sensing," IEEE Trans. Signal Process., vol. 60, no. 7, pp. 3496–3505, Jul. 2012.
[22] J. N. Laska, Z. Wen, W. Yin, and R. G. Baraniuk, "Trust, but verify: Fast and accurate signal recovery from 1-bit compressive measurements," IEEE Trans. Signal Process., vol. 59, no. 11, pp. 5289–5301, Nov. 2011.
[23] L. Schmalen, M. Adrat, T. Clevorn, and P. Vary, "EXIT chart based system design for iterative source-channel decoding with fixed-length codes," IEEE Trans. Commun., vol. 59, no. 9, pp. 2406–2413, Sep. 2011.
[24] J. Garcia-Frias and J. D. Villasenor, "Joint turbo decoding and estimation of hidden Markov sources," IEEE J. Sel. Areas Commun., vol. 19, no. 9, pp. 1671–1679, 2001.
[25] A. Shirazinia, S. Chatterjee, and M. Skoglund, "Joint source-channel vector quantization for compressed sensing," IEEE Trans. Signal Process., vol. 62, no. 14, pp. 3667–3681, 2014.
[26] P. Schniter, "Turbo reconstruction of structured sparse signals," in , 2010, pp. 1–6.
[27] E. J. Candès, "The restricted isometry property and its implications for compressed sensing," Comptes Rendus Mathematique, vol. 346, no. 9, pp. 589–592, May 2008.
[28] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "A soft-input soft-output APP module for iterative decoding of concatenated codes," IEEE Commun. Lett., vol. 1, no. 1, pp. 22–24, 1997.
[29] J. Hagenauer, E. Offer, and L. Papke, "Iterative decoding of binary block and convolutional codes," IEEE Trans. Inf. Theory, vol. 42, no. 2, pp. 429–445, 1996.
[30] L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inf. Theory, vol. 20, no. 2, pp. 284–287, Mar. 1974.
[31] C. M. Bishop et al., Pattern Recognition and Machine Learning. Springer New York, 2006, vol. 4, no. 4.
[32] D. L. Donoho, A. Maleki, and A. Montanari, "Message-passing algorithms for compressed sensing," Proceedings of the National Academy of Sciences, vol. 106, no. 45, pp. 18 914–18 919, 2009.
[33] J. Boutros and G. Caire, "Iterative multiuser joint decoding: Unified framework and asymptotic analysis," IEEE Trans. Inf. Theory, vol. 48, no. 7, pp. 1772–1793, 2002.
[34] N. L. Johnson, S. Kotz, and N. Balakrishnan, Continuous Univariate Distributions, Vol. 1. Wiley New York, 1995, vol. 2.
[35] S. ten Brink, "Convergence behavior of iteratively decoded parallel concatenated codes," IEEE Trans. Commun., vol. 49, no. 10, pp. 1727–1737, 2001.
[36] S. ten Brink, "Convergence of iterative decoding," Electronics Letters, vol. 35, no. 10, pp. 806–808, 1999.
[37] J. Hagenauer, "The EXIT chart - introduction to extrinsic information transfer in iterative processing," in Proc. 12th European Signal Processing Conference (EUSIPCO), 2004, pp. 1541–1548.
[38] D. Divsalar, M. K. Simon, and D. Raphaeli, "Improved parallel interference cancellation for CDMA," IEEE Trans. Commun., vol. 46, no. 2, pp. 258–268, 1998.
[39] G. A. Seber and A. J. Lee,