Communications using Sparse Signals
arXiv preprint [cs.IT]
Madhusudan Kumar Sinha*, Arun Pachai Kannu†
Department of Electrical Engineering, Indian Institute of Technology, Madras
Chennai, Tamil Nadu - 600036
Email: *[email protected], †[email protected]

Abstract—Inspired by compressive sensing principles, we propose novel error control coding techniques for communication systems. The information bits are encoded in the support and the non-zero entries of a sparse signal. By selecting a dictionary matrix with suitable dimensions, the codeword for transmission is obtained by multiplying the dictionary matrix with the sparse signal. Specifically, the codewords are obtained from sparse linear combinations of the columns of the dictionary matrix. At the decoder, we employ variations of greedy sparse signal recovery algorithms. Using Gold code sequences and mutually unbiased bases from quantum information theory as dictionary matrices, we study the block error rate (BLER) performance of the proposed scheme in the AWGN channel. Our results show that the proposed scheme has a comparable and competitive performance with respect to several widely used linear codes, for very small to moderate block lengths. In addition, our coding scheme extends straightforwardly to multi-user scenarios such as the multiple access channel, broadcast channel and interference channel. In these multi-user channels, if the users are grouped such that they have similar channel gains and noise levels, the overall BLER performance of our proposed scheme coincides with that of an equivalent single-user scenario.
Index Terms—sparse signal recovery, error control coding, mutually unbiased bases, Gold codes, multi-user communications
I. INTRODUCTION
Shannon's seminal paper on information theory established the existence of information encoding and decoding techniques which guarantee almost error free communications across noisy channels [1]. Extensive work has been carried out to develop such efficient error control coding techniques for additive white Gaussian noise (AWGN) channels [2]. Linear codes such as convolutional codes, turbo codes and LDPC codes are widely used in various communication systems today. Maximum likelihood decoding in AWGN channels boils down to finding the codeword in the codebook which is closest to the received signal [3]. Hence, the distance properties of the codewords in the codebook play an important role in the error performance of the coding scheme.

Code division multiple access (CDMA) is a communication technique developed for multi-user wireless systems [4]. Each user is given a specific code or sequence from a large set of sequences. Users multiply their information bearing symbols with the sequences assigned to them in a spreading operation. The received signal is a superposition of the signals from all the users. When the receiver performs the despreading operation on the received signal using the sequence of a given user (which is equivalent to taking an inner product), the interference from the other users is suppressed to a large extent if the correlation between the sequences is small. There is extensive literature on finding large sets of codes/sequences with good correlation properties. For instance, Gold codes [5] and Zadoff-Chu sequences [6], [7] are well-known for their correlation properties and widely used in wireless systems. In addition, quantum information theory also provides ways to construct a large set of sequences with small correlation among them. Such constructions are referred to as mutually unbiased bases (MUB) [8] and symmetric, informationally complete, positive operator valued measures (SIC-POVM) [9].
SIC-POVM is closely related to the construction of equiangular lines.

Compressive sensing techniques address the problem of recovering a sparse signal from an under-determined system of noisy linear measurements [10]. Suppose x is an L-dimensional signal/vector with sparsity level K, such that only K entries in x are non-zero. Using the sensing matrix A of size N × L with K < N < L, and the noisy linear measurement vector y = Ax + v, the goal is to recover the support and the non-zero entries of the K-sparse signal x from y. There is extensive literature on sparse signal recovery algorithms, such as greedy matching pursuit based algorithms [11], [12], convex programming based algorithms [13], [14], approximate message passing algorithms [15] and their deep networks based implementations [16]. The performance of sparse signal recovery algorithms depends on the mutual coherence parameter of the sensing matrix [17], which is directly related to the correlation among its columns. The smaller the correlation among the columns of A, the better the sparse signal recovery performance.

There are inherent connections between error control codes, CDMA sequences and compressive sensing, as they all require small correlation among codewords/spreading-sequences/sensing-matrix-columns. There is a vast literature on connecting compressive sensing concepts with communication techniques, and we highlight some of these works here. In [18], error control coding techniques have been developed for the case of a sparse noise vector (which models impulse noise environments). Codes/sequences with small correlation have been used with sparse signal recovery techniques for a massive random access application in [19]. Techniques to compute the sparse Fourier transform using the parity check matrix of an LDPC code as the sensing matrix, along with a peeling decoder, have been developed in [20].
Spatial modulation in multi-antenna systems encodes the information partially by activating a subset of antennas chosen from a large set [21]. Index modulation in OFDM systems encodes information partially by choosing a subset of sub-carriers among the available sub-carriers to send the data symbols [22].

In this paper, we develop a new error control coding scheme using sequences with low correlation and compressive sensing concepts. We develop a subblock sparse coding (SSC) scheme where information bits are non-linearly encoded in a K-sparse signal x of length L, with the non-zero entries chosen from an M-ary constellation. The SSC scheme carries roughly K log(L/K) bits in the support of x and K log M bits in the non-zero entries. The codeword s of length N to be transmitted across the channel is obtained by multiplying x with a dictionary matrix A (of size N × L). The columns of the dictionary matrix are chosen from a set of low correlation sequences from CDMA or quantum information theory. From the noisy observation y = s + v = Ax + v, we recover the sparse signal x (and hence the information bits) using a novel greedy match and decode (MAD) algorithm and its variations. With Gold codes from CDMA and mutually unbiased bases from quantum information theory as the dictionary matrices, the codeword error rate performance of the proposed SSC encoding scheme with MAD decoding in the AWGN channel is comparable and competitive with the widely used binary linear codes, for very small (N = 8) to moderate codeword lengths (N = 128). In addition, the proposed error control coding scheme extends easily to multi-user channels, such as the multiple-access channel (MAC), broadcast channel (BC) and interference channel (IC). An SSC scheme transmitting B bits to a single user using a K-sparse signal can be easily modified to transmit a total of B bits to P users with P ≤ K, in a MAC, BC or IC.
In addition, the overall error performance in the multi-user channel will be the same as that of an equivalent single-user scenario.

While index modulation in OFDM and spatial modulation techniques encode information partially in choosing a subset from the available sub-carriers/antennas, these techniques require an underlying error control coding scheme to ensure a small probability of error. On the other hand, our SSC encoding and MAD decoding is a new error control scheme by itself.

Our work has connections to the work on non-linear codes by Kerdock and Preparata [23], [24]. Kerdock's work gave constructions for a large set of sequences with good distance/correlation properties and used them directly as codewords. Spherical codes [25] also aim at developing a large set of codewords with good distance properties. In our work, we generate codewords using sparse linear combinations of sequences with good correlation properties. Because of these linear combinations, the number of codewords in our SSC scheme is larger when compared to that of Kerdock codes and spherical codes of similar lengths. This increase in the number of codewords has two benefits: an increase in the data rate and a decrease in the overall energy per bit.

Our SSC encoding scheme has a non-linear component (mapping from bits to the sparse signal) and a linear component (mapping from the sparse signal to the codeword). Due to the linear part, the recovery of the sparse signal from the observation can be accomplished using simple decoding techniques. At the same time, the non-linear component in the encoding procedure enables direct extensions of the scheme to multi-user channels.

The paper is organized as follows. In Section II, we describe the proposed sparse signal based encoding and decoding techniques. In Section III, we present the details of constructing dictionary matrices using Gold codes and complex mutually unbiased bases.
In Section IV, we present simulation studies on the error performance of the proposed schemes in the AWGN channel and compare with some of the existing error control codes. In Section V, we discuss the details of extending the proposed coding techniques to multi-user scenarios. In Section VI, we give concluding remarks and directions for future work.

II. ENCODING AND DECODING SCHEMES
A. Sparse Coding
Consider a dictionary matrix A of size N × L, with L ≥ N. The codewords for messages are obtained using sparse linear combinations of columns of the matrix A. We discuss the details of the sparse encoding procedure below.
1) Encoding Procedure:
Fix the sparsity level as K with K ≤ N. Choose a subset S ⊂ {1, · · · , L} of size |S| = K. Let us denote S = {α_1, · · · , α_K} with 1 ≤ α_k ≤ L. Let Q = {β_1, · · · , β_K} be an ordered set of K symbols chosen (allowing repetitions) from an M-ary constellation, with alphabet set B = {b_1, · · · , b_M}. Denoting the i-th column of A by a_i, a codeword of length N is obtained as

s = Σ_{k=1}^{K} β_k a_{α_k}.    (1)

Consider the sparse vector x of length L, with its i-th entry given as

x_i = β_k if i = α_k, and x_i = 0 if i ∉ S.    (2)

Now, the codeword in (1) can be represented as

s = Ax.    (3)

The information is encoded in the support set S of the sparse signal x and its non-zero entries given by the set Q. Let C denote the set of all possible codewords of the form (3), with a fixed sparsity level K. The total number of codewords we can generate is |C| = M^K × C(L, K). The total number of bits that can be transmitted in a block of N channel uses is

N_b = K ⌊log M⌋ + ⌊log C(L, K)⌋.    (4)

In this paper, the base of log(·) is 2, unless specified explicitly otherwise. We define the code rate of the encoding scheme in units of bits per real dimension (bpd) as the number of bits transmitted per real dimension utilized. If A is a real matrix and the modulation symbols in Q are chosen from a real constellation (such as PAM or BPSK), the code rate in bpd is N_b/N. On the other hand, if A is a complex matrix and/or the constellation symbols are complex, the code rate in bpd is N_b/(2N). In our encoding process, we also allow the special case of M = 1, for which β_k = +1 for all k. The average energy per bit of our sparse encoding scheme is given as E_b = (1/(N_b |C|)) Σ_{s∈C} ‖s‖². The proposed coding scheme is a non-linear code and can be considered as a generalization of orthogonal FSK. Note that, with the special case of A being a DFT matrix and setting K = 1 and M = 1, our encoding process reduces to an orthogonal FSK scheme.
An interesting analogy is that the columns of the dictionary matrix can be compared to the words in the dictionary of a language. With this analogy, codewords are equivalent to sentences in a language, as they are obtained using sparse combinations of words, and different codewords/sentences convey different messages.
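As an illustration of the encoding in (1)-(3), the following sketch builds a codeword from a randomly drawn support and QPSK symbols. The random unit-norm dictionary is only a stand-in for the Gold code and MUB constructions of Section III, and all variable names are our own.

```python
import numpy as np
from math import comb, floor, log2

rng = np.random.default_rng(0)

# Stand-in dictionary with unit-norm columns (the paper uses Gold codes or MUBs).
N, L, K = 16, 64, 3
A = rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))
A /= np.linalg.norm(A, axis=0)

# QPSK alphabet (M = 4) for the non-zero entries.
B = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

# Support S = {alpha_1, ..., alpha_K} and symbols Q = {beta_1, ..., beta_K}.
support = rng.choice(L, size=K, replace=False)
symbols = rng.choice(B, size=K)

# K-sparse signal x as in (2), and codeword s = A x as in (3).
x = np.zeros(L, dtype=complex)
x[support] = symbols
s = A @ x

# Bits per block as in (4): K*floor(log2 M) symbol bits plus
# floor(log2 C(L, K)) support bits.
N_b = K * floor(log2(len(B))) + floor(log2(comb(L, K)))
```

For these parameters (N = 16, L = 64, K = 3, QPSK), (4) gives N_b = 6 + 15 = 21 bits per block.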
2) Decoding Procedure:
The received signal is modeled as

y = s + v = Ax + v,    (5)

where v is additive noise. Information bits can be retrieved by recovering the sparse signal x from the observation y. Sparse signal recovery can be done using greedy techniques [11], [26], [27] or convex programming based techniques [13]. In this paper, we consider a simple greedy algorithm which we refer to as match and decode (MAD), described in Algorithm 1. The MAD algorithm takes the dictionary matrix A, the observation y and the sparsity level K as inputs, and produces an estimate x̂^(K) of the sparse signal x. It is ensured that the estimate x̂^(K) (of size L) has exactly K non-zero entries from the constellation set B. Any sparse signal x̂ (of size L) with at most K non-zero entries from the set B can also be given as partial information to the MAD algorithm. If no partial information is available, x̂ is set to 0.

We note that the correlation of the residual with the columns of the dictionary matrix in (6) needs to be computed only for one iteration. For the subsequent iterations, from (8), we have the recursion ⟨r^(t+1), a_i⟩ = ⟨r^(t), a_i⟩ − b_m̂ ⟨a_î, a_i⟩, where b_m̂ and î denote the symbol and the active column detected in the previous iteration. We can store the symmetric Gram matrix A*A to get the values of ⟨a_î, a_i⟩ needed in the recursion.

Intuitively, the first iteration of the MAD algorithm is the most error prone, since it faces the interference from all the undetected columns. To improve on the MAD performance, we consider a variation, referred to as parallel MAD. In the first iteration, we choose T candidates for the active column, by taking the top T metrics (7), and perform MAD decoding for each of these T candidates, resulting in T different estimates for the sparse signal.
Among these T estimates, we select the one with the smallest Euclidean distance to the observation, inspired by the optimal decoder for white Gaussian noise. The mathematical details are described in Algorithm 2 for completeness.

The main computationally intensive step in MAD (and parallel MAD) is computing the correlation between the observation and the columns of the dictionary matrix in (6), which amounts to computing the matrix multiplication A*y. Depending on the choice of A, efficient matrix multiplication techniques may be developed.
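The steps of Algorithm 1 and Algorithm 2 can be sketched as follows, assuming unit-norm dictionary columns; the function names and the numpy vectorization are ours, not the paper's.

```python
import numpy as np

def mad(A, y, K, B, support=(), symbols=()):
    """Match and decode (Algorithm 1): in each iteration, jointly pick the
    column i and symbol b maximizing Re{<r, a_i> b*} - |b|^2 / 2, then
    subtract the detected component from the residual."""
    support, symbols = list(support), list(symbols)
    r = y - sum(b * A[:, i] for i, b in zip(support, symbols))  # residual
    while len(support) < K:
        c = A.conj().T @ r                                        # correlations (6)
        P = np.real(np.outer(c, B.conj())) - np.abs(B) ** 2 / 2   # metrics (7)
        P[support, :] = -np.inf                                   # skip detected columns
        i, m = np.unravel_index(np.argmax(P), P.shape)
        support.append(int(i))
        symbols.append(B[m])
        r = r - B[m] * A[:, i]                                    # residual update (8)
    x_hat = np.zeros(A.shape[1], dtype=complex)
    x_hat[support] = symbols
    return x_hat

def parallel_mad(A, y, K, B, T):
    """Parallel MAD (Algorithm 2): T candidates for the first active column,
    one MAD run per candidate, then keep the estimate closest to y."""
    c = A.conj().T @ y
    P = np.real(np.outer(c, B.conj())) - np.abs(B) ** 2 / 2
    top = np.argsort(P.max(axis=1))[::-1][:T]                     # top-T first picks
    cands = [mad(A, y, K, B, support=[i], symbols=[B[P[i].argmax()]])
             for i in top]
    return min(cands, key=lambda xh: np.linalg.norm(y - A @ xh))
```

With an orthonormal dictionary (e.g., a normalized DFT matrix) and v = 0, both decoders recover x exactly; the interesting regime is L > N, where the coherence condition of Lemma 1 governs noiseless recovery.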
Algorithm 1 Match and Decode (MAD) Algorithm

Input: Get the observation y, the dictionary matrix A, the sparsity level K and any partially recovered sparse signal x̂.

Initialize: Let the ordered sets Ŝ and Q̂ denote the support and the corresponding non-zero entries of the partial information vector x̂. If x̂ = 0, Ŝ and Q̂ are empty sets. Initialize the iteration counter t = |Ŝ|, the residual r^(t) = y − A x̂ and the estimate x̂^(t) = x̂.

Match: Correlate the residual with the columns of the dictionary matrix and the constellation symbols as given below:

c_i = ⟨r^(t), a_i⟩, i ∈ {1, · · · , L} and i ∉ Ŝ,    (6)
p_{i,m} = Real{c_i b_m*} − |b_m|²/2, b_m ∈ B.    (7)

Decode: Detect the active column and the corresponding non-zero entry:

(î, m̂) = arg max_{i ∉ Ŝ, 1 ≤ m ≤ M} p_{i,m}.

Update: Update the recovered sparse signal information: Ŝ = Ŝ ∪ {î}, Q̂ = Q̂ ∪ {b_m̂}, x̂^(t+1) = x̂^(t) + b_m̂ e_î (here e_n denotes the n-th standard basis vector). Update the residual

r^(t+1) = r^(t) − b_m̂ a_î,    (8)

and increment the counter t = t + 1.

Stopping condition: If t < K, repeat the above steps Match, Decode and Update. Else, go to the step Output.

Output: The recovered sparse signal is x̂^(K) and the recovered codeword is ŝ = A x̂^(K).

Algorithm 2 Parallel Match and Decode Algorithm

1) Given the dictionary matrix A and the observation vector y, compute c_i = ⟨y, a_i⟩, i = 1, · · · , L, and p_{i,m} = Real{c_i b_m*} − |b_m|²/2, b_m ∈ B.
2) Initialize the parallel path index n = 1 and the set D = ∅.
3) Choose a candidate for the active column and the corresponding non-zero entry: (î_n, m̂_n) = arg max_{i ∉ D, m} p_{i,m}.
4) Run the MAD algorithm with inputs (A, y, K) and prior information on the sparse signal x̂ = b_m̂_n e_î_n. Denote the recovered sparse signal output of MAD as x̂_n.
5) Update D = D ∪ {î_n} and n = n + 1. If n ≤ T, go back to Step 3.
6) Final output: x̆ = arg min_{x̂_n, 1 ≤ n ≤ T} ‖y − A x̂_n‖.

3) Performance Guarantees: With s being the transmitted codeword and ŝ being the codeword recovered by the decoding algorithm, the event {s ≠ ŝ} results in a block error (at least one of the bits in the block is decoded in error). The mutual coherence of the dictionary matrix A, defined as

μ = max_{p ≠ q} |⟨a_p, a_q⟩| / (‖a_p‖ ‖a_q‖),    (9)

plays an important role in the recovery performance of the MAD algorithm.

Lemma 1: In the absence of noise (v = 0), and for the case of M = 1, the MAD algorithm recovers the codeword perfectly if K < (μ^{−1} + 1)/2.

Proof: This result has already been established for the orthogonal matching pursuit (OMP) algorithm in [27]. We note that MAD differs from OMP in certain aspects. For the M = 1 case, the MAD algorithm sets the non-zero entries to unity. This is better than OMP, which uses least squares estimates for the non-zero entries in each iteration. In computing the residual, MAD subtracts out the detected columns from the observation. This is better than OMP, which projects the observation onto the orthogonal complement of the detected columns, possibly reducing the signal component from the yet-to-be-detected columns. Hence, MAD recovery will be at least as good as OMP recovery when v = 0 and M = 1.

The exact support recovery performance of OMP in the presence of bounded noise and Gaussian noise is characterized in [26]. The same results hold true for the MAD algorithm as well, when M = 1. When M > 1, an error event can also occur due to incorrect decoding of the modulation symbols in the constellation. Characterizing that error event will depend on the exact constellation shape, and this analysis can be carried out in future work.

B. Subblock Sparse Coding
A drawback of the sparse coding scheme in Section II-A is that a large look-up table is needed to map the information bits to the sparse signals. In order to eliminate the look-up table, we propose a subblock sparse coding (SSC) scheme, described below. In this scheme, we partition the dictionary matrix A into K subblocks such that A = [A_1 · · · A_K], with the k-th subblock A_k having a size of N × L_k and Σ_{k=1}^{K} L_k = L. This partitioning, in general, can be done in an arbitrary manner. Later in this section, we present a partitioning technique with the aim of maximizing the number of information bits encoded by the scheme. The SSC technique transmits N_b bits (in N channel uses), where

N_b = K ⌊log M⌋ + Σ_{k=1}^{K} ⌊log L_k⌋.    (10)

From a bit stream of length N_b, we take the first K ⌊log M⌋ bits and obtain K modulation symbols Q = {β_1, · · · , β_K} from an M-ary constellation. Then, we segment the remaining bit stream into K strings, with the k-th string b_k having a length of ⌊log L_k⌋. Let N_k denote the unsigned integer corresponding to the bit string b_k. Now, from each subblock A_k, we select the column indexed by the number N_k + 1, for 1 ≤ k ≤ K. Note that, based on this procedure, the column indices chosen from the original matrix A are given by α_k = N_k + 1 + Σ_{i=1}^{k−1} L_i, 1 ≤ k ≤ K. With the support set S = {α_1, · · · , α_K} and the modulation symbols in Q, the codeword s is obtained as given in (1). By this subblock encoding procedure, we avoid the exhaustive look-up table needed to map the bits to the codewords. However, the dictionary matrix A needs to be stored. In addition, the number of bits transmitted in a block for the SSC scheme will be less than that of the sparse coding scheme discussed in Section II-A. For the SSC scheme, the MAD algorithm can be modified to discard the subblock corresponding to each detected column from the subsequent iterations.
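The subblock bit-to-support mapping can be sketched as follows; the helper name and the 0-based indexing are ours. Given the subblock sizes L_k, each subblock absorbs ⌊log₂ L_k⌋ bits, read as an unsigned integer N_k that selects one column of that subblock:

```python
from math import floor, log2

def ssc_support(bits, sizes):
    """Map a bit string to one column index per subblock (0-based), as in
    Section II-B: subblock k with L_k columns consumes floor(log2(L_k)) bits,
    whose value N_k selects column N_k within that subblock."""
    support, offset, pos = [], 0, 0
    for L_k in sizes:
        n_bits = floor(log2(L_k))
        N_k = int(bits[pos:pos + n_bits], 2) if n_bits else 0
        support.append(offset + N_k)    # global column index alpha_k - 1
        pos += n_bits
        offset += L_k
    return support
```

For sizes (4, 2, 2) and the bit stream '1010', the subblocks absorb 2, 1 and 1 bits, giving the support {2, 5, 6} in 0-based indexing; the codeword then follows from (1) with the chosen modulation symbols.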
With the aim of maximizing Σ_{k=1}^{K} ⌊log L_k⌋, we describe a procedure for truncating and partitioning a matrix with L̄ columns into K subblocks (where K ≤ L̄), with the k-th subblock having L_k columns, such that each L_k is a power of 2 and Σ_k L_k = L ≤ L̄. The algorithm has K iterations, and the n-th iteration, with n ∈ {1, · · · , K}, involves partitioning the dictionary matrix into n subblocks with lengths {L_1^(n), · · · , L_n^(n)}. In the first iteration (n = 1), we set L_1^(1) = 2^⌊log L̄⌋. In iteration n, we make use of the subblock lengths obtained in the previous iteration, arranged in ascending order such that L_1^(n−1) ≤ L_2^(n−1) ≤ · · · ≤ L_{n−1}^(n−1). We compute the number of remaining columns from iteration n − 1 as r^(n−1) = L̄ − Σ_{k=1}^{n−1} L_k^(n−1). If r^(n−1) ≥ L_{n−1}^(n−1), then we create a new subblock of length L_n^(n) = 2^⌊log r^(n−1)⌋ and retain all the previous subblocks, so that L_1^(n) = L_1^(n−1), · · · , L_{n−1}^(n) = L_{n−1}^(n−1). On the other hand, if r^(n−1) < L_{n−1}^(n−1), we split the largest subblock from the previous iteration into two equal parts, so that L_{n−1}^(n) = L_n^(n) = L_{n−1}^(n−1)/2, and the remaining subblocks are retained, so that L_1^(n) = L_1^(n−1), · · · , L_{n−2}^(n) = L_{n−2}^(n−1). We sort the lengths {L_1^(n), · · · , L_n^(n)} in ascending order for use in the subsequent iteration. At the end of iteration K, we get the required lengths for the K subblocks. Using mathematical induction, we can argue that the above procedure is optimal in maximizing Σ_{k=1}^{K} ⌊log L_k⌋.

III. DICTIONARY MATRIX CONSTRUCTION
The choice of the dictionary matrix A plays a vital role in the block error performance. It is desirable that the dictionary matrix has a large number of columns (as the number of information bits increases with L, for fixed N and K) with small correlation among the columns (for good sparse signal recovery performance). In this paper, we consider dictionary matrix constructions using Gold code sequences from the CDMA literature and mutually unbiased bases from quantum information theory.

A. Gold Codes
Gold codes are binary sequences with alphabet {±1}. Considering lengths of the form N = 2^n − 1, where n is any positive integer, there are 2^n + 1 Gold sequences. By considering all the circular shifts of these sequences, we get 2^{2n} − 1 sequences. When the dictionary matrix columns are constructed with these 2^{2n} − 1 sequences normalized to unit norm, the resulting mutual coherence μ is given by [5]

μ = (2^{(n+1)/2} + 1)/N, if n is odd,
μ = (2^{(n+2)/2} + 1)/N, if n is even.    (11)

We note that odd values of n lead to relatively smaller mutual coherence. We can add any column of the identity matrix to the Gold code dictionary matrix to get a total of L = 2^{2n} columns (which is a power of 2), with the mutual coherence the same as in (11). Storing such a dictionary matrix will require N(N + 1)² bits. For N = 127, this Gold code dictionary matrix size is approximately 0.25 MB.
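The m-sequence machinery behind this construction can be sketched for n = 5 (N = 31) using the feedback polynomials x⁵ + x² + 1 and x⁵ + x⁴ + x³ + x² + 1 (octal 45 and 75), a commonly cited preferred pair for 31-chip Gold codes; the helper function and the choice of seed below are ours.

```python
import numpy as np

def m_seq(fb, n):
    """m-sequence over {0, 1}: a[t] = XOR of a[t - j] for j in fb, started
    from an arbitrary non-zero seed; period 2^n - 1 for a primitive polynomial."""
    a = [1] + [0] * (n - 1)
    for t in range(n, 2 ** n - 1):
        bit = 0
        for j in fb:
            bit ^= a[t - j]
        a.append(bit)
    return np.array(a)

n = 5                         # N = 2^n - 1 = 31
u = m_seq([3, 5], n)          # x^5 + x^2 + 1: a[t] = a[t-3] ^ a[t-5]
v = m_seq([1, 2, 3, 5], n)    # x^5 + x^4 + x^3 + x^2 + 1

# The 2^n + 1 = 33 Gold sequences: u, v, and u XOR (each circular shift of v).
# All circular shifts of these give the 2^{2n} - 1 = 1023 dictionary columns.
gold = [u, v] + [u ^ np.roll(v, k) for k in range(2 ** n - 1)]

su, sv = 1 - 2 * u, 1 - 2 * v   # map {0, 1} -> {+1, -1}
```

Each m-sequence has the ideal periodic autocorrelation of −1 off-peak; for a preferred pair such as this one, the periodic cross-correlations take only three values, consistent with μ = 9/31 from (11) for n = 5.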
B. Mutually Unbiased Bases
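As a concrete preview of the bases defined in this subsection: for an odd prime dimension p, the classical construction (the standard basis plus p quadratic-phase bases built from ω = e^{2πi/p}) can be written and verified in a few lines. The paper's dictionaries use N a power of 2, whose Galois-ring construction [8] is more involved; the prime-dimension sketch below, with names of our choosing, only illustrates the mutual unbiasedness property.

```python
import numpy as np

p = 5                                    # any odd prime dimension
l = np.arange(p)
w = np.exp(2j * np.pi / p)

# Basis U_k has entries (U_k)_{l,j} = w^(k*l^2 + j*l) / sqrt(p); together with
# the standard basis, these form p + 1 mutually unbiased bases of C^p.
mubs = [w ** (k * l[:, None] ** 2 + np.outer(l, l)) / np.sqrt(p)
        for k in range(p)]

# Dictionary as in the text: A = [U_1 ... U_p], with L = p^2 columns.
A = np.hstack(mubs)
```

Every pairwise inner product across distinct bases has magnitude exactly 1/√p (a Gauss-sum identity), so the mutual coherence of A is 1/√p; for p = 5 the dictionary has L = 25 unit-norm columns.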
Two orthonormal bases U_1 and U_2 of the N-dimensional inner product space C^N are called mutually unbiased if and only if |⟨x, y⟩| = 1/√N for any x ∈ U_1 and y ∈ U_2. A set of m orthonormal bases of C^N is called mutually unbiased if all bases in the set are pairwise mutually unbiased. Let Q(N) denote the maximum number of orthonormal bases of C^N which are pairwise mutually unbiased. In [8], it has been shown that Q(N) ≤ N (excluding the standard basis), with equality if N is a prime power. Explicit constructions are also given in [8] for obtaining N MUB in the N-dimensional complex vector space C^N, if N is a prime power.

If N is a power of 2, the N MUB unitary matrices {U_1, · · · , U_N} have the following properties.
• The entries in all the N unitary matrices belong to the set {+1/√N, −1/√N, +i/√N, −i/√N}. This follows from the construction of MUB given in [8]. Storing all these N unitary matrices will require 2N³ bits. For N = 128, this storage requirement is approximately 0.5 MB.
• For N up to 256, we find that the inner products ⟨x, y⟩ ∈ {+1/√N, −1/√N, +i/√N, −i/√N} when x ∈ U_i and y ∈ U_j for i ≠ j. We conjecture that this property holds true when N is any power of 2.

We construct the dictionary matrix using the N MUB as A = [U_1 · · · U_N]. In this case, L = N² and the corresponding mutual coherence μ is 1/√N. In addition, when N is a power of 2, we can always split A into K subblocks with the size of each subblock L_k being a power of 2 and each L_k ≥ N²/(2K). Since the entries in A belong to {+1, −1, +j, −j} except for a common scaling factor of 1/√N, the inner products ⟨a_i, y⟩ needed in the MAD algorithm involve only additions (no multiplications).

IV. SIMULATION RESULTS
We study the block error rate (BLER), also referred to as the codeword error rate, of the proposed encoding and decoding schemes in additive white Gaussian noise channels. For the complex MUB dictionary matrix, the non-zero entries of the sparse signal are chosen from the QPSK constellation. For the real Gold code dictionary matrix, we consider the BPSK constellation. When the non-zero entries in the K-sparse signal x are uncorrelated, it easily follows that the expected energy of the codeword s = Ax is E_s = K, when the columns of the dictionary matrix are of unit norm. The energy per bit E_b is obtained by dividing E_s by the total number of bits conveyed by the sparse signal x. With N_0/2 denoting the variance of the Gaussian noise per real dimension, we plot the BLER versus E_b/N_0 of the proposed schemes.

A. Study of Proposed Schemes
In this subsection, we study various combinations of the proposed encoding and decoding schemes to understand their impact on the performance, restricting our attention to the complex MUB dictionary matrix with N = 64.

In Fig. 1, we compare the performance of the sparse coding (SC) scheme and the subblock sparse coding (SSC) scheme, with MAD decoding.

[Fig. 1: Impact of the Encoding Scheme. BLER versus E_b/N_0 for the complex MUB dictionary matrix with N = 64; SC and SSC curves for K = 1, 3, 5.]

Since the SC scheme allows all the C(L, K) possibilities for the support set, the number of bits N_b per block of the SC scheme will be larger than that of the SSC scheme. Specifically, with the complex MUB dictionary matrix of length N = 64, for sparsity levels K = 1, 3, 5, the values of N_b for the SC scheme are, respectively, 14, 39 and 63, and the corresponding values for the SSC scheme are 14, 37 and 58. Hence, when the E_b/N_0 of the two schemes is the same, the SSC scheme has a larger value for the noise variance parameter N_0. On the other hand, the MAD decoder faces a larger search space to detect each active column in the SC scheme, while in the SSC scheme, once an active column is detected, the entire subblock can be removed from the subsequent iterations. These two effects counteract each other, and we find that the BLER versus E_b/N_0 performance of the SC and SSC schemes is quite close. However, the SC scheme has higher code rates at the expense of the extensive look-up table needed to map the information bits to the (support of the) sparse signal.

In Fig. 2, we compare the performance of the sparse coding scheme with the MAD decoder and with the OMP algorithm [27]. In the OMP algorithm, in each iteration, after an active column is identified based on the magnitude of its correlation with the residual, the least squares estimates of the non-zero entries are quantized to the nearest constellation points. On the other hand, the MAD decoder utilizes the finite alphabet of the non-zero entries by jointly decoding the active column and the corresponding constellation point. In addition, the OMP approach of projecting the residuals onto the orthogonal complement of the detected columns in each iteration leads to a reduction of the signal components from the yet-to-be-detected active columns. On the other hand, MAD simply subtracts out the detected columns without affecting the yet-to-be-detected active columns. Due to these reasons, the proposed MAD decoder provides better performance compared to the OMP algorithm.

[Fig. 2: Impact of the Decoding Algorithm. BLER versus E_b/N_0 for the complex MUB dictionary matrix with N = 64; MAD and OMP curves for K = 1, 3.]
[Fig. 3: Comparing MAD with parallel MAD. BLER versus E_b/N_0 for the complex MUB dictionary matrix with N = 64; MAD and parallel MAD curves for K = 1, 3, 5.]

In Fig. 3, we compare the performance of MAD and parallel MAD decoding for the SSC scheme. The probability of selecting a wrong column in the first iteration of MAD decoding increases with the sparsity level K, due to the interference from the K − 1 remaining columns, especially when K is close to μ^{−1}. Parallel MAD overcomes this problem by selecting T candidates for the active column in the first iteration and subsequently running T parallel MAD decoders. In all our simulations, we set the value of T equal to the sparsity level K. The results show significant gains of parallel MAD over MAD when K = 5 with the complex MUB dictionary matrix of size N = 64.

We consider introducing random phases to the columns of the complex MUB dictionary matrix based on the following reasoning. With i being the index of one of the active columns from the sparse signal support set S, consider the inner product ⟨y, a_i⟩ = β_i + Σ_{k∈S, k≠i} β_k ⟨a_k, a_i⟩ + ⟨v, a_i⟩. The MAD decoder is prone to error when the net interference from the other active columns has a high magnitude. For the complex MUB dictionary matrix with N = 64, the inner product between any two non-orthogonal columns belongs to the set {+1/√N, −1/√N, +i/√N, −i/√N}. Due to this property, there are many possible support sets S for which the interference terms can add coherently to result in a high magnitude. To overcome this problem, we introduce a random phase to each column of the dictionary matrix to minimize the constructive addition of interfering terms. Specifically, the random phase dictionary matrix is obtained as AΦ, where A is the original (zero phase) MUB dictionary matrix and Φ is a diagonal matrix with random diagonal entries exp{jθ_i}, with the θ_i's being i.i.d. uniform over [0, 2π]. In Fig. 4, the comparison shows that the random phase matrix has gains over the zero phase matrix when the sparsity level K is close to μ^{−1}. In fact, in Figures 1 and 3, the plots corresponding to K = 5 involve the random phase MUB matrix.

[Fig. 4: Impact of Random Phase. BLER versus E_b/N_0 for the complex MUB dictionary matrix with N = 64; zero phase and random phase curves for K = 1, 3, 5.]

B. Comparison with Existing Error Control Codes
In this subsection, we study the performance of the SSC scheme with parallel MAD decoding for various code rates, and also compare it with some of the existing error control codes. In the conventional error control coding terminology, an (n, k) coding scheme takes k information bits and maps them into a (real) codeword of length n, with the ratio k/n referred to as the code rate of the scheme. Hence, the SSC scheme can be compared directly with conventional error control coding schemes with identical code rates (measured in bits per real dimension). To match the conventional notation, we denote the SSC scheme conveying N_b bits using a real dictionary matrix with column length N as an (N, N_b) coding scheme, while with a complex dictionary matrix we have a (2N, N_b) scheme.

In Fig. 5, we compare the performance of complex MUB dictionary matrices with various values of N and K, while setting the code rate (in bits per real dimension) approximately equal to half. We see that, for low E_b/N_0 values, smaller N performs better. On the other hand, when a small BLER is desired, larger block lengths are better. Here, the random phase is introduced when K is close to μ^{−1}.

[Fig. 5: Complex MUB dictionary with code rate ≈ 0.5; curves for (16,8) K = 1, (32,18) K = 2, (64,31) K = 3 and (128,58) K = 5.]

In Fig. 6, we study the performance of the Gold code dictionary matrix with N = 127 for various values of the code rate, obtained by varying the sparsity level K.

[Fig. 6: Gold code dictionary matrix with length N = 127; curves for (127,15) K = 1, (127,28) K = 2, (127,40) K = 3, (127,52) K = 4 and (127,63) K = 5.]

We present a performance comparison with some of the existing error control coding schemes with code rates close to half, for very small block lengths in Fig. 7, and for moderate block lengths in Fig. 8. In Fig. 7, the BLER performance of the (20,11) Golay based code and the (20,11) Reed-Muller (RM) code used in the LTE standard are taken from the plots given in [28]. We compare these codes with our (16,8) and (32,18) SSC schemes (complex MUB dictionary matrix with K = 1 and K = 2, respectively). We find that our schemes perform better than the RM code used in the LTE standard.

[Fig. 7: Comparison with very small block length codes: (20,11) Golay-based, (20,11) LTE RM, (16,8) SSC (MUB) and (32,18) SSC (MUB).]

In Fig. 8, using the plots given in [29], we compare the performance of our (127,63) SSC scheme (Gold code dictionary matrix with K = 5) with some of the existing (128,64) error control codes: the tail biting convolutional code (TBCC) with constraint length 14, the binary LDPC code used in the CCSDS standard, the LDPC code (base graph 2) from the 5G-NR standard, and the Turbo code with 16 states. More details about these existing codes are given in [29]. Our proposed scheme performs comparably to the binary LDPC code from the CCSDS standard.

[Fig. 8: Comparison with existing (128,64) codes: TBCC, 16-state Turbo, binary LDPC (CCSDS), LDPC BG2 (5G-NR) and (127,63) SSC (Gold code).]

V. APPLICATION IN MULTI-USER SCENARIOS
In this section, we discuss how the SSC encoding and MAD decoding scheme can be used in multi-user scenarios. First, we note that linear codes are naturally unsuited for multi-user scenarios. To illustrate this, consider the simple case of two users who employ the same linear code. If $\mathbf{s}_1$ is the codeword sent by user-1 and $\mathbf{s}_2$ is the codeword sent by user-2, then their algebraic sum $\mathbf{s}_1 + \mathbf{s}_2$ is another valid codeword for both users. This makes it impossible to recover the individual codewords from the superposition. To overcome this problem, power-domain non-orthogonal multiple access (NOMA) techniques in the literature [30] focus on grouping the users such that user-2 has a much smaller channel gain than user-1. In this case, the superposition can be represented as $\mathbf{s}_1 + h \mathbf{s}_2$ with $|h| \ll 1$, facilitating successive interference cancellation based decoding [30]. With this grouping, the data rate for user-2 will be very small compared to that of user-1. On the other hand, the SSC encoding technique in Section II-B provides a straightforward way to communicate in multi-user scenarios.

A. Encoding for Multiple Users
Using the SSC scheme with sparsity level $K$ described in Section II-B, we can support the $P$-user multiple access channel, $P$-user broadcast channel or $P$-user interference channel [31], [32], for any $P \leq K$. First, we illustrate how the SSC scheme can be employed to generate the codeword of each user based on that user's information bits. As before, we partition the dictionary matrix $\mathbf{A}$ into $K$ subblocks, with subblock $\mathbf{A}_k$ having $L_k$ columns. These $K$ subblocks are divided among the $P$ users, with $\mathcal{A}_i = \{\mathbf{A}_{i,1}, \cdots, \mathbf{A}_{i,K_i}\} \subset \{\mathbf{A}_1, \cdots, \mathbf{A}_K\}$ denoting the ordered set of $K_i$ subblocks assigned to user-$i$. Note that $\mathcal{A}_i \cap \mathcal{A}_j = \emptyset$ if $i \neq j$ and $\sum_{i=1}^{P} K_i = K$. The codeword for user-$i$ is obtained as
$$\mathbf{s}_i = \sum_{k=1}^{K_i} \beta_{i,k} \mathbf{a}_{i,k}, \qquad (12)$$
where the symbols $\{\beta_{i,1}, \cdots, \beta_{i,K_i}\}$ are chosen from an $M_i$-ary constellation and the column $\mathbf{a}_{i,k}$ is chosen from the subblock $\mathbf{A}_{i,k}$ for $1 \leq k \leq K_i$. Denoting the number of columns in $\mathbf{A}_{i,k}$ as $L_{i,k}$, the total number of bits that can be conveyed for user-$i$ is
$$N_{b_i} = K_i \lfloor \log_2 M_i \rfloor + \sum_{k=1}^{K_i} \lfloor \log_2 L_{i,k} \rfloor. \qquad (13)$$
Now, we will see how these codewords can be used in the MAC, BC and IC.
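As an illustration of the per-user encoding rule (12) and the bit budget (13), here is a minimal Python/numpy sketch. The dictionary is a toy unit-norm complex Gaussian matrix standing in for the paper's MUB or Gold code constructions, and the names (`ssc_encode`, `num_bits`) are illustrative, not from the paper.

```python
import numpy as np

def ssc_encode(A_sub, col_idx, symbols):
    """Codeword (12): s_i = sum_k beta_{i,k} a_{i,k}, where a_{i,k} is
    column col_idx[k] of subblock A_sub[k] and beta_{i,k} = symbols[k]."""
    return sum(sym * blk[:, j] for blk, j, sym in zip(A_sub, col_idx, symbols))

def num_bits(A_sub, M):
    """Bit budget (13): K_i*floor(log2 M) + sum_k floor(log2 L_{i,k})."""
    K_i = len(A_sub)
    return K_i * int(np.floor(np.log2(M))) + sum(
        int(np.floor(np.log2(blk.shape[1]))) for blk in A_sub)

rng = np.random.default_rng(1)
N = 32
A = rng.standard_normal((N, 64)) + 1j * rng.standard_normal((N, 64))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns

# User i gets K_i = 2 subblocks of L_{i,k} = 16 columns each, QPSK symbols (M = 4).
A_sub = [A[:, 0:16], A[:, 16:32]]
qpsk = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))
s_i = ssc_encode(A_sub, col_idx=[3, 9], symbols=[qpsk[0], qpsk[2]])

assert s_i.shape == (N,)
assert num_bits(A_sub, M=4) == 12       # 2*log2(4) + log2(16) + log2(16)
```

The subblock sizes and constellation here are arbitrary; any power-of-two choices make the floor operations in (13) exact.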
1) Multiple Access Channel:
The multiple access channel is equivalent to an uplink scenario in a cellular network, where $P$ users send their data to a single receiver. In the MAC, the encoding is done independently by each user, which coincides with the SSC based procedure in (12). The observation at the receiver is
$$\mathbf{y} = \sum_{i=1}^{P} \mathbf{s}_i + \mathbf{v} \qquad (14)$$
$$= \sum_{i=1}^{P} \sum_{k=1}^{K_i} \beta_{i,k} \mathbf{a}_{i,k} + \mathbf{v}. \qquad (15)$$
The decoding is done jointly at the receiver using the MAD algorithm (or parallel MAD), which recovers the support of the active columns $\{\mathbf{a}_{i,k}\}$ and the corresponding modulation symbols $\beta_{i,k}$ for each user from the received signal $\mathbf{y}$ in (15).

If all the users employ the same constellation with $M_i = M, \forall i$, then the superposition of codewords from the $P$ users in (15) has a structure identical to the codeword generated by the SSC encoding procedure (with sparsity level $K = \sum_i K_i$) from Section II-B. As a result, the total number of bits of all the users, $\sum_{i=1}^{P} N_{b_i}$ (from (13)), will be equal to $N_b$ from (10). In addition, the overall BLER performance of this MAC channel (the probability that all the bits of all the users are decoded correctly, with respect to the overall energy spent per bit) will coincide with the performance of the corresponding single user case. Note that the performance of this single user case in AWGN has already been studied in Section IV.

Now, considering the case where user-$i$ has channel gain $h_i$, the received signal is
$$\mathbf{y} = \sum_{i=1}^{P} h_i g_i \mathbf{s}_i + \mathbf{v}, \qquad (16)$$
where $g_i$ denotes the transmit gain (power control) employed by user-$i$. If the gains $g_i$ are chosen such that $h_i g_i = 1, \forall i$, then the AWGN performance of the MAC model (16) will coincide with the AWGN performance of the corresponding single user case. This implies that, if we group the users such that the $|h_i|$ are (approximately) equal, then the power control gains $|g_i|$ can be (approximately) the same.
Hence, in our SSC based scheme for the MAC, grouping users with similar channel gains and dividing the dictionary matrix into (approximately) equal sizes among the users is beneficial. This is in contrast with the power-domain NOMA techniques in the literature [30], where a high channel gain user is typically grouped with a low channel gain user. If different users have different channel gains, power constraints and rate requirements, then there are open issues in the SSC based scheme, such as segmenting the dictionary matrix among users, choosing the constellation size for each user and the modifications required for the MAD decoding.
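To make the joint decoding and power control concrete, the following toy sketch simulates (15)-(16) with a unitary DFT dictionary (orthonormal columns, unlike the coherent MUB/Gold matrices, so noise-free recovery is exact) and $M = 1$ (all $\beta_{i,k} = 1$). The per-subblock argmax of the correlation magnitudes is the parallel MAD rule; all variable names are illustrative.

```python
import numpy as np

N = 64
F = np.fft.fft(np.eye(N)) / np.sqrt(N)            # unitary DFT toy dictionary
blocks = [F[:, 16*k:16*(k+1)] for k in range(4)]  # K = 4 subblocks of 16 columns

# Two users with K_1 = K_2 = 2 subblocks each; user 1 owns blocks 0-1, user 2 owns 2-3.
true_idx = [5, 11, 0, 14]                         # active column inside each subblock
h = np.array([0.8, 0.8, 1.3, 1.3])                # channel gain of each subblock's user
g = 1.0 / h                                       # power control so that h_i * g_i = 1

# Received superposition (16); with h_i g_i = 1 this reduces to (15), noise omitted.
y = sum(hi * gi * blk[:, j] for hi, gi, blk, j in zip(h, g, blocks, true_idx))

# Joint MAD-style decoding: in each subblock, pick the most correlated column.
est_idx = [int(np.argmax(np.abs(blk.conj().T @ y))) for blk in blocks]

assert est_idx == true_idx    # exact recovery here because the columns are orthonormal
```

With the actual MUB or Gold code dictionaries the correlations also contain the interference terms discussed in Section IV, so recovery is no longer exact even without noise.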
2) Broadcast Channel:
The broadcast channel is similar to the downlink scenario in a cellular network, where the base station transmits a separate information message to each of the $P$ users. Here, encoding is done jointly at the base station and decoding is done by each user separately. It is well known that for the degraded broadcast channel (for instance, the Gaussian broadcast channel), superposition coding is optimal [31], [32]. Once the codebooks of all the users are designed (jointly), superposition coding simply chooses the codeword for the $p$-th user from that user's codebook based on the user's information bits (independently of the other users' codewords) and then transmits the sum of the codewords of all the users. The SSC encoding procedure in (12) can mimic superposition coding in a straightforward way. Once the dictionary matrix is chosen and the subblocks are segmented and allotted among the users, the codeword for each user is obtained from that user's own information bits as given by (12), and the transmitter sends the sum of all the users' codewords,
$$\mathbf{s} = \sum_{i=1}^{P} \mathbf{s}_i. \qquad (17)$$
Note that the sum (17) resembles the codeword generation of the single user scenario in Section II-B. The received signal at user-$i$ is given by
$$\mathbf{y}_i = \mathbf{s} + \mathbf{n}_i, \qquad (18)$$
where $\mathbf{n}_i$ is the noise at user-$i$, with variance $\sigma_i^2$. The MAD decoding from Section II-B can be employed by each user, which recovers the active columns present in $\mathbf{s}$ and the corresponding modulation symbols. Hence, in this approach, each user recovers the information sent to the other users in addition to its own information. The user with the highest noise variance ($\arg\max_i \sigma_i^2$) will have the worst error performance (assuming the noise distributions are the same except for the variance). On the other hand, if all the users have the same noise variance, the performance (the probability that all the users receive all their bits correctly) will coincide with the corresponding single user scenario (with the same noise variance).
If the users' channel quality is asymmetric, and there is private information for the good channel quality users (which the low channel quality users should not be able to decode), then developing sparse coding based techniques for this setting is an open problem.
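A toy sketch of this joint-encode, decode-locally structure, per (17)-(18): each user correlates its own observation against all subblocks and then keeps only the indices of its assigned subblocks. The unitary DFT stand-in dictionary and the `owner` assignment are illustrative, and noise is omitted so the correlation rule is exact.

```python
import numpy as np

N = 64
F = np.fft.fft(np.eye(N)) / np.sqrt(N)              # unitary DFT toy dictionary
blocks = [F[:, 16*k:16*(k+1)] for k in range(4)]    # K = 4 subblocks of 16 columns
owner = [0, 0, 1, 1]                                # subblocks 0-1 -> user 0, 2-3 -> user 1

true_idx = [2, 7, 9, 4]
s = sum(blk[:, j] for blk, j in zip(blocks, true_idx))   # joint encoding at the BS, (17)

# Each user decodes ALL subblocks from y_i = s + n_i (noise omitted here),
# then keeps only the subblocks assigned to it.
for user in (0, 1):
    y_i = s
    est = [int(np.argmax(np.abs(blk.conj().T @ y_i))) for blk in blocks]
    own = [j for j, u in zip(est, owner) if u == user]
    assert own == [j for j, u in zip(true_idx, owner) if u == user]
```

The sketch also makes the privacy caveat above visible: `est` contains every user's indices, so nothing stops a user from reading the other users' messages.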
3) Interference Channel:
In the interference channel, there are $P$ transmitters and $P$ receivers. Each transmitter sends information to a corresponding intended receiver. With the $i$-th transmitter generating its codeword as in (12), the received signal at the $i$-th receiver is given by
$$\mathbf{y}_i = \mathbf{s}_i + \sum_{j=1, \ldots, P,\; j \neq i} h_{i,j} \mathbf{s}_j + \mathbf{n}_i, \qquad (19)$$
where $h_{i,j}$ denotes the channel gain from the $j$-th transmitter to the $i$-th receiver. Without loss of generality, we have taken $h_{i,i} = 1$. MAD decoding employed at the $i$-th receiver recovers the codewords of all the transmitters. Again, if $|h_{i,j}| = 1, \forall i, j$, and the noise statistics are identical across all the receivers, then the decoding performance (successful recovery of all the codewords) at every receiver will coincide with the corresponding single user case. In the strong interference regime $|h_{i,j}| \gg 1$ [31], [32], MAD decoding can be modified to first decode the messages of all the interfering users (by restricting the correlations to the subblocks of the interfering users), cancel the interference, and then proceed to find the active columns in the subblocks of the intended user. In the weak interference regime $|h_{i,j}| \ll 1$ [31], [32], the MAD decoder can first find the message of the intended user directly, by restricting the correlations to the subblocks of the intended user. A detailed study of the SSC-MAD based techniques in the strong and weak interference regimes is left for future work.

Let us illustrate the gains of using our SSC schemes in multi-user channels when compared to using conventional error control codes in an orthogonal multiple access fashion. Using the $(20,11)$ Golay-based code (which gives the best performance among the very short length codes in Fig. 7) in orthogonal multiple access, each user gets a code rate of $11/20$ and achieves an individual BLER of − at an $E_b/N_0$ of around . dB. Using our $(127,63)$ SSC scheme (Gold code dictionary matrix with $K = 5$) shown in Fig. 8, we can transmit a total of 63 bits over 127 real dimensions to (up to) 5 different users in the MAC, BC or IC.
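The strong-interference modification of MAD decoding described above (decode the interferers first, cancel, then decode the intended user) can be sketched as follows, again with a toy orthonormal dictionary, two transmitters and no noise; all names are illustrative.

```python
import numpy as np

N = 64
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
own_blk, intf_blk = F[:, :32], F[:, 32:]   # intended vs interfering subblocks

h = 5.0                                    # strong interference, |h| >> 1
j_own, j_intf = 7, 20
y = own_blk[:, j_own] + h * intf_blk[:, j_intf]   # (19) with P = 2, noise omitted

# Step 1: decode the interferer first (its correlations dominate), then cancel it.
jhat_intf = int(np.argmax(np.abs(intf_blk.conj().T @ y)))
beta_intf = intf_blk[:, jhat_intf].conj() @ y     # LS gain estimate for a unit-norm column
y_clean = y - beta_intf * intf_blk[:, jhat_intf]

# Step 2: decode the intended user from the cleaned observation.
jhat_own = int(np.argmax(np.abs(own_blk.conj().T @ y_clean)))

assert (jhat_intf, jhat_own) == (j_intf, j_own)
```

In the weak interference regime the same code would skip step 1 and correlate `y` against `own_blk` directly.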
If all the receivers in the multi-user channel have the same AWGN variance, our scheme will get an overall (accounting for all the users together) code rate of $63/127$ and achieve an overall BLER of − at an overall $E_b/N_0$ of  dB. Hence, our scheme used in multi-user channels gives a . dB gain over one of the best codes for very short block lengths.

VI. CONCLUDING REMARKS
In this section, we present the limitations of the proposed encoding techniques, discuss various directions for future work and give final remarks on our work.
A. Limitations of the Sparse Coding Techniques
1) The SSC-MAD scheme can have a non-zero probability of error, even in the absence of noise. This is because each active column interferes with the other active columns, which can cause the MAD decoder to select an inactive column. Even when $K < (\mu^{-1} + 1)/2$, the MAD decoder can make errors in decoding the constellation symbols $\{\beta_i\}$ for $M > 1$, which in turn causes errors in recovering the support set $\mathcal{S}$.

2) The SSC scheme can have a large alphabet size. When the block size $N$ is a power of 2, the complex MUB matrices have an alphabet size of 4, as noted in Section III. With sparsity level $K$ encoding and $M$-ary constellation symbols, the alphabet size of the SSC codeword (1) can grow up to $(4M)^K$. On the other hand, binary linear codes have an alphabet size of 2. However, the alphabet size may not be a major issue in OFDM based communication systems. Even when the input is binary, the output of the FFT block in an OFDM transmitter resembles Gaussian symbols (for large FFT sizes) [33].

3) From Lemma 1, we infer that the sparsity level should be of the order of $1/\mu$ for good recovery performance. For block length $N$, the mutual coherence of the MUB and Gold code dictionary matrices is of the order of $1/\sqrt{N}$. With the sparsity level $K = \gamma \sqrt{N}$ for some fixed $\gamma$, the bits per dimension of the SSC and SC schemes ($N_b/N$) converge to zero as $N \to \infty$. Hence, our sparse coding based schemes are not suitable for large block lengths with high code rates.

B. Directions for Future Work
1) MAD decoding involves computing the inner products of the observation vector with all the columns of the dictionary matrix. Efficient ways to compute these inner products for the MUB and Gold code dictionary matrices can be explored in the future.

2) Sufficient conditions for successful support recovery with the OMP algorithm in the presence of noise exist in the literature [26]. Some of these conditions can be modified to give bounds on the error performance of the MAD algorithm, especially when the constellation size $M = 1$. On the other hand, obtaining tight bounds on the probability of block error for the MAD decoder in Gaussian noise is a challenging problem, especially for constellation sizes $M > 1$.

3) The SSC scheme has a penalty in the total number of bits $N_b$ transmitted in a block (10) when compared to the sparse coding scheme (4), for $K > 1$. Other simple and efficient ways to map the information bits to the support set, yielding a higher number of bits than the SSC scheme, can be developed in future work.

4) The MAD algorithm is inspired by matching pursuit based sparse signal recovery algorithms. There are also convex programming based algorithms for sparse signal recovery, such as $\ell_1$ norm minimization [13] and the Dantzig selector [14]. To solve these convex programs, several iterative techniques are available in the literature, such as ISTA [34], FISTA [35] and AMP [15]. We can study the decoding performance of the SSC scheme with these other sparse signal recovery algorithms. In addition, using the deep unfolding principle, some of the iterative sparse signal recovery algorithms, such as AMP, have been implemented using deep learning architectures [16].
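As a concrete instance of one such iterative solver, here is a minimal ISTA sketch for the complex-valued $\ell_1$-regularized problem $\min_x \tfrac{1}{2}\|y - Ax\|^2 + \lambda\|x\|_1$. This is a generic textbook implementation in the spirit of [34], not the paper's decoder; the problem sizes and the regularization weight are arbitrary.

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """ISTA: x <- soft(x + A^H (y - A x)/Lc, lam/Lc), with complex soft thresholding."""
    Lc = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        z = x + A.conj().T @ (y - A @ x) / Lc
        mag = np.abs(z)
        # complex soft threshold: shrink the magnitude, keep the phase
        x = np.where(mag > lam / Lc, (1 - lam / (Lc * mag + 1e-12)) * z, 0)
    return x

rng = np.random.default_rng(3)
N, L, K = 64, 128, 3
A = (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))) / np.sqrt(2 * N)
support = rng.choice(L, K, replace=False)
x0 = np.zeros(L, dtype=complex); x0[support] = 1.0
y = A @ x0 + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

lam = 0.05
xhat = ista(A, y, lam)

# With step size 1/Lc, ISTA monotonically decreases the objective from its value at x = 0.
obj = 0.5 * np.linalg.norm(y - A @ xhat) ** 2 + lam * np.abs(xhat).sum()
assert obj <= 0.5 * np.linalg.norm(y) ** 2 + 1e-9
```

In well-conditioned cases the largest entries of `xhat` identify the true support, which is exactly the information the SSC decoder needs.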
Such deep learning architectures can be employed at the receiver to decode the SSC scheme.

5) As discussed in Section V, there are several avenues for further investigation of sparse coding based techniques in multi-user scenarios, especially when there is asymmetry in the channel conditions, power constraints and rate requirements among the users.

6) We can explore other options for the dictionary matrix construction. There are constructions available in the literature for real MUB matrices, under some conditions on the block length $N$ [36], which give a real (binary) dictionary matrix with $N$ columns and mutual coherence $1/\sqrt{N}$. A real MUB occupies only half the number of real dimensions for the same block length $N$ when compared to the complex MUB. So a real MUB will achieve the same code rate (bpd) using a smaller sparsity level $K$, and hence its BLER performance will be better than that of the corresponding complex MUB. There are also approximate MUB constructions [37], which can give a dictionary matrix with more than $N$ columns, at the expense of the mutual coherence being higher than $1/\sqrt{N}$. Other quantum designs such as SIC-POVM and approximate SIC-POVM, as well as Zadoff-Chu sequences from CDMA, can also be used for obtaining the dictionary matrix.

7) Extensions of the SSC encoder and MAD decoder to frequency selective channels and single/multi-user MIMO channels can be explored in the future.

8) The applicability of sparse coding based techniques to storage systems can be explored in future work. Note that, with an MUB dictionary matrix and sparsity level $K = 1$, the alphabets of the codewords are from a (rotated) QPSK constellation and hence can be represented using 2 bits. However, the sparse coding alphabet size increases with $K$, making it unsuitable for storage when $K$ is large.

9) Communications with sparse signal based encoding and matching/correlation based decoding is very simple and intuitive.
Using this sparse signal based communication framework to model or understand some naturally occurring communications, for instance communications based on neurological signals, can be explored in future work.

C. Final Remarks
We proposed the SSC encoding scheme and MAD decoding scheme for communications, inspired by sparse signal processing. Our proposed technique is a non-linear coding scheme whose implementation is easy to understand. With MUB and Gold code dictionary matrices, the proposed scheme gives competitive performance when compared to some of the commonly used linear codes, for small block lengths and low code rates. Unlike linear codes, our proposed sparse coding based techniques extend neatly to multi-user scenarios. Our schemes can be used straightforwardly in applications where several users have a small number of information bits to transmit and/or receive. Such applications include communicating control information to/from several users in a cellular network or in a vehicular communication system, and communications in internet of things (IoT) applications.

ACKNOWLEDGMENT

We would like to thank our colleague Pradeep Sarvepalli for introducing us to the delightful world of mutually unbiased bases.

REFERENCES

[1] C. E. Shannon, "A mathematical theory of communication," The Bell System Technical Journal, vol. 27, no. 3, pp. 379-423, 1948.
[2] S. Lin and D. J. Costello, Error Control Coding. Prentice Hall, 2001.
[3] U. Madhow, Fundamentals of Digital Communication. Cambridge University Press, 2008.
[4] S. Verdu et al., Multiuser Detection. Cambridge University Press, 1998.
[5] R. Gold, "Optimal binary sequences for spread spectrum multiplexing (corresp.)," IEEE Transactions on Information Theory, vol. 13, no. 4, pp. 619-621, 1967.
[6] R. Frank, "Polyphase codes with good nonperiodic correlation properties," IEEE Transactions on Information Theory, vol. 9, no. 1, pp. 43-45, 1963.
[7] D. Chu, "Polyphase codes with good periodic correlation properties (corresp.)," IEEE Transactions on Information Theory, vol. 18, no. 4, pp. 531-532, 1972.
[8] W. K. Wootters and B. D. Fields, "Optimal state-determination by mutually unbiased measurements," Annals of Physics, vol. 191, no. 2, pp. 363-381, 1989.
[9] J. M. Renes, R. Blume-Kohout, A. J. Scott, and C. M. Caves, "Symmetric informationally complete quantum measurements," Journal of Mathematical Physics, vol. 45, no. 6, pp. 2171-2180, 2004.
[10] Y. C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications. Cambridge University Press, 2012.
[11] S. G. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3397-3415, 1993.
[12] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition," in Proceedings of 27th Asilomar Conference on Signals, Systems and Computers, pp. 40-44, IEEE, 1993.
[13] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Review, vol. 43, no. 1, pp. 129-159, 2001.
[14] E. Candes, T. Tao, et al., "The Dantzig selector: Statistical estimation when p is much larger than n," The Annals of Statistics, vol. 35, no. 6, pp. 2313-2351, 2007.
[15] A. Maleki, "Approximate message passing algorithms for compressed sensing," PhD thesis, Stanford University, 2011.
[16] C. Metzler, A. Mousavi, and R. Baraniuk, "Learned D-AMP: Principled neural network based compressive image recovery," in Advances in Neural Information Processing Systems, pp. 1772-1783, 2017.
[17] E. Candes and J. Romberg, "Sparsity and incoherence in compressive sampling," Inverse Problems, vol. 23, no. 3, p. 969, 2007.
[18] E. Candes, M. Rudelson, T. Tao, and R. Vershynin, "Error correction via linear programming," in , pp. 668-681, IEEE, 2005.
[19] A. Jain, P. Sarvepalli, S. Bhashyam, and A. P. Kannu, "Algorithms for change detection with sparse signals," IEEE Transactions on Signal Processing, vol. 68, pp. 1331-1345, 2020.
[20] S. Pawar, PULSE: Peeling-Based Ultra-Low Complexity Algorithms for Sparse Signal Estimation. PhD thesis, UC Berkeley, 2013.
[21] R. Y. Mesleh, H. Haas, S. Sinanovic, C. W. Ahn, and S. Yun, "Spatial modulation," IEEE Transactions on Vehicular Technology, vol. 57, no. 4, pp. 2228-2241, 2008.
[22] R. Abu-Alhiga and H. Haas, "Subcarrier-index modulation OFDM," in , pp. 177-181, IEEE, 2009.
[23] A. M. Kerdock, "A class of low-rate nonlinear binary codes," Information and Control, vol. 20, no. 2, pp. 182-187, 1972.
[24] F. P. Preparata, "A class of optimum nonlinear double-error-correcting codes," Information and Control, vol. 13, no. 4, pp. 378-400, 1968.
[25] P. Delsarte, J.-M. Goethals, and J. J. Seidel, "Spherical codes and designs," in Geometry and Combinatorics, pp. 68-93, Elsevier, 1991.
[26] T. T. Cai and L. Wang, "Orthogonal matching pursuit for sparse signal recovery with noise," IEEE Transactions on Information Theory, vol. 57, no. 7, pp. 4680-4688, 2011.
[27] J. A. Tropp, "Greed is good: Algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231-2242, 2004.
[28] J. Van Wonterghem, A. Alloum, J. J. Boutros, and M. Moeneclaey, "On short-length error-correcting codes for 5G-NR," Ad Hoc Networks, vol. 79, pp. 53-62, 2018.
[29] M. C. Coşkun, G. Durisi, T. Jerkovits, G. Liva, W. Ryan, B. Stein, and F. Steiner, "Efficient error-correcting codes in the short blocklength regime," Physical Communication, vol. 34, pp. 66-79, 2019.
[30] Y. Saito, Y. Kishiyama, A. Benjebbour, T. Nakamura, A. Li, and K. Higuchi, "Non-orthogonal multiple access (NOMA) for cellular future radio access," in , pp. 1-5, IEEE, 2013.
[31] T. M. Cover, Elements of Information Theory. John Wiley & Sons, 1999.
[32] A. El Gamal and Y.-H. Kim, Network Information Theory. Cambridge University Press, 2011.
[33] T. Jiang and Y. Wu, "An overview: Peak-to-average power ratio reduction techniques for OFDM signals," IEEE Transactions on Broadcasting, vol. 54, no. 2, pp. 257-268, 2008.
[34] I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Communications on Pure and Applied Mathematics, vol. 57, no. 11, pp. 1413-1457, 2004.
[35] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202, 2009.
[36] P. J. Cameron and J. J. Seidel, "Quadratic forms over GF(2)," in Geometry and Combinatorics, pp. 290-297, Elsevier, 1991.
[37] A. Klappenecker, M. Rötteler, I. E. Shparlinski, and A. Winterhof, "On approximately symmetric informationally complete positive operator-valued measures and related systems of quantum states,"