Sparse recovery guarantees for block orthogonal binary matrices constructed via Generalized Euler Squares
Pradip Sasmal†, Phanindra Jampana and C. S. Sastry

† Dept. of ECE, Indian Institute of Science, Bangalore, India. Email: pradipsasmal@iisc.ac.in
Indian Institute of Technology, Hyderabad, Telangana 502285, India. Email: {pjampana, csastry}@iith.ac.in

Abstract
In recent times, the construction of deterministic matrices has gained popularity as an alternative to random matrices, as they provide guarantees for the recovery of sparse signals. In particular, the construction of binary matrices has attained significance due to their potential for hardware-friendly implementation and appealing applications. Our present work aims at constructing incoherent binary matrices consisting of orthogonal blocks with small block coherence. We show that the binary matrices constructed from Euler squares exhibit block orthogonality and possess low block coherence. With a goal of obtaining better aspect ratios, the present work generalizes the notion of Euler Squares and obtains a new class of deterministic binary matrices of more general size. For realizing the stated objectives, the paper first revisits the connection of finite field theory to Euler Squares and their construction. Using the stated connection, the work proposes Generalized Euler Squares (GES) and then presents a construction procedure. Binary matrices with low coherence and general row sizes are obtained, whose column size is of the maximum possible order. Finally, the paper shows that the special structure possessed by GES results in a block orthogonal structure with small block coherence, which supports the recovery of block sparse signals.
Keywords:
Compressed Sensing, Euler Square, Generalized Euler Square, Block sparsity.
1. Introduction
Recent developments at the intersection of algebra, probability and optimization theory, by the name of Compressed Sensing (CS), aim at providing sparse descriptions to linear systems. These developments are found to have tremendous potential for several applications [7][20]. Sparse representations of a vector are a powerful analytic tool in many application areas such as image/signal processing and numerical computation [13], to name a few. The need for sparse representation arises from the fact that several real-life applications demand expressing data in terms of as few basis elements as possible. The developments of CS theory depend typically on sparsity and incoherence [13][14].
Preprint submitted to Elsevier, July 18, 2019

For minimizing the computational complexity associated with matrix-vector multiplication, it is desirable that a CS matrix has smaller density. Here, a CS matrix refers to a matrix that satisfies sparse recovery properties, and density refers to the ratio of the number of nonzero entries to the total number of entries of the matrix. Sparse CS matrices, especially binary matrices, contribute to fast processing with low computational complexity in CS [20]. The low coherence of a CS matrix, on the other hand, provides guarantees for recovering sparse signals via basis pursuit (BP) and orthogonal matching pursuit (OMP). In the case of recovery of block sparse signals, the block coherence of a CS matrix plays an important role [25]. To date, less attention has been paid to constructing binary matrices with small coherence which also possess low block coherence. Our present work is on the construction of block orthogonal binary matrices with small coherence and small block coherence, so that they support the recovery of sparse signals and block sparse signals simultaneously.

In recent literature on deterministic CS matrices, several authors [4], [5], [6], [18], [19], [21], [22], [23], [24] have made pioneering contributions using novel ideas. Nevertheless, in most of these constructions, the associated CS matrices have been constructed for sizes that are dictated by a certain family of prime numbers. Additionally, none of these constructions is known to exhibit a block orthogonal structure.

Recently in [3], the authors have proposed a method to construct compressed sensing matrices from Euler Squares. In particular, given a positive integer m different from p, p^2 for a prime p, the authors [3] have shown that it is possible to construct a binary sensing matrix of size m × c(mµ) using Euler Squares, where µ is the coherence of the matrix (the maximum off-diagonal entry, in magnitude, of its Gram matrix) and c ∈ [1, …].
One of the objectives of the present work is to improve upon the aspect ratio (the ratio of column size to row size) of the CS matrices. To realize this objective, we propose a generalization of the concept of the Euler Square, namely, the Generalized Euler Square (GES). To begin with, we construct an Euler Square of index p, k, where p is a prime or prime power, using polynomials of degree at most one over a finite field of order p. Generalized Euler Squares of index p, k, t are then constructed using polynomials of degree at most t. Using a composition rule, a new GES for non-prime sizes is obtained from the combination of two GES of prime sizes. Further, we propose a methodology to construct compressed sensing matrices from GES. The matrices designed from GES show significant improvements in terms of column size. In particular, the binary matrix constructed from GES(n, k, t) has n^t orthogonal blocks, each of size nk × n. We discuss the structure of the binary matrices generated from Generalized Euler Squares and show that they possess block orthogonality. We also derive that binary matrices constructed from Euler Squares and GES possess low block coherence, thereby providing guarantees for recovering block sparse signals. The contributions of the present work may be summarized as follows:

• presenting Euler Squares from a different perspective and associating them with polynomials of degree at most one over a finite field;
• establishing the block orthogonal structure of binary matrices constructed via Euler Squares in [3] and showing their compliance with block sparse recovery properties;
• introducing Generalized Euler Squares (GES) and providing their construction procedure;
• bringing a significant increment in column size compared to the ones obtained via Euler Squares;
• establishing the block orthogonal structure of the GES based binary matrices.

The paper is organized in several sections. In Section 2, we provide basics of CS theory and Euler Squares.
In Sections 3 and 4, we discuss, respectively, block sparse recovery through Euler Square based matrices together with their block orthogonality, and the construction of Euler Square based matrices using polynomials of degree at most one over a finite field. In Sections 5 and 6, we introduce Generalized Euler Squares and their construction, respectively. In Sections 7 and 8, we present, respectively, the construction of GES for composite order and the construction of binary matrices using GES. In Sections 9 and 10, we present GES as a rectangular array and the recovery guarantees for block sparse signals via GES. The paper ends with concluding remarks in Section 11.
2. Basics of Compressed Sensing and Euler Squares:
The objective of compressed sensing is to recover x = {x_i}_{i=1}^M ∈ R^M from a few of its linear measurements y ∈ R^m through a stable and efficient reconstruction process via the concept of sparsity. From the measurement vector y and the sensing mechanism, one gets a system y = Φx, where Φ is an m × M (m < M) measurement matrix. An excellent overview of Compressed Sensing and the applicability of several sensing matrices may be seen in [16].

Given the pair (y, Φ), the problem of recovering x can be formulated as finding the sparsest solution (the solution containing the largest number of zero entries) of the linear system of equations y = Φx. Sparsity is measured by the ‖.‖_0 "norm": ‖x‖_0 denotes the number of nonzero entries in x, that is, ‖x‖_0 = |{j : x_j ≠ 0}|, where ‖.‖_0 is neither a norm nor a quasi-norm. Finding the sparsest solution can now be formulated as the following minimization problem, generally denoted the P_0 problem:

P_0 : min_x ‖x‖_0 subject to Φx = y.

This is a combinatorial optimization problem and is known to be NP-hard [7]. One may use greedy methods or a convex relaxation of the P_0 problem to recover k-sparse signals (i.e., signals for which ‖x‖_0 ≤ k). The convex relaxation of the P_0 problem can be posed as the P_1 problem [8] [11], which is defined as follows:

P_1 : min_x ‖x‖_1 subject to Φx = y.

Candes and Tao [9] have introduced the following isometry condition on matrices Φ and have established its important role in CS. An m × M matrix Φ is said to satisfy the Restricted Isometry Property (RIP) of order k with constant δ_k (0 < δ_k <
1) if for all vectors x ∈ R^M with ‖x‖_0 ≤ k, we have

(1 − δ_k) ‖x‖_2^2 ≤ ‖Φx‖_2^2 ≤ (1 + δ_k) ‖x‖_2^2.    (1)

Equivalently, for all vectors x ∈ R^M with ‖x‖_2 = 1 and ‖x‖_0 ≤ k, one may rewrite (1) as

(1 − δ_k) ≤ ‖Φx‖_2^2 ≤ (1 + δ_k).

The following theorem [11] establishes the equivalence between the P_0 and P_1 problems:

Theorem 2.1.
Suppose an m × M matrix Φ has the (2k, δ) restricted isometry property for some δ < √2 − 1; then P_1 and P_0 have the same k-sparse solution if P_0 has a k-sparse solution.

The coherence µ_Φ of a given matrix Φ is the largest absolute inner product between different normalized columns of Φ, i.e.,

µ_Φ = max_{1 ≤ i, j ≤ M, i ≠ j} |φ_i^T φ_j| / (‖φ_i‖_2 ‖φ_j‖_2).

Here, φ_k stands for the k-th column of Φ. The following proposition [7] relates the RIP constant δ_k and µ_Φ.

Proposition 2.2.
Suppose that φ_1, . . . , φ_M are the unit norm columns of a matrix Φ with coherence µ_Φ. Then Φ satisfies RIP of order k with constant δ_k = (k − 1) µ_Φ.

The Orthogonal Matching Pursuit (OMP) algorithm and ℓ_1-norm minimization (also called basis pursuit) are two widely studied CS reconstruction algorithms [15]. One of the important problems in CS theory deals with constructing CS matrices that satisfy the RIP for the largest possible range of k. It is known that the widest possible range of k is of the order m/log(M/m) [10], [12], [17]. However, the only known matrices that satisfy the RIP for this range are based on random constructions [11], [12].

Block sparse signals are sparse signals where the nonzero coefficients occur in clusters. A signal x ∈ R^M is viewed as a concatenation of R blocks of length d, with M = Rd.
Denote the ℓ-th block as x[ℓ]; then x = (x^T[1], x^T[2], . . . , x^T[R])^T, where x^T[ℓ] = (x_{(ℓ−1)d+1}, x_{(ℓ−1)d+2}, . . . , x_{ℓd}). The measurements can then be written as

y = Φx = Σ_{ℓ=1}^R Φ[ℓ] x[ℓ],

where Φ[ℓ] = [φ_{(ℓ−1)d+1}, φ_{(ℓ−1)d+2}, . . . , φ_{ℓd}] and φ_j is the j-th column of Φ. The vector x is called block k-sparse if x[ℓ] has nonzero Euclidean norm for at most k indices ℓ. When d = 1, block sparsity reduces to conventional sparsity. Define ‖x‖_{2,0} as

‖x‖_{2,0} = Σ_{ℓ=1}^R I(‖x[ℓ]‖_2 > 0),

where I(·) is the indicator function. A block k-sparse signal x is, by definition, a signal that satisfies ‖x‖_{2,0} ≤ k [25].

Definition 2.3. [25] The block coherence of a matrix Φ with normalized columns is defined as

µ_B(Φ) = (1/d) max_{ℓ ≠ r} √(λ_max(M^T[ℓ, r] M[ℓ, r])),

where M[ℓ, r] = Φ^T[ℓ] Φ[r] and λ_max(A) is the largest eigenvalue of a positive semidefinite matrix A. It is known that 0 ≤ µ_B(Φ) ≤ µ_Φ [25].

Proposition 2.4. [25] If Φ consists of orthogonal blocks, that is, Φ^T[ℓ] Φ[ℓ] = I_{d×d} for all ℓ, then µ_B(Φ) ≤ 1/d.

Theorem 2.5. [25] A sufficient condition for Block OMP (BOMP) to recover a block k-sparse signal x, with each block of length d, from y = Φx is

kd < (1/2)(µ_B(Φ)^{−1} + d),

where Φ is block orthogonal, that is, Φ^T[ℓ] Φ[ℓ] = I_{d×d} for all ℓ.

An Euler Square (ES) of order n, degree k and index n, k is a square array of n^2 k-ads of numbers, (a_{ij1}, a_{ij2}, . . . , a_{ijk}), where a_{ijr} ∈ {0, 1, 2, . . . , n − 1}; r = 1, 2, . . . , k; i, j = 1, 2, . . . , n; n > k; a_{ipr} ≠ a_{iqr} and a_{pjr} ≠ a_{qjr} for p ≠ q; and (a_{ijr}, a_{ijs}) ≠ (a_{pqr}, a_{pqs}) for i ≠ p and j ≠ q. In short, we denote an Euler Square of order n, degree k and index n, k as ES(n, k). Harris F. MacNeish [1] has constructed Euler Squares for the following cases:

1. Index p, p −
1, where p is a prime number.
2. Index p^r, p^r −
1, for p prime.
3. Index n, k, where n = 2^{r_0} p_1^{r_1} p_2^{r_2} · · · p_l^{r_l} for distinct odd primes p_1, p_2, . . . , p_l. Here, k = min{2^{r_0}, p_1^{r_1}, p_2^{r_2}, . . . , p_l^{r_l}} − 1.

Lemma 2.6. [1] Let k′ < k. Then the existence of the Euler Square of index n, k implies that the Euler Square of index n, k′ exists.

An Euler Square of degree one is called a Latin Square, and one of degree two a Graeco-Latin Square [1, 2]. Euler squares are also called mutually orthogonal Latin squares. In [26, 27], the authors have proposed generalizations of Latin squares and their orthogonality.
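Before moving to the constructions, the quantities defined in this section can be checked numerically. The sketch below is illustrative only (the example matrix is an arbitrary two-block toy case, not one of the paper's constructions); it computes the mutual coherence used in Proposition 2.2 and the block coherence of Definition 2.3:

```python
import numpy as np

def coherence(Phi):
    """Mutual coherence: largest absolute inner product between distinct
    l2-normalized columns of Phi (the quantity in Proposition 2.2)."""
    Phi = Phi / np.linalg.norm(Phi, axis=0)   # normalize columns
    G = np.abs(Phi.T @ Phi)                   # absolute Gram matrix
    np.fill_diagonal(G, 0.0)                  # drop the diagonal (self) entries
    return G.max()

def block_coherence(Phi, d):
    """Block coherence of Definition 2.3: (1/d) times the largest spectral
    norm of M[l, r] = Phi[l]^T Phi[r] over pairs of distinct blocks."""
    M = Phi.shape[1]
    assert M % d == 0, "column count must be a multiple of the block length"
    blocks = [Phi[:, l * d:(l + 1) * d] for l in range(M // d)]
    return max(np.linalg.norm(B1.T @ B2, 2)   # spectral norm = sqrt(lambda_max)
               for a, B1 in enumerate(blocks)
               for b, B2 in enumerate(blocks) if a != b) / d

# Two orthonormal blocks [I | I]: the coherence is 1, while the bound of
# Proposition 2.4, mu_B <= 1/d, is attained exactly.
Phi = np.hstack([np.eye(2), np.eye(2)])
assert abs(coherence(Phi) - 1.0) < 1e-12
assert abs(block_coherence(Phi, 2) - 0.5) < 1e-12
```

Note how the toy case separates the two notions: the column coherence of [I | I] is as large as possible, yet its block coherence meets the orthonormal-block bound 1/d of Proposition 2.4.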
3. Block sparse signal recovery for matrices constructed from Euler Square
The authors of [3] have constructed a binary matrix Φ_{nk} of size nk × n^2 with coherence at most 1/k, whenever ES(n, k) exists. In this section, it is shown that the binary matrices obtained from Euler Squares also contain a block orthogonal structure.

An Euler square can be observed to be an n × n square array of k-tuples with the following properties:

• (ES 1): each array entry is a k-tuple of numbers obtained from {0, . . . , n − 1};
• (ES 2): there is no intersection between any two k-tuples on the same row or the same column;
• (ES 3): there is at most one intersection between any two distinct k-tuples which are not from the same row or column.

Here, the intersection between two k-tuples denotes the number of indices where the corresponding entries of both tuples are the same. For example, the Euler Square of index 3, 2 is

(0, 0) (1, 2) (2, 1)
(1, 1) (2, 0) (0, 2)
(2, 2) (0, 1) (1, 0)

Theorem 3.1.
If ES(n, k) exists, then a sparse matrix of size nk × n^2 exists which consists of n orthonormal blocks, each of size nk × n. Proof.
From the construction procedure given in [3], a binary matrix Φ_{nk} of size nk × n^2 is obtained from ES(n, k). Every column of Φ_{nk} corresponds to a unique k-tuple of ES(n, k). We arrange the columns of Φ_{nk} to form a block orthogonal matrix. We form the ℓ-th block (of block size n) by taking the n columns of Φ_{nk} corresponding to the n k-tuples coming from the ℓ-th column of ES(n, k). From the definition of Euler Squares, two distinct k-tuples belonging to the same column of an Euler Square do not have any intersection. As a result, the inner product between any two distinct columns within a block is zero, implying thereby that each block is orthogonal. Consequently, the block matrix (1/√k) Φ_{nk} has n orthonormal blocks, where each block is of size nk × n.

For example, let Φ be the binary matrix constructed from the ES(3, 2) above. The block corresponding to the first column of ES(3, 2), whose 2-tuples are (0, 0), (1, 1) and (2, 2), consists of the columns (1, 0, 0, 1, 0, 0)^T, (0, 1, 0, 0, 1, 0)^T and (0, 0, 1, 0, 0, 1)^T; the blocks for the second and third columns are formed similarly. It is now easy to see that the blocks are orthogonal. Therefore, Φ consists of 3 orthogonal blocks, where each block is of size 6 × 3.

4. Construction of Euler Squares using finite fields

In [1], the author has used group theoretical results to construct Euler Squares. In [2], R. C. Bose has constructed Euler squares using polynomials over finite fields. In this section, we revisit the construction given in [2] and present a construction of Euler squares using polynomials of degree at most one over finite fields.
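Before turning to the finite-field construction, the block arrangement of Theorem 3.1 can be sketched numerically. The snippet below uses a hard-coded ES(3, 2) (an assumption for illustration; any valid ES(3, 2) works) and an arrangement matching the proof above; it is not code from [3]:

```python
import numpy as np

# A hard-coded ES(3, 2): a 3 x 3 array of 2-tuples (illustrative example).
ES = [[(0, 0), (1, 2), (2, 1)],
      [(1, 1), (2, 0), (0, 2)],
      [(2, 2), (0, 1), (1, 0)]]
n, k = 3, 2

def es_to_matrix(ES, n, k):
    """nk x n^2 binary matrix whose j-th group of n columns comes from the
    j-th column of the Euler square, following the proof of Theorem 3.1."""
    cols = []
    for j in range(n):                  # one block per Euler-square column
        for i in range(n):
            v = np.zeros(n * k)
            for l, a in enumerate(ES[i][j]):
                v[l * n + a] = 1.0      # a single 1 in each length-n segment
            cols.append(v)
    return np.column_stack(cols)

Phi = es_to_matrix(ES, n, k)
for j in range(n):
    B = Phi[:, j * n:(j + 1) * n] / np.sqrt(k)   # scale columns to unit norm
    assert np.allclose(B.T @ B, np.eye(n))       # each block is orthonormal
```

Since same-column tuples of the Euler square never intersect, the columns inside each block have disjoint supports, which is exactly why the assertions pass.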
Theorem 4.1.
Suppose p is a prime or a prime power. Then ES(p, p − 1) exists.

Proof. Let D_p = {P_{ij} = f_i x + f_j : i, j = 0, 1, . . . , p − 1} denote the polynomials of degree at most one over a finite field F_p = {f_0 = 0, f_1, . . . , f_{p−1}} of order p. There are p^2 polynomials of degree at most one. Form the k-tuple S_k = (f_1, . . . , f_k), for 1 ≤ k ≤ p − 1. Evaluating a polynomial P_{ij} of D_p at every point of S_k, we form an ordered k-tuple P_{ij}(S_k) = (P_{ij}(f_1), . . . , P_{ij}(f_k)) ∈ F_p^k. Let us denote 𝒮_k = {P_{ij}(S_k) : i, j = 0, 1, . . . , p − 1} ⊆ F_p^k. It can be noted that |𝒮_k| = p^2. To make P_{ij}(S_k) a k-tuple of numbers on {0, 1, · · · , p − 1}, we replace f_i with its index i.

We now claim that 𝒮_k forms an Euler square of index p, k, where i and j denote the row and column indices, respectively. To show this, we need to show that, for q, s = 1, . . . , k with q ≠ s, P_{in}(f_q) ≠ P_{im}(f_q) and P_{nj}(f_q) ≠ P_{mj}(f_q) for n ≠ m, and (P_{ij}(f_q), P_{ij}(f_s)) ≠ (P_{nm}(f_q), P_{nm}(f_s)) for i ≠ n and j ≠ m.

For n ≠ m, P_{in} = f_i x + f_n and P_{im} = f_i x + f_m do not have any common root, which shows that P_{in}(f_q) ≠ P_{im}(f_q). For n ≠ m, P_{nj} = f_n x + f_j and P_{mj} = f_m x + f_j agree only at f_0 = 0, which shows that P_{nj}(f_q) ≠ P_{mj}(f_q), as q ≥ 1 and hence f_q ≠ 0. For i ≠ n and j ≠ m, P_{ij} and P_{nm} can agree at most at one point, which shows that (P_{ij}(f_q), P_{ij}(f_s)) ≠ (P_{nm}(f_q), P_{nm}(f_s)). Therefore, for a prime or prime power p and k ≤ p −
1, one can construct ES(p, k) using polynomials of degree at most one.
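For a prime p, the construction in the proof of Theorem 4.1 can be sketched with plain modular arithmetic (prime powers would require full finite-field arithmetic, which this illustrative sketch omits):

```python
def euler_square(p, k):
    """ES(p, k) for a prime p and 1 <= k <= p - 1, following Theorem 4.1:
    the (i, j) entry is the k-tuple (P_ij(1), ..., P_ij(k)) mod p,
    where P_ij(x) = i*x + j."""
    assert 1 <= k <= p - 1
    return [[tuple((i * x + j) % p for x in range(1, k + 1))
             for j in range(p)]
            for i in range(p)]

def intersections(t1, t2):
    """Number of coordinates where two tuples agree."""
    return sum(a == b for a, b in zip(t1, t2))

ES = euler_square(3, 2)
flat = [t for row in ES for t in row]
assert len(set(flat)) == 9            # all p^2 tuples are distinct
assert max(intersections(s, t) for s in flat for t in flat if s != t) <= 1
```

The three facts checked are exactly (ES 1)-(ES 3): two linear polynomials with equal slope never agree on {1, . . . , k}, two with equal constant term agree only at 0, and any two distinct ones agree at most once.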
For constructing the Euler Square of index 3, 2, we consider the field F_3 = Z_3 = {0, 1, 2}. Then the set D_3 = {P_{ij} : i, j = 0, 1, 2} consists of all polynomials of degree at most one over Z_3. Note that |D_3| = 9. Fix S_2 = (1, 2) as the ordered 2-tuple. Evaluating every polynomial of D_3 at every point of S_2, we get the set 𝒮_2 = {(0, 0), (1, 1), (2, 2), (1, 2), (2, 0), (0, 1), (2, 1), (0, 2), (1, 0)} ⊆ Z_3^2. Now it is easy to check that 𝒮_2 forms an Euler square of index 3, 2.

Once ES(p, k) is obtained for p a prime or prime power, one can follow the composition rule described in [1] to obtain ES(n, k) for composite n = 2^{r_0} p_1^{r_1} p_2^{r_2} · · · p_l^{r_l} and k ≤ min{2^{r_0}, p_1^{r_1}, p_2^{r_2}, . . . , p_l^{r_l}} −
1, where p_1, p_2, . . . , p_l are distinct odd primes.

5. Generalized Euler Square

In this section, we propose a generalization of Euler Squares.
Definition 5.1.
Generalized Euler Square (GES): A GES of index n, k, t, denoted GES(n, k, t), with n > k > t, is a hyper-rectangle of n^{t+1} k-ads of numbers, (a_{i_1 i_2 ... i_{t+1}, 1}, . . . , a_{i_1 i_2 ... i_{t+1}, k}), where a_{i_1 i_2 ... i_{t+1}, r} ∈ {0, 1, . . . , n − 1}; r = 1, 2, . . . , k; 1 ≤ i_s ≤ n for s = 1, · · · , t + 1; a_{i_1 ... i_{j−1} u i_{j+1} ... i_{t+1}, r} ≠ a_{i_1 ... i_{j−1} v i_{j+1} ... i_{t+1}, r} for u ≠ v; and, whenever i_{x_1} ≠ j_{x_1} and i_{x_2} ≠ j_{x_2} for some 1 ≤ x_1 ≠ x_2 ≤ t + 1, (a_{i_1 i_2 ... i_{t+1}, r_1}, . . . , a_{i_1 i_2 ... i_{t+1}, r_{t+1}}) ≠ (a_{j_1 j_2 ... j_{t+1}, r_1}, . . . , a_{j_1 j_2 ... j_{t+1}, r_{t+1}}), where the 1 ≤ r_l ≤ k, 1 ≤ l ≤ t + 1, are distinct.
It is easy to check that an Euler Square of index n, k is a GES of index n, k, 1. In line with Harris F. MacNeish's construction [1] for Euler Squares, we construct Generalized Euler Squares for the following cases:

• Index p, p − 1, t, where p is a prime number;
• Index p^r, p^r − 1, t, for p prime;
• Index n, k, t, where n = 2^{r_0} p_1^{r_1} p_2^{r_2} · · · p_l^{r_l} for distinct odd primes p_1, p_2, . . . , p_l. Here, k + 1 equals the least of the numbers 2^{r_0}, p_1^{r_1}, p_2^{r_2}, . . . , p_l^{r_l}.

In the next section, we use higher degree polynomials over finite fields for constructing GES.
6. Construction of Generalized Euler Squares
One of the main objectives of proposing the Generalized Euler Square is to obtain a binary matrix possessing a larger number of columns. However, this comes at the cost of an increased number of intersections between the k-tuples. Therefore, in a GES we allow more than one intersection in order to produce a large number of k-tuples. The intersection between any two distinct k-tuples is the number of common roots of the difference of the two corresponding polynomials. The use of polynomials of higher degree allows for more intersections between the k-tuples.

6.1. GES(p, k, t), where p is a prime or prime power

Consider the polynomials of degree at most t over a finite field F_p = {f_0 = 0, f_1, . . . , f_{p−1}} of order p. Form an ordered k-tuple S_k = (f_1, . . . , f_k). Let us denote the set of polynomials of degree at most t as D_t. As there are p^{t+1} polynomials of degree at most t, we can write D_t = {P^t_{i_1 i_2 ... i_{t+1}} = Σ_{j=1}^{t+1} f_{i_j} x^{j−1} : f_{i_j} ∈ F_p}. Evaluating a polynomial P^t_{i_1 i_2 ... i_{t+1}} at every point of S_k, we form an ordered k-tuple P^t_{i_1 i_2 ... i_{t+1}}(S_k) = (P^t_{i_1 i_2 ... i_{t+1}}(f_1), . . . , P^t_{i_1 i_2 ... i_{t+1}}(f_k)) ∈ F_p^k. Let 𝒮^t_k = {P^t_{i_1 i_2 ... i_{t+1}}(S_k) : P^t_{i_1 i_2 ... i_{t+1}} ∈ D_t} ⊆ F_p^k. Now |𝒮^t_k| = p^{t+1}. Similar to the Euler Square case, in order to make P^t_{i_1 i_2 ... i_{t+1}}(S_k) a k-tuple of numbers on {0, · · · , p − 1}, we replace f_i with its index i. It will be shown next that the k-tuples in the set 𝒮^t_k form a GES(p, k, t).

For u ≠ v, P^t_{i_1 ... i_{j−1} u i_{j+1} ... i_{t+1}} and P^t_{i_1 ... i_{j−1} v i_{j+1} ... i_{t+1}} agree either nowhere (when j = 1) or only at 0 ∈ F_p (when 1 < j ≤ t + 1). This shows that P^t_{i_1 ... i_{j−1} u i_{j+1} ... i_{t+1}}(S_k) and P^t_{i_1 ... i_{j−1} v i_{j+1} ... i_{t+1}}(S_k) have no intersection, as 0 ∉ S_k. For i_{x_1} ≠ j_{x_1} and i_{x_2} ≠ j_{x_2} with 1 ≤ x_1 ≠ x_2 ≤ t + 1, the polynomials P^t_{i_1 i_2 ... i_{t+1}} and P^t_{j_1 j_2 ... j_{t+1}} can agree at most at t points. This proves that 𝒮^t_k forms a GES(p, k, t).
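For a prime p, the Section 6 construction admits a direct sketch (again using integer arithmetic mod p, so prime powers are out of scope for this illustration):

```python
from itertools import product

def generalized_euler_square(p, k, t):
    """The p^(t+1) k-tuples of the Section 6 construction, for prime p:
    every polynomial of degree at most t over Z_p, evaluated at 1, ..., k."""
    assert t < k <= p - 1
    tuples = []
    for coeffs in product(range(p), repeat=t + 1):
        # coeffs = (c_0, ..., c_t) encodes c_0 + c_1*x + ... + c_t*x^t
        tuples.append(tuple(sum(c * x**e for e, c in enumerate(coeffs)) % p
                            for x in range(1, k + 1)))
    return tuples

def intersections(t1, t2):
    """Number of coordinates where two tuples agree."""
    return sum(a == b for a, b in zip(t1, t2))

S = generalized_euler_square(5, 4, 2)
assert len(S) == 5 ** 3 and len(set(S)) == 5 ** 3   # all tuples distinct
# two distinct polynomials of degree <= t agree on at most t points
assert max(intersections(a, b) for a in S for b in S if a != b) <= 2
```

The second assertion is the heart of the construction: since k > t, two distinct degree-at-most-t polynomials cannot agree on all k evaluation points, so the p^{t+1} tuples are distinct and intersect in at most t coordinates.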
7. Construction of GES for composite order
In this section, we construct GES for composite dimensions. We now describe a procedure to combine two GES to produce a new GES. The resulting GES can have more general row sizes, different from primes and prime powers.

Let p′ and p′′ be two primes or prime powers. Then, with 1 ≤ t < k < p′′ ≤ p′, we can obtain one GES of index p′, k, t and another GES of index p′′, k, t from the previous construction. Let {(c′_{i_1,...,i_{t+1},1}, c′_{i_1,...,i_{t+1},2}, . . . , c′_{i_1,...,i_{t+1},k})}^{p′}_{i_s=1} denote the GES(p′, k, t) and {(c′′_{j_1,...,j_{t+1},1}, c′′_{j_1,...,j_{t+1},2}, . . . , c′′_{j_1,...,j_{t+1},k})}^{p′′}_{j_s=1} denote the GES(p′′, k, t). Let us define

c_{i_1+p′(j_1−1), ..., i_{t+1}+p′(j_{t+1}−1), r} = c′_{i_1,...,i_{t+1},r} + p′ c′′_{j_1,...,j_{t+1},r}.

Take m_s = i_s + p′(j_s − 1). It is clear that

1. 1 ≤ m_s ≤ p′p′′, as 1 ≤ i_s ≤ p′ and 1 ≤ j_s ≤ p′′;
2. 0 ≤ c_{m_1,...,m_{t+1},r} ≤ p′p′′ − 1, as 0 ≤ c′_{i_1,...,i_{t+1},r} ≤ p′ − 1 and 0 ≤ c′′_{j_1,...,j_{t+1},r} ≤ p′′ − 1.

It is now shown that the k-ads consisting of the elements c_{m_1,...,m_{t+1},r}, for r = 1, 2, · · · , k, m_i = 1, 2, · · · , p′p′′ and i = 1, 2, · · · , t + 1, form a GES(p′p′′, k, t).

1. m_{a_s} ≠ m_{b_s} implies that i_{a_s} + p′(j_{a_s} − 1) ≠ i_{b_s} + p′(j_{b_s} − 1), so that i_{a_s} ≠ i_{b_s}, or j_{a_s} ≠ j_{b_s}, or both. By definition, for i_{a_s} ≠ i_{b_s}, c′_{i_1,...,i_{s−1},i_{a_s},i_{s+1},...,i_{t+1},r} ≠ c′_{i_1,...,i_{s−1},i_{b_s},i_{s+1},...,i_{t+1},r}, and for j_{a_s} ≠ j_{b_s}, c′′_{j_1,...,j_{s−1},j_{a_s},j_{s+1},...,j_{t+1},r} ≠ c′′_{j_1,...,j_{s−1},j_{b_s},j_{s+1},...,j_{t+1},r}. Also note that |c′_{i_1,...,i_{s−1},i_{a_s},i_{s+1},...,i_{t+1},r} − c′_{i_1,...,i_{s−1},i_{b_s},i_{s+1},...,i_{t+1},r}| < p′. Now, if m_{a_s} ≠ m_{b_s}, then c_{m_1,...,m_{s−1},m_{a_s},m_{s+1},...,m_{t+1},r} ≠ c_{m_1,...,m_{s−1},m_{b_s},m_{s+1},...,m_{t+1},r}.
2. Suppose that m_{a_s} ≠ m_{b_s} and m_{a_{s̃}} ≠ m_{b_{s̃}} for two positions s ≠ s̃. Now m_{a_s} ≠ m_{b_s} can happen for i_{a_s} ≠ i_{b_s}, or j_{a_s} ≠ j_{b_s}, or both, whereas m_{a_{s̃}} ≠ m_{b_{s̃}} can occur if i_{a_{s̃}} ≠ i_{b_{s̃}}, or j_{a_{s̃}} ≠ j_{b_{s̃}}, or both. Hence these disagreements can arise in nine possible pairs of combinations, formed by taking one case from the occurrence of m_{a_s} ≠ m_{b_s} and another from the occurrence of m_{a_{s̃}} ≠ m_{b_{s̃}}. Whenever i_{a_s} ≠ i_{b_s}, or i_{a_{s̃}} ≠ i_{b_{s̃}}, or both, the GES(p′, k, t) property gives

(c′_{i_{a_1},...,i_{a_{t+1}},r_z})^{t+1}_{z=1} ≠ (c′_{i_{b_1},...,i_{b_{t+1}},r_z})^{t+1}_{z=1}.

Similarly, whenever j_{a_s} ≠ j_{b_s}, or j_{a_{s̃}} ≠ j_{b_{s̃}}, or both, the GES(p′′, k, t) property gives

(c′′_{j_{a_1},...,j_{a_{t+1}},r_z})^{t+1}_{z=1} ≠ (c′′_{j_{b_1},...,j_{b_{t+1}},r_z})^{t+1}_{z=1}.

In each of the nine combinations it then follows, as in the previous case, that

(c_{m_{a_1},...,m_{a_{t+1}},r_z})^{t+1}_{z=1} ≠ (c_{m_{b_1},...,m_{b_{t+1}},r_z})^{t+1}_{z=1}.

A natural extension of the above composition can be stated as follows:
Theorem 7.1.
Suppose n = 2^{r_0} p_1^{r_1} p_2^{r_2} · · · p_l^{r_l} for distinct odd primes p_1, p_2, . . . , p_l and t < k < min{2^{r_0}, p_1^{r_1}, p_2^{r_2}, . . . , p_l^{r_l}}. Then GES(n, k, t) exists.

Lemma 7.2. Let t < k′ < k. Then the existence of GES(n, k, t) implies that GES(n, k′, t) exists.

8. Construction of binary matrices via GES

In this section, a construction of CS matrices from GES(n, k, t) is proposed. Let us represent the n^{t+1} k-tuples obtained from GES(n, k, t) as {(t_{j1}, . . . , t_{jk}) : j = 1, . . . , n^{t+1}}. Note that 0 ≤ t_{ji} ≤ n − 1 for all j = 1, . . . , n^{t+1} and i = 1, . . . , k. From a k-tuple (t_{j1}, . . . , t_{jk}), we form a binary vector v_j of length nk such that

v_j(i) = 1, if i = (l − 1)n + t_{jl} + 1 for some 1 ≤ l ≤ k,
v_j(i) = 0, elsewhere.

Using the n^{t+1} k-tuples, n^{t+1} binary vectors, each of length nk, are obtained. Taking these binary vectors as columns, we obtain a binary matrix Φ(n, k, t) of size nk × n^{t+1}. We treat Φ(n, k, t) as the GES matrix of index n, k, t. The following observations hold:

1. Φ(n, k, t) has k row blocks, each of size n.
2. Each column of Φ(n, k, t) has exactly k ones and contains a single 1 in each row block.
3. As the intersection between any two distinct k-tuples of GES(n, k, t) is at most t, the nonzero overlap between any two distinct columns of Φ(n, k, t) is at most t.

Lemma 8.1. The coherence µ_{Φ(n,k,t)} of Φ(n, k, t) is at most t/k.

Proof. Each column of Φ(n, k, t) has exactly k ones, which implies that the ℓ_2-norm of each column is √k. Also, the nonzero overlap between any two distinct columns of Φ(n, k, t) is at most t, so the absolute value of the inner product between any two distinct columns is at most t. Hence the coherence µ_{Φ(n,k,t)} of Φ(n, k, t) is at most t/k.

Remark 8.2. The maximum possible column size of any binary matrix is C(m, t+1)/C(k, t+1) [19], where C(·, ·) denotes the binomial coefficient, m is the row size, k is the number of ones in each column and t is the maximum overlap between any two columns.
If m = nk, which is the case for Φ(n, k, t), and t is fixed, then the maximum possible column size is

C(nk, t + 1)/C(k, t + 1) = Θ(n^{t+1}),    (2)

where a = Θ(b) means that there exist two constants c_1, c_2 such that c_1 b ≤ a ≤ c_2 b. Hence the column size of Φ(n, k, t) is of the maximum possible order.

From Lemma 8.1 and Proposition 2.2, it follows that the matrix Φ(n, k, t) so constructed satisfies RIP.

Theorem 8.3. The matrix Φ = (1/√k) Φ(n, k, t) satisfies RIP with δ_{k′} = t(k′ − 1)/k for any k′ < k/t + 1.

9. GES as a rectangular array

Definition 9.1. (Rectangular Array:) A rectangular array is a two-dimensional array of k-tuples with the following properties:

• (GES 1): each array entry is a k-tuple of numbers obtained from {0, . . . , n − 1};
• (GES 2): two distinct k-tuples from the same column do not intersect;
• (GES 3): two distinct k-tuples from the same row have at most t − 1 intersections;
• (GES 4): any two distinct k-tuples in the array can have at most t intersections.

Theorem 9.2. For p a prime or prime power, ES(p, p − 1) is a p × p square array of (p − 1)-tuples such that

• each entry is a (p − 1)-tuple of numbers obtained from {0, . . . , p − 1},
• there is no intersection between any two (p − 1)-tuples on the same row or the same column,
• there is exactly one intersection between any two distinct (p − 1)-tuples which are not from the same row or column.

Proof. Consider the finite field F_p = {f_0 = 0, f_1, . . . , f_{p−1}}, where p is a prime or a prime power. Let S_p be the collection of polynomials of degree at most 1 with zero constant term. It is easy to check that the cardinality of S_p is |S_p| = p. For P ∈ S_p, define the set S_{pP} = {P_j = P + f_j : j = 0, . . . , p − 1}. Fix any ordered (p − 1)-tuple z ∈ F_p^{p−1}, so that here k = p − 1. For simplicity, we consider z = (f_1, . . . , f_{p−1}).
An ordered (p − 1)-tuple is formed by evaluating P_j at each of the points of z, that is, d_{P,j} := (P_j(f_1), · · · , P_j(f_{p−1})). In order to make d_{P,j} a (p − 1)-tuple of numbers on {0, · · · , p − 1}, we replace f_i with its index i. Now the tuples d_{P,j}, for j = 0, . . . , p − 1, form one column; as |S_p| = p, there are p such columns. Therefore we get an array of size p × p with each entry being a (p − 1)-tuple.

Let us take two (p − 1)-tuples from the same column. As they belong to the same column, the corresponding polynomials are of the form P + f_i and P + f_j for i ≠ j, with P being a polynomial of degree at most 1 with zero constant term. Since P + f_i and P + f_j never agree, there is no intersection between tuples coming from the same column.

Let us take two (p − 1)-tuples from the same row. As they belong to the same row, the corresponding polynomials are of the form P_1 + f_i and P_2 + f_i, with P_1 and P_2 being distinct polynomials of degree at most 1 with zero constant term. Since P_1 + f_i and P_2 + f_i agree only at zero, there is no intersection between tuples coming from the same row, as the polynomials are not evaluated at zero while forming the tuples.

Let us take two (p − 1)-tuples from a different row and a different column. The corresponding polynomials are of the form P_1 + f_i and P_2 + f_j for i ≠ j, with P_1 and P_2 being distinct polynomials of degree at most 1 with zero constant term. Since P_1 + f_i and P_2 + f_j agree at exactly one nonzero point, there is exactly one intersection between tuples coming from a different row and a different column.

Example 9.3. Let us first take F_3 = Z_3 = {0, 1, 2} and fix the ordered 2-tuple Z = (1, 2). The set of polynomials of degree at most one with zero constant term is S_3 = {0, x, 2x}. The arrangement of the polynomials in the construction of GES(3, 2, 1) is

0        x        2x
1        x + 1    2x + 1
2        x + 2    2x + 2
After evaluating the polynomials at Z, we get GES(3, 2, 1) as a 3 × 3 array of 2-tuples:

(0, 0) (1, 2) (2, 1)
(1, 1) (2, 0) (0, 2)
(2, 2) (0, 1) (1, 0)

One may observe that GES(3, 2, 1) satisfies the properties of Theorem 9.2.

Remark 9.4. Note that the third condition of Theorem 9.2 is stronger than (ES 3) given in Section 3. Later, we use this property while calculating the block coherence of binary matrices coming from Euler Squares.

9.1. GES(p, k, t) as a rectangular array

Consider a two-dimensional array whose p rows correspond to the p possible values of the constant coefficient of the polynomials and whose columns correspond to the particular values of the remaining t coefficients; each entry is the k-tuple obtained by evaluating the corresponding polynomial. The array obtained is of size p × p^t, with each entry being a k-tuple, and it satisfies (GES 1), (GES 2), (GES 3) and (GES 4).

Consider the finite field F_p = {f_0 = 0, f_1, . . . , f_{p−1}}, where p is a prime or a prime power. Let S_p be the collection of polynomials of degree at most t (where t < p − 1) with zero constant term; the cardinality of S_p is |S_p| = p^t. For P ∈ S_p, define the set S_{pP} = {P_j = P + f_j : j = 0, . . . , p − 1}. Fix any ordered k-tuple z ∈ F_p^k with t < k ≤ p − 1. For simplicity, we consider z = (f_1, . . . , f_k). An ordered k-tuple is formed by evaluating P_j at each of the points of z, that is, d_{P,j} := (P_j(f_1), · · · , P_j(f_k)). In order to make d_{P,j} a k-tuple of numbers on {0, · · · , p − 1}, we replace f_i with its index i. Now the tuples d_{P,j}, for j = 0, . . . , p − 1, form one column; as |S_p| = p^t, there are p^t such columns. Therefore we get an array of size p × p^t with each entry being a k-tuple.

Let us take two k-tuples from the same column. As they belong to the same column, the corresponding polynomials are of the form P + f_i and P + f_j for i ≠ j, with P being a polynomial of degree at most t with zero constant term.
Since P + f_i and P + f_j never agree, there is no intersection between tuples coming from the same column.

Let us take two k-tuples from the same row. As they belong to the same row, the corresponding polynomials are of the form P_1 + f_i and P_2 + f_i, with P_1 and P_2 being distinct polynomials of degree at most t with zero constant term. Since P_1 + f_i and P_2 + f_i agree at most at t − 1 nonzero points, there are at most t − 1 intersections between tuples coming from the same row. Finally, let us take two k-tuples from a different row and a different column. The corresponding polynomials are of the form P_1 + f_i and P_2 + f_j for i ≠ j, with P_1 and P_2 being polynomials of degree at most t with zero constant term. Since P_1 + f_i and P_2 + f_j agree at most at t points, there are at most t intersections between tuples coming from a different row and column.

Example: Let us take F_5 = Z_5 = {0, 1, 2, 3, 4} and fix the ordered 4-tuple Z = (1, 2, 3, 4). The arrangement of the polynomials in the construction of GES(5, 4, 1) is

0        x        2x        3x        4x
1        x + 1    2x + 1    3x + 1    4x + 1
2        x + 2    2x + 2    3x + 2    4x + 2
3        x + 3    2x + 3    3x + 3    4x + 3
4        x + 4    2x + 4    3x + 4    4x + 4

Evaluating these polynomials at Z = (1, 2, 3, 4), we obtain GES(5, 4, 1) as

(0, 0, 0, 0) (1, 2, 3, 4) (2, 4, 1, 3) (3, 1, 4, 2) (4, 3, 2, 1)
(1, 1, 1, 1) (2, 3, 4, 0) (3, 0, 2, 4) (4, 2, 0, 3) (0, 4, 3, 2)
(2, 2, 2, 2) (3, 4, 0, 1) (4, 1, 3, 0) (0, 3, 1, 4) (1, 0, 4, 3)
(3, 3, 3, 3) (4, 0, 1, 2) (0, 2, 4, 1) (1, 4, 2, 0) (2, 1, 0, 4)
(4, 4, 4, 4) (0, 1, 2, 3) (1, 3, 0, 2) (2, 0, 3, 1) (3, 2, 1, 0)

9.2. GES(p′p′′, k, t) as a rectangular array

As in the prime or prime power case, one can arrange {(c_{m_1,...,m_{t+1},1}, c_{m_1,...,m_{t+1},2}, . . .
, c m ,m ,...,m t +1 ,k ) } p ′ p ′′ m s =1 as a rectangular matrix of size p ′ p ′′ × ( p ′ p ′′ ) t which satisfies the same properties given before.We provide an equivalent form of GES( p ′ p ′′ , k, t ) as a rectangular matrix of size p ′ p ′′ × ( p ′ p ′′ ) t in the following way:Following the construction in subsection 9.1, let d Pi , for i = 1 , . . . , ( p ′ ) t , form i th columnof GES( p ′ , k, t ) and d Qj , for j = 1 , . . . , ( p ′′ ) t , form the j th column of GES( p ′′ , k, t ). Now acolumn d P,Qi,j is formed by d P,Qi,j = d Pi + ( d Qj − k ) p ′ , where k is a k − tuple of all ones. Since d Pi has p ′ number of k − tuples and d Qj has p ′′ number of k − tuples, one gets p ′ p ′′ such k − tuples in a column and each entry of d P,Qi,j liesin { , , . . . , p ′ p ′′ − } as each entry of d Pi lies in { , , . . . , p ′ − } and each entry of d Qj lies in { , , . . . , p ′′ − } . For every possible combination of i and j, we get ( p ′ p ′′ ) t columns.Therefore, we can obtain a rectangle D P,Q of size p ′ p ′′ × ( p ′ p ′′ ) t with entry being a k − tupleand each entry of the k − tuples lies between 0 and p ′ p ′′ − . For 1 ≤ ℓ ≤ k, let us denote the ℓ th entry of s th k − tuple of d Pi , ℓ th entry of n th k − tuple of d Qj and ℓ th entry of m th k − tupleof d P,Qi,j as d Pi ( s, ℓ ) , d Qj ( n, ℓ ) and d P,Qi,j ( m, ℓ ), respectively, where 1 ≤ s ≤ p ′ , ≤ n ≤ p ′′ and1 ≤ m ≤ p ′ p ′′ . Let m = m , then we get, d P,Qi,j ( m , ℓ ) = d P,Qi,j ( m , ℓ ) . It follows from the factthat d Pi ( s , ℓ ) = d Pi ( s , ℓ ) and d Pj ( n , ℓ ) = d Qj ( n , ℓ ) for s = s and n = n . Hence D P,Q satisfies (GES 2). Using properties of GES( p ′ , k, t ) and GES( p ′′ , k, t ) and from constructionprocedure, it is easy to check that D P,Q satisfies (GES 3) and (GES 4) too. Hence, D P,Q forms a GES( p ′ p ′′ , k, t ). Theorem 9.5. Suppose n = 2 r p r p r . . . , p r l l for distinct odd primes p , p , . . . , p l and t < k < min { r , p r , p r , . . . , p r l l } . 
Then GES(n, k, t) exists, which can be represented as an n × n^t array with each entry being a k-tuple of numbers taken from {0, 1, . . . , n − 1}, and has the following properties:

• Two distinct k-tuples from the same column do not intersect.
• Two distinct k-tuples from the same row have at most t − 1 intersections.
• Any two distinct k-tuples in the array can have at most t intersections.

10. Recovery guarantees for block sparse signals via GES

In this section, making use of the properties of GES, we show that the binary matrices constructed from GES are capable of recovering block sparse signals.

Theorem 10.1. If GES(n, k, t) exists, then a sparse matrix of size nk × n^{t+1} exists, which consists of n^t orthonormal blocks, each of size nk × n.

Proof. The binary matrix Φ(n, k, t) of size nk × n^{t+1} is obtained from GES(n, k, t). Every column of Φ(n, k, t) corresponds to a unique k-tuple of GES(n, k, t). We arrange the columns of Φ(n, k, t) to form a block orthogonal matrix: the ℓ-th block (of block size nk × n) is formed by taking the n columns of Φ(n, k, t) corresponding to the n k-tuples coming from the ℓ-th column of GES(n, k, t). From Theorem 9.5, we know that two k-tuples belonging to the same column of a GES do not have any intersection. As a result, the inner product between any two different columns within a block is zero, implying thereby that each block is orthogonal. Consequently, the block matrix (1/√k)Φ(n, k, t) has n^t orthonormal blocks, where each block is of size nk × n.

In the case of a generalized Euler square, the conditions given in Theorem 2.5 for the successful recovery of a block sparse signal of block size d via BOMP make sense provided the block coherence of (1/√k)Φ(n, k, t) is strictly less than 1/d. In view of this, our next objective is to choose n, k and d such that the block coherence of (1/√k)Φ(n, k, t) becomes strictly less than 1/d. For simplicity, we first establish the block coherence of Euler square matrices.

Theorem 10.2.
Suppose p is a prime or a prime power, d ≤ p − 1 and d divides p. Then a binary matrix of size p(p − 1) × p^2 with block coherence 1/(p − 1) exists, which consists of p^2/d orthonormal blocks, each of size p(p − 1) × d.

Proof. Recall that ES(n, k) is the same as GES(n, k, 1). By Theorem 10.1, we obtain (1/√k)Φ(n, k, 1) consisting of n^2/d orthonormal blocks, where each block is of size nk × d. Now take n = p and k = p − 1. Let (1/√k)Φ(n, k, 1)[ℓ] denote the ℓ-th block of (1/√k)Φ(n, k, 1). From the construction of the Euler square described in Theorem 9.2 and the properties of Euler squares, it follows, for ℓ ≠ q, that:

(i) When Φ(n, k, 1)[ℓ] and Φ(n, k, 1)[q] correspond to k-tuples coming from different columns but from the same rows of ES(n, k), we have

((1/√k)Φ(n, k, 1)[ℓ])^T ((1/√k)Φ(n, k, 1)[q]) = (1/k)(J_d − I_d),

where J_d denotes the d × d all-ones matrix and I_d the identity; that is, all diagonal entries of (Φ(n, k, 1)[ℓ])^T Φ(n, k, 1)[q] are zero and all off-diagonal entries are one. The maximum eigenvalue of ((1/√k)Φ(n, k, 1)[ℓ])^T ((1/√k)Φ(n, k, 1)[q]) is (d − 1)/k.

(ii) When Φ(n, k, 1)[ℓ] and Φ(n, k, 1)[q] correspond to k-tuples coming from different columns and different rows of ES(n, k), we have

((1/√k)Φ(n, k, 1)[ℓ])^T ((1/√k)Φ(n, k, 1)[q]) = (1/k) J_d,

that is, (Φ(n, k, 1)[ℓ])^T Φ(n, k, 1)[q] is an all-ones matrix. The maximum eigenvalue of ((1/√k)Φ(n, k, 1)[ℓ])^T ((1/√k)Φ(n, k, 1)[q]) is d/k.

(iii) When Φ(n, k, 1)[ℓ] and Φ(n, k, 1)[q] correspond to k-tuples coming from the same column of ES(n, k), we have

((1/√k)Φ(n, k, 1)[ℓ])^T ((1/√k)Φ(n, k, 1)[q]) = 0,

where 0 denotes the zero matrix.

Consequently, the block coherence µ_B of (1/√k)Φ(n, k, 1) is (1/d)(d/k) = 1/k = 1/(p − 1).

Remark 10.3. The block coherence of (1/√(p − 1))Φ(p, p − 1, 1) is at most 1/(p − 1), which can also be obtained from the fact that the coherence of Φ(p, p − 1, 1) is at most 1/(p − 1).
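For a small prime, these coherence computations can be checked numerically. The following Python sketch (the helpers `euler_square_matrix` and `block_coherence` are our own illustrative names, not notation from the paper) assembles (1/√k)Φ(p, p − 1, 1) with columns grouped by Euler-square column, so that each group of p columns forms an orthonormal block; with blocks of size d = 1 the block coherence reduces to the coherence 1/(p − 1).

```python
import numpy as np

def euler_square_matrix(p):
    """(1/sqrt(k)) * Phi(p, p-1, 1) for a prime p, with k = p - 1.

    Each (p-1)-tuple (a_1, ..., a_k) of ES(p, p-1) becomes a 0/1 column of
    length p*k: the l-th length-p segment carries a single 1 at position a_l.
    Consecutive groups of p columns come from one column of the Euler square.
    """
    k = p - 1
    cols = []
    for c in range(p):          # column of ES(p, p-1): polynomial c*x
        for j in range(p):      # row of ES(p, p-1): constant term j
            tup = [(c * x + j) % p for x in range(1, k + 1)]
            v = np.zeros(p * k)
            for l, a in enumerate(tup):
                v[l * p + a] = 1.0
            cols.append(v)
    return np.column_stack(cols) / np.sqrt(k)

def block_coherence(A, d):
    """mu_B = max over distinct blocks of spectral_norm(A[l]^T A[q]) / d."""
    nblocks = A.shape[1] // d
    blocks = [A[:, i * d:(i + 1) * d] for i in range(nblocks)]
    return max(np.linalg.norm(B1.T @ B2, 2) / d
               for i, B1 in enumerate(blocks)
               for q, B2 in enumerate(blocks) if i != q)
```

For p = 5, the matrix is of size 20 × 25 with five orthonormal blocks of size 20 × 5, and `block_coherence(A, 1)` evaluates to 1/4 = 1/(p − 1), in line with Remark 10.3.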
The significance of Theorem 10.2 is that it uses the fact that there is exactly one intersection between any two distinct (p − 1)-tuples which are not from the same row or column, and proves that the block coherence of (1/√(p − 1))Φ(p, p − 1, 1) is exactly 1/(p − 1).

Theorem 10.4. Suppose GES(n, k, 1) exists. Then, for d < k and n being a multiple of d, a sparse matrix of size nk × n^2 exists which consists of n^2/d orthonormal blocks, each of size nk × d, and the block coherence of (1/√k)Φ(n, k, 1) is at most 1/k.

Proof. The proof follows from Theorem 10.1 and the fact that the coherence of (1/√k)Φ(n, k, 1) is at most 1/k.

Now from Theorem 2.5, the following theorem follows immediately.

Theorem 10.5. A sufficient condition for BOMP to recover a block s-sparse signal x, with each block of length d, from y = (1/√k)Φ(n, k, 1)x is

s < (1/2)(k/d + 1),

provided d ≤ k.

Now we derive the expression for the block coherence of binary matrices constructed from GES.

Theorem 10.6. Suppose GES(n, k, t) exists. Then, for d ≤ ⌊k/t⌋ and n being a multiple of d, a sparse matrix of size nk × n^{t+1} exists which consists of n^{t+1}/d orthonormal blocks, each of size nk × d, and the block coherence of (1/√k)Φ(n, k, t) is at most t/k.

Proof. The proof follows from Theorem 10.1 and the fact that the coherence of (1/√k)Φ(n, k, t) is at most t/k.

Now from Theorem 2.5, the following theorem follows immediately.

Theorem 10.7. A sufficient condition for BOMP to recover a block s-sparse signal x, with each block of length d, from y = (1/√k)Φ(n, k, t)x is

s < (1/2)(k/(dt) + 1),

provided d ≤ ⌊k/t⌋.

11. Concluding Remarks

In our present work, we have constructed block orthogonal binary matrices via Euler Squares with low block coherence, which supports the recovery of block sparse signals. We have also introduced and constructed Generalized Euler Squares (GES) by evaluating higher-degree polynomials over a finite field.
The binary matrices constructed from GES have been shown to possess a better aspect ratio compared to their counterparts generated using Euler Squares. Finally, the block orthogonal structure of the GES based binary matrices has been established.

12. Acknowledgments

The first author is thankful for the support that he receives from the Science and Engineering Research Board (SERB), Government of India (PDF/2017/002966).

References

[1] H. F. MacNeish, "Euler Squares," Ann. Math., vol. 23, pp. 221-227, 1922.
[2] R. C. Bose, "On the application of the properties of Galois fields to the problem of construction of Hyper-Graeco-Latin squares," Sankhya, The Indian Journal of Statistics, vol. 3, part 4, 1938.
[3] R. R. Naidu, P. Jampana and C. S. Sastry, "Deterministic compressed sensing matrices: Construction via Euler Squares and applications," IEEE Transactions on Signal Processing, vol. 64, no. 14, pp. 3566-3575, 2016.
[4] P. Indyk, "Explicit constructions for compressed sensing of sparse signals," in Proc. ACM-SIAM Symp. Discrete Algorithms, pp. 30-33, 2008.
[5] S. Li and G. Ge, "Deterministic construction of sparse sensing matrices via finite geometry," IEEE Transactions on Signal Processing, vol. 62, no. 11, 2014.
[6] W. Z. Lu, K. Kpalma and J. Ronisn, "Sparse binary matrices of LDPC codes for compressed sensing," in Data Compression Conference, Snowbird, Utah, USA, pp. 405-405, 2012.
[7] J. Bourgain, S. Dilworth, K. Ford, S. Konyagin and D. Kutzarova, "Explicit constructions of RIP matrices and related problems," Duke Math. J., vol. 159, pp. 145-185, 2011.
[8] E. Candes, "The restricted isometry property and its implications for compressed sensing," Comptes Rendus Mathematique, vol. 346, pp. 589-592, 2008.
[9] E. Candes, J. Romberg and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Comm. Pure and Appl. Math., vol. 59, pp. 1207-1223, 2006.
[10] R. A. DeVore, "Deterministic constructions of compressed sensing matrices," Journal of Complexity, vol. 23, pp. 918-925, 2007.
[11] E. Candes and T. Tao, "Decoding by linear programming," IEEE Trans. Inform. Theory, vol. 51, pp. 4203-4215, 2005.
[12] R. Baraniuk, M. Davenport, R. DeVore and M. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, vol. 28, no. 3, pp. 253-263, 2008.
[13] A. M. Bruckstein, D. L. Donoho and M. Elad, "From sparse solutions of systems of equations to sparse modeling of signals and images," SIAM Review, vol. 51, no. 1, pp. 34-81, 2009.
[14] B. S. Kashin and V. N. Temlyakov, "A remark on compressed sensing," Matematicheskie Zametki, vol. 82, no. 6, pp. 829-837, 2007.
[15] J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proceedings of the IEEE, vol. 98, no. 6, pp. 948-958, 2010.
[16] E. Candes and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, pp. 21-30, March 2008.
[17] B. S. Kashin, "Widths of certain finite-dimensional sets and classes of smooth functions," Izv. Akad. Nauk SSSR, Ser. Mat., vol. 41, pp. 334-351, 1977; English transl. in Math. USSR Izv., vol. 11, pp. 317-333, 1978.
[18] S. Li, F. Gao, G. Ge and S. Zhang, "Deterministic construction of compressed sensing matrices via algebraic curves," IEEE Trans. Inf. Theory, vol. 58, pp. 5035-5041, 2012.
[19] A. Amini and F. Marvasti, "Deterministic construction of binary, bipolar and ternary compressed sensing matrices," IEEE Trans. Inf. Theory, vol. 57, pp. 2360-2370, 2011.
[20] A. Gilbert and P. Indyk, "Sparse recovery using sparse matrices," Proceedings of the IEEE, vol. 98, no. 6, pp. 937-947, 2010.
[21] B. Adcock, A. Hansen, C. Poon and B. Roman, "Breaking the coherence barrier: A new theory for compressed sensing," arXiv:1302.0561v3, 2013.
[22] D. Bryant and P. Cathain, "An asymptotic existence result on compressed sensing matrices," arXiv:1403.2807v1, 2014.
[23] M. Fickus, D. G. Mixon and J. C. Tremain, "Steiner equiangular tight frames," Linear Algebra Appl., vol. 436, no. 5, pp. 1014-1027, 2012.
[24] A. G. Dimakis, R. Smarandache and P. O. Vontobel, "LDPC codes for compressed sensing," IEEE Trans. Inf. Theory, vol. 58, no. 5, pp. 3093-3114, 2012.
[25] Y. C. Eldar, P. Kuppinger and H. Bolcskei, "Block-sparse signals: Uncertainty relations and efficient recovery," IEEE Transactions on Signal Processing, vol. 58, no. 6, pp. 3042-3054, 2010.
[26] A. Hedayat and E. Seiden, "F-square and orthogonal F-squares design: A generalization of Latin square and orthogonal Latin squares design," The Annals of Mathematical Statistics, vol. 41, no. 6, pp. 2035-2044, 1970.
[27] X. Shen, Y. Z. Cai, C. L. Liu and C. P. Kruskal, "Generalized Latin squares I," Discrete Applied Mathematics, vol. 25, issues 1-2, pp. 155-178, 1989.