Binary Subspace Chirps
Tefjol Pllaha*, Olav Tirkkonen*, and Robert Calderbank†
* Department of Communications and Networking, Aalto University, Finland
† Department of Electrical and Computer Engineering, Duke University, NC, USA
Emails: {tefjol.pllaha, olav.tirkkonen}@aalto.fi, [email protected]
Abstract
We describe in detail the interplay between binary symplectic geometry and quantum computation, with the ultimate goal of constructing highly structured codebooks. The Binary Chirps (BCs) are complex Grassmannian lines in N = 2^m dimensions used in deterministic compressed sensing and random/unsourced multiple access in wireless networks. Their entries are fourth roots of unity and can be described in terms of second-order Reed-Muller codes. The Binary Subspace Chirps (BSSCs) are a unique collection of BCs of ranks ranging from r = 0 to r = m, embedded in N dimensions according to an on-off pattern determined by a rank-r binary subspace. This yields a codebook that is asymptotically 2.38 times larger than the codebook of BCs, has the same minimum chordal distance as the codebook of BCs, and an alphabet minimally extended from {±1, ±i} to {±1, ±i, 0}. Equivalently, we show that BSSCs are stabilizer states, and we characterize them as columns of a well-controlled collection of Clifford matrices. By construction, the BSSCs inherit all the properties of BCs, which in turn makes them good candidates for a variety of applications. For applications in wireless communication, we use the rich algebraic structure of BSSCs to construct a low-complexity decoding algorithm that is reliable against Gaussian noise. In simulations, BSSCs exhibit an error probability comparable to or slightly lower than that of BCs, both for single-user and multi-user transmissions.

1 Introduction

Codebooks of complex projective (Grassmann) lines, or tight frames, have found application in multiple problems of interest for communications and information processing, such as code division multiple access sequence design [1], precoding for multi-antenna transmissions [2], and network coding [3].
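Since the codebooks discussed throughout are collections of Grassmannian lines, the relevant distance is the chordal distance between the lines spanned by two unit-norm codewords, d(u, v) = √(1 − |u†v|²). A minimal sketch (the function name is ours, written for illustration):

```python
import numpy as np

def chordal_distance(u, v):
    """Chordal distance between the complex lines spanned by unit vectors u and v:
    d(u, v) = sqrt(1 - |<u, v>|^2); invariant under phase rotations of u and v."""
    ip = abs(np.vdot(u, v))          # |u^dagger v|
    return np.sqrt(max(0.0, 1.0 - ip ** 2))

# Orthogonal lines in C^2 attain the maximum distance 1.
e0 = np.array([1.0 + 0j, 0.0])
e1 = np.array([0.0, 1.0 + 0j])
d = chordal_distance(e0, e1)  # -> 1.0
```

Two codewords that differ only by a unimodular scalar span the same line and are at distance 0, which is exactly the Grassmannian identification used below.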
Contemporary interest in such codes arises, e.g., from deterministic compressed sensing [4–8], virtual full-duplex communication [9], mmWave communication [10], and random access [11]. One of the challenges/promises of 5G wireless communication is to enable massive machine-type communications (mMTC) in the Internet of Things (IoT), in which a massive number of low-cost devices sporadically and randomly access the network [12]. In this scenario, users are assigned a unique signature sequence which they transmit whenever active [13]. A twin use case is unsourced multiple access, where a large number of messages is transmitted infrequently. Polyanskiy [12] proposed a framework in which communication occurs in blocks of N channel uses, and the task of the receiver is to correctly identify L active users (messages) out of B, with one regime of interest being N = 30000, L = 250, and B = 2^100. Ever since its introduction, there have been several follow-up works [11, 14–17], extensions to a massive MIMO scenario [18] where the base station has a very large number of antennas, and a discussion of the fundamental limits on what is possible [19].

Given the massive number of to-be-supported (to-be-encoded) users (messages), the design criteria are fundamentally different, and one simply cannot rely on classical multiple-access channel solutions. For instance, interference is unavoidable since it is impossible to have orthogonal signatures/codewords. On the other hand, given that there is only a small number of active users, the interference is limited.

Figure 1. Interplay of the binary world and the complex world. Prior art is depicted in yellow. The contributions of this paper are depicted in green. See also [21, 24].
Thus, the challenge becomes to design highly structured codebooks of large cardinality, along with a reliable and low-complexity decoding algorithm. Codebooks of Binary Chirps (BCs) [5, 20] provide such a highly structured Grassmannian line codebook in N = 2^m dimensions, with additional desirable properties. All entries come from a small alphabet, being fourth roots of unity, and can be described in terms of second-order Reed-Muller (RM) codes. RM codes have the fascinating property that a Walsh-Hadamard measurement cuts the solution space in half. This yields a single-user decoding complexity of O(N log^2 N), coming from the Walsh-Hadamard transform and the number of required measurements. Additionally, the number of codewords is reasonably large, growing as 2^{m(m+3)/2} = N^{(m+3)/2}, while the minimum chordal distance is 1/√2.

We expand the BC codebook to the codebook of Binary Subspace Chirps (BSSCs) by collectively considering all BCs in 2^r dimensions, r = 0, . . . , m, in N = 2^m dimensions. That is, given a BC in 2^r dimensions, we embed it in N = 2^m dimensions via a unique on-off pattern determined by a rank-r binary subspace. Thus, a BSSC is characterized by a sparsity r, a BC part parametrized by a binary symmetric matrix S_r ∈ Sym(r; 2) and a binary vector b ∈ F_2^m, and a unique on-off pattern parametrized by a rank-r binary subspace H ∈ G(m, r; 2); see (79) for the formal definition. The codebook of BSSCs inherits all the desirable properties of BCs, and in addition it has asymptotically about 2.384 times more codewords. Thus, an active device with a rank-r signature will transmit α/√(2^r), α ∈ {±1, ±i}, during the time slots determined by the rank-r subspace H, and will be silent otherwise. This resembles the model of [9], in which active devices can also be used (to listen) as receivers during the off-slots. Given the structure of BSSCs, a unified rank, on-off pattern, and BC part (in this order) estimation technique is needed.
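For illustration, a BC in N = 2^m dimensions can be generated directly from its parameters (S, b); this sketch assumes the standard second-order Reed-Muller parametrization w(v) = i^{v^T S v + 2 b^T v}/√N, with the exponent taken mod 4:

```python
import numpy as np
from itertools import product

def binary_chirp(S, b):
    """Binary Chirp in N = 2^m dimensions (standard second-order Reed-Muller
    parametrization, assumed here): w(v) = i^{v^T S v + 2 b^T v} / sqrt(N),
    with S an m x m binary symmetric matrix and b a binary vector."""
    m = len(b)
    N = 2 ** m
    w = np.empty(N, dtype=complex)
    for idx, v in enumerate(product((0, 1), repeat=m)):
        v = np.array(v)
        w[idx] = 1j ** int((v @ S @ v + 2 * (b @ v)) % 4)
    return w / np.sqrt(N)

S = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1]])  # symmetric, binary
b = np.array([1, 0, 1])
w = binary_chirp(S, b)  # unit-norm vector with entries in {±1, ±i}/sqrt(8)
```

All entries are fourth roots of unity scaled by 1/√N, matching the alphabet {±1, ±i} discussed above.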
In [21], a reliable on-off pattern detection was proposed, which made use of a Weyl-type transform [22] on m-qubit diagonal Pauli matrices; see (103). The algorithm can be described in the common language of symplectic geometry and quantum computation. The key insight here is to view BSSCs as common eigenvectors of maximal sets of commuting Pauli matrices, commonly referred to as stabilizer groups. Indeed, we show that BSSCs are nothing else but stabilizer states [23], and their sparsity is determined by the diagonal portion of the corresponding stabilizer group; see Corollaries 2 and 3. We also show that each BSSC is a column of a unique Clifford matrix (86), which itself is the common eigenspace of a unique stabilizer group (101); see also Theorem 1. The interplay between the binary world and the complex world is depicted in Figure 1.

Making use of these structural results, the on-off pattern detection of [21] can be generalized to recover the BC part of the BSSC, this time by using the Weyl-type transform on the off-diagonal part of the corresponding stabilizer group. This yields a single-user BSSC reconstruction, as described in Algorithm 2. In [24], we added Orthogonal Matching Pursuit (OMP) to obtain a multi-user BSSC reconstruction (see Algorithm 3) with reliable performance when there is a small number of active users. As the number of active users increases, so does the interference, which has a quite destructive effect on the on-off pattern. However, state-of-the-art solutions for BCs [8, 17, 25], such as slotting and patching, can be used to reduce the interference. Preliminary simulations show that BSSCs exhibit a lower error probability than BCs. This is because BSSCs have fewer closest neighbors on average than BCs. In addition, BSSCs are uniformly distributed over the sphere, which makes them optimal when dealing with Gaussian noise.

Throughout, the decoding complexity is kept at bay by the underlying symplectic geometry.
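The OMP component of the multi-user decoder can be illustrated generically (this is the textbook OMP greedy loop over an arbitrary dictionary, not the BSSC-specific Algorithm 3):

```python
import numpy as np

def omp(A, y, k):
    """Generic Orthogonal Matching Pursuit: greedily pick the dictionary column
    most correlated with the residual, then re-fit the support by least squares."""
    residual = y.astype(complex)
    support = []
    x = np.zeros(0, dtype=complex)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        if j not in support:
            support.append(j)
        x, *_ = np.linalg.lstsq(A[:, support].astype(complex),
                                y.astype(complex), rcond=None)
        residual = y - A[:, support] @ x
    return support, x

# Toy dictionary: e_0, ..., e_3 and a fifth, spread-out column.
A = np.hstack([np.eye(4), np.full((4, 1), 0.5)])
y = 2.0 * A[:, 0] + 1.0 * A[:, 2]
support, x = omp(A, y, 2)  # recovers support [0, 2]
```

In the BSSC setting the correlation step is replaced by the structured single-user decoder, which is what keeps the overall complexity low.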
The sparsity, the BC part, and the on-off pattern of a BSSC can be described in terms of the Bruhat decomposition (22) of a symplectic matrix. Indeed, the unique Clifford matrix (86) of which a BSSC is a column is parametrized by a coset representative (24), as described in Lemma 1. In turn, such a coset representative determines a unique stabilizer group (101). We use this interplay to reconstruct a BSSC by reconstructing the stabilizer group that stabilizes the given BSSC. This alone reduces the complexity from O(N^2) to O(N log N).

The paper is organized as follows. In Section 2 we review the basics of binary symplectic geometry and quantum computation. In order to obtain a unique parametrization of BSSCs, we use Schubert cells and the Bruhat decomposition of the symplectic group. In Section 3 we lift the Bruhat decomposition of the symplectic group to obtain a decomposition of the Clifford group. Additionally, we parametrize those Clifford matrices whose columns are BSSCs. In Section 4 we give the formal definition of BSSCs, along with their algebraic and geometric properties. In Sections 5 and 6 we present reliable low-complexity decoding algorithms and discuss simulation results. We end the paper with some conclusions and directions for future research.

All vectors, binary or complex, will be columns. F_2 denotes the binary field, GL(m; 2) denotes the group of binary m × m invertible matrices, and Sym(m; 2) denotes the group of binary m × m symmetric matrices. We will denote matrices (resp., vectors) with upper case (resp., lower case) bold letters. A^T will denote the transpose and A^{−T} the inverse transpose. cs(A) and rs(A) will denote the column space and the row space of A, respectively. Since all our vectors are columns, we will typically deal with column spaces, except when we work with notions from quantum computation, where row spaces are customary. I_m will denote the m × m identity matrix (complex or binary).
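Several constructions in the sequel require elementary linear algebra over F_2, e.g., computing the rank of a binary matrix. A minimal, self-contained helper (our own utility, written for illustration):

```python
import numpy as np

def rank_gf2(M):
    """Rank of a binary matrix over F_2, by Gaussian elimination mod 2."""
    A = (np.array(M, dtype=np.uint8) % 2).copy()
    rank, rows, cols = 0, A.shape[0], A.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]  # move pivot row up
        for r in range(rows):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]              # eliminate mod 2
        rank += 1
    return rank

M = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]  # third row = sum of first two mod 2
r = rank_gf2(M)  # -> 2
```

The same elimination skeleton extends to computing inverses and (column) reduced echelon forms mod 2.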
G(m, r; 2) denotes the binary Grassmannian, that is, the set of all r-dimensional subspaces of F_2^m. U(N) denotes the group of unitary N × N complex matrices, and † will denote the conjugate transpose of a matrix.

2 Preliminaries

In this section we will introduce all the preliminary notions needed for navigating the connection between the 2m-dimensional binary world and the 2^m-dimensional complex world, as depicted in Figure 1. The primary bridge used here is the well-known homomorphism (62) and the Bruhat decomposition of the symplectic group. We focus on cosets of the symplectic group modulo the semidirect product GL(m; 2) ⋉ Sym(m; 2). These cosets are characterized by a rank r = 0, . . . , m and a binary subspace H ∈ G(m, r; 2), which we will think of as the column space of an m × r binary matrix in column reduced echelon form. We will use Schubert cells as a formal and systematic approach. This also provides a framework for describing well-known facts from binary symplectic geometry (e.g., Remark 4). Finally, Subsection 2.3 discusses common notions from quantum computation.

2.1 Schubert Cells

Here we discuss the Schubert decomposition of the Grassmannian G(m, r; 2) with respect to the standard flag
{0} = V_0 ⊂ V_1 ⊂ · · · ⊂ V_m,   (1)
where V_i = span{e_1, . . . , e_i} and {e_1, . . . , e_m} is the standard basis of F_2^m. Fix a set of r indices I = {i_1, . . . , i_r} ⊂ {1, . . . , m}, which, without loss of generality, we assume to be in increasing order. The Schubert cell C_I is the set of all m × r matrices that have 1 in the leading positions (i_j, j), 0 to the left, right, and above each leading position, and every other entry free. This is simply the set of all binary matrices in column reduced echelon form with leading positions I. By counting the number of free entries in each column one concludes that
dim C_I = Σ_{j=1}^{r} [(m − i_j) − (r − j)].
(2)

Fix H ∈ G(m, r; 2), and think of it as the column space of an m × r matrix H. After column operations, it will belong to some cell C_I, and to emphasize this fact we will denote it by H_I. Schubert cells have a well-known duality theory, which we outline next. Let H̃_I be such that (H_I)^T H̃_I = 0. Of course, cs(H̃_I) ∈ G(m, m − r; 2). Let Ĩ := {1, . . . , m} \ I and put C̃_I := {H̃_I | H_I ∈ C_I}. There is a bijection between {C̃_I}_{|I|=r} and {C_J̃}_{|J|=r}, realized by reverting the rows and columns of H̃_I and identifying J̃ = {i_1, . . . , i_{m−r}} with Ĵ := {m − i_1 + 1, . . . , m − i_{m−r} + 1}. With this identification, we will denote by H_Ĩ the unique element of the cell C_Ĩ that is equivalent with H̃_I, obtained by reverting the rows and columns of H̃_I:
H_Ĩ = P_{ad,m} H̃_I P_{ad,m−r},   (3)
where P_{ad} is the antidiagonal matrix in the respective dimension.

Each cell has a distinguished element: I_I ∈ C_I will denote the identity matrix I_m restricted to I, that is, the unique element in C_I that has all free entries equal to 0. Note that I_I has as its jth column the i_jth column of I_m, and thus its nonzero rows form I_r. In particular, if |I| = m then I_I = I_m. We also have I_Ĩ ∈ C_Ĩ. With this notation one easily verifies that
(I_I)^T H_I = I_r,  (I_I)^T I_Ĩ = 0,  (H̃_I)^T I_Ĩ = I_{m−r}.   (4)
In addition, H_I can be completed to an invertible matrix
P_I := [H_I  I_Ĩ] ∈ GL(m; 2).   (5)
Note that when I_I is completed to an invertible matrix it gives rise to a permutation matrix. Next, (4) along with the defining equality (H_I)^T H̃_I = 0 implies that
P_I^{−T} = [I_I  H̃_I].   (6)
Let us describe this framework with an example.

Example 1.
Let m = 3 and r = 2. Then

C_{1,2} = [1 0; 0 1; u v],  C̃_{1,2} = (u, v, 1)^T,  C_{3̂} ≅ C_1 = (1, v, u)^T,
C_{1,3} = [1 0; u 0; 0 1],  C̃_{1,3} = (u, 1, 0)^T,  C_{2̂} ≅ C_2 = (0, 1, u)^T,
C_{2,3} = [0 0; 1 0; 0 1],  C̃_{2,3} = (1, 0, 0)^T,  C_{1̂} ≅ C_3 = (0, 0, 1)^T.

Let us focus on I = {1, 3}. Then C_I is constructed directly by definition, that is, in column reduced echelon form with leading positions 1 and 3. Whereas, C̃_I is constructed so that (H_I)^T H̃_I = 0. Then we revert the rows and columns (only rows in this case) to obtain the last object, where we identify the complement Ĩ = {2} with {3 − 2 + 1} = {2}.¹

In this case, as we see from above, there is only one free bit. This yields two subspaces/matrices H_I, which when completed to an invertible matrix as in (5) yield
P_{u=0} = [1 0 0; 0 0 1; 0 1 0],  P_{u=1} = [1 0 0; 1 0 1; 0 1 0].   (7)
Then one directly computes
P_{u=0}^{−T} = [1 0 0; 0 0 1; 0 1 0],  P_{u=1}^{−T} = [1 0 1; 0 0 1; 0 1 0].   (8)
Compare (8) with (6): the first two columns are obviously I_I, whereas the last column is precisely H̃_I, i.e., the element of C_{2̂} ≅ C_2 with rows reverted. Note here that when all the free bits are zero, the resulting P_I is simply a permutation matrix, and in this case P_I^{−T} = P_I.

2.2 The Symplectic Group

We first briefly describe the symplectic structure of F_2^{2m}, via the symplectic bilinear form
⟨(a, b) | (c, d)⟩_s := b^T c + a^T d.   (9)
One is naturally interested in automorphisms that preserve this symplectic structure. It follows directly from the definition that a 2m × 2m matrix F preserves ⟨•|•⟩_s iff F Ω F^T = Ω, where
Ω = [0_m I_m; I_m 0_m].   (10)
We will denote the group of all such symplectic matrices F by Sp(2m; 2). Equivalently,
F = [A B; C D] ∈ Sp(2m; 2)   (11)
iff AB^T, CD^T ∈ Sym(m; 2) and AD^T + BC^T = I_m. It is well known that
|Sp(2m; 2)| = 2^{m^2} ∏_{i=1}^{m} (4^i − 1).   (12)
Consider the row space H := rs[A B] of the m × 2m upper half of a symplectic matrix F ∈ Sp(2m; 2). Because AB^T is symmetric, one has [A B] Ω [A B]^T = 0, and thus ⟨x | y⟩_s = 0 for all x, y ∈ H. We will denote by (•)^{⊥_s} the dual with respect to the symplectic inner product (9).
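The defining condition F Ω F^T = Ω and the order formula (12) are easy to sanity-check by brute force in small dimension (a sketch; matrix conventions as above):

```python
import numpy as np
from itertools import product

def omega(m):
    """The Gram matrix Omega = [[0, I_m], [I_m, 0]] of the symplectic form, over F_2."""
    O = np.zeros((2 * m, 2 * m), dtype=np.uint8)
    O[:m, m:] = np.eye(m, dtype=np.uint8)
    O[m:, :m] = np.eye(m, dtype=np.uint8)
    return O

def is_symplectic(F, m):
    return np.array_equal((F @ omega(m) @ F.T) % 2, omega(m))

def sp_order(m):
    """|Sp(2m; 2)| = 2^{m^2} * prod_{i=1}^{m} (4^i - 1)."""
    n = 2 ** (m * m)
    for i in range(1, m + 1):
        n *= 4 ** i - 1
    return n

# Brute force for m = 1: all 16 binary 2x2 matrices.
count = sum(
    is_symplectic(np.array(bits, dtype=np.uint8).reshape(2, 2), 1)
    for bits in product((0, 1), repeat=4)
)
```

For m = 1 this counts 6 matrices, i.e., Sp(2; 2) = GL(2; 2), in agreement with (12).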
It follows that H ⊆ H^{⊥_s}, that is, H is self-orthogonal, or totally isotropic. Moreover, H is maximal totally isotropic because dim H = m, and thus H = H^{⊥_s}. The set of all self-dual/maximal totally isotropic subspaces is commonly referred to as the Lagrangian Grassmannian L(2m, m; 2) ⊂ G(2m, m; 2). It is well known that
|L(2m, m)| = ∏_{i=1}^{m} (2^i + 1).   (13)

¹ In this specific case there is no need for identification, but this is only a coincidence. For different choices of I one needs a true identification.

Next we describe the Bruhat decomposition of Sp(2m; 2). While the decomposition holds in a general group-theoretic setting [26], here we give a rather elementary approach; see also [27]. We start the decomposition by writing
Sp(2m; 2) = ⋃_{r=0}^{m} C_r,   (14)
where
C_r = { F = [A B; C D] ∈ Sp(2m; 2) | rank C = r }.   (15)
In Sp(2m; 2) there are two distinguished subgroups:
S_D := { F_D(P) = [P 0; 0 P^{−T}] | P ∈ GL(m; 2) },   (16)
S_U := { F_U(S) = [I S; 0 I] | S ∈ Sym(m; 2) }.   (17)
Let P be the semidirect product of S_D and S_U, that is,
P = { F_D(P) F_U(S) | P ∈ GL(m; 2), S ∈ Sym(m; 2) }.   (18)
Note that the order of the multiplication does not matter, since
F_D(P) F_U(S) = F_U(PSP^T) F_D(P),   (19)
and PSP^T is again symmetric. It is straightforward to verify that P = C_0, and that in general
C_r = { F_1 F_Ω(r) F_2 | F_1, F_2 ∈ P },   (20)
where
F_Ω(r) = [I_{m|−r} I_{m|r}; I_{m|r} I_{m|−r}],   (21)
with I_{m|r} being the block matrix with I_r in the upper left corner and 0 elsewhere, and I_{m|−r} = I_m − I_{m|r}. Note here that Ω = F_Ω(m) and Ω F_Ω(r) Ω = F_Ω(m − r). It then follows by (20) (and by (19)) that every F ∈ Sp(2m; 2) can be written as
F = F_D(P_1) F_U(S_1) F_Ω(r) F_U(S_2) F_D(P_2).
(22)

The above constitutes the Bruhat decomposition of a symplectic matrix; see also [24, 28].

Remark 1.
It was shown in [29] that a symplectic matrix F can be decomposed as
F = F_D(P_1) F_U^T(S_1) Ω F_Ω(r) F_U(S_2) F_D(P_2).   (23)
If we, instead, decompose ΩF as in (23) and insert ΩΩ = I_{2m} between F_D(P_1) and F_U^T(S_1), we see that (23) reduces to (22). This reduction from a seven-component decomposition to a five-component decomposition is beneficial in quantum circuit design [28, 30].

In what follows we will focus on the right action of P on Sp(2m; 2), that is, on the right cosets in Sp(2m; 2)/P. It is an immediate consequence of (22) and (19) that a coset representative will look like
F_D(P) F_U(S) F_Ω(r),   (24)
for some rank r, invertible P, and symmetric S. However, two different invertible matrices P may yield representatives of the same coset. We make this precise below.

Lemma 1. A right coset in
Sp(2m; 2)/P is uniquely characterized by a rank r, an r × r symmetric matrix S_r ∈ Sym(r; 2), and an r-dimensional subspace H in F_2^m.

Proof. Write a coset representative F as in (24). This immediately determines r. Next, write S in block form
S = [S_r X; X^T S_{m−r}],   (25)
where S_r, S_{m−r} are symmetric. Denote by S̃_r, Ŝ_{m−r} ∈ Sym(m; 2) the matrices that have S_r and S_{m−r} in the upper left and lower right corner, respectively, and 0 elsewhere. Put also
X̃ = [I_r 0; X^T I_{m−r}].   (26)
With this notation we have
F_U(S) F_Ω(r) = F_U(S̃_r) F_Ω(r) F_U(Ŝ_{m−r}) F_D(X̃).   (27)
In other words, F_U(S) F_Ω(r) and F_U(S̃_r) F_Ω(r) belong to the same coset. Now consider an invertible
P̃ = [P_r 0; 0 P_{m−r}].   (28)
It is also straightforward to verify that
F_U(S̃_r) F_Ω(r) F_D(P̃) = F_U(S̃_r) F_D(P̂) F_Ω(r) = F_D(P̂) F_U(P̂^{−1} S̃_r P̂^{−T}) F_Ω(r),
where
P̂ = [P_r^{−T} 0; 0 P_{m−r}],   (29)
and the second equality follows by (19). Thus F_D(P′) F_U(S′) F_Ω(r), where
P′ := P P̂,  S′ := P̂^{−1} S̃_r P̂^{−T},   (30)
represents the same coset. Note that the transformation (30) does not change the column space of C (that is, of the lower left corner of F), which is an r-dimensional subspace in F_2^m. □

Next, using Schubert cells, we will choose a canonical coset representative. We will use the same notation as in the above lemma. Let r and S̃_r be as above. To choose P, think of the r-dimensional subspace H from the above lemma as the column space of a matrix H, which belongs to some Schubert cell C_I. We will use the coset representative
F_O(P_I, S_r) := F_D(P_I) F_U(S̃_r) F_Ω(r),   (31)
where P_I is as in (5).

Let F ∈ Sp(2m; 2) be in block form as in (11), and assume it is written as
F = F_D(P^{−T}) F_U(S̃_r) F_Ω(r) F_D(M) F_U(S).   (32)
Multiplying both sides of (32) on the left by F_D(P^T) and on the right by F_U(S), and then comparing the respective blocks, we obtain
P^T A = (S̃_r + I_{m|−r}) M,   (33)
P^T A S = P^T B + I_{m|r} M^{−T},   (34)
P^{−1} C = I_{m|r} M,   (35)
P^{−1} C S = P^{−1} D + I_{m|−r} M^{−T},   (36)
which we can solve for M, S̃_r, and S, assuming that we know F (and implicitly P, which can be determined from the column space of the lower left block of F). First we find M. For this, recall that S̃_r has nonzero entries only in the upper left r × r block. Thus, it follows from (33) that the last m − r rows of M coincide with the last m − r rows of P^T A. Similarly, it follows from (35) that the first r rows of M coincide with the first r rows of P^{−1} C. With M in hand we have
S̃_r = P^T A M^{−1} + I_{m|−r}.   (37)
By using (35) in (36) we see that the first r rows of MS coincide with the first r rows of P^{−1} C S. Similarly, by using (33) in (34), we see that the last m − r rows of MS coincide with the last m − r rows of P^T A S. Multiplication with M^{−1} yields S. We collect everything in Algorithm 1, which gives not only the Bruhat decomposition but also a canonical coset representative.

Algorithm 1 Bruhat Decomposition of a Symplectic Matrix
Input: A symplectic matrix F.
1. Block decompose F into A, B, C, D as in (11).
2. r = rank(C).
3. Find P as in (5) from cs(C).
4. M_up := the first r rows of P^{−1} C.
5. M_lo := the last m − r rows of P^T A.
6. M = [M_up; M_lo].
7. S̃_r = P^T A M^{−1} + I_{m|−r}.
8. S_r := the upper left r × r block of S̃_r.
9. N_up := the first r rows of P^{−1} D + I_{m|−r} M^{−T}.
10. N_lo := the last m − r rows of P^T B − I_{m|r} M^{−T}.
11. S = M^{−1} [N_up; N_lo].
Output: r, P, S_r, M, S

We end this section with a few remarks.

Remark 2.
One can follow an analogous path by considering the left action of P on Sp(2m; 2). This follows most directly from the observation that if F = F_D(P) F_U(S) F_Ω(r) is a right coset representative, then F^{−1} = F_Ω(r) F_U(S) F_D(P^{−1}) is a left coset representative.

Remark 3.
Note that for the extremal case r = m, a coset representative as in (31) is completely determined by a symmetric matrix S ∈ Sym(m; 2), since in this case, as one would recall, P_I = I_I = I_m.

Remark 4.
Directly from the definition we have
|P| = |GL(m; 2)| · |Sym(m; 2)| = 2^{m^2} ∏_{i=1}^{m} (2^i − 1),   (38)
which combined with (12) yields
|Sp(2m; 2)/P| = ∏_{i=1}^{m} (2^i + 1) = |L(2m, m)|.   (39)
The above is of course not a coincidence. Indeed, Sp(2m; 2) acts transitively from the right on L(2m, m). Next, consider rs[0_m I_m] ∈ L(2m, m). If a symplectic matrix F as in (11) fixes this space, then C = 0 and A is invertible. Additionally, because F is symplectic to start with, we obtain D = A^{−T} and that AB^T =: S is symmetric. Thus B = S A^{−T}, and F ∈ P. That is, P is the stabilizer (in group-action terminology) of rs[0_m I_m] ∈ L(2m, m). The mapping
Sp(2m; 2)/P −→ L(2m, m),  F_O(P_I, S_r) ↦ rs[ I_{m|r} P_I^T   (I_{m|r} S̃_r + I_{m|−r}) P_I^{−1} ],   (40)
is well defined (because, as mentioned, the upper half of a symplectic matrix is maximal isotropic). It is also injective, and thus bijective for cardinality reasons. Of course one can have many bijections, but we choose this one due to Theorem 1.

2.3 Quantum Computation

Fix N = 2^m, and let {e_0, e_1} be the standard basis of C^2, which is commonly referred to as the computational basis. For v = (v_1, . . . , v_m) ∈ F_2^m set e_v := e_{v_1} ⊗ · · · ⊗ e_{v_m}. Then {e_v | v ∈ F_2^m} is the standard basis of C^N ≅ (C^2)^{⊗m}. The Pauli matrices are
I_2,  σ_x = [0 1; 1 0],  σ_z = [1 0; 0 −1],  σ_y = i σ_x σ_z.   (41)
For a, b ∈ F_2^m put
D(a, b) := σ_x^{a_1} σ_z^{b_1} ⊗ · · · ⊗ σ_x^{a_m} σ_z^{b_m}.   (42)
Directly by definition we have
D(a, 0) e_v = e_{v+a}  and  D(0, b) e_v = (−1)^{b^T v} e_v,   (43)
and thus the former is a permutation matrix, whereas the latter is a diagonal matrix. Then
D(a, b) D(c, d) = (−1)^{b^T c} D(a + c, b + d).   (44)
Thanks to (44) we have
D(a, b) D(c, d) = (−1)^{b^T c + a^T d} D(c, d) D(a, b).
(45)

In turn, D(a, b) and D(c, d) commute iff
⟨(a, b) | (c, d)⟩_s := b^T c + a^T d = 0,   (46)
that is, iff (a, b) and (c, d) are orthogonal with respect to the symplectic inner product (9). Also thanks to (44), the set
HW_N := { i^k D(a, b) | a, b ∈ F_2^m, k = 0, 1, 2, 3 }   (47)
is a subgroup of U(N), called the Heisenberg-Weyl group. We will call its elements Pauli matrices as well. Directly from the definition, we have a surjective homomorphism of groups
Ψ_N : HW_N −→ F_2^{2m},  i^k D(a, b) ↦ (a, b).   (48)
Its kernel is ker Ψ_N = {±I_N, ±iI_N} ≅ Z_4. We will denote by HW*_N := HW_N / ker Ψ_N the projective Heisenberg-Weyl group, and by Ψ*_N the induced isomorphism.

Note that ⟨•|•⟩_s defines a nondegenerate bilinear form on F_2^{2m} that translates commutativity in HW_N to orthogonality in F_2^{2m}. A commutative subgroup S ⊂ HW_N is called a stabilizer group if −I_N ∉ S. Thus, for a stabilizer S, thanks to (46) we have Ψ_N(S) ⊆ Ψ_N(S)^{⊥_s} [31, 32]. In addition, because Ψ_N restricted to a stabilizer is an isomorphism, we have that |S| = 2^r iff dim Ψ_N(S) = r. We will think of Ψ_N(S) as the row space of a full-rank matrix [A B], where both A and B are r × m binary matrices. We will write
E(A, B) := { E(x^T A, x^T B) | x ∈ F_2^r },   (49)
where E(a, b) := i^{a^T b} D(a, b). Combining this with (43) and (44) we obtain
E(a, b) = i^{a^T b} Σ_{v ∈ F_2^m} (−1)^{b^T v} e_{v+a} e_v^T.   (50)
Next, if rs[A B] is self-orthogonal in F_2^{2m}, then E(A, B) is a stabilizer. Moreover, Ψ_N(E(A, B)) = rs[A B], which yields a one-to-one correspondence between stabilizers in HW_N and self-orthogonal subspaces in F_2^{2m}. It also follows that a maximal stabilizer must have 2^m elements. Thus there is a one-to-one correspondence between maximal stabilizers and the Lagrangian Grassmannian L(2m, m) ⊂ G(2m, m).
Of particular interest are the maximal stabilizers
X_N := E(I_m, 0_m) = { E(a, 0) | a ∈ F_2^m },   (51)
Z_N := E(0_m, I_m) = { E(0, b) | b ∈ F_2^m },   (52)
which we naturally identify with X_N := Ψ_N(X_N) = rs[I_m 0_m] and Z_N := Ψ_N(Z_N) = rs[0_m I_m].

What follows holds in general for any stabilizer, but for our purposes we need only focus on the maximal ones. Let S = E(A, B) ⊂ HW_N be a maximal stabilizer and let {E_1, . . . , E_m} be an independent generating set of S (that is, span{Ψ_N(E_1), . . . , Ψ_N(E_m)} = Ψ_N(S)). Consider the complex vector space [33]
V(S) := { v ∈ C^N | E_i v = v, i = 1, . . . , m }.   (53)
It is well known (see, e.g., [34]) that dim V(S) = 2^m / |S| = 1. A unit-norm vector that generates it is called a stabilizer state, and with a slight abuse of notation it is also denoted by V(S). Because we are disregarding scalars, it is beneficial to think of a stabilizer state as a Grassmannian line, that is, V(S) ∈ G(C^N, 1). Next,
Π_S := ∏_{i=1}^{m} (I_N + E_i)/2 = (1/N) Σ_{E ∈ S} E   (54)
is a projection onto V(S).

Given a stabilizer as above, for any d ∈ F_2^m, the set {(−1)^{d_1} E_1, . . . , (−1)^{d_m} E_m} also describes a stabilizer S_d. Similarly to (54), put
Π_{S_d} := ∏_{i=1}^{m} (I_N + (−1)^{d_i} E_i)/2 = (1/N) Σ_{x ∈ F_2^m} (−1)^{d^T x} E(x^T A, x^T B).   (55)
It is readily verified that the {Π_{S_d} | d ∈ F_2^m} are pairwise orthogonal, and that a stabilizer group determines a resolution of the identity
I_N = Σ_{d ∈ F_2^m} Π_{S_d}.   (56)
Thus every such projection determines a one-dimensional subspace, which with another abuse of notation (see also Remark 5 below) we also call a stabilizer state.

Remark 5.
A stabilizer state as in (53) is the unit-norm vector that is fixed by the stabilizer. Now, for every E ∈ S = E(A, B) there exists a unique x ∈ F_2^m such that E = E(x^T A, x^T B). For d ∈ F_2^m, consider the map χ_d : E(x^T A, x^T B) ↦ (−1)^{d^T x}. Then Π_{S_d} projects onto
V(S_d) := { v ∈ C^N | E v = χ_d(E) v for all E ∈ S },   (57)
that is, the state that under the action of E is scaled by χ_d(E). Then of course V(S) = V(S_0), where 0 ∈ F_2^m. In addition, the map χ_d is a linear character of S, which has led to non-binary considerations [35].

Remark 6. Let {E_1, . . . , E_m} be an independent generating set of a maximal stabilizer S, and consider S_d. By [34, Prop. 10.4] it follows that for each i = 1, . . . , m there exists G_i ∈ HW_N such that G_i^† E_i G_i = −E_i and G_i^† E_j G_i = E_j for i ≠ j. Now put G_d := G_1^{d_1} · · · G_m^{d_m}. Then
G_d^† Π_S G_d = Π_{S_d}.   (58)
It follows that {V(S_d) | d ∈ F_2^m} is an orthonormal basis of C^N. In [36] the authors used a similar insight to construct maximal sets of mutually unbiased bases.

The Clifford group in N dimensions [37] is defined to be the normalizer of HW_N in U(N), modulo U(1):
Cliff_N = { G ∈ U(N) | G HW_N G^† = HW_N } / U(1).   (59)
The reason one quotients out U(1) ≅ {α I_N | |α| = 1} is to obtain a finite group. In this case HW*_N is a normal subgroup of Cliff_N.

Let {e_1, . . . , e_{2m}} be the standard basis of F_2^{2m}, and consider G ∈ Cliff_N. Let c_i ∈ F_2^{2m} be such that
G E(e_i) G^† = ±E(c_i).   (60)
Then the matrix F_G whose ith row is c_i is a symplectic matrix such that
G E(c) G^† = ±E(c^T F_G)   (61)
for all c ∈ F_2^{2m}. Based on (61) we obtain a group homomorphism
Φ : Cliff_N −→ Sp(2m; 2),  G ↦ F_G,   (62)
with kernel ker Φ = HW*_N [30]. This map is also surjective; see Section 3.1, where specific preimages are given. From (12) and (48) (|HW*_N| = 2^{2m}) it follows that
|Cliff_N| = 2^{m^2 + 2m} ∏_{i=1}^{m} (4^i − 1).   (63)

Remark 7.
Since Φ is a homomorphism, we have that Φ(G^†) = F_G^{−1}, and as a consequence G^† E(c) G = ±E(c^T F_G^{−1}). We will make use of this simple observation later on to determine when a column of G is an eigenvector of E(c). This interplay with symplectic geometry provides an exponential complexity reduction in various applications. Here, we will focus on efficiently computing common eigenspaces of maximal stabilizers.

The phase and Hadamard matrices
G_P = [1 0; 0 i]  and  H = (1/√2) [1 1; 1 −1]   (64)
are easily seen to be in the Clifford group Cliff_2. Some authors also include (G_P H)^3 = exp(2πi/8) I_2 [32], which in our case would disappear as a scalar quotient. Thus (63) differs by a factor of 1/8 from what is commonly considered the cardinality of the Clifford group; see A003956 at oeis.org. For our purposes the phases are irrelevant.

3 Decomposition of the Clifford Group

In this section we will make use of the Bruhat decomposition of Sp(2m; 2) to obtain a decomposition of Cliff_N. To do so, we will use the surjectivity of Φ from (62) and determine preimages of the coset representatives from (31). The preimages of the symplectic matrices F_D(P), F_U(S), and F_Ω(r) under Φ are
G_D(P) : e_v ↦ e_{P^T v},   (65)
G_U(S) := diag( i^{v^T S v mod 4} )_{v ∈ F_2^m},   (66)
G_Ω(r) := H^{⊗r} ⊗ I_{2^{m−r}},   (67)
respectively, where G_D(P) is the unitary permutation matrix realizing (65). We refer the reader to [30, Appendix I] for details.

Remark 8.
Note that directly by the definition of the Hadamard matrix we have
$$H_N := G_\Omega(m) = \frac{1}{\sqrt{2^m}}\left[(-1)^{\mathbf v^T\mathbf w}\right]_{\mathbf v,\mathbf w\in\mathbb F_2^m}. \qquad (68)$$
Whereas, for any $r = 1,\ldots,m$, one straightforwardly computes
$$G_\Omega(r)\cdot Z(m,r) = \frac{1}{\sqrt{2^r}}\left[(-1)^{\mathbf v^T\mathbf w} f(\mathbf v,\mathbf w,r)\right]_{\mathbf v,\mathbf w\in\mathbb F_2^m}, \qquad (69)$$
where $Z(m,r) := I_{2^r}\otimes\sigma_z^{\otimes(m-r)}$ is the diagonal Pauli that acts as $\sigma_z$ on the last $m-r$ qubits, and
$$f(\mathbf v,\mathbf w,r) = \prod_{i=r+1}^m (1 + v_i + w_i) \bmod 2. \qquad (70)$$
Note that the value of $f$ will be 1 precisely when $\mathbf v$ and $\mathbf w$ coincide in their last $m-r$ coordinates, and 0 otherwise. It follows that $f$ is identically 1 when $r = m$, and $f$ is the Kronecker delta $\delta_{\mathbf v,\mathbf w}$ when $r = 0$. We will use $f$ to determine the sparsity, or rank, of a Clifford matrix/stabilizer state. Of course $r = m$ corresponds to fully occupied objects with only nonzero entries; see also Remarks 12 and 13 for the extreme cases $r = 0, 1$.

Example 2 (Example 1 continued). Let us reconsider the invertible matrices from (7). Recall that there we had $m = 3$, $r = 2$. Here we will construct the Cliffords corresponding to the canonical coset representative (40), with $S_r = \mathbf 0_{2\times 2}$. For the case $\mathbf u = \mathbf 0$ one computes $G_D(P_{\mathbf u=\mathbf 0}^T)$ as in (65), and multiplies it (from the right) by $G_\Omega(2)$ as in (67) (we will omit the normalizing factor $1/2$) and then by $Z(3,2)$, to obtain the $8\times 8$ matrix $G_{\mathbf u=\mathbf 0}$ displayed in (71), with entries in $\{+,-,0\}$. As mentioned, (65) by definition yields a permutation matrix. Thus $G_{\mathbf u=\mathbf 0}$ is nothing else but $G_\Omega(2) = H\otimes H\otimes I_2$, with its rows permuted accordingly, and a possible sign introduced to its columns by the diagonal matrix $Z(3,2) = I_4\otimes\sigma_z$. Similarly, for the case $\mathbf u = \mathbf 1$, one obtains the matrix $G_{\mathbf u=\mathbf 1}$ displayed in (72). We will discuss how the $\{+,-,0\}$ patterns are correlated later on. (See Section 4.1 for why we consider the transpose instead of $P$ itself.)

The Bruhat decomposition of $\mathrm{Sp}(2m;2)$ already gives a decomposition of $\mathrm{Cliff}_N$. However, in order to have a concise approach one has to be a bit careful. In this section we will write $G = \Phi^{-1}(F)$, where the equality is taken modulo the center, $HW_N^*\rtimes\mathbb Z_8$. In other words, we will disregard the central part of $\Phi^{-1}(F)$ and consider only the Clifford part. The cyclic group $\mathbb Z_8$ of order 8 comes into play to accommodate 8th roots of unity coming out of products $G_U(S)G_\Omega(r)$. This setup, yet again, confirms the importance of
$$\mathrm{Cliff}_N^* := \{\exp(i\pi k/4)\,G \mid k\in\mathbb Z_8,\ G\in\mathrm{Cliff}_N\}. \qquad (73)$$
Let $\mathcal G = \{G_D(P)G_U(S)\mid P\in\mathrm{GL}(m;2),\ S\in\mathrm{Sym}(m;2)\}$ be the preimage of $\mathcal P$ from (18). For obvious reasons, it is referred to as the Hadamard-free group; see also [38]. As for the case of the symplectic group, this group acts from the right on matrices of the form
$$G_D(P_1)G_U(S_1)G_\Omega(r)G_U(S_2)G_D(P_2), \qquad (74)$$
and thus a coset representative looks like
$$G_F := G_D(P)G_U(S)G_\Omega(r). \qquad (75)$$
One is interested in coset representatives. To understand these, it is enough to understand the preimages of generators of $\mathrm{GL}(m;2)$ and $\mathrm{Sym}(m;2)$. Let us start with the former, which can be generated by two elements [39]. Namely, it can be generated by $P_1 := I_m + E_{12}$, where $E_{12}$ is the elementary (binary) matrix with 1 in position $(1,2)$ and 0 elsewhere, and the cyclic permutation matrix $\Pi_{\mathrm{cycl}}$ acting as the permutation $(12\cdots m)$. It is of interest to consider a larger set of generators. Let $\Pi_{i,j}$ be a transposition matrix. Then of course $P_1$ along with all the $\Pi_{i,j}$ generate $\mathrm{GL}(m;2)$. While $\Pi_{i,j}$ swaps dimensions $i$ and $j$ in $\mathbb F_2^m$, it is easily seen that $\Phi^{-1}(F_D(\Pi_{i,j}))$ swaps the tensor dimensions $i$ and $j$ in $(\mathbb C^2)^{\otimes m}$. Moreover,
$$\Phi^{-1}(F_D(P_1)) = \mathrm{CNOT}\otimes I_{2^{m-2}}. \qquad (76)$$
The above $4\times 4$ matrix CNOT is known in quantum computation as the controlled-NOT quantum gate. In itself, the CNOT gate is of the form $G_D(P)$ where
$$P = P^{-1} = I_2 + E_{12} = \begin{bmatrix}1 & 1\\ 0 & 1\end{bmatrix}. \qquad (77)$$
For $\mathrm{Sym}(m;2)$ we consider the matrices $S_{\mathbf v} := \mathbf v\mathbf v^T$, where $\mathbf v\in\mathbb F_2^m$ is a vector with at most two nonzero entries. Then
$$\Phi^{-1}(F_U(S_{\mathbf v})) = \frac{1}{\sqrt 2}\left(I_N + iE(\mathbf 0,\mathbf v)\right). \qquad (78)$$
Note that when $\mathbf v$ has exactly one nonzero entry, in position $j$, the $j$th tensor dimension will contain, up to a unit scalar, the phase matrix $G_P$ from (64). On the other hand, when $\mathbf v$ has exactly two nonzero entries, in positions $i$ and $j$, (78) gives rise to $G_{CZ}(G_P\otimes G_P)$ in tensor dimensions $i$ and $j$, where $G_{CZ} = \operatorname{diag}(1,1,1,-1)$. The latter is known in quantum computation as the controlled-Z (CZ) quantum gate, and it is of the form $G_U(I_2)$.

In conclusion, the Bruhat decomposition of $\mathrm{Sp}(2m;2)$ directly yields the fundamental quantum gates described above. Similar ideas were used in [28], where the depth of stabilizer circuits was considered. Classically, there exist several decompositions of the symplectic group, which in principle would yield a decomposition of the Clifford group. Particularly important in quantum computation is the decomposition into symplectic transvections [40]; see [41, 42] for a detailed description and [43] for using these ideas to sample the Clifford group.

4 Binary Subspace Chirps
Binary subspace chirps (BSSCs) were introduced in [21] as a generalization of binary chirps (BCs) [5]. In this section we describe the geometric and algebraic features of BSSCs, and use their structure to develop a reconstruction algorithm. For each $0\le r\le m$, subspace $H\in G(m,r;2)$, and symmetric $S_r\in\mathrm{Sym}(r;2)$ we will define a unit norm vector in $\mathbb C^N$ as follows. Let $H_I$ be such that $\mathrm{cs}(H_I) = H$, as described in Section 2.1. Then $H_I$ is completed to an invertible $P := P_I$ as in (5). For all $\mathbf b,\mathbf a\in\mathbb F_2^m$ define
$$w_{\mathbf b}^{H,S_r}(\mathbf a) = \frac{1}{\sqrt{2^r}}\, i^{\mathbf a^TP^{-T}SP^{-1}\mathbf a + 2\mathbf b^TP^{-1}\mathbf a}\, f(\mathbf b, P^{-1}\mathbf a, r), \qquad (79)$$
where $S\in\mathrm{Sym}(m;2)$ is the matrix with $S_r$ in the upper left corner and 0 elsewhere, $f$ is as in (70), and the arithmetic in the exponent is done modulo 4. To avoid heavy notation, however, we will omit the superscripts. Then we define a binary subspace chirp to be
$$\mathbf w_{\mathbf b} := \left[w_{\mathbf b}(\mathbf a)\right]_{\mathbf a\in\mathbb F_2^m}\in\mathbb C^N. \qquad (80)$$
Note that when $r = m$ we have $P = I_m$ and $f$ is the identically 1 function. Thus, we obtain the binary chirps [5].

Directly from the definition (and the definition of $f$) it follows that $w_{\mathbf b}(\mathbf a)\ne 0$ precisely when $\mathbf b$ and $P^{-1}\mathbf a$ coincide in their last $m-r$ coordinates. Making use of the structure of $P$ as in (5) we may conclude that $w_{\mathbf b}(\mathbf a)\ne 0$ iff
$$\widetilde H_I^T\mathbf a = \mathbf b_{m-r}, \qquad (81)$$
where $\mathbf b_{m-r}\in\mathbb F_2^{m-r}$ consists of the last $m-r$ coordinates of $\mathbf b$. Note that since $\widetilde H_I$ has full column rank $m-r$, it follows that (81) has $2^r$ solutions, and this in turn implies that $\mathbf w_{\mathbf b}$ has $2^r$ nonzero entries. In particular $\mathbf w_{\mathbf b}$ is a unit norm vector. Concretely, making use of (4) we see that the solution space of (81) is given by
$$\{\widetilde{\mathbf x} := I_{\widetilde I}\,\mathbf b_{m-r} + H_I\mathbf x \mid \mathbf x\in\mathbb F_2^r\}. \qquad (82)$$
We say that the rank $r$ determines the sparsity of $\mathbf w_{\mathbf b}$, and the subspace $H$ (equivalently $H_I$) determines the on-off pattern of $\mathbf w_{\mathbf b}$.

Remark 9.
Fix a subspace chirp $\mathbf w_{\mathbf b}$, and write $\mathbf b^T = [\mathbf b_r^T\ \mathbf b_{m-r}^T]$. Then $w_{\mathbf b}(\mathbf a)\ne 0$ iff $\mathbf a$ is as in (82) for some $\mathbf x\in\mathbb F_2^r$. Making use of (6) and (4) we obtain
$$P^{-1}\mathbf a = \begin{bmatrix}\mathbf x\\ \mathbf b_{m-r}\end{bmatrix}, \qquad (83)$$
and as a consequence $\mathbf a^TP^{-T}SP^{-1}\mathbf a = \mathbf x^TS_r\mathbf x$, where $S_r$ is the (symmetric) upper-left $r\times r$ block of $S$. Thus the nonzero entries of $\mathbf w_{\mathbf b}$ are of the form
$$w_{\mathbf b}(\mathbf x) = \frac{(-1)^{\mathrm{wt}(\mathbf b_{m-r})}}{\sqrt{2^r}}\, i^{\mathbf x^TS_r\mathbf x + 2\mathbf b_r^T\mathbf x} \qquad (84)$$
for $\mathbf x\in\mathbb F_2^r$. Note that there is a slight abuse of notation where we have identified $\mathbf x$ with $P^{-1}\mathbf a$ (thanks to (83) and the fact that $\mathbf b$ is fixed). Above, the function $\mathrm{wt}(\bullet)$ is just the Hamming weight, which counts the number of nonzero entries of a binary vector. We conclude that the on-pattern of a rank $r$ binary subspace chirp is just a binary chirp in $r$ dimensions; compare (84) with [5, Eq. (5)]. It follows that all lower-rank chirps are embedded in $m$ dimensions, which along with all the chirps in $m$ dimensions yield all the binary subspace chirps. As discussed, the embeddings are determined by subspaces.

4.1 Algebraic Structure of BSSCs

In what follows we fix a rank $r$, invertible $P$, and symmetric $S$. Recall that $S$ contains an $r\times r$ symmetric matrix $S_r$ in its upper left corner and 0 otherwise. Next, let $F := F_\Omega(r)F_U(S)F_D(P^T)$ and let $G_F = G_D(P^T)G_U(S)G_\Omega(r)$. With this notation we have $\Phi(G_F) = F$. Recall also that $\{\mathbf e_{\mathbf a}\mid\mathbf a\in\mathbb F_2^m\}$ is the standard basis of $\mathbb C^N$. With the substitution $\mathbf u := P^{-1}\mathbf a$ we have
$$\mathbf w_{\mathbf b} = \sum_{\mathbf a\in\mathbb F_2^m} w_{\mathbf b}(\mathbf a)\,\mathbf e_{\mathbf a} = \frac{1}{\sqrt{2^r}}\sum_{\mathbf u\in\mathbb F_2^m} i^{\mathbf u^TS\mathbf u}(-1)^{\mathbf b^T\mathbf u} f(\mathbf b,\mathbf u,r)\,\mathbf e_{P\mathbf u} = G_D(P^T)\cdot\frac{1}{\sqrt{2^r}}\sum_{\mathbf u\in\mathbb F_2^m} i^{\mathbf u^TS\mathbf u}(-1)^{\mathbf b^T\mathbf u} f(\mathbf b,\mathbf u,r)\,\mathbf e_{\mathbf u}$$
$$= G_D(P^T)G_U(S)G_\Omega(r)Z(m,r)\,\mathbf e_{\mathbf b} \qquad (85)$$
$$= G_F\cdot Z(m,r)\,\mathbf e_{\mathbf b}, \qquad (86)$$
where (85) follows by (69). Note that in (86), the diagonal Pauli $Z(m,r)$ only ever introduces an additional sign on columns of $G_F$. Thus, the binary subspace chirp $\mathbf w_{\mathbf b}$ is nothing else but the $\mathbf b$th column of $G_F$, up to a sign.
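The factorization (86) can be checked numerically in a small case. The sketch below (Python with NumPy; the parameters $m = 3$, $r = 2$, $P = I_3$ and the particular $S_r$ are arbitrary illustrative choices, with binary vectors indexed most-significant-bit first) builds $G_F\,Z(m,r)$ and verifies that its columns are orthonormal, each with exactly $2^r$ nonzero entries of magnitude $1/\sqrt{2^r}$.

```python
import numpy as np
from itertools import product

m, r = 3, 2
N = 2 ** m
vecs = [np.array(v) for v in product([0, 1], repeat=m)]  # F_2^m, MSB-first

# G_U(S) = diag(i^{v^T S v mod 4}) with S = diag(S_r, 0), Eq. (66)
Sr = np.array([[1, 1], [1, 0]])          # an arbitrary symmetric S_r
S = np.zeros((m, m), dtype=int)
S[:r, :r] = Sr
GU = np.diag([1j ** int(v @ S @ v % 4) for v in vecs])

# G_Omega(r) = H^{tensor r} tensor I_{2^{m-r}}, Eq. (67)
H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
GOm = np.eye(1)
for _ in range(r):
    GOm = np.kron(GOm, H2)
GOm = np.kron(GOm, np.eye(2 ** (m - r)))

# Z(m, r) = I_{2^r} tensor sigma_z^{tensor(m-r)}
Z = np.kron(np.eye(2 ** r), np.diag([1, -1]))

# With P = I_m the permutation G_D(P^T) is the identity, so G_F = G_U G_Omega
GF = GU @ GOm
W = GF @ Z                               # columns are the BSSCs w_b, Eq. (86)

unitary = np.allclose(W.conj().T @ W, np.eye(N))
sparsities = [int(np.sum(np.abs(W[:, b]) > 1e-12)) for b in range(N)]
```

As expected, the matrix is unitary and every column has $2^r = 4$ nonzero entries of magnitude $1/2$, i.e., fourth roots of unity scaled by $1/\sqrt{2^r}$.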
However, as mentioned, for our practical purposes a sign (or even a complex unit) is irrelevant.

Example 3 (Examples 1 and 2 continued). Let us consider the case $\mathbf u = \mathbf 0$, and for simplicity, let us set the symmetric $S$ to be the zero matrix, so that $G_U(S)$ is the identity matrix. (Being diagonal, $G_U(S)$ would not affect the on-off pattern anyway.) The on-off pattern of the resulting BSSCs is governed by the $r = 2$ dimensional subspace $H = \mathrm{cs}(H_I)$. The above argument tells us that these BSSCs are precisely the columns of $G_{\mathbf u=\mathbf 0}$ from (71). One verifies this directly using the definition (79). Moreover, the structure of the on-off patterns is completely determined by (82). Indeed, we see in (71) two on-off patterns: one determined by $H$ (if $I_{\widetilde I}\,\mathbf b_{m-r}\in H$) and one determined by its coset (if $I_{\widetilde I}\,\mathbf b_{m-r}\notin H$); there are exactly $2^3/2^2 = 2$ cosets, since $H$ has dimension 2. We have $I_{\widetilde I}\,\mathbf b_{m-r}\in H$ iff $\mathbf b_{m-r} = 0$, which corresponds to half of the columns of $G_{\mathbf u=\mathbf 0}$. Additionally, within each of these columns, the on-off pattern is again governed by $H$: the nonzero entries are in positions/rows indexed by $H$, precisely as described by (81). Since cosets form a partition, it follows immediately that columns indexed by different cosets are orthogonal. Orthogonality of columns within each coset is a bit more delicate to see directly. We will further discuss the general structure of on-off patterns in Section 4.2.

Equation (40) gives a one-to-one correspondence between canonical coset representatives and maximal stabilizers. Above we mentioned that BSSCs are columns of Clifford matrices parametrized by such coset representatives. The last piece of the puzzle is found by simultaneously diagonalizing the commuting matrices of a maximal stabilizer. We make this precise in the following.

Theorem 1.
Let $F$ and $G_F$ be as above. The set $\{\mathbf w_{\mathbf b}\mid\mathbf b\in\mathbb F_2^m\}$ consisting of the columns of $G_F$ is the common eigenbasis of the maximal stabilizer $E\!\left(I_{m|r}P^T,\ (I_{m|r}S + I_{m|-r})P^{-1}\right)$ from (40).

Proof. Consider the matrix $G := G_F$ parametrized by the symplectic matrix $F$, and recall that $\mathbf w_{\mathbf b}$ is the $\mathbf b$th column of $G_F$. It follows from Remark 7 that the columns of $G$ form a common eigenbasis of $E(\mathbf x,\mathbf y)$ iff
$$G^\dagger E(\mathbf x,\mathbf y)G = \pm E([\mathbf x,\mathbf y]^TF^{-1}) \qquad (87)$$
is diagonal. Recall also that $E(\mathbf x,\mathbf y)$ is diagonal iff $\mathbf x = \mathbf 0$, and observe that $F_\Omega(r)^{-1} = F_\Omega(r)$. Thus, $G_\Omega(r)$ will be the common eigenbasis of the maximal stabilizer $\mathcal S$ iff $\pm E([\mathbf x\ \mathbf y]^TF_\Omega(r))$ is diagonal for all $E(\mathbf x,\mathbf y)\in\mathcal S$. Then it is easy to see that such a maximal stabilizer is $E(I_{m|r}, I_{m|-r})$. Next, if $\mathbf w$ is an eigenvector of $E(\mathbf c)$, then
$$G\mathbf w = \pm GE(\mathbf c)\mathbf w = \pm GE(\mathbf c)G^\dagger G\mathbf w = \pm E(\mathbf c^T\Phi(G))\,G\mathbf w$$
implies that $G\mathbf w$ is an eigenvector of $E(\mathbf c^T\Phi(G))$. The proof is concluded by computing $[I_{m|r}\ I_{m|-r}]\,F_U(S)F_D(P^T)$. $\square$

Remark 10.
Note that for $r = m$ one has $E(I_{m|r}, I_{m|-r}) = E(I_m, \mathbf 0_m)$ and $G_\Omega(r) = H_N$. Thus the above theorem covers the well-known fact that $H_N$ is the common eigenbasis of the maximal stabilizer $\mathcal X_N = E(I_m, \mathbf 0_m)$. It is also well known that the standard basis of $\mathbb C^N$ (that is, $I_N$) is the common eigenbasis of the maximal stabilizer $\mathcal Z_N = E(\mathbf 0_m, I_m)$ of diagonal Paulis. This is of course consistent with the aforesaid fact, since $[\mathbf 0_m\ I_m]\,\Omega = [I_m\ \mathbf 0_m]$ and $H_N = \Phi^{-1}(\Omega)$. In this extremal case we also have $P_I = I_m$ and $S_r = S\in\mathrm{Sym}(m;2)$. So the above theorem also covers [44, Lem. 11], which (in the language of this paper) says that the common eigenbasis of $E(I_m, S)$ is $G_U(S)H_N$.

Remark 11.
Theorem 1 is a closed form realization of a more general fact. Let $\mathcal S$ be a maximal stabilizer and let $S = \mathrm{rs}[A\ B]\subset\mathbb F_2^{2m}$ be its corresponding isotropic subspace. Consider also the diagonal Paulis $\mathcal Z_N$ and the corresponding subspace $Z_N = \mathrm{rs}[\mathbf 0_m\ I_m]$. Then, by [30, Alg. 1] there exists $G\in\mathrm{Cliff}_N$ such that $G\mathcal SG^\dagger = \mathcal Z_N$. In other words, $G^\dagger$ simultaneously diagonalizes $\mathcal S$, and moreover, the respective diagonals are Paulis. In the symplectic domain, it follows by [30, Thm. 25] that there are precisely $2^{m(m+1)/2}$ symplectic solutions to the equation $[A\ B]F = [\mathbf 0_m\ I_m]$.

Corollary 1.
The set of all binary subspace chirps $\mathcal V_{BSSC}$ consists of $2^m\prod_{r=1}^m(2^r+1)$ unit norm vectors, while the set of all binary chirps $\mathcal V_{BC}$ consists of $2^{m(m+3)/2}$ unit norm vectors. The ratio of codebook cardinalities is $|\mathcal V_{BSSC}|/|\mathcal V_{BC}|\approx 2.384$.

Proof. By Lemma 1 we need only consider coset representatives (31), which are in bijection (40) with the set of maximal stabilizers $L(2m,m)$. Now the first statement follows by (13). Next, recall that the binary chirps correspond to those binary subspace chirps for which $r = m$, and thus $P = I_m$. In other words, they are parametrized by a symmetric matrix $S\in\mathrm{Sym}(m;2)$ and $\mathbf b\in\mathbb F_2^m$. $\square$

The size of the codebook $\mathcal V_{BSSC}$ can also be straightforwardly deduced from the characterization of Lemma 1. Indeed, each rank zero BSSC has precisely $2^0 = 1$ nonzero entry. Thus there are $2^m$ rank zero BSSCs, parametrized only by a column index $\mathbf b\in\mathbb F_2^m$. On the other hand, rank $r$ BSSCs are characterized by $S\in\mathrm{Sym}(r;2)$ and $H\in G(m,r;2)$. Of course, $|\mathrm{Sym}(r;2)| = 2^{r(r+1)/2}$, whereas the size of the Grassmannian is given by the 2-binomial coefficient, that is,
$$|G(m,r;2)| = \binom{m}{r}_2 = \prod_{i=0}^{r-1}\frac{2^{m-i}-1}{2^{i+1}-1}. \qquad (88)$$
Thus, we have
$$|\mathcal V_{BSSC}| = 2^m\cdot\sum_{r=0}^m 2^{r(r+1)/2}\binom{m}{r}_2 = 2^m\cdot\prod_{r=1}^m(2^r+1), \qquad (89)$$
where the last equality is simply the 2-binomial theorem [45].

Corollary 2.
Each binary subspace chirp is a stabilizer state. The converse is also true.

Proof. Stabilizer states can be defined equivalently as the orbit of $\mathbf e_{\mathbf 0}$ under the action of $\mathrm{Cliff}_N$; see [46] for instance. Then the first statement follows by (86). The converse is true due to cardinalities. $\square$

(a) $r = 0$. (b) $r = 1$. (c) $r = 2$ (BCs).

Figure 2. BSSCs in $N = 4$ dimensions. White = 0, Blue = 1, Cyan = $-1$, Red = $i$, Magenta = $-i$.

Corollary 3.
Let $\mathcal S$ be a maximal stabilizer. Then the stabilizer state $V(\mathcal S)$ is a rank $r$ BSSC iff $|\mathcal S\cap\mathcal Z_N| = 2^{m-r}$.

Proof. By Corollary 2 we know that $V(\mathcal S)$ is a BSSC of some rank $r$, which in turn is stabilized by the maximal stabilizer of Theorem 1. Such a stabilizer has precisely $2^{m-r}$ diagonal Paulis; see also (101). $\square$

We mentioned that the extremal case $r = m$ gives the codebook $\mathcal V_{BC}$. Before discussing general on-off patterns, we consider the lower-end extremal cases $r = 0, 1$.

Remark 12.
Let $r = 0$. In this case we again have $P = I_m$ and $S = \mathbf 0_m$. In addition, $f(\mathbf v,\mathbf w,0) = \delta_{\mathbf v,\mathbf w}$. Thus, from (79) we see that $w_{\mathbf b}(\mathbf a)\ne 0$ iff $\mathbf a = \mathbf b$, in which case we have $w_{\mathbf b}(\mathbf a) = (-1)^{\mathrm{wt}(\mathbf b)}$. This can also be seen from (86). Indeed, since $G_\Omega(0) = I_N$, we have $G_F = I_N$. Note also that $Z(m,0) = \sigma_z\otimes\cdots\otimes\sigma_z = E(\mathbf 0,\mathbf 1)$, whose columns form the common eigenbasis of the maximal stabilizer $E(\mathbf 0_m, I_m)$, as established by Theorem 1.

Remark 13.
Let $r = 1$. In this case, either $S = \mathbf 0_m$ or $S = \mathbf e_1\mathbf e_1^T$, where $\mathbf e_1\in\mathbb F_2^m$ is the first standard basis vector. It follows that, up to a Pauli matrix, $G_U(S)$ is either $I_N$ or the transvection $(I_N + iZ_1)/\sqrt 2$, where $Z_1 = E(\mathbf 0,\mathbf e_1)$ has $\sigma_z$ on the first qubit and identity elsewhere; see also (78). Similarly, $G_\Omega(1) = (X_1 + Z_1)/\sqrt 2$ is another transvection. Thus, rank one BSSCs are columns of products of transvections, permuted by some Clifford permutation $G_D(P)$. See [41, 42] for more on transvections.

Example 4.
Let $m = 2$. There are $\binom{2}{1}_2 = 3$ one dimensional subspaces of $\mathbb F_2^m$, and there are two $1\times 1$ symmetric matrices. Thus there are $2^2\cdot 3\cdot 2 = 24$ BSSCs of rank $r = 1$ in $N = 2^m = 4$ dimensions, as depicted in Figure 2b. Furthermore, there are eight $2\times 2$ symmetric matrices, and these yield $32 = 2^2\cdot 8$ BCs, as depicted in Figure 2c. Along with the four BSSCs of rank 0 depicted in Figure 2a, we have in total $60 = 4 + 24 + 32 = 2^2\cdot 3\cdot 5$ BSSCs in $N = 4$ dimensions, as given by (89).

As discussed, for $S_r\in\mathrm{Sym}(r;2)$ and $H\in G(m,r;2)$ we obtain a unitary matrix
$$U_{H,S_r} = \left[w_{\mathbf b}(\mathbf a)\right]_{\mathbf a,\mathbf b}\in U(N). \qquad (90)$$
We will omit the subscripts when the context is clear. We know from (86) that such a matrix is, up to a diagonal Pauli, an element of $\mathrm{Cliff}_N$. The subspace $H$ determines the sparsity of $U$. Indeed, we see from (82) that the on-off pattern of each column is supported either on $H$ or on a coset of it. Thus, the on-off patterns of different columns are either equal or disjoint. It also follows that in $U$ there are $2^{m-r}$ different on-off patterns, each of which repeats $2^r$ times.

In [47] it was shown that BCs form a group under coordinate-wise multiplication. In contrast, we can immediately see that this is not the case for BSSCs. For instance, if one considers two BSSCs with disjoint on-off patterns, then they coordinate-wise multiply to $\mathbf 0\in\mathbb C^N$. When two BSSCs have the same on-off pattern, the coordinate-wise multiplication can be determined as follows. Let $\mathbf w_1$ and $\mathbf w_2$ be two columns of $U$ with the same on-off pattern, indexed by $\mathbf b_1$ and $\mathbf b_2$ respectively. Let $\widetilde r = m - r$. In such a case, again by (82), we must have $\mathbf b_{1,\widetilde r} = \mathbf b_{2,\widetilde r}$, that is, they are equal in their last $\widetilde r$ coordinates. Recall also that the nonzero coordinates of a BSSC are determined by (83). We have that
$$2^r\, w_1(\mathbf a)\,w_2(\mathbf a) = (-1)^{\mathbf x^TS_r\mathbf x + (\mathbf b_{1,r}+\mathbf b_{2,r})^T\mathbf x}, \qquad (91)$$
where $\mathbf x\in\mathbb F_2^r$ is such that
$$P^{-1}\mathbf a = \begin{bmatrix}\mathbf x\\ \mathbf b_{1,\widetilde r}\end{bmatrix} = \begin{bmatrix}\mathbf x\\ \mathbf b_{2,\widetilde r}\end{bmatrix}. \qquad (92)$$
The matrix $P$ above corresponds to $H$ as usual. Next, the map $\mathbf x\longmapsto\mathbf x^TS_r\mathbf x$ is additive modulo 2, and thus it is of the form $\mathbf x\longmapsto\mathbf b_S^T\mathbf x$ for some $\mathbf b_S\in\mathbb F_2^r$. It follows that
$$2^r\, w_1(\mathbf a)\,w_2(\mathbf a) = (-1)^{(\mathbf b_S + \mathbf b_{1,r} + \mathbf b_{2,r})^T\mathbf x}. \qquad (93)$$
Then it is easy to see that the right-hand side of (93) is, up to a sign, a column of $G_D(P^T)G_\Omega(r)$. With a similar argument, when two BSSCs with the same on-off pattern but different symmetric matrices $S_1$ and $S_2$ are coordinate-wise multiplied, we obtain, up to a sign, a column of $G_D(P^T)G_U(S_1+S_2)G_\Omega(r)$. In all cases, the "up to sign" is determined by $\mathrm{wt}(\mathbf b_{\widetilde r})$, that is, the Hamming weight of the last $\widetilde r$ coordinates of the column index. The latter is in turn precisely captured by $Z(m,r)$; see also (84) and (86).

Also with a similar argument, one determines the conjugate of BSSCs and the coordinate-wise multiplication of BSSCs with $H_1\in G(m,r_1;2)$ and $H_2\in G(m,r_2;2)$. Without diving into details, in this case the on-off pattern will be determined by $H_1\cap H_2$, and of course the sparsity will be $r = \dim H_1\cap H_2$.

In particular, we have proved the following.

Theorem 2.
The set $\mathcal V_{BSSC}$ is closed with respect to coordinate-wise conjugation. The set $\mathcal V_{BSSC}\cup\{\mathbf 0_N\}$ is closed, up to normalization, with respect to coordinate-wise multiplication. The set of all BSSCs of given sparsity $r$ and on-off pattern is isomorphic to $\mathrm{Sym}(r;2)$.

As discussed, the codebooks $\mathcal V_{BSSC}$ and $\mathcal V_{BC}$ are codebooks of Grassmannian lines in $G(N,1)$. The cardinalities were determined in Corollary 1. In order to have a complete comparison, one needs to also consider the relevant metric, which for codebooks of Grassmannian lines is the chordal distance
$$d_c(\mathbf w_1,\mathbf w_2) = \sqrt{1 - |\mathbf w_1^\dagger\mathbf w_2|^2}. \qquad (94)$$
Then the minimum distance of a codebook is the minimum over all pairs of different codewords. Fix a BC $\mathbf w_1\in\mathcal V_{BC}$ parametrized by $S_1\in\mathrm{Sym}(m;2)$, and let $\mathbf w_2$ range among the $2^m$ BCs parametrized by $S_2\in\mathrm{Sym}(m;2)$. Then [5, 44, 47]
$$|\mathbf w_1^\dagger\mathbf w_2| = \begin{cases}1/\sqrt{2^r}, & 2^r\ \text{times},\\ 0, & 2^m - 2^r\ \text{times},\end{cases} \qquad (95)$$
where $r = \mathrm{rank}(S_1 - S_2)$. It follows immediately that $|\mathbf w_1^\dagger\mathbf w_2|\le 1/\sqrt 2$ for distinct codewords, and thus the minimum distance of the codebook $\mathcal V_{BC}$ is $1/\sqrt 2$. For BSSCs we have the following.

Proposition 1.
The codebook $\mathcal V_{BSSC}$ has minimum distance $1/\sqrt 2$.

Proof. As before, it is sufficient to show that $|\mathbf w_1^\dagger\mathbf w_2|\le 1/\sqrt 2$ for all distinct $\mathbf w_1,\mathbf w_2\in\mathcal V_{BSSC}$. By Theorem 2, $\overline{\mathbf w}_1$ is again a BSSC. Then, the inner product $\mathbf w_1^\dagger\mathbf w_2$ is related to the coordinate-wise multiplication of $\overline{\mathbf w}_1$ and $\mathbf w_2$, which, as we saw, is either the zero vector or, up to normalization, some other BSSC. In addition, we know from Remark 9 that the nonzero entries of BSSCs are lower dimensional BCs. Now the result follows. $\square$

The codebooks $\mathcal V_{BC}$ and $\mathcal V_{BSSC}$ have the same minimum distance, and by Corollary 1 the latter is 2.384 times bigger. Thus, from a coding perspective the codebook $\mathcal V_{BSSC}$ provides a clear improvement. Additionally, we will see next that $\mathcal V_{BSSC}$ can be decoded with similar complexity as $\mathcal V_{BC}$. For these reasons, $\mathcal V_{BSSC}$ is an optimal candidate for extending $\mathcal V_{BC}$ also from a communication perspective. The alphabet of $\mathcal V_{BC}$ is $\{\pm 1,\pm i\}$, whereas the alphabet of $\mathcal V_{BSSC}$ is $\{\pm 1,\pm i\}\cup\{0\}$, which is a minimal extension from the implementation complexity perspective.

Corollary 4.
Let $G_j = G_U(S_j)H_N\in\mathrm{Cliff}_N$ for $j = 1, 2$ and $S_j\in\mathrm{Sym}(m;2)$. Then $G = G_1^\dagger G_2$ has sparsity $r$, where $r = \mathrm{rank}(S_1 + S_2)$, and its on-off pattern is determined by $H = \mathrm{rs}(S_1 + S_2)$.

Proof. Recall that the columns of $G_j$ constitute all the BCs parametrized by $S_j$. Then the statement follows directly by (95). $\square$

Remark 14.
The vector space of symmetric matrices can be written in terms of a chain of nested subspaces, referred to in the literature as Delsarte-Goethals sets,
$$DG(m,0)\subset DG(m,1)\subset\cdots\subset DG(m,(m-1)/2), \qquad (96)$$
with the property that every nonzero matrix in $DG(m,r)$ has rank at least $m - 2r$ [48, 49]. For applications in deterministic compressed sensing, random access, and quantum computation see [5, 44, 50]. Since $DG(m,r)$ is a vector space, it comes with the property that the sum of every two different matrices also has rank at least $m - 2r$. Thus, for $S_1, S_2\in DG(m,(m-r)/2)$, the construction of Corollary 4 yields a Clifford matrix of sparsity at least $r$. This is an alternative way of creating rank $r$ BSSCs in terms of BCs. However, this will not yield all the BSSCs, because not every subspace $H$ is the row/column space of a symmetric matrix $S$.

5 Reconstruction of BSSCs

In this section we use the rich algebraic structure of BSSCs to construct a low complexity reconstruction/decoding algorithm. We will build our way up by starting with the reconstruction of a single BSSC. In order to gain some intuition we disregard noise at first. The problem in hand is to recover $H, S_r$, and $\mathbf b$, given a binary subspace chirp $\mathbf w_{\mathbf b}$ as in (79). In this noiseless scenario, the easiest task is the recovery of the rank $r$. Namely, by (82) we have
$$\overline{w_{\mathbf b}(\mathbf a)}\,w_{\mathbf b}(\mathbf a) = \begin{cases}1/2^r, & 2^r\ \text{times},\\ 0, & 2^m - 2^r\ \text{times}.\end{cases} \qquad (97)$$
To reconstruct $S_r$, and then eventually $H$, we generalize the shift and multiply technique used in [5] for the reconstruction of binary chirps. The underlying structure that enables this generalization is the fact that the on-pattern of a BSSC is a BC of lower rank, as discussed in Remark 9. However, in our scenario extra care is required, as the shifting can perturb the on-off pattern. Namely, we must use only shifts $\mathbf a\longmapsto\mathbf a + \mathbf e$ that preserve the on-off pattern. It follows by (81) that we must use only shifts by $\mathbf e$ that satisfy $\widetilde H_I^T\mathbf e = \mathbf 0$, or equivalently $\mathbf e = H_I\mathbf y$ for $\mathbf y\in\mathbb F_2^r$. In this instance, thanks to (4), we have
$$P^{-1}\mathbf e = P^{-1}H_I\mathbf y = \begin{bmatrix}\mathbf y\\ \mathbf 0\end{bmatrix}. \qquad (98)$$
If we focus on the nonzero entries of $\mathbf w_{\mathbf b}$, and on shifts that preserve the on-off pattern of $\mathbf w_{\mathbf b}$, we can make use of Remark 9, where with another slight abuse of notation we identify $\mathbf y$ with $P^{-1}\mathbf e$. It is beneficial to take $\mathbf y$ to be $\mathbf f_i$, one of the standard basis vectors of $\mathbb F_2^r$. With this preparation we are able to use the shift and multiply technique, that is, shift the given BSSC $\mathbf w_{\mathbf b}$ according to $\mathbf x\longmapsto\mathbf x + \mathbf f_i$ (which only affects the on-pattern and fixes the off-pattern) and then multiply by its conjugate:
$$w_{\mathbf b}(\mathbf x + \mathbf f_i)\,\overline{w_{\mathbf b}(\mathbf x)} = \frac{1}{2^r}\cdot i^{\mathbf f_i^TS_r\mathbf f_i}\cdot(-1)^{\mathbf b_r^T\mathbf f_i}\cdot(-1)^{\mathbf x^TS_r\mathbf f_i}. \qquad (99)$$
Note that above only the last term depends on $\mathbf x$. Now if we multiply (99) with the Hadamard matrix (68) we obtain
$$i^{\mathbf f_i^TS_r\mathbf f_i}\cdot(-1)^{\mathbf b_r^T\mathbf f_i}\sum_{\mathbf x\in\mathbb F_2^r}(-1)^{\mathbf x^T(\mathbf v + S_r\mathbf f_i)} \qquad (100)$$
for all $\mathbf v\in\mathbb F_2^r$ (where we have omitted the scaling factor). Then (100) is nonzero precisely when $\mathbf v = S_r\mathbf f_i$, the $i$th column of $S_r$. With $S_r$ in hand, one recovers $\mathbf b_r$ similarly, by multiplying $w_{\mathbf b}(\mathbf x)\,\overline{w_{\mathbf 0}(\mathbf x)}$ with the Hadamard matrix. To recover $\mathbf b_{m-r}$ one simply uses the knowledge of the nonzero coordinates and (83). Next, with $\mathbf b$ in hand and the knowledge of the on-off pattern, one recovers $H_I$ (and thus $H$) using (81) or equivalently (82). We will refer to the process of finding the column index $\mathbf b$ as dechirping.

In the above somewhat ad-hoc method we did not take advantage of the geometric structure of the subspace chirps as eigenvectors of given maximal stabilizers, or equivalently as the columns of given Clifford matrices. We do this next by following the line of [21]. Let $\mathbf w$ be a subspace chirp as in (79), and recall that it is a column of $G := G_F = G_D(P^T)G_U(S)G_\Omega(r)$, where $F := F_\Omega(r)F_U(S)F_D(P^T)$. Then by construction $G$ and $F$ satisfy $G^\dagger E(\mathbf c)G = \pm E(\mathbf c^TF^{-1})$ for all $\mathbf c\in\mathbb F_2^{2m}$.
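The shift-and-multiply recovery of (99)-(100) can be sketched for the on-support part alone, which by Remark 9 is a plain $r$-dimensional BC (the particular $S_r$ and $\mathbf b_r$ below are arbitrary illustrative choices). Shifting by a standard basis vector $\mathbf f_i$, multiplying by the conjugate, and taking a Hadamard transform produces a single spike at the $i$th column of $S_r$; a final Hadamard transform after stripping the quadratic part dechirps $\mathbf b_r$.

```python
import numpy as np
from itertools import product

def idx(x):
    """MSB-first integer index of a binary vector."""
    return int("".join(str(int(t)) for t in x), 2)

r = 3
Sr = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])   # illustrative S_r
b = np.array([1, 0, 1])                            # illustrative b_r
xs = [np.array(x) for x in product([0, 1], repeat=r)]

# the on-support part of the BSSC: an r-dimensional BC, Eq. (84)
w = np.array([1j ** int((x @ Sr @ x + 2 * b @ x) % 4) for x in xs]) / np.sqrt(2 ** r)

# recover S_r column by column via shift-and-multiply, Eqs. (99)-(100)
S_rec = np.zeros((r, r), dtype=int)
for i in range(r):
    f = np.zeros(r, dtype=int); f[i] = 1
    g = np.array([w[idx((x + f) % 2)] * np.conj(w[idx(x)]) for x in xs])
    t = [abs(sum((-1) ** int(v @ x) * g[k] for k, x in enumerate(xs))) for v in xs]
    S_rec[:, i] = xs[int(np.argmax(t))]            # spike at v = S_r f_i

# dechirp: strip the recovered quadratic part, then one more Hadamard spike at b
w0 = np.array([1j ** int((x @ S_rec @ x) % 4) for x in xs]) / np.sqrt(2 ** r)
h = w * np.conj(w0)
t = [abs(sum((-1) ** int(v @ x) * h[k] for k, x in enumerate(xs))) for v in xs]
b_rec = xs[int(np.argmax(t))]
```

The brute-force transforms above are for clarity only; in practice one would use a fast Walsh-Hadamard transform.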
Recall also from Theorem 1 that the columns of $G$ form the common eigenbasis of the maximal stabilizer
$$E\!\left(I_{m|r}P^T,\ (I_{m|r}S + I_{m|-r})P^{-1}\right) = E\!\left(\begin{bmatrix}H_I^T & S_rI_I^T\\ \mathbf 0 & \widetilde H_I^T\end{bmatrix}\right). \qquad (101)$$
Thus, to reconstruct the unknown subspace chirp $\mathbf w$, it is sufficient to first identify the maximal stabilizer that stabilizes it, and then identify $\mathbf w$ as a column of $G$. The best way to accomplish the latter task, dechirping that is, is as described above, and thus we focus only on the former task. A crucial observation at this stage is the fact that the maximal stabilizer in (101) has precisely $r$ off-diagonal and $m-r$ diagonal Pauli generators; see also Corollary 3.

We now make use of the argument in Theorem 1, that is, $\mathbf w$ is an eigenvector of $E(\mathbf c)$ iff $E(\mathbf c^TF^{-1})$ is diagonal. Let us focus first on identifying the diagonal Pauli matrices that stabilize $\mathbf w$, that is, $\mathbf c = \begin{bmatrix}\mathbf 0\\ \mathbf y\end{bmatrix}$. First we see that
$$F^{-1} = \begin{bmatrix}I_IS_r & \widetilde H_I & I_I & \mathbf 0\\ H_I & \mathbf 0 & \mathbf 0 & I_{\widetilde I}\end{bmatrix}, \qquad (102)$$
written in terms of $m\times r$ and $m\times(m-r)$ blocks. Then for such $\mathbf c$, $\mathbf w$ is an eigenvector of $E(\mathbf c)$ iff $\mathbf y^TH_I = \mathbf 0$, iff $\mathbf y = \widetilde H_I\mathbf z$ for some $\mathbf z\in\mathbb F_2^{m-r}$. Thus, to identify the diagonal Pauli matrices that stabilize $\mathbf w$, and consequently the subspaces $H_I, \widetilde H_I$, it is sufficient to find $m - r$ vectors $\mathbf y\in\mathbb F_2^m$ such that
$$0\ne\mathbf w^\dagger E(\mathbf 0,\mathbf y)\mathbf w = \mathbf w^\dagger E(\mathbf 0,\widetilde H_I\mathbf z)\mathbf w. \qquad (103)$$
It follows by (50) that the above is equivalent with finding $m - r$ vectors $\mathbf y$ such that
$$0\ne\sum_{\mathbf v\in\mathbb F_2^m}(-1)^{\mathbf y^T\mathbf v}|w(\mathbf v)|^2 = \sum_{\mathbf v\in\mathbb F_2^m}(-1)^{\mathbf z^T\widetilde H_I^T\mathbf v}|w(\mathbf v)|^2. \qquad (104)$$
The above is just a Hadamard transform, which can be computed efficiently.

With a similar argument, $\mathbf w$ is an eigenvector of a general Pauli matrix $E(\mathbf x,\mathbf y)$ iff
$$\mathbf w^\dagger E(\mathbf x,\mathbf y)\mathbf w = i^{\mathbf x^T\mathbf y}\sum_{\mathbf v\in\mathbb F_2^m}(-1)^{\mathbf v^T\mathbf y}\,\overline{w(\mathbf v + \mathbf x)}\,w(\mathbf v)\ne 0. \qquad (105)$$
The above is again just a Hadamard transform. In fact, we see here both the "shift" (by $\mathbf x$), the "multiply", and the Hadamard transform of the "shift and multiply".
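The diagonal-Pauli test (104) can be sketched directly: $\mathbf w^\dagger E(\mathbf 0,\mathbf y)\mathbf w$ is the Walsh-Hadamard transform of the squared magnitudes $|w(\mathbf v)|^2$, nonzero on exactly $2^{m-r}$ points $\mathbf y\in\mathrm{cs}(\widetilde H_I)$. The sketch assumes $P = I_m$, so that the on-off pattern is governed by the first $r$ coordinates and $\widetilde H_I$ by the last $m - r$; the parameters are illustrative.

```python
import numpy as np
from itertools import product

m, r = 3, 2
Sr = np.array([[0, 1], [1, 1]])                   # illustrative S_r
br, btail = np.array([1, 1]), np.array([1])       # b = (b_r, b_{m-r})
vs = [np.array(v) for v in product([0, 1], repeat=m)]

# rank-r BSSC via (84): support is the coset {a : a_{last m-r} = b_{m-r}}
w = np.zeros(2 ** m, dtype=complex)
for k, a in enumerate(vs):
    if np.array_equal(a[r:], btail):
        x = a[:r]
        w[k] = 1j ** int((x @ Sr @ x + 2 * br @ x) % 4) / np.sqrt(2 ** r)

# T(y) = w^dagger E(0,y) w = sum_v (-1)^{y.v} |w(v)|^2, Eq. (104)
T = {tuple(y): sum((-1) ** int(y @ v) * abs(w[k]) ** 2
                   for k, v in enumerate(vs)) for y in vs}
support = sorted(y for y, t in T.items() if abs(t) > 1e-9)
```

With $P = I_3$ and $r = 2$ the nonzero locations are exactly the $2^{m-r} = 2$ vectors supported on the last coordinate, i.e., $\mathrm{cs}(\widetilde H_I)$.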
This is the main insight that transfers the shift and multiply technique of [5] to computation with Pauli matrices. By definition, the Pauli matrix $E(\mathbf x,\mathbf y)$ has a diagonal part determined by $\mathbf y$ and an off-diagonal part determined by $\mathbf x$. The off-diagonal part of a Pauli determines the shift of coordinates, whereas the diagonal part takes care of the rest.

Computing $\mathbf w^\dagger U\mathbf w$ for a generic $N\times N$ matrix is expensive, and even more so if the same computation is repeated $N$ times. However, when $U$ is a Pauli matrix, which is a monomial matrix of sparsity/rank 1, the same computation is much faster. Moreover, as we will see, for a rank $r$ BSSC we need not compute all the possible $N$ shifts but only $r$ of them. This is an intuitive observation based on the shape of the maximal stabilizer (101). Indeed, once the diagonal Pauli matrices are identified, one can use that information to search the off-diagonal Pauli matrices only for $\mathbf x\in\mathrm{cs}(H_I)$, which reduces the search from $2^m$ to $2^r$. In fact, as we will see, instead of $2^r$ shifts we will only need the $r$ shifts determined by the columns of $H_I$.

Let us now explicitly make use of (105) to reconstruct the symmetric matrix $S_r$, while assuming that we have already reconstructed $H_I, \widetilde H_I$. In this case, as we see from (102), the only missing piece of the puzzle is the upper-left block of $F^{-1}$. We proceed as follows. For $\mathbf c = \begin{bmatrix}\mathbf x\\ \mathbf y\end{bmatrix}$, we have $\mathbf w^\dagger E(\mathbf x,\mathbf y)\mathbf w\ne 0$ iff $E(\mathbf c^TF^{-1})$ is diagonal, iff
$$\mathbf x^T[\,I_IS_r\ \ \widetilde H_I\,] = \mathbf y^T[\,H_I\ \ \mathbf 0\,]. \qquad (106)$$
As before, we are interested in $\mathbf y\in\mathbb F_2^m$ that satisfy (106). First note that solutions to (106) exist only if $\mathbf x^T\widetilde H_I = \mathbf 0$, that is, only if $\mathbf x = H_I\mathbf z$, $\mathbf z\in\mathbb F_2^r$. For such $\mathbf x$, making use of (4), we conclude that (106) holds iff
$$\mathbf z^TS_r = \mathbf y^TH_I. \qquad (107)$$
Solutions of (107) are given by
$$\mathbf y = \widetilde H_I\mathbf v + I_IS_r\mathbf z,\qquad \mathbf v\in\mathbb F_2^{m-r}. \qquad (108)$$
If we take $\mathbf z = \mathbf f_i$, the $i$th standard basis vector of $\mathbb F_2^r$, we have that $\mathbf z^TS_r$ is the $i$th row/column of $S_r$, while $\mathbf x = H_I\mathbf z$ is the $i$th column of $H_I$. We collect all these observations in Algorithm 2.

In order to move towards a multi-user random access scenario, one needs a reliable reconstruction algorithm for noisy BSSCs. For this we consider the signal model
$$\mathbf s = \mathbf w + \mathbf n, \qquad (109)$$
where $\mathbf n$ is Additive White Gaussian Noise (AWGN). In such an instance, the subspace reconstruction, that is, step (2) of Algorithm 2, is a delicate procedure. However, one can proceed as follows. For each $\mathbf y\in\mathbb F_2^m$ we compute $\mathbf s^\dagger E(\mathbf 0,\mathbf y)\mathbf s$ and use it as an estimate of $\mathbf w^\dagger E(\mathbf 0,\mathbf y)\mathbf w$. We sort these scattered real values in decreasing order and make a rank hypothesis; that is, for each $0\le r\le m$ we select the $2^{m-r}$ largest values and proceed with Algorithm 2 to obtain $\mathbf w_r$. We then select the best rank using the Euclidean norm, that is,
$$\widetilde{\mathbf w} = \arg\min_r\|\mathbf s - \mathbf w_r\|. \qquad (110)$$

Algorithm 2 Reconstruction of a single noiseless BSSC
Input: Unknown BSSC $\mathbf w$.
1. Compute $\mathbf w^\dagger E(\mathbf 0,\mathbf y)\mathbf w$ for $\mathbf y\in\mathbb F_2^m$.
2. Find $H_I$ using: $\mathbf w^\dagger E(\mathbf 0,\mathbf y)\mathbf w\ne 0$ iff $\mathbf y^TH_I = \mathbf 0$ iff $\mathbf y\in\mathrm{cs}(\widetilde H_I)$.
3. Construct $P_I$ as in (5).
4. $r = \mathrm{rank}(H_I)$.
5. for $i = 1,\ldots,r$ do:
6. Compute $\mathbf w^\dagger E(H_I\mathbf f_i,\mathbf y)\mathbf w$ for $\mathbf y\in\mathbb F_2^m$.
7. Determine the $i$th row of $S_r$ using (108).
8. end for
9. Dechirp $\mathbf w$ to find $\mathbf b$.
Output: $r, S_r, P_I, \mathbf b$.

Figure 3.
On-off pattern of noisy BSSC versus on-off pattern of noiseless BSSC.
In Figure 3 we see an instance of a rank $r = 2$ BSSC on-off pattern in $N = 2^8$ dimensions, with and without noise. In this case $\mathbf w^\dagger E(\mathbf 0,\mathbf y)\mathbf w$ is nonzero $2^{m-r} = 64$ times. In this instance, only 94% of the 64 highest $\mathbf s^\dagger E(\mathbf 0,\mathbf y)\mathbf s$ values of the noisy version match the on-off pattern of $\mathbf w$. However, this can be overcome in reconstruction by using the fact that the on-off pattern is determined by a subspace. Thus one can build up $\widetilde H_I$ in a greedy manner, by starting with the highest values and then including linear combinations. This strategy was tested in [21] with Monte-Carlo simulations, yielding low error rates even for low Signal-to-Noise Ratio (SNR); see [21, Fig. 1]. There it was observed that, rather remarkably, BSSCs outperform BCs despite having the same minimum distance.

The strategy of noisy single BSSC reconstruction can be used as a guideline to generalize Algorithm 2 to decode multiple simultaneous transmissions in a block fading multi-user scenario
$$\mathbf s = \sum_{\ell=1}^L h_\ell\mathbf w_\ell + \mathbf n. \qquad (111)$$
Here the channel coefficients $h_\ell$ are $\mathcal{CN}(0,1)$, with neither phase nor amplitude known, and the $\mathbf w_\ell$ are BSSCs. Noise $\mathbf n$ may be added, depending on the scenario. This model represents, e.g., a random access scenario, where $L$ randomly chosen active users transmit a signature sequence,

Algorithm 3 Reconstruction of noiseless multi-BSSCs
Input: Signal $\mathbf s$ as in (111).
1. for $\ell = 1 : L$ do
2. for $r = 0 : m$ do
3. Greedily construct the $m-r$ dimensional subspace $\widetilde H_I$ using the highest values of $|\mathbf s^\dagger E(\mathbf 0,\mathbf y)\mathbf s|$.
4. Estimate $\widetilde{\mathbf w}_r$ as in Alg. 2.
5. end for
6. Select the best estimate $\widetilde{\mathbf w}_\ell$.
7. Determine $\widetilde h_1,\ldots,\widetilde h_\ell$ that minimize $\left\|\mathbf s - \sum_{j=1}^\ell h_j\widetilde{\mathbf w}_j\right\|$.
8. Reduce $\mathbf s$ to $\mathbf s' = \mathbf s - \sum_{j=1}^\ell\widetilde h_j\widetilde{\mathbf w}_j$.
9. end for
Output: $\widetilde{\mathbf w}_1,\ldots,\widetilde{\mathbf w}_L$.

Figure 4.
Error probability of Algorithm 3 in the absence of noise. Random codebook included for comparison.

and the receiver should identify the active users. In such an application, the channel gain is not known at the receiver, and thus one cannot use the amplitude to transmit information. For this reason, the amplitude/norm is assumed, without loss of generality, to be one. Additionally, the channel phase is also not known at the receiver and should not carry any information. Thus, without loss of generality, the codewords can be assumed to come from a Grassmannian codebook, such as $\mathcal V_{BC}$ or $\mathcal V_{BSSC}$.

We generalize the single-user algorithm to a multi-user algorithm, where the coefficients $h_\ell$ are estimated to identify the most probable transmitted signals. For this, we use Orthogonal Matching Pursuit (OMP), which is analogous with the strategy of [5]. We assume that we know $L$.

The estimated error probability of single-user transmission, for $L = 2$, is given in Figure 4. For the simulation, the rank $r$ is selected in a weighted manner, according to the relative number of rank $r$ BSSCs (recall that there are $2^m\cdot\binom{m}{r}_2\cdot 2^{r(r+1)/2}$ rank $r$ BSSCs). Whereas, within a given rank, BSSCs are chosen uniformly. We compare the results with BC codebooks and random codebooks of the same cardinality. For random codebooks, steps (2)-(5) of Algorithm 3 are substituted with exhaustive search (which is infeasible beyond $m = 6$).

The erroneous reconstructions of Algorithm 3 come in part from steps (3)-(4). Specifically,
The BSSCs, unlike the BCs [47], do not form a group under point-wise multiplication (Theorem 2), and thus the products $w_i^\dagger w_\ell$ are more complicated. Indeed, when two BSSCs of different ranks and/or different on-off patterns are multiplied coordinate-wise (which we do during the “shift and multiply”), the resulting BSSC could be very different (if not zero), as described in Theorem 2. In addition, linear combinations of BSSCs (111) may perturb the on-off patterns of the constituents, and depending on the nature of the channel coefficients $h_\ell$, the algorithm may detect a higher rank BSSC in $s$. If the channel coefficients of two BSSCs happen to have similar amplitudes, the algorithm may detect a lower rank BSSC that corresponds to the overlap of the on-off patterns of the BSSCs. These phenomena are depicted in Figure 5 (in blue), which displays the on-off pattern of a linear combination of a rank two, a rank three, and a rank six BSSC in $N = 2^6$ dimensions. There, we see multiple levels (in blue) of $s^\dagger E(0, y) s$, only some of which correspond to actual on-off patterns $w_\ell^\dagger E(0, y) w_\ell$ of the given BSSCs; the rest correspond to different combinations of overlaps. The problems for multi-BSSC reconstruction caused by these phenomena are alleviated by the fact that most BSSC codewords have high rank. E.g., as $m$ grows, it follows by Corollary 1 that about 42% of BSSCs are BCs. Low rank BSSCs are very unlikely in (111).

Despite these phenomena affecting BSSC on-off patterns in multi-BSSC scenarios, a decoding algorithm like the one discussed is able to distinguish the different levels and provide reliable performance. It is worth mentioning that by comparing Figure 4 with [21, Fig. 1] we see that the interference of BSSCs is much more benign than general AWGN, which in turn explains the reliable reconstruction of noiseless multi-user transmission.

Interestingly, even in this multi-user scenario, we see that BSSCs outperform BCs.
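The multiple levels of $s^\dagger E(0,y)\,s$ can be reproduced numerically. A minimal sketch, assuming $E(0,y)$ is the diagonal Pauli matrix with entries $(-1)^{y \cdot v}$, so that the quantity is the Walsh-Hadamard transform of the power profile $|s_v|^2$; the helper name `onoff_levels` is ours:

```python
import numpy as np
from itertools import product

def onoff_levels(s, m):
    """Levels |s^† E(0,y) s| for all binary y, assuming E(0,y) is the
    diagonal Pauli matrix with entries (-1)^{y·v}, v in F_2^m.  This is
    the Walsh-Hadamard transform of the power profile |s_v|^2."""
    p = np.abs(s) ** 2
    vs = list(product((0, 1), repeat=m))          # coordinate labels v
    levels = {}
    for y in product((0, 1), repeat=m):
        signs = np.array([(-1) ** (sum(a * b for a, b in zip(y, v)) % 2)
                          for v in vs])
        levels[y] = abs(float(p @ signs))
    return levels
```

For a single BSSC the levels take only two values (encoding the on-off pattern), while a superposition of BSSCs with different supports produces intermediate levels from the overlaps, as in Figure 5.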
With increasing $m$, the performance benefit of the algebraically defined codebooks over random codebooks diminishes. However, the decoding complexity remains manageable for the algebraic codebooks.

In [21] it was demonstrated that reconstruction of a single noisy BSSC is possible even at low SNR. We have performed preliminary simulations and tested Algorithm 3 on noisy multi-user transmissions. Unlike the single-BSSC scenario, the multi-BSSC scenario requires a higher SNR regime for reliable performance. In Figure 5 we show (in red) $|s^\dagger E(0, y) s|$ for a noisy version of the same linear combination as before (displayed in blue). In this instance we have fixed SNR = 8 dB. A close look shows that this scenario is different from the single-user scenario displayed in Figure 3. In this instance, even an exhaustive search over ranks $r$ as in Algorithm 3 produces an on-off pattern that at best partially matches any actual on-off pattern, and thus the subspace reconstruction inevitably fails. On the other hand, if the on-off pattern is reconstructed correctly, then the corresponding $r$-dimensional BC can be reconstructed reliably. When noise is at a manageable level, reliable reconstruction of multi-user BSSCs is possible with Algorithm 3. In Figure 6, we depict the performance of $N = 256$ BSSCs and BCs in a scenario with SNR 30 dB, for a varying number of simultaneously transmitting users. Again, we see that BSSCs provide slightly better error performance than BCs, despite the codebook being larger.

Figure 6. Error probability of Algorithm 3 for noisy multi-user transmission in $N = 256$ dimensions and SNR = 30 dB.
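The simulations above select the rank in proportion to the number of rank $r$ BSSCs. Assuming the count $2^m \binom{m}{r}_2 2^{r(r+1)/2}$ stated earlier, with $\binom{m}{r}_2$ the Gaussian binomial coefficient, the weighting can be sketched as follows (helper names are ours):

```python
import random
from math import prod

def gaussian_binomial(m, r, q=2):
    """Number of r-dimensional subspaces of F_q^m."""
    num = prod(q ** (m - i) - 1 for i in range(r))
    den = prod(q ** (i + 1) - 1 for i in range(r))
    return num // den

def bssc_rank_weights(m):
    """Count of rank-r BSSCs for r = 0..m, and the codebook size."""
    counts = [2 ** m * gaussian_binomial(m, r) * 2 ** (r * (r + 1) // 2)
              for r in range(m + 1)]
    return counts, sum(counts)

def sample_rank(m, rng=random):
    """Draw a rank proportionally to the number of rank-r BSSCs."""
    counts, _ = bssc_rank_weights(m)
    return rng.choices(range(m + 1), weights=counts, k=1)[0]
```

For $m = 2$ this gives counts $[4, 24, 32]$, summing to 60, the number of two-qubit stabilizer states, consistent with the characterization of BSSCs as stabilizer states; the full-rank share (the BCs) approaches roughly 42% as $m$ grows, matching the asymptotic size ratio of 2.38.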
Conclusions

Algebraic and geometric properties of BSSCs have been described in detail. BSSCs are characterized as common eigenspaces of maximal sets of commuting Pauli matrices, or equivalently, as columns of Clifford matrices. This enables us to fully exploit connections between symplectic geometry and quantum computation, which in turn yield considerable complexity reductions. Further, we have developed a low complexity decoding algorithm for multi-BSSC transmission with low error probability.

By construction, BSSCs inherit all the desirable properties of BCs, while having a higher cardinality. In wireless communication scenarios BSSCs exhibit slightly lower error probability than BCs. For these reasons we think that BSSCs constitute good candidates for a variety of applications.

Algorithm 3 is a generalization of the BC decoding algorithm of [5] to BSSCs. As pointed out in [11], the decoding algorithm of [5] does not scale well in a multi-user scenario, in terms of the number of users supported as a function of codeword length. In [11, 25], slotting arrangements were added on top of BC codes to increase the length and the number of supported users. Part of the information in a transmission is embedded in the choice of a BC, part in the choice of active slots. In [11], interference cancellation across slots is applied, and the discussed scheme can be considered a combination of physical layer (PHY) BC coding and a Medium Access Control (MAC) layer code of the type discussed in [51]. The works [11, 25] show that, following such principles, practically implementable massive random access schemes operating in the regime of interest of [12] can be designed.
If the small-$m$ BC transmissions in the slots were replaced with BSSC transmissions with the same $m$, the results of this paper indicate that performance per slot would be the same, if not slightly better than in [11, 25]. This indicates that combined MAC/PHY codes, where BSSC would be the PHY component instead of the BC used in [11, 25], are likely to provide slightly higher rates with otherwise similar performance to [11, 25]. In future work, we plan to investigate such codes.

As mentioned, we have seen in all our simulations that BSSCs outperform BCs. Although our algorithms do not find the closest codeword, this may be due to the fact that BSSCs have fewer closest neighbors on average than BCs. We will investigate this in future work with a statistical analysis of Algorithm 3 along the lines of [47].

Binary chirps have been generalized in various works to prime dimensions, and recently to non-prime dimensions [52]. In future work we will consider analogous generalizations of BSSCs, by adding a sparsity component to generalized BCs and/or by lifting BSSCs modulo $2^t$.

As a byproduct, we have obtained a Bruhat decomposition of the symplectic group that involves five elementary symplectic matrices (compared to the seven layers of [29], cf. (23)). We think that this has implications in quantum computation. In future research we will explore whether Algorithm 1 can be leveraged to improve upon [28, 53].

Acknowledgements
The work of TP and OT was funded in part by the Academy of Finland (grant 319484). The work of RC was supported in part by the Air Force Office of Scientific Research (grant FA 8750-20-2-0504). The authors would like to thank Narayanan Rengaswamy for helpful discussions.
References

[1] P. Viswanath and V. Anantharam, “Optimal sequences and sum capacity of synchronous CDMA systems,” IEEE Trans. Inf. Th., vol. 45, no. 6, pp. 1984–1991, Sep. 1999.
[2] D. Love, R. Heath, Jr., and T. Strohmer, “Grassmannian beamforming for multiple-input multiple-output wireless systems,” IEEE Trans. Inf. Th., vol. 49, no. 10, pp. 2735–2747, Oct. 2003.
[3] R. Kötter and F. Kschischang, “Coding for errors and erasures in random network coding,” IEEE Trans. Inf. Th., vol. 54, no. 8, pp. 3579–3591, Aug. 2008.
[4] R. DeVore, “Deterministic constructions of compressed sensing matrices,” Journal of Complexity, vol. 23, no. 4–6, pp. 918–925, 2007.
[5] S. D. Howard, A. R. Calderbank, and S. J. Searle, “A fast reconstruction algorithm for deterministic compressive sensing using second order Reed-Muller codes,” in Conference on Information Sciences and Systems, March 2008, pp. 11–15.
[6] S. Li and G. Ge, “Deterministic sensing matrices arising from near orthogonal systems,” IEEE Trans. Inf. Th., vol. 60, no. 4, pp. 2291–2302, Apr. 2014.
[7] G. Wang, M.-Y. Niu, and F.-W. Fu, “Deterministic constructions of compressed sensing matrices based on codes,” Cryptography and Communications, Sep. 2018.
[8] A. Thompson and R. Calderbank, “Compressed neighbour discovery using sparse Kerdock matrices,” in Proc. IEEE ISIT, Jun. 2018, pp. 2286–2290.
[9] D. Guo and L. Zhang, “Virtual full-duplex wireless communication via rapid on-off-division duplex,” in Allerton Conference on Communication, Control, and Computing, Sep. 2010, pp. 412–419.
[10] C. Tsai and A. Wu, “Structured random compressed channel sensing for millimeter-wave large-scale antenna systems,” IEEE Trans. Sign. Proc., vol. 66, no. 19, pp. 5096–5110, Oct. 2018.
[11] R. Calderbank and A. Thompson, “CHIRRUP: a practical algorithm for unsourced multiple access,” Information and Inference: A Journal of the IMA, no. iaz029, 2019, https://doi.org/10.1093/imaiai/iaz029.
[12] Y. Polyanskiy, “A perspective on massive random-access,” in Proc. IEEE ISIT, 2017, pp. 2523–2527.
[13] Z. Utkovski, T. Eftimov, and P. Popovski, “Random access protocols with collision resolution in a noncoherent setting,” IEEE Wireless Communications Letters, vol. 4, no. 4, pp. 445–448, 2015.
[14] S. S. Kowshik, K. Andreev, A. Frolov, and Y. Polyanskiy, “Energy efficient random access for the quasi-static fading MAC,” in Proc. IEEE ISIT, 2019, pp. 2768–2772.
[15] ——, “Short-packet low-power coded access for massive MAC,” pp. 827–832.
[16] A. Glebov, L. Medova, P. Rybin, and A. Frolov, “On LDPC code based massive random-access scheme for the Gaussian multiple access channel,” in Internet of Things, Smart Spaces, and Next Generation Networks and Systems, O. Galinina, S. Andreev, S. Balandin, and Y. Koucheryavy, Eds. Cham: Springer International Publishing, 2018, pp. 162–171.
[17] A. Vem, K. R. Narayanan, J. Chamberland, and J. Cheng, “A user-independent successive interference cancellation based coding scheme for the unsourced random access Gaussian channel,” IEEE Transactions on Communications, vol. 67, no. 12, pp. 8258–8272, 2019.
[18] A. Fengler, G. Caire, P. Jung, and S. Haghighatshoar, “Massive MIMO unsourced random access,” arXiv preprint arXiv:1901.00828, 2019. [Online]. Available: https://arxiv.org/pdf/1901.00828.pdf
[19] S. S. Kowshik and Y. Polyanskiy, “Fundamental limits of many-user MAC with finite payloads and fading,” 2019. [Online]. Available: https://arxiv.org/pdf/1901.06732.pdf
[20] L. Applebaum, S. D. Howard, S. Searle, and R. Calderbank, “Chirp sensing codes: Deterministic compressed sensing measurements for fast recovery,” Applied and Computational Harmonic Analysis, vol. 26, no. 2, pp. 283–290, 2009.
[21] O. Tirkkonen and R. Calderbank, “Codebooks of complex lines based on binary subspace chirps,” in Proc. Information Theory Workshop (ITW), Aug. 2019.
[22] Q. Qiu, A. Thompson, R. Calderbank, and G. Sapiro, “Data representation using the Weyl transform,” IEEE Transactions on Signal Processing, vol. 64, no. 7, pp. 1844–1853, 2016.
[23] J. Dehaene and B. D. Moor, “Clifford group, stabilizer states, and linear and quadratic operations over GF(2),” Phys. Rev. A, vol. 68, p. 042318, Oct. 2003.
[24] T. Pllaha, O. Tirkkonen, and R. Calderbank, “Reconstruction of multi-user binary subspace chirps,” in Proc. IEEE ISIT, 2020, pp. 531–536.
[25] P. Yang, D. Guo, and H. Yang, “Massive access in multi-cell wireless networks using Reed-Muller codes,” arXiv preprint arXiv:2003.11568, 2020. [Online]. Available: https://arxiv.org/pdf/2003.11568.pdf
[26] N. Bourbaki, Elements of Mathematics – Lie Groups and Lie Algebras, Chapters 4–6, Springer, 1968.
[27] R. Ranga Rao, “On some explicit formulas in the theory of Weil representation,” Pacific J. Math., vol. 157, no. 2, 1993, pp. 335–371.
[28] D. Maslov and M. Roetteler, “Shorter stabilizer circuits via Bruhat decomposition and quantum circuit transformations,” IEEE Trans. Inf. Th., vol. 64, no. 7, pp. 4729–4738, Jul. 2018.
[29] T. Can, “The Heisenberg-Weyl group, finite symplectic geometry, and their applications,” Senior Thesis, Duke University, May 2018.
[30] N. Rengaswamy, R. Calderbank, S. Kadhe, and H. D. Pfister, “Synthesis of logical Clifford operators via symplectic geometry,” in Proc. IEEE ISIT, Jun. 2018, pp. 791–795.
[31] A. R. Calderbank, E. M. Rains, P. W. Shor, and N. J. A. Sloane, “Quantum error correction and orthogonal geometry,” Phys. Rev. Lett., vol. 78, no. 3, pp. 405–408, 1997.
[32] ——, “Quantum error correction via codes over GF(4),” IEEE Trans. Inform. Theory, vol. 44, no. 4, pp. 1369–1387, 1998.
[33] D. Gottesman, “Stabilizer codes and quantum error correction,” PhD thesis, California Institute of Technology, 1997.
[34] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information. Cambridge University Press, Cambridge, 2000.
[35] A. Ashikhmin and E. Knill, “Nonbinary quantum stabilizer codes,” IEEE Trans. Inform. Theory, vol. 47, no. 7, pp. 3065–3072, 2001.
[36] O. Tirkkonen, C. Boyd, and R. Vehkalahti, “Grassmannian codes from multiple families of mutually unbiased bases,” in Proc. IEEE ISIT, Jun. 2017, pp. 789–793.
[37] D. Gottesman, An Introduction to Quantum Error Correction and Fault-Tolerant Quantum Computation, arXiv:0904.2557, 2009.
[38] S. Bravyi and D. Maslov, “Hadamard-free circuits expose the structure of the Clifford group,” 2020. [Online]. Available: https://arxiv.org/pdf/2003.09412.pdf
[39] R. Steinberg, “Generators for simple groups,” Canad. J. Math., vol. 14, pp. 277–283, 1962.
[40] O. T. O’Meara, Symplectic Groups, ser. Mathematical Surveys. American Mathematical Society, Providence, R.I., 1978, vol. 16.
[41] T. Pllaha, N. Rengaswamy, O. Tirkkonen, and R. Calderbank, “Un-Weyl-ing the Clifford Hierarchy,” Quantum, vol. 4, p. 370, Dec. 2020. [Online]. Available: https://doi.org/10.22331/q-2020-12-11-370
[42] T. Pllaha, K. Volanto, and O. Tirkkonen, “Decomposition of Clifford gates,” 2021. [Online]. Available: https://arxiv.org/pdf/2102.11380.pdf
[43] R. Koenig and J. A. Smolin, “How to efficiently select an arbitrary Clifford group element,” J. Math. Phys., vol. 55, no. 12, p. 122202, Dec. 2014.
[44] T. Can, N. Rengaswamy, R. Calderbank, and H. D. Pfister, “Kerdock codes determine unitary 2-designs,” IEEE Transactions on Information Theory, vol. 66, no. 10, pp. 6104–6120, 2020.
[45] G. E. Andrews, q-Series: Their Development and Application in Analysis, Number Theory, Combinatorics, Physics, and Computer Algebra, ser. CBMS Regional Conference Series in Mathematics. American Mathematical Society, Providence, RI, 1986, vol. 66.
[46] S. Aaronson and D. Gottesman, “Improved simulation of stabilizer circuits,” Phys. Rev. A, vol. 70, no. 5, p. 052328, 2004.
[47] R. Calderbank, S. Howard, and S. Jafarpour, “Construction of a large class of matrices satisfying a statistical isometry property,” IEEE Journal of Selected Topics in Signal Processing, Special Issue on Compressive Sensing, vol. 4, no. 2, 2010, pp. 358–374.
[48] P. Delsarte and J.-M. Goethals, “Alternating bilinear forms over GF(q),” Journal of Combinatorial Theory, Series A, vol. 19, no. 1, pp. 26–50, 1975.
[49] A. R. Hammons, P. V. Kumar, A. R. Calderbank, N. J. A. Sloane, and P. Sole, “The Z4-linearity of Kerdock, Preparata, Goethals, and related codes,” IEEE Transactions on Information Theory, vol. 40, no. 2, pp. 301–319, 1994.
[50] R. Calderbank and S. Jafarpour, “Reed-Muller sensing matrices and the LASSO,” in Sequences and Their Applications – SETA 2010, C. Carlet and A. Pott, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010, pp. 442–463.
[51] G. Liva, “Graph-based analysis and optimization of contention resolution diversity slotted ALOHA,” IEEE Transactions on Communications, vol. 59, no. 2, pp. 477–487, 2011.
[52] R. A. Pitaval and Y. Qin, “Grassmannian frames in composite dimensions by exponentiating quadratic forms,” 2020, pp. 13–18.
[53] R. Koenig and J. A. Smolin, “How to efficiently select an arbitrary Clifford group element,” J. Math. Phys., vol. 55, no. 12, p. 122202, Dec. 2014.