Computing the Discrete Fourier Transform of signals with spectral frequency support
P Charantej Reddy, V S S Prabhu Tej, Aditya Siripuram, Brad Osgood
Stanford [email protected]
Abstract—We consider the problem of finding the Discrete Fourier Transform (DFT) of N-length signals with known frequency support of size k. When N is a power of 2 and the frequency support is a spectral set, we provide an O(k log k) algorithm to compute the DFT. Our algorithm uses some recent characterizations of spectral sets and is a generalization of the standard radix-2 algorithm.

I. INTRODUCTION
The Discrete Fourier Transform (DFT) represents a signal as a combination of complex exponentials (or frequencies). Analysis of signals using the DFT has been a popular tool in many areas of engineering and science. Given an N-length signal x, its DFT is another N-length signal given by F_N x, where

F_N = (e^{-2πimn/N}),  m, n = 0, 1, ..., N−1,

is the N × N DFT matrix. While a naive multiplication of the DFT matrix F_N with the signal x would incur a computational complexity of O(N^2), the Fast Fourier Transform (FFT) is a suite of algorithms that compute this multiplication (by exploiting the structure of the matrix F_N) with O(N log N) computational complexity. Perhaps the most well known among these algorithms is the radix-2 FFT [1], which assumes N to be a power of 2 and exploits the block structure of the matrix F_N. Many other algorithms, including the Cooley-Tukey FFT [2], the Good-Thomas FFT [3], and Rader's FFT [4] (to name a few), can be used to compute the DFT in O(N log N) operations, even when N is a non-prime-power composite, or itself a prime. The importance of the FFT makes it one of the most important algorithms developed in the last century [5], [6].

Many applications of the DFT rely on the crucial assumption that most of the frequency components (i.e., the entries of F_N x) are zero (or close to zero). This naturally leads to the question: if the frequency components are known to be zero at certain locations, can we do better than O(N log N)? In this work, we attempt to address the following problem.

(By computational complexity we mean the number of (complex) additions and multiplications used by the algorithm. Also recall that we say the complexity is O(g(N)) to mean that for sufficiently large N the number of operations required is bounded above by c·g(N) for some constant c.)

Problem 1:
Given a space B_J of N-length signals, such that for any signal x ∈ B_J in the space, its DFT F_N x is known to be nonzero only at locations in J (that is, F_N x(j) = 0 for any j ∉ J), what is the best achievable computational complexity for finding the DFT of signals in this space?

Here we assume that J is known beforehand. We make some preliminary observations on this problem in Section III: a naive computation of the |J| = k frequency coefficients incurs a complexity of O(Nk), and with a simple argument we can easily show that the DFT of such signals can be computed with O(k^3) complexity, irrespective of the frequency support J (more on this in Section III). However, it is also apparent from simple examples that the more structured J is, the lower the complexity required to compute the DFT. Assuming k divides N, we can take, for instance, the frequency support J to be periodic (J = {0, N/k, 2N/k, ..., (k−1)N/k}), or a set of consecutive elements J = {0, 1, 2, ..., k−1}: in both these cases we can argue (using basic properties of the DFT, again in Section III) that the DFT of signals with such frequency supports can be computed in O(k log k). We can then ask if there is any general structural property of the support set J that enables an O(k log k) computation of the DFT. Towards this end, we assume the set J is spectral:

Definition 1:
We say that a set J ⊆ Z_N is spectral if there exists a set I ⊆ Z_N of the same size as J, such that the (square) Fourier submatrix of F_N with rows indexed by I and columns indexed by J is unitary (up to scaling). If M is such a submatrix, it satisfies M*M = kI, where k = |J| and I is the k × k identity. (Here Z_N is the set of integers modulo N; here and in the rest of the document, whenever we say unitary, we mean unitary up to scaling; and for a matrix M, M* denotes the conjugate transpose of M.)

Indeed, the periodic and consecutive-element sets mentioned earlier are examples of spectral sets. These sets are relevant in the context of Fuglede's conjecture [7] in Fourier analysis, and this conjecture is as yet open in the discrete case for arbitrary N [8], [9].

When J is spectral and N is a power of 2, we provide a (deterministic) algorithm to compute the DFT of signals in B_J that has complexity O(k log k). Our algorithm uses recent results on the structure of spectral sets [10]–[12] in terms of the binary expansion of the indices in J. Our algorithm reads k entries of the vector x at specific locations chosen according to the structure of the spectral set J. The algorithm operates very similarly to the radix-2 FFT algorithm (see Fig. 3): the crucial difference is that, as opposed to the digit-reversing permutation used by the radix-2 FFT, our algorithm reverses only a subset of digits. Since J = Z_N is trivially a spectral set (Problem 1, in this case, reduces to finding the standard DFT), our algorithm can be seen as a generalization of the standard radix-2 FFT.

In Section II, we explain our motivations and place this result among other results in sparse FFT algorithms. In Section III, we make some preliminary observations on Problem 1 and elaborate on some of the comments made in the introduction. In Section IV, we present the main result (Theorem 1) that enables our algorithm to work, and finally, in Section V, we provide the proof of Theorem 1.
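Definition 1 lends itself to a quick numerical check. The sketch below (plain Python; the helper names and the small N = 16 periodic example are our own illustration, not from the paper) verifies that the Fourier submatrix for a periodic support set satisfies M*M = kI, i.e., is unitary up to scaling:

```python
import cmath

def fourier_submatrix(N, rows, cols):
    """Submatrix of F_N = (e^{-2*pi*i*m*n/N}) with the given rows and columns."""
    return [[cmath.exp(-2j * cmath.pi * m * n / N) for n in cols] for m in rows]

def is_unitary_up_to_scaling(M, k):
    """Check M* M = k I for a k x k matrix M (unitary in the paper's sense)."""
    for a in range(k):
        for b in range(k):
            # Inner product of columns a and b.
            s = sum(M[m][a].conjugate() * M[m][b] for m in range(k))
            if abs(s - (k if a == b else 0)) > 1e-9:
                return False
    return True

N, k = 16, 4
J = [j * (N // k) for j in range(k)]   # periodic spectral set {0, 4, 8, 12}
I = list(range(k))                     # k consecutive time-domain indices
spectral_ok = is_unitary_up_to_scaling(fourier_submatrix(N, I, J), k)
```

Swapping I for an arbitrary index set such as {0, 1, 2, 4} breaks the orthogonality of the columns, and the same check fails.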
Though we give our proof for the case when N is a power of 2 for ease of exposition, these techniques can be easily generalized to the case when N is a power of any prime p.

II. MOTIVATIONS AND CONNECTIONS TO SPARSE FFT LITERATURE
The problem of efficiently finding the DFT is even more important considering the ever-increasing data sizes that emerging technologies generate and analyze. As such, much of the work on the FFT in this century has focused on sparse-FFT algorithms. Most of these algorithms assume the DFT F_N x has only k (typically k ≪ N) nonzero entries, but that the locations of these k nonzero entries are unknown. In addition to computational complexity, algorithms for sparse DFT computation are also interested in minimizing the sample complexity, which is the number of entries of x that the algorithm needs to access to compute F_N x. What differentiates this area of research from the allied areas of compressed sensing and sparse signal recovery [13] is the emphasis on computational complexity, in addition to sample complexity.

The best known sample complexity is O(k log N) [14], and the best known time complexity is O(k log N log(N/k)) [15]; but it is not yet known if the same algorithm can achieve both of these. Some of these algorithms are probabilistic: they assume that the frequency support is more or less uniform and provide algorithms that work with constant or high probability for large N [15]–[17].

More recently, there has been increasing interest in developing algorithms for the case when the sparsity pattern is not completely arbitrary. Among these are results for block-sparse signals [18] that achieve a complexity of O(k poly(log N)).

Along these lines, the problem that motivated us was DFT computation for signals with partially known support, similar to such models in compressed sensing [13]. An optimal algorithm in such a setting has to make use of the (partially known) support structure in some way. At the extreme, one can ask about the optimal complexity when the support is fully known: this leads us to Problem 1.
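As a concrete instance of the fully-known-support setting, the consecutive-support example from the introduction can be checked numerically. The sketch below is our own illustration (the coefficient values are arbitrary, and `dft_naive` merely stands in for a true FFT): it recovers the k nonzero frequency components from only k time-domain samples.

```python
import cmath

def dft_naive(x):
    """Direct O(len(x)^2) DFT; a true FFT would be used in practice."""
    n_len = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * m * n / n_len) for n in range(n_len))
            for m in range(n_len)]

N, k = 16, 4
kp = N // k                         # k' = N/k
coeffs = [1.0, 2.0, -1.0, 0.5]      # assumed DFT values on the support {0, 1, ..., k-1}

# Synthesize x via the inverse DFT, using only the k active frequencies.
x = [sum(coeffs[j] * cmath.exp(2j * cmath.pi * j * n / N) for j in range(k)) / N
     for n in range(N)]

# Read just k time-domain samples at I = {0, k', 2k', ...}; aliasing folds the
# spectrum, but the support {0, ..., k-1} leaves no overlaps.
x_I = [x[m * kp] for m in range(k)]
recovered = [kp * X for X in dft_naive(x_I)]   # k-point DFT, rescaled by k'
```

Only k samples of x are read, and the work is a single k-point DFT.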
To the best of our knowledge, Problem 1 has not been tackled from the computation perspective. In this context, our algorithm is deterministic, assumes the frequency support is known and spectral, and has a sample complexity of k and a computational complexity of O(k log k).

III. PRELIMINARY OBSERVATIONS
Let us start with some simple observations on Problem 1. Given an x, we often refer to x as being in the time domain and to Fx as being in the frequency domain. We denote the DFT matrix by F_N, or by F when N is apparent from the context. We also denote by x_I the vector obtained by taking only the entries of x indexed by I, and by A(I, J) the submatrix of A with rows indexed by I and columns indexed by J.

Now for x ∈ B_J, we have x = F^{-1}(Fx), where F^{-1} is the inverse DFT matrix and Fx is the DFT of x. Since Fx (the signal in the frequency domain) is nonzero only at locations in J, only the columns of F^{-1} indexed by J (in other words, only the complex exponentials with frequencies from J) play a role in the reconstruction of x. If we read only the k entries of the vector x corresponding to locations I in the time domain, we get

x_I = F^{-1}(I, J) (Fx)_J.

To find (Fx)_J (and hence Fx), we could solve the above system of equations to get

(Fx)_J = (F^{-1}(I, J))^{-1} x_I,    (1)

provided we pick I in such a way that the resulting submatrix F^{-1}(I, J) is invertible. One easy way to obtain an invertible submatrix is to pick I to be k consecutive elements of Z_N: this ensures the resulting submatrix is Vandermonde and hence invertible [19], [20]. The solution to (1) involves inverting a k × k matrix, and hence has a complexity of O(k^3) [21]. Note that this works irrespective of J, and scales only with the size of J and not the dimension N. However, the O(k^3) complexity is applicable only in the noiseless case, as the resulting Vandermonde matrix, though invertible, may be poorly conditioned [20].

As discussed in the introduction, we can consider specific frequency support sets J. For example, suppose we assume k divides N and set J = {0, 1, 2, ..., k−1}. We can then downsample the signal x in the time domain, taking samples at I = {0, k', 2k', ..., (k−1)k'}, where k' = N/k. We know from elementary Fourier analysis that the DFT undergoes aliasing [22]:

F_k x_I(m) = Σ_i F_N x(m − ik),   m = 0, 1, ..., k−1,

but since the signal in the frequency domain is limited to {0, 1, ..., k−1}, this aliasing does not lead to any overlaps. This reduces the complexity of finding Fx to the complexity of finding a k-point DFT, resulting in O(k log k) complexity. Note that this can be seen directly from (1): for the given I and J, the resulting submatrix F^{-1}(I, J) has entries (1/N) exp(2πimk'n/N) = (1/N) exp(2πimn/k) for m, n = 0, 1, ..., k−1. Thus (F^{-1}(I, J))^{-1} = k' F_k, where F_k is the k × k DFT matrix, and the O(k log k) complexity follows.

We can then ask: what structure on J enables O(k log k) computation of the DFT of signals in B_J? To start with, we can assume that it is possible to pick an I such that the submatrix F^{-1}(I, J) is unitary, and this leads us directly to the definition of spectral sets. In this case, (1) reduces to

(Fx)_J = k' F(J, I) x_I.    (2)

This assumption directly reduces the complexity to O(k^2); however, unlike in the example above, the submatrix F(J, I) may not be a k × k DFT matrix. For example, for a suitable power of 2, N, one can find a spectral set J of size k = 4 and a corresponding I for which the resulting 4 × 4 submatrix can be checked to be unitary, yet is not a k × k DFT matrix (nor can it be written as D_1 F_k D_2 for diagonal matrices D_1, D_2). Thus, the O(k log k) complexity does not follow from the already known FFT algorithms.

Also, note that, given a spectral set J, there could be many possible time-domain sample sets I that result in a unitary submatrix [11]. For one specific choice of I, we prove in Theorem 1 that the resulting submatrix F(J, I) has a block structure (similar to that of F_k) that enables O(k log k) computation (down from O(k^2)) in (2). It is not yet clear to us whether this property extends to arbitrary unitary submatrices of the DFT matrix.

IV. MAIN RESULT
In this section, we present our main result. We will discuss the structure of spectral sets, but first we note that for N a power of 2, any spectral set has size k = 2^r for some r (see Lemma 1). The following theorem is the main result of this work:

Theorem 1:
Suppose that N is a power of 2 and that J ⊆ Z_N is a spectral set. Then, under a suitable choice of indices I ⊆ Z_N and a suitable permutation of the rows and columns, the submatrix F(J, I) has the form

( I    D ) ( M  0 )
( I   −D ) ( 0  M )

where D is a diagonal matrix and M is a unitary submatrix of F of size k/2.

Recall that to solve (2) we need to perform the multiplication F(J, I)v. If T(k) is the complexity of computing F(J, I)v, then from the structure of F(J, I) in Theorem 1 we have T(k) = 2T(k/2) + O(k). Since k = 2^r is a power of 2, Theorem 1 can be recursively applied, resulting in a complexity of O(k log k). (A similar argument can be applied when J = {0, k', 2k', ..., (k−1)k'} is a periodic set.) Note that this is very similar to the block structure of the DFT matrix [22]: we may call the diagonal elements of D the twiddle factors (see Fig. 3).

To elaborate on the statement of Theorem 1, we need to first explain the structure of spectral sets. Consider writing, for each index in J, the corresponding binary digits, arrayed in rows. The columns of such an arrangement represent the bits, starting with the least significant bit on the left, and each row of such an arrangement represents an index of J (see Fig. 1).

Define the pivots of such an arrangement to be the positions which contain the first (starting from the left) difference for some pair of rows. In Fig. 1, for instance, the indices 636 and 545 differ in the digit corresponding to 2^0 = 1, whereas the indices 636 and 1020 first differ in the digit corresponding to 2^7 = 128 (and are identical before that). Similarly, considering all the other differences, we see that {1, 128} form the pivots. Finally, if we denote the pivots by L, we say a digit table is conforming if |J| = 2^{|L|}.
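The digit-table and pivot definitions above can be sketched in a few lines (the helper names are our own); for the set of Fig. 1 below, this recovers the pivots {1, 128} and confirms that the table is conforming:

```python
def digit_row(n, num_bits):
    """Binary digits of n, least significant first: one row of the digit table."""
    return [(n >> b) & 1 for b in range(num_bits)]

def pivots(S, N):
    """Place values 2^p where some pair of rows first differs, scanning left to right."""
    num_bits = (N - 1).bit_length()
    piv = set()
    for a in S:
        for b in S:
            if a != b:
                ra, rb = digit_row(a, num_bits), digit_row(b, num_bits)
                first_diff = next(p for p in range(num_bits) if ra[p] != rb[p])
                piv.add(1 << first_diff)
    return sorted(piv)

N = 1024
J = [636, 545, 1020, 161]          # the set from Fig. 1
L = pivots(J, N)                   # expected pivots: [1, 128]
conforming = len(J) == 2 ** len(L)
```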
636  : 0 0 1 1 1 1 1 0 0 1
545  : 1 0 0 0 0 1 0 0 0 1
1020 : 0 0 1 1 1 1 1 1 1 1
161  : 1 0 0 0 0 1 0 1 0 0
Fig. 1. An example digit table for J = {636, 545, 1020, 161}, N = 1024. The rows of the digit table represent the digits of an element of J. Here L = {1, 128} (the first and eighth digit columns) are the pivots. This is a conforming digit table, since |J| = 2^{|L|}. Note that the pivoted digits take all possible values 00, 01, 10, and 11.

Suppose we consider the |L|-tuple representing the pivoted digits for each index. By definition of pivots, all these tuples must be distinct. Conformity enforces that these tuples must take all the 2^{|L|} possible values. For the example J in Fig. 1, the pivot digits for 636, 545, 1020, 161 are 00, 10, 01, 11, respectively. With these definitions in place, we have the following:

Lemma 1: When N is a power of 2, the submatrix F^{-1}(I, J) is unitary iff, for some set of pivots L:
1) J corresponds to a conforming digit table with pivots L, and
2) I corresponds to a conforming digit table with pivots N/2L.

Lemma 1 is a direct consequence of earlier works [11], [12]; we discuss this further in Section V.

Suppose we start with a J that is spectral, construct its digit table, and read off the pivot columns L. By Lemma 1, this must be a conforming digit table, and we must have |J| = 2^{|L|}. Now we take L' = N/2L and construct a digit table by setting the pivoted digits to take all possible |L'|-tuples and all non-pivoted digits to zero. For example, consider the set J in Fig. 1: the pivots are L = {1, 128}; so we set L' = N/2L = {4, 512}, and construct the digit table, leading to Fig. 2. This construction ensures the resulting digit table is conforming. We then take the indices corresponding to this digit table as the time-domain sampling locations I. This is the choice of indices I referred to in the statement of Theorem 1.

Fig. 2. Construction of the time-domain samples I for the example in Fig. 1. The non-pivot digits are set to zero, and the pivot digits take all possible values.

For the structure in Theorem 1 to be realized, we also need an appropriate sorting of the indices in I and J.
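The construction of I and the unitarity claim of Lemma 1 can be checked numerically for the example of Figs. 1 and 2 (a sketch under our own naming; it builds I from L' = N/2L and examines the Gram matrix of F^{-1}(I, J)):

```python
import cmath
from itertools import product

N = 1024
J = [636, 545, 1020, 161]             # spectral set from Fig. 1, pivots L = {1, 128}
L = [1, 128]
Lp = sorted(N // (2 * d) for d in L)  # L' = N/2L = {4, 512}

# Digit table for I: pivot digits take all possible tuples, all other digits zero,
# so each index of I is a subset-sum of the pivot place values.
I = sorted(sum(d for d, bit in zip(Lp, bits) if bit)
           for bits in product((0, 1), repeat=len(Lp)))   # -> [0, 4, 512, 516]

# Lemma 1 check: F^{-1}(I, J) should be unitary up to scaling,
# i.e., its Gram matrix should be a multiple of the identity.
k = len(J)
M = [[cmath.exp(2j * cmath.pi * i * j / N) / N for j in J] for i in I]
gram = [[sum(M[m][a].conjugate() * M[m][b] for m in range(k)) for b in range(k)]
        for a in range(k)]
off_diag_zero = all(abs(gram[a][b]) < 1e-9
                    for a in range(k) for b in range(k) if a != b)
```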
The sorting of I and J is related to the pivot digits:
1) The indices of I (i.e., the columns of the submatrix F(J, I)) are sorted lexicographically based on the pivot digits, read from left to right. For example, with two pivots (as in Fig. 2), the index with pivot digits 00 comes first, followed by the indices with pivot digits 01, 10, and 11, respectively.
2) The indices of J (i.e., the rows of the submatrix F(J, I)) are sorted lexicographically based on the pivot digits, read from right to left. For example, with two pivots (as in Fig. 1), the index with pivot digits 00 comes first, followed by the indices with pivot digits 10, 01, and 11, respectively. The non-pivot digits are ignored for the purpose of sorting.

For example, if J = I = Z_N, then all the digits are pivots. In this case, the rows J are sorted in the natural order, whereas the columns I are sorted in bit-reversed order, as in the standard radix-2 FFT [22], [23]. Our algorithm can thus be seen as a generalization of the same.

With the specific choice of I, and with the sorting of the rows and columns of the submatrix F(J, I) described above, the structure in Theorem 1 applies. We defer the proof to Section V.

V. PROOFS
In this section, we prove the results presented earlier. It will be convenient to first make some definitions. Let 1_I, 1_J ∈ C^N be the indicator vectors of the sets I and J. For i ∈ Z_N, we denote by (i, N) the greatest common divisor (gcd) of i and N. Define

h_I = F^{-1} 1_I,   h_J = F^{-1} 1_J.   (3)

These are convolution idempotents (i.e., they satisfy h ∗ h = h, where ∗ is discrete circular convolution). These idempotents arise naturally when taking inner products of any two columns of F(J, I), as in the lemma below.

Lemma 2:
The submatrix F(J, I) is unitary iff h_J(i_1 − i_2) = 0 for any i_1, i_2 ∈ I (i_1 ≠ i_2).

Proof:
The inner product of any two columns indexed n_1 and n_2 is given by Σ_{m∈J} exp(2πim(n_1 − n_2)/N) = N h_J(n_1 − n_2).

So we want to construct I in such a way that the differences of any two elements of I fall in the zero set of h_J. However, the zeros of h_J are roots of a polynomial with integer coefficients, and as such they have a lot of structure. In particular, h_J(n) = 0 iff h_J(ns) = 0 for any s coprime to N. This allows us to write the zero set of h_J as {i ∈ Z_N : (i, N) ∈ D_J}, where D_J is some set of divisors of N. Thus, the zero set contains all the indices in Z_N whose gcd with N is in D_J. The proof is elementary: we direct the interested reader to the references ([24, Theorem 2.1], [9], [12], [25]) for the proof and details. We often refer to the set D_J as the zero-set divisors of h_J.

The involvement of the gcd is why digit tables are so convenient for representing spectral sets. For any two indices i_1 and i_2, the gcd (i_1 − i_2, N) corresponds to the first nonzero difference in the digits of i_1 and i_2. Thus, for any set I, the set {(i_1 − i_2, N) : i_1, i_2 ∈ I, i_1 ≠ i_2} is simply the set of pivots in the digit table for I. This is the crucial observation we use next.

A. Proof of Lemma 1
Starting with J, as in the preceding discussion, let D_J be the zero-set divisors of h_J. Then F(J, I) is unitary iff (i_1 − i_2, N) ∈ D_J for any i_1, i_2 ∈ I (from Lemma 2). From the observations made previously, this means the digit table for I must have some subset of D_J as pivots. Since distinct indices must have distinct pivot-digit tuples, the largest possible size of I (by the definition of pivots) is 2^{|D_J|}: so we have |I| ≤ 2^{|D_J|}.

The rest of the proof of Lemma 1 relies on results on idempotents from [12, Theorem 1]: an idempotent h_J has zero-set divisors D_J iff it is a concatenation of conforming digit tables with pivots L* = N/2D_J. Since there must be at least one table in this concatenation, the size of J is at least 2^{|L*|}: so we have |J| ≥ 2^{|L*|} = 2^{|D_J|}. However, |I| = |J|, and so combining the inequalities obtained so far gives |I| = |J| = 2^{|D_J|}. Consequently, the digit tables for both I and J are conforming, with pivots D_J for I and pivots N/2D_J for J.

B. Proof of Theorem 1
This proof relies heavily on the ordering of I and J introduced in Section IV. Some notation before we proceed: suppose k = 2^r, and that the pivots for I, from left to right, are d_0, d_1, ..., d_{r−1}. Then the pivots for J, from left to right, are d'_{r−1}, d'_{r−2}, ..., d'_0, where d' = N/2d (this follows from Lemma 1). The ordering of I and J splits them naturally into smaller sets, as in Fig. 4: we have I = I_0 ∪ I_1, where I_0 contains all indices with the leftmost pivot digit zero, and I_1 contains all indices with the leftmost pivot digit one. A similar split J = J_0 ∪ J_1 occurs for the row indices J, with the split based on the rightmost pivot digit. This results in

F(J, I) = ( F(J_0, I_0)  F(J_0, I_1) )
          ( F(J_1, I_0)  F(J_1, I_1) )    (4)

Fig. 3. DFT computation of a signal x with a spectral support set J of sparsity k = 8.

Fig. 4. On the left: a general representation of the conforming digit table (sorted) of the frequency support set J; here d'_0 is the largest pivot. On the right: the corresponding conforming digit table of the time-domain support set I, constructed and sorted as explained in Section IV. Here the smallest pivot is d_0 = N/2d'_0.

First, we note that both I_0 and I_1 have pivots d_1, d_2, ..., d_{r−1}, and both J_0 and J_1 have pivots d'_{r−1}, d'_{r−2}, ..., d'_1. Since all these four sets have size k/2 = 2^{|L \ {d_0}|}, it follows that all four sets correspond to conforming digit tables. From Lemma 1, it follows that all four submatrices in (4) are unitary.

We also make the following observations:
(a) All the entries of I are multiples of d_0.
The entries of I_0 are even multiples of d_0, whereas the entries of I_1 are odd multiples of d_0.
(b) The digit tables for I_0 and I_1 are identical except for the leftmost pivot (d_0): thus I_1 = d_0 + I_0.
(c) The digit tables for J_0 and J_1 are identical up to the last pivot digit: the entries in J_0 have the last pivot digit 0, whereas the entries in J_1 have the last pivot digit 1. To see this, note that, due to the proposed sorting, the i-th index in J_0 and the i-th index in J_1 have the same digits in all pivots except the last (d'_0). From the definition of pivots, this forces all the digits (including the non-pivot digits) before d'_0 to be identical.

Now, to prove that F(J, I) has the structure of Theorem 1, we show the following.

1) F(J_0, I_1) = D F(J_0, I_0). From observation (b) above, we can write

F(J_0, I_1) = (e^{-2πim(n + d_0)/N})_{m ∈ J_0, n ∈ I_0} = D F(J_0, I_0),

where D is a diagonal matrix with entries exp(−2πimd_0/N), for m ∈ J_0.

2) F(J_1, I_0) = F(J_0, I_0). We simply take the ratio of corresponding entries of these matrices and show that the ratio is 1. This ratio is of the form

e^{-2πi m_j n/N} / e^{-2πi m'_j n/N} = e^{-2πi n(m_j − m'_j)/N},

where n ∈ I_0, and m_0, m_1, m_2, ... are (in order) the entries of J_1; similarly, m'_0, m'_1, m'_2, ... are (in order) the entries of J_0. We note from observation (c) above that m_j − m'_j = α d'_0, where α is odd. Further, from observation (a) above, the index n is an even multiple of d_0. Since d_0 d'_0 = N/2, the ratio is e^{-2πi (even) d_0 α d'_0 / N} = 1.

3) F(J_1, I_1) = −F(J_0, I_1). This is similar to 2) above. Taking the ratio of corresponding entries, we again get e^{-2πi n(m_j − m'_j)/N}, where now n ∈ I_1 and the m_j, m'_j are as before. From observation (a) above, n is an odd multiple of d_0, so the ratio becomes e^{-2πi (odd) d_0 α d'_0 / N} = −1.

REFERENCES

[1] M. Heideman, D. Johnson, and C. Burrus, "Gauss and the history of the fast Fourier transform,"
IEEE ASSP Magazine, vol. 1, no. 4, pp. 14–21, 1984.
[2] J. W. Cooley and J. W. Tukey, "An algorithm for the machine calculation of complex Fourier series," Mathematics of Computation, vol. 19, no. 90, pp. 297–301, 1965.
[3] I. J. Good, "The interaction algorithm and practical Fourier analysis," Journal of the Royal Statistical Society, Series B (Methodological), pp. 361–372, 1958.
[4] C. M. Rader, "Discrete Fourier transforms when the number of data samples is prime," Proceedings of the IEEE, vol. 56, no. 6, pp. 1107–1108, 1968.
[5] G. Strang, "Wavelets," American Scientist.
[6] IEEE Annals of the History of Computing, vol. 2, no. 01, pp. 22–23, 2000.
[7] B. Fuglede, "Commuting self-adjoint partial differential operators and a group theoretic problem," Journal of Functional Analysis, vol. 16, no. 1, pp. 101–121, 1974.
[8] D. E. Dutkay and C.-K. Lai, "Some reductions of the spectral set conjecture to integers," in Mathematical Proceedings of the Cambridge Philosophical Society, vol. 156, no. 01. Cambridge Univ. Press, 2014, pp. 123–135.
[9] A. Siripuram and B. Osgood, "Lp relaxations and Fuglede's conjecture," IEEE, 2018, pp. 2525–2529.
[10] A. Fan, S. Fan, and R. Shi, "Compact open spectral sets in Q_p," Journal of Functional Analysis, vol. 271, no. 12, pp. 3628–3661, 2016.
[11] A. Siripuram, W. Wu, and B. Osgood, "Discrete sampling: A graph theoretic approach to orthogonal interpolation," IEEE Transactions on Information Theory, 2019.
[12] A. Siripuram and B. Osgood, "Convolution idempotents with a given zero-set," 2020.
[13] N. Vaswani and W. Lu, "Modified-CS: Modifying compressive sensing for problems with partially known support," IEEE Transactions on Signal Processing, vol. 58, no. 9, pp. 4595–4607, 2010.
[14] P. Indyk and M. Kapralov, "Sample-optimal Fourier sampling in any constant dimension," IEEE 55th Annual Symposium on Foundations of Computer Science, 2014.
[15] H. Hassanieh, P. Indyk, D. Katabi, and E. Price, "Nearly optimal sparse Fourier transform," pp. 563–578, 2012.
[16] A. C. Gilbert, P. Indyk, M. Iwen, and L. Schmidt, "Recent developments in the sparse Fourier transform: A compressed Fourier transform for big data," IEEE Signal Processing Magazine, vol. 31, no. 5, pp. 91–100, 2014.
[17] S. Pawar and K. Ramchandran, "A FFAST framework for computing a k-sparse DFT in O(k log k) time using sparse-graph alias codes," IEEE International Symposium on Information Theory, 2015.
[18] V. Cevher, M. Kapralov, J. Scarlett, and A. Zandieh, "An adaptive sublinear-time block sparse Fourier transform," Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pp. 702–715, 2017.
[19] D. Donoho and P. Stark, "Uncertainty principles and signal recovery," SIAM J. Appl. Math., vol. 49, no. 3, pp. 906–931, 1989.
[20] B. Osgood, A. Siripuram, and W. Wu, "Discrete sampling and interpolation: Universal sampling sets for discrete bandlimited spaces," IEEE Trans. Information Theory, vol. 58, no. 7, pp. 4176–4200, 2012.
[21] S. Boyd and L. Vandenberghe, Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares. Cambridge University Press, 2018.
[22] B. Osgood, Lectures on the Fourier Transform and Its Applications. American Mathematical Society, 2018.
[23] A. V. Oppenheim, Discrete-Time Signal Processing. Pearson Education India, 1999.
[24] R.-D. Malikiosis and M. N. Kolountzakis, "Fuglede's conjecture on cyclic groups of order p^n q," arXiv preprint arXiv:1612.01328, 2016.
[25] P. C. Reddy, A. Siripuram, and B. Osgood, "Some results on convolution idempotents," in 2020 IEEE International Symposium on Information Theory (ISIT).