Iterative reconstruction of signals on graph
Emanuele Brugnoli (Institute for Complex Systems (ISC), Rome 00185, Italy), Elena Toscano and Calogero Vetro (University of Palermo, Palermo 90123, Italy). November 21, 2019.
Abstract
We propose an iterative algorithm to interpolate graph signals from only a partial set of samples. Our method is derived from the well-known Papoulis-Gerchberg algorithm by considering the optimal value of a constant involved in the iteration step. Compared with existing graph signal reconstruction algorithms, the proposed method achieves similar or better performance both in terms of convergence rate and computational efficiency.
The ongoing wide availability of information and communication spreading on social, energy, transportation and sensor networks, among others, requires efficient approaches to process signals defined on irregular domains. To this aim, the emerging field of signal processing on graphs extends classical discrete signal processing to signals with graphs as underlying structure [1, 2, 3].

More formally, let G = (V, E) be an undirected graph with finite vertex set V = {x_i}_{i=1}^N and edge set E. For any x ∈ V, we define the degree of x to be the number of edges connected to x and denote it with d_x. If one complex number is associated with each vertex, all these numbers are collectively referred to as a graph signal. Thus, a graph signal is a mapping f : V → C, x_n ↦ f(x_n) (f(n), for short), and the complex N-dimensional space C^N represents the space of all N-dimensional graph signals.

The task of sampling and recovering smooth signals from partial observations is one of the most investigated topics in signal processing on graphs. Some works focus on the theoretical conditions for the exact reconstruction of bandlimited signals [4, 5], other works focus on techniques for an efficient sampling set selection [6, 7], and several methods have been proposed to reconstruct bandlimited graph signals from sampled data [8, 9, 10, 11, 12, 13, 14, 15]. A method derived from projection onto convex sets [16] and called Iterative Least Square Reconstruction (ILSR) is presented in [10]. A frame-based representation [17] of ILSR is given in [13] together with the definition of local sets, that is, a partition of the vertex set into disjoint neighbourhoods of the sample vertices. Based on these concepts, the authors of [13] also proposed two algorithms, namely Iterative Weighting Reconstruction (IWR) and Iterative Propagating Reconstruction (IPR), that perfectly recover ω-bandlimited signals for any ω less than or equal to a certain measure of the local sets. IWR and IPR are derived from the Adaptive Weights Method [18] and the Voronoi Method [19], respectively, and are shown to converge faster than ILSR. All the aforementioned algorithms are derived from the classical Papoulis-Gerchberg iterative scheme [20, 21] with a unitary relaxation parameter involved in the iteration step.

In this work we exploit the same scheme to interpolate bandlimited graph signals from only a partial set of samples, but we consider the optimal value of the relaxation parameter. Compared with the other methods, the proposed algorithm achieves similar or better performance both in terms of convergence rate and computational efficiency.

The paper is organized as follows. Section 2 covers some preliminaries of graph signal processing, matrix operators and general iterative schemes. In Section 3, the classical Papoulis-Gerchberg iteration is introduced together with a result that guarantees its convergence. In Section 4, an iterative reconstruction method, O-PGIR, is proposed and its convergence rate is analyzed. Section 5 presents some numerical experiments.

Let D denote the degree matrix of a graph G, which is the diagonal N × N matrix given by D = diag(d_x), and let A denote the adjacency matrix of G, also N × N, where A(i, j) = 1 if (x_i, x_j) ∈ E, and A(i, j) = 0 otherwise. Then the
Laplacian of G can be written as L = D − A. Matrix L = D − A is called (unnormalized) Laplacian to distinguish it from the normalized Laplacian D^{−1/2} L D^{−1/2}. We consider exclusively the normalized Laplacian matrix, hereafter denoted simply by L, because it is shown to produce superior classification results [23] and, under the assumption that G is undirected without self loops, it is a symmetric positive semi-definite matrix. Therefore, it has nonnegative eigenvalues {λ_k}_{k=1}^N, always in the range [0, 2], with associated orthonormal real-valued eigenvectors {ϕ_k}_{k=1}^N. Let us also denote by ϕ* the conjugate transpose of a vector ϕ, by Φ the N × N orthogonal matrix whose k-th column is ϕ_k, and by σ(L) the spectrum of L.

The graph Fourier transform (GFT) f̂ of a signal f : V → C on G is only defined on values of σ(L) as the expansion of f in terms of {ϕ_k}, that is, f̂ = Φ* f. Similarly to classical Fourier analysis, the eigenvalues {λ_k}_{k=1}^N represent frequencies of the graph, and f̂(λ_l) represents the frequency component corresponding to λ_l. Low-frequency (high-frequency) components are associated with smaller (larger) eigenvalues. The inverse graph Fourier transform is then given by f = Φ f̂. The matrices Φ* and Φ are called graph Fourier matrix and inverse graph Fourier matrix, respectively. The inverse graph Fourier transform guarantees a perfect reconstruction of f from the knowledge of f̂(λ_l) for all the eigenvalues λ_l of L. However, if we only reconstruct f from values of f̂(λ_l) with low magnitudes, we obtain an approximation to f.

If supp(f̂) ⊆ [0, ω], f is called ω-bandlimited, and the subspace PW_ω(G) of ω-bandlimited signals on G is called Paley-Wiener space. Bandlimited signals are smooth, and the smoothness increases as the bandwidth decreases. Suppose that for a graph signal f ∈ PW_ω(G), only the entries {f(u)}_{u∈S} on the sampling set S ⊆ V are known. In [4], Pesenson showed that f can be uniquely reconstructed from its entries {f(u)}_{u∈S} under certain conditions. To make practical use of Pesenson's result, the authors of [5] presented the following result to compute the maximum ω.

Proposition 2.1.
Given a graph G = (V, E) with normalized Laplacian matrix L, known sampling set S and unknown set S^c = V \ S, let (L)_{S^c} be the submatrix of L containing only the rows and columns corresponding to S^c. Then any signal f ∈ PW_ω(G) with ω ≤ σ_min, where σ_min is the smallest eigenvalue of (L)_{S^c}, can be uniquely recovered from its samples on S.

Discrete signals are processed by filters, i.e., systems that take a signal as input and produce another signal as output. We represent filtering on a graph using matrix-vector multiplication. In particular, we will consider the linear operations called sampling and bandlimiting, respectively, defined on C^N as follows. The sampling filter maps a signal into another by setting to zero a certain subset of its samples. This corresponds to multiplication by a binary diagonal matrix S called sampling matrix. The density of a sampling set is s/N, s being the number of nonzero entries in the sampling matrix.

The bandlimiting filter onto PW_ω(G) is characterized by an idempotent matrix P of the form P = Φ S Φ*, where S is a sampling matrix other than the identity I. Signals that satisfy f = P f are low-pass signals, and P is said to be a low-pass filter matrix. Hereafter, we will consider the space of all N-dimensional graph signals C^N endowed with the metric induced by the usual norm ‖·‖.

Definition 2.1.
Let ρ(A) be the spectral radius of an arbitrary matrix A, that is, the greatest of its eigenvalues in absolute value. The spectral norm of A is given by

‖A‖ = sup_{‖v‖=1} ‖Av‖ = √(ρ(A*A)).    (1)

Definition 2.2. A matrix A defined on C^N is nonexpansive if ‖Au − Av‖ ≤ ‖u − v‖ for all u, v ∈ C^N, and strictly nonexpansive if equality holds only for u = v.

Nonexpansiveness of A means ‖Av‖ ≤ ‖v‖ for any v ∈ C^N. Thus, the spectral norm of a nonexpansive matrix A cannot exceed unity. It follows from ρ(A) ≤ ‖A‖ that all the eigenvalues of A do not exceed unity (in absolute value). Therefore, a strictly nonexpansive matrix A is convergent to zero [24]. The point v ∈ C^N is a fixed point of A if Av = v.

Consider now the general linear stationary iterative algorithm of first order

v^(i+1) = A v^(i) + b,    (2)

where A is a square matrix defined on C^N and b ∈ C^N. Assuming A convergent to zero, that is ρ(A) < 1, the error vector e^(k) = v^(k) − v at iteration k is given by e^(k) = A e^(k−1), and it is inductively related to the initial error e^(0) by e^(k) = A^k e^(0). Thus

‖e^(k)‖ ≤ ‖A^k‖ ‖e^(0)‖.    (3)

Since e^(0) is unknown in practical problems, ‖A^k‖ must be studied for comparing different iterative algorithms. The simplest measure of rapidity of convergence of A is the rate of convergence R_∞(A), defined as follows [25].

Definition 2.3. Let A be an N × N complex matrix. If ‖A^k‖ < 1 for some k > 0, then

R(A^k) = − (ln ‖A^k‖) / k

is the average rate of convergence for k iterations of A.

Theorem 2.1. If A is a convergent N × N complex matrix, then R(A^k) satisfies

R_∞(A) = lim_{k→∞} R(A^k) = − ln ρ(A)    (4)

and R_∞(A) represents the rate of convergence for A.

Therefore, the lower the spectral radius of A, the faster the convergence of A.

The
Papoulis-Gerchberg Iterative Reconstruction (PGIR) [20, 21] has been extensively used to solve the missing data problem in bandlimited signals [22]. Each iteration of PGIR consists of two steps: the low-pass filtering step

p^(k) = P f^(k)    (5)

which imposes the frequency domain constraints on the data, followed by the resampling step which restores the known data

f^(k+1) = T p^(k).    (6)

The resampling operator is defined by T(·) = µ f^(1) + (I − µS)(·), where S is a sampling matrix and µ is a fixed constant called relaxation parameter. The original signal satisfies f = P f and the observed signal is f^(1) = Sf. Combining (5) and (6) we have

f^(k+1) = µSf + (I − µS)P f^(k) = T′ f^(k)    (7)

where the operator

T′(·) = µSf + (I − µS)P(·)    (8)

is nonexpansive for 0 ≤ µ ≤ 2. Indeed, by letting

A_µ = (I − µS)P    (9)

it follows from (8) that T′ is nonexpansive if and only if A_µ is. Since P is obviously nonexpansive, the nonexpansiveness of A_µ depends only on I − µS, which in turn is nonexpansive if and only if 0 ≤ µ ≤ 2. The following theorem establishes the strict nonexpansiveness of T′, and then the convergence of (7), if suitable conditions are imposed upon S, P and µ.

Theorem 3.1.
Let S be a sampling matrix with density d, and let P be an ω-bandlimiting matrix. If

0 < µ < 2 and d ≥ ω,    (10)

then T′ is strictly nonexpansive and the iteration (7) converges to the unique fixed point of T′ for arbitrary f^(1).

The proposed algorithm
Theorem 3.1 provides an iterative method for the reconstruction of bandlimited graph signals from an appropriate sampling set.
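As a concrete illustration, the basic iteration (7) can be sketched numerically. The cycle graph, bandwidth and sampling density below are illustrative assumptions, not taken from the paper; with µ = 1 this is the plain Papoulis-Gerchberg scheme.

```python
import numpy as np

# Numerical sketch of iteration (7): f(k+1) = mu*S*f + (I - mu*S) @ P @ f(k).
# The cycle graph, bandwidth and sampling density are illustrative assumptions.
rng = np.random.default_rng(0)

N = 20
A = np.zeros((N, N))
for i in range(N):                       # cycle graph: every vertex has degree 2
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0

d = A.sum(axis=1)
L = np.diag(1 / np.sqrt(d)) @ (np.diag(d) - A) @ np.diag(1 / np.sqrt(d))
lam, Phi = np.linalg.eigh(L)             # eigenvalues of L lie in [0, 2]

B = 5                                    # keep the 5 lowest graph frequencies
P = Phi[:, :B] @ Phi[:, :B].T            # low-pass filter matrix, P @ P = P

keep = rng.random(N) < 0.6               # sample roughly 60% of the vertices
S = np.diag(keep.astype(float))

f = P @ rng.standard_normal(N)           # a bandlimited signal
f1 = S @ f                               # observed samples

mu = 1.0                                 # unitary relaxation parameter
g = f1.copy()
for _ in range(500):
    g = mu * f1 + (np.eye(N) - mu * S) @ (P @ g)

err0 = np.linalg.norm(f1 - f)
err = np.linalg.norm(g - f)
print(err0, err)                         # error shrinks when rho((I - S)P) < 1
```

The loop converges because, under the conditions of Theorem 3.1, the iteration matrix (I − µS)P is strictly nonexpansive.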
Proposition 4.1.
Given a graph G = (V, E), let S be the sampling matrix associated with a sampling set S of density d, and let P be the low-pass filter matrix onto PW_ω(G) with ω ≤ σ_min as in Proposition 2.1. Then, under the assumptions (10), any f ∈ PW_ω(G) can be uniquely recovered from its entries {f(u)}_{u∈S} through the iteration (7). Moreover, by recalling (9) and letting A_1 = (I − S)P, the best convergence rate of (7) is obtained when

µ = µ_opt = 2 / (2 − ρ(A_1)).    (11)

Proof.
The first statement follows from Proposition 2.1 and Theorem 3.1. From Theorem 2.1, it follows that the best convergence rate of (7) is obtained for the value of µ which minimizes ρ(A_µ). To this aim, consider T_µ = P(I − µS)P as in [27]. By left multiplying A_µ v = λv by P, we have T_µ (Pv) = λ(Pv), and thus ρ(A_µ) ≤ ρ(T_µ). Moreover, by left multiplying T_µ v = λv by A_µ, we obtain A_µ² v = λ A_µ v. This shows that ρ(T_µ) ≤ ρ(A_µ). Combining the two results, it follows that ρ(A_µ) = ρ(T_µ). Now, by left multiplying T_1 v = λv by P, we have Pv = v and also P(I − S)Pv = v − PSPv = λv. Therefore, if v is an eigenvector of T_1 referring to λ, then v is an eigenvector of PSP referring to 1 − λ. This implies P(I − µS)Pv = v − µPSPv = v − µ(1 − λ)v, thus v is also an eigenvector of T_µ pertaining to 1 − µ(1 − λ). This shows that increasing µ towards 2 reduces the spectral radius, leading to better convergence rates, so we can assume µ > 1. Under the conditions of Theorem 3.1, it is ρ(A_1) < 1, where A_1 = (I − S)P is the matrix (9) with µ = 1, and the smallest eigenvalue of A_1 is zero. Then the optimal value of µ is obtained by minimizing, for 1 < µ < 2,

ρ(A_µ) = max { µ − 1, 1 − µ(1 − ρ(A_1)) }.

Thus, (11) follows by solving µ_opt − 1 = 1 − µ_opt (1 − ρ(A_1)).

The algorithm proposed is based on the iteration (7) with µ = µ_opt. We call it Optimal Papoulis-Gerchberg Iterative Reconstruction (O-PGIR). A pseudocode is displayed below, where σ_min is the maximal cutoff frequency defined as in Proposition 2.1.

Algorithm 1
Optimal Papoulis-Gerchberg Iterative Reconstruction (O-PGIR)
Input: Graph G, sampling set S with density d, sampled data {f(u)}_{u∈S}, cutoff frequency ω ≤ min{σ_min, d};
Output: Interpolated signal f^(k);
Initialization: A_1 = (I − S)P; µ_opt = 2/(2 − ρ(A_1)); A_{µ_opt} = (I − µ_opt S)P; f^(1) = µ_opt S f;
Loop: f^(k+1) = f^(1) + A_{µ_opt} f^(k);
Until: the stop condition is satisfied.
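A direct transcription of Algorithm 1 into code may clarify the steps. The sketch below uses dense NumPy matrices and a fixed iteration count in place of the stop condition; the random graph, bandwidth and sampling density in the usage part are illustrative assumptions, not the authors' MATLAB setup.

```python
import numpy as np

# Sketch of Algorithm 1 (O-PGIR) with dense matrices; a fixed iteration count
# stands in for the stop condition. Illustrative, not the authors' code.
def o_pgir(P, S, samples, n_iter=300):
    """Interpolate a bandlimited graph signal from its samples.

    P: low-pass filter matrix onto PW_omega(G); S: binary diagonal sampling
    matrix; samples: the observed vector S @ f.
    """
    N = P.shape[0]
    A1 = (np.eye(N) - S) @ P                       # A_1 = (I - S) P
    rho = np.max(np.abs(np.linalg.eigvals(A1)))    # spectral radius rho(A_1)
    mu_opt = 2.0 / (2.0 - rho)                     # optimal relaxation (11)
    A_mu = (np.eye(N) - mu_opt * S) @ P            # A_{mu_opt}
    f1 = mu_opt * samples                          # f(1) = mu_opt * S f
    g = f1.copy()
    for _ in range(n_iter):                        # f(k+1) = f(1) + A_mu f(k)
        g = f1 + A_mu @ g
    return g

# Usage on a small random graph (assumed parameters).
rng = np.random.default_rng(1)
N = 30
A = np.triu((rng.random((N, N)) < 0.3).astype(float), 1)
A = A + A.T                                        # undirected, no self loops
deg = np.maximum(A.sum(axis=1), 1)                 # guard isolated vertices
L = np.eye(N) - np.diag(deg**-0.5) @ A @ np.diag(deg**-0.5)
lam, Phi = np.linalg.eigh(L)                       # normalized Laplacian GFT
P = Phi[:, :5] @ Phi[:, :5].T                      # keep 5 lowest frequencies
S = np.diag((rng.random(N) < 0.7).astype(float))   # ~70% sampling density
f = P @ rng.standard_normal(N)                     # bandlimited test signal
rec = o_pgir(P, S, S @ f)
print(np.linalg.norm(rec - f))
```

Note that ρ(A_1) and A_{µ_opt} are computed once, outside the loop, which is exactly why O-PGIR (like ILSR) has a cheap per-iteration cost compared with IWR and IPR.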
An Erdős–Rényi random graph G [28] with 3000 vertices and 12000 edges and the Minnesota road graph M [29], which has 2640 vertices and 6604 edges, are chosen as underlying structures to compare the performance of the proposed O-PGIR with the existing ILSR, IWR and IPR. For the sampling set, 35% of the vertices are selected uniformly at random. The bandlimited signal f is generated by first considering a random signal and then removing its high-frequency components. The convergence curves of ILSR, IWR, IPR and the proposed O-PGIR are illustrated in Fig. 1 and Fig. 2 with respect to the number of iterations and the execution time, respectively, both on graph G (left panels) and graph M (right panels).

The simulation has been repeated for 100 randomly generated signals, and the plots represent the mean values of all the simulation results. The code was implemented in MATLAB. The plots clearly show the importance of the relaxation parameter µ in the convergence performance of (7). Indeed, O-PGIR and ILSR, which basically differ only in the value of the relaxation parameter in (7) (µ = µ_opt and µ = 1, respectively), exhibit a very different convergence speed. Moreover, O-PGIR performs well even when compared with IWR and the reference algorithm IPR, especially with respect to the execution time. This is mainly due to the fact that, at each iteration, IWR and IPR require a propagating step that increases their computational complexity, whereas in O-PGIR (and ILSR) the most expensive, time-consuming operations are performed once, outside the loop.

Figure 1: Left: graph G. Right: graph M. Comparison of the algorithms (ILSR, IPR, IWR, O-PGIR) with respect to iteration number; panels plot the relative error.

Figure 2: Left: graph G. Right: graph M. Comparison of the algorithms with respect to execution time.

Here we want to investigate the robustness of O-PGIR in the presence of noise. To this aim, we corrupt the observed signal with uncorrelated additive Gaussian noise and compare the error performance of O-PGIR with the other algorithms. Fig. 3 shows that the steady-state error decreases as the SNR increases, suggesting that all the methods have almost the same behavior against observation noise. The plots also show that the interference of noise significantly limits the performance of all the methods. Indeed, the proposed method and the compared algorithms are all derived from PGIR, which usually runs under a noise-free condition. However, some recent studies on PGIR and its generalizations show how the use of error-tolerant techniques leads to reasonable results in the presence of noise too (see [30]).
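The noise model used in this experiment can be sketched as follows; the helper name and its parameters are illustrative assumptions, showing only how observed samples are corrupted at a prescribed SNR.

```python
import numpy as np

# Sketch of the observation-noise model: uncorrelated additive Gaussian noise
# scaled to a target SNR in dB. Function name and parameters are assumptions.
def add_awgn(x, snr_db, rng):
    """Return x corrupted by white Gaussian noise at the requested SNR."""
    p_signal = np.mean(np.abs(x) ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)       # SNR = 10*log10(Ps/Pn)
    return x + np.sqrt(p_noise) * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = add_awgn(x, 30.0, rng)                          # corrupt at 30 dB SNR
snr_emp = 10 * np.log10(np.mean(x**2) / np.mean((y - x)**2))
print(round(snr_emp, 1))                            # empirical SNR, close to 30
```

Feeding such a corrupted vector in place of the clean samples S @ f reproduces the setting of Fig. 3 for any of the compared iterations.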
Figure 3: Left: graph G. Right: graph M. Robustness of the algorithms against noise with different SNR (15, 30 and 45 dB); panels plot the relative error of ILSR, IPR, IWR and O-PGIR.

In this paper, the task of bandlimited signal reconstruction on graphs from only a partial set of samples is investigated. An iterative method, called O-PGIR, is proposed to reconstruct the missing data from the observed samples. Similarly to the existing graph signal reconstruction algorithms ILSR, IWR and IPR, the proposed method is derived from the Papoulis-Gerchberg iterative scheme, but with the optimal value of the relaxation parameter involved in the iteration step. Compared with the aforementioned algorithms, our method achieves similar or better performance both in terms of convergence rate and execution time.

References

[1] A. Sandryhaila and J. M. F. Moura, “Big data analysis with signal processing on graphs,”
IEEE Sig. Proc. Mag., vol. 31, no. 5, pp. 80–90, Sep. 2014.
[2] A. Sandryhaila and J. M. F. Moura, “Discrete signal processing on graphs,” IEEE Trans. Sig. Proc., vol. 61, no. 7, pp. 1644–1656, Apr. 2013.
[3] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, “The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains,” IEEE Sig. Proc. Mag., vol. 30, no. 3, pp. 83–98, May 2013.
[4] I. Pesenson, “Sampling in Paley-Wiener spaces on combinatorial graphs,” Trans. Am. Math. Soc., vol. 360, no. 10, pp. 5603–5627, Oct. 2008.
[5] S. K. Narang, A. Gadde, E. Sanou, and A. Ortega, “Signal processing techniques for interpolation of graph structured data,” in IEEE ICASSP, Vancouver, BC, Canada, 2013, pp. 5445–5449.
[6] A. Anis, A. Gadde, and A. Ortega, “Efficient sampling set selection for bandlimited graph signals using graph spectral proxies,” IEEE Trans. Sig. Proc., vol. 64, no. 14, pp. 3775–3789, Jul. 2016.
[7] S. Chen, R. Varma, A. Sandryhaila, and J. Kovačević, “Discrete signal processing on graphs: sampling theory,” IEEE Trans. Sig. Proc., vol. 63, no. 24, pp. 6510–6523, Dec. 2015.
[8] S. Chen, A. Sandryhaila, J. M. F. Moura, and J. Kovačević, “Signal recovery on graphs: variation minimization,” IEEE Trans. Sig. Proc., vol. 63, no. 17, pp. 4609–4624, Sep. 2015.
[9] A. G. Marques, S. Segarra, G. Leus, and A. Ribeiro, “Sampling of graph signals with successive local aggregations,” IEEE Trans. Sig. Proc., vol. 64, no. 7, pp. 1832–1843, Apr. 2016.
[10] S. K. Narang, A. Gadde, E. Sanou, and A. Ortega, “Localized iterative methods for interpolation in graph structured data,” in IEEE GlobalSIP, Austin, TX, USA, 2013, pp. 491–494.
[11] S. Segarra, A. G. Marques, G. Leus, and A. Ribeiro, “Reconstruction of graph signals through percolation from seeding nodes,” IEEE Trans. Sig. Proc., vol. 64, no. 16, pp. 4363–4378, Aug. 2016.
[12] M. Tsitsvero, S. Barbarossa, and P. Di Lorenzo, “Signals on graphs: Uncertainty principle and sampling,” IEEE Trans. Sig. Proc., vol. 64, no. 18, pp. 4845–4860, Sep. 2016.
[13] X. Wang, P. Liu, and Y. Gu, “Local-set-based graph signal reconstruction,” IEEE Trans. Sig. Proc., vol. 63, no. 9, pp. 2432–2444, May 2015.
[14] X. Wang, J. Chen, and Y. Gu, “Local measurement and reconstruction for noisy bandlimited graph signals,” Sig. Proc., vol. 129, pp. 119–129, Dec. 2016.
[15] X. Zhu and M. Rabbat, “Approximating signals supported on graphs,” in IEEE ICASSP, Kyoto, Japan, 2012, pp. 3921–3924.
[16] D. C. Youla and H. Webb, “Image restoration by the method of convex projections: Part I - Theory,” IEEE Trans. Medical Imaging, vol. 1, no. 2, pp. 81–94, Oct. 1982.
[17] O. Christensen, “Frames in Hilbert spaces,” in An Introduction to Frames and Riesz Bases, Boston, MA, USA: Birkhäuser, 2003, ch. 4, sec. 5.9, pp. 117–119.
[18] H. G. Feichtinger and K. H. Gröchenig, “Theory and practice of irregular sampling,” Math. Appl., pp. 305–363, Jan. 1994.
[19] K. H. Gröchenig, “Reconstruction algorithms in irregular sampling,” Math. Comput., vol. 59, no. 199, pp. 181–194, Jul. 1992.
[20] R. W. Gerchberg, “Super-resolution through error energy reduction,” Optica Acta, vol. 21, no. 9, pp. 709–720, 1974.
[21] A. Papoulis, “A new algorithm in spectral analysis and band-limited extrapolation,” IEEE Trans. Circuits Syst., vol. 22, no. 9, pp. 735–742, Sep. 1975.
[22] F. Marvasti, M. Analoui, and M. Gamshadzahi, “Recovery of signals from nonuniform samples using iterative methods,” IEEE Trans. Sig. Proc., vol. 39, no. 4, pp. 872–878, Apr. 1991.
[23] D. Zhou and B. Schölkopf, “Learning from labeled and unlabeled data using random walks,” in C. E. Rasmussen, H. H. Bülthoff, B. Schölkopf, M. A. Giese (eds.), Pattern Recognition (DAGM), LNCS, vol. 3175, 2004, pp. 237–244.
[24] T. Lim, “Nonexpansive matrices with applications to solutions of linear systems by fixed point iterations,” Fixed Point Theory Appl., vol. 2010, no. 821928, 2010.
[25] R. S. Varga, “Basic iterative methods and comparison theorems,” in Matrix Iterative Analysis, Springer, 2000, ch. 3, pp. 63–110.
[26] P. J. S. G. Ferreira, “Interpolation and the discrete Papoulis-Gerchberg algorithm,” IEEE Trans. Sig. Proc., vol. 42, no. 10, pp. 2596–2606, Oct. 1994.
[27] P. J. S. G. Ferreira, “Iterative and noniterative recovery of missing samples for 1-D band-limited signals,” in F. Marvasti (ed.), Nonuniform Sampling: Theory and Practice, New York, NY, USA: Springer, ch. 4, pp. 235–281.
[28] P. Erdős and A. Rényi, “On the evolution of random graphs,” Publ. Math. Inst. Hungar. Acad. Sci., vol. 5, pp. 17–61, 1960.
[29] D. Gleich, The MatlabBGL Matlab Library [Online]. Available:
[30] L. Cho, C. Chen, C. Lin, and C. Hsu, “The convergence analysis of extended Papoulis-Gerchberg algorithm on AWGN-smeared signals,” in 2018 IEEE International Conference on Applied System Invention (ICASI).