Sampling and Galerkin reconstruction in reproducing kernel spaces
CHENG CHENG, YINGCHUN JIANG, AND QIYU SUN
Abstract.
In this paper, we consider sampling in a reproducing kernel subspace of $L^p$. We introduce a pre-reconstruction operator associated with a sampling scheme and propose a Galerkin reconstruction in a general Banach space setting. We show that the proposed Galerkin method provides a quasi-optimal approximation, and that the corresponding Galerkin equations can be solved by an iterative approximation-projection algorithm. We also present a detailed analysis and numerical simulations of the Galerkin method for reconstructing signals with finite rate of innovation.

1. Introduction
The celebrated Whittaker-Shannon-Kotelnikov sampling theorem states that a bandlimited signal can be recovered from its samples taken at a rate greater than twice the bandwidth [28, 39]. In the last two decades, that paradigm has been extended to represent signals in a shift-invariant space [5, 7, 37], signals with finite rate of innovation [11, 24, 27, 32, 33, 38], and signals in a reproducing kernel space [10, 15, 20, 25, 26].

In this paper, we consider signals living in a reproducing kernel space (RKS) of the form
\[(1.1)\quad V_{K,p} := \{Tf : f \in L^p\} = \{f \in L^p : Tf = f\}, \quad 1 \le p \le \infty,\]
where $T$ is an idempotent integral operator with kernel $K$,
\[(1.2)\quad Tf(x) := \int_{\mathbb{R}^d} K(x,y) f(y)\, dy, \quad f \in L^p.\]
The RKS has rich geometric structure, great flexibility, and technical suitability for sampling. It has been used for modeling bandlimited signals, wavelet (spline) signals, and signals with finite rate of innovation [5, 25, 26, 32, 37].
Mathematics Subject Classification.
Key words and phrases. Sampling, Galerkin reconstruction, oblique projection, reproducing kernel space, finite rate of innovation, iterative approximation-projection algorithm.
Take a (finite) sampling set $\Gamma$ and consider the sampling scheme
\[f \longmapsto \{f(\gamma_n),\ \gamma_n \in \Gamma\}, \quad f \in V_{K,p}.\]
We are interested in finding a quasi-optimal linear approximation $Rf$, depending completely on the sampling data, in a reconstruction space $U$ for a signal $f \in V_{K,p}$,
\[\|Rf - f\|_p \le C \inf_{h \in U} \|f - h\|_p, \quad f \in V_{K,p}.\]
In this paper, we focus on pre-reconstruction operators
\[(1.3)\quad S_{\Gamma,\delta} f(x) := \sum_{\gamma_n \in \Gamma} |I_n|\, f(\gamma_n)\, K(x, \gamma_n), \quad f \in V_{K,p},\]
where $\delta > 0$ and $\{I_n \subset B(\gamma_n, \delta) : \gamma_n \in \Gamma\}$ is a disjoint covering of
\[B(\Gamma, \delta) := \cup_{\gamma \in \Gamma} B(\gamma, \delta) = \cup_{\gamma \in \Gamma} \{x : |x - \gamma| \le \delta\}.\]
Our crucial observation is that $S_{\Gamma,\delta} f(x)$ is a good approximation to $f(x)$ when $\delta$ is sufficiently small and $x \in B(\Gamma, \delta)$ is far away from the complement of $B(\Gamma, \delta)$; see Figure 3 in Section 5.

Associated with the pre-reconstruction operator $S_{\Gamma,\delta}$, we introduce the Galerkin method
\[(1.4)\quad \langle S_{\Gamma,\delta} Rf, g\rangle = \langle S_{\Gamma,\delta} f, g\rangle, \quad g \in \tilde U \subset L^{p/(p-1)},\]
to define a quasi-optimal linear approximation $Rf$ in the reconstruction space $U$, where $\langle\cdot,\cdot\rangle$ is the standard dual product between $L^p$ and $L^{p/(p-1)}$. We recognize that the Galerkin equation (1.4) can be solved by an iterative approximation-projection algorithm:
\[(1.5)\quad g_0 \in U \quad \text{and} \quad g_{m+1} = g_m - P_{U,\tilde U} S_{\Gamma,\delta}\, g_m + g_0, \quad m \ge 0,\]
where $P_{U,\tilde U}$ is an oblique projection for the trial-test space pair $(U, \tilde U)$; cf. [4, 6, 13, 25, 35].

This paper is organized as follows. In Section 2, we introduce the concept of admissibility of pre-reconstruction operators in a Banach space setting. We show that a (sub-)Galerkin reconstruction provides a quasi-optimal approximation (Theorem 2.3), and that such a (sub-)Galerkin reconstruction exists whenever the trial and test spaces are finite-dimensional (Theorem 2.4, Corollaries 2.5 and 2.6).
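As a quick illustration of the pre-reconstruction operator (1.3), the following sketch evaluates $S_{\Gamma,\delta}f$ on a grid. The Gaussian kernel, the jitter model, and all parameters here are illustrative stand-ins (the theory requires $K$ to be the kernel of an idempotent operator satisfying (3.1)-(3.2)); only the structure of the weighted sum is taken from (1.3).

```python
import numpy as np

def pre_reconstruction(K, f_samples, gamma, weights, x):
    """S_{Gamma,delta} f(x) = sum_n |I_n| f(gamma_n) K(x, gamma_n), cf. (1.3)."""
    out = np.zeros_like(x, dtype=float)
    for fn, gn, wn in zip(f_samples, gamma, weights):
        out += wn * fn * K(x, gn)
    return out

# Illustrative 1-D setup: a Gaussian stand-in kernel, jittered sampling
# points on [0, 10], and local gaps standing in for the measures |I_n|.
rng = np.random.default_rng(0)
K = lambda x, y: np.exp(-np.pi * (x - y) ** 2)
gamma = np.sort(np.arange(0.5, 10.0, 0.5) + 0.05 * rng.standard_normal(19))
weights = np.gradient(gamma)          # |I_n| ~ (gamma_{n+1} - gamma_{n-1}) / 2
f = lambda t: np.cos(0.4 * np.pi * t)
x = np.linspace(0.0, 10.0, 201)
approx = pre_reconstruction(K, f(gamma), gamma, weights, x)
```

Here `np.gradient(gamma)` assigns each sampling point the half-gap to its neighbors as the weight $|I_n|$, one natural choice of disjoint covering in one dimension.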
In Section 3, we discuss admissibility of the pre-reconstruction operator $S_{\Gamma,\delta}$ in (1.3) (Theorem 3.1). In that section, we also propose to use the iterative approximation-projection algorithm (1.5) to solve the Galerkin equation (1.4) (Theorem 3.6 and Lemma 3.7). Many signals with finite rate of innovation live in reproducing kernel spaces of the form (1.1). In Section 4, we provide a detailed analysis of pre-reconstruction operators, and we obtain a matrix formulation of Galerkin reconstructions for signals with finite rate of innovation. In the last section, we present numerical simulations to demonstrate our Galerkin method.
2. Sub-Galerkin reconstruction in Banach spaces
In this section, we consider numerical stability and quasi-optimality of a (sub-)Galerkin reconstruction in a Banach space setting. Denote by $\langle\cdot,\cdot\rangle$ the action between elements in a Banach space $B$ and its dual space $B^*$. First we introduce admissibility of operators for the trial-test space pair.

Definition 2.1.
Let $(U, V, B)$ be a triple of Banach spaces with $U \subset V \subset B$, and let $\tilde U \subset B^*$. We say that a bounded linear operator $S : V \to V$ is admissible for the trial-test space pair $(U, \tilde U)$ if there exist positive constants $D_1$ and $D_2$ such that
\[(2.1)\quad \sup_{g \in \tilde U,\, \|g\| \le 1} |\langle Sf, g\rangle| \ge D_1 \|f\| \quad \text{for all } f \in U,\]
and
\[(2.2)\quad \sup_{g \in \tilde U,\, \|g\| \le 1} |\langle Sf, g\rangle| \le D_2 \|f\| \quad \text{for all } f \in V.\]

An admissible operator $S$ for the trial-test space pair $(U, \tilde U)$ is bounded below on $U$,
\[\|Sf\| \ge D_1 \|f\|, \quad f \in U.\]
The performance of our proposed (sub-)Galerkin reconstruction depends on the test space $\tilde U$, particularly on the ratio between the bounds $D_1$ and $D_2$ in (2.1) and (2.2); see Theorem 2.3. In our model for sampling, $S$ is the pre-reconstruction operator $S_{\Gamma,\delta}$ in (1.3), and the triple of Banach spaces contains the reconstruction space $U$, the reproducing kernel space $V_{K,p}$, and the space $L^p$. Next we introduce a general notion of Galerkin reconstruction.

Definition 2.2.
Let $S : V \to V$ be a bounded linear operator, and let $(U, \tilde U)$ be a trial-test space pair. We say that a linear operator $R : V \to U$ is a Galerkin reconstruction if
\[(2.3)\quad Rh = h, \quad h \in U,\]
and
\[(2.4)\quad \langle SRf, g\rangle = \langle Sf, g\rangle, \quad f \in V \text{ and } g \in \tilde U;\]
and a sub-Galerkin reconstruction if (2.3) holds and
\[(2.5)\quad \sup_{g \in \tilde U,\, \|g\| \le 1} |\langle SRf, g\rangle| \le D_3 \sup_{g \in \tilde U,\, \|g\| \le 1} |\langle Sf, g\rangle|, \quad f \in V,\]
for some $D_3 > 0$.

In the following theorem, we establish numerical stability and quasi-optimality of (sub-)Galerkin reconstructions associated with admissible operators.
Theorem 2.3.
Let
$V, U, \tilde U$ be as in Definition 2.1, and let $S$ be admissible for the pair $(U, \tilde U)$ with bounds $D_1$ and $D_2$. If $R : V \to U$ is a sub-Galerkin reconstruction with bound $D_3$, then
(i) $R$ is numerically stable,
\[\|Rf\| \le \frac{D_2 D_3}{D_1} \|f\|, \quad f \in V.\]
(ii) $R$ is quasi-optimal,
\[\|Rf - f\| \le \frac{D_1 + D_2 D_3}{D_1} \inf_{h \in U} \|f - h\|, \quad f \in V.\]

Proof. (i) For $f \in V$, we obtain from (2.1), (2.2) and (2.5) that
\[D_1 \|Rf\| \le \sup_{g \in \tilde U,\, \|g\| \le 1} |\langle SRf, g\rangle| \le D_3 \sup_{g \in \tilde U,\, \|g\| \le 1} |\langle Sf, g\rangle| \le D_3 D_2 \|f\|.\]
This proves the numerical stability of the reconstruction operator $R$.

(ii) For $f \in V$ and $h \in U$,
\[\|f - Rf\| \le \|f - h\| + \|h - Rf\| = \|f - h\| + \|R(f - h)\| \le \frac{D_1 + D_2 D_3}{D_1} \|f - h\|,\]
where we have used the facts that $R$ is a sub-Galerkin reconstruction and is numerically stable. The quasi-optimality of the reconstruction operator $R$ then follows by taking the infimum over $h \in U$. $\Box$

By Theorem 2.3, the existence of a quasi-optimal approximation reduces to finding a sub-Galerkin reconstruction. Now we show that such a sub-Galerkin reconstruction always exists when $U$ and $\tilde U$ are finite-dimensional.

Theorem 2.4.
Let
$V, U, \tilde U$ be as in Definition 2.1, and let $S$ be admissible for the pair $(U, \tilde U)$. If $U$ and $\tilde U$ are finite-dimensional, then there is a sub-Galerkin reconstruction.
Proof.
Let $\{f_i\}_{i=1}^m$ and $\{g_i\}_{i=1}^n$ be bases of $U$ and $\tilde U$ respectively. By the admissibility of $S$, we may assume that $B := (\langle Sf_i, g_j\rangle)_{1 \le i,j \le m}$ is nonsingular. Write $B^{-1} = (b_{ij})$ and define the linear operator $R$ by
\[Rf := \sum_{i,j=1}^{m} \langle Sf, g_i\rangle\, b_{ij}\, f_j, \quad f \in V.\]
Obviously, $R$ satisfies (2.3). It remains to show that $R$ satisfies (2.5).

Let $\tilde U^*$ be the space spanned by $\{g_j\}_{j=1}^m$. One may verify that $Rf$ solves the Galerkin equations
\[(2.6)\quad \langle SRf, g\rangle = \langle Sf, g\rangle, \quad g \in \tilde U^*,\]
for any $f \in V$, and that
\[(2.7)\quad C \|h\| \le \sup_{g \in \tilde U^*,\, \|g\| \le 1} |\langle Sh, g\rangle|, \quad h \in U,\]
for some positive constant $C$. Therefore
\[\sup_{g \in \tilde U,\, \|g\| \le 1} |\langle SRf, g\rangle| \le D_2 \|Rf\| \le D_2 C^{-1} \sup_{g \in \tilde U^*,\, \|g\| \le 1} |\langle SRf, g\rangle| = D_2 C^{-1} \sup_{g \in \tilde U^*,\, \|g\| \le 1} |\langle Sf, g\rangle| \le D_2 C^{-1} \sup_{g \in \tilde U,\, \|g\| \le 1} |\langle Sf, g\rangle|, \quad f \in V,\]
by (2.6), (2.7) and the admissibility of $S$. $\Box$

For the case that $U$ and $\tilde U$ have the same dimension, we have

Corollary 2.5.
Let
$V, U, \tilde U$ be as in Definition 2.1, and let $S$ be admissible for the pair $(U, \tilde U)$. If the dimensions of $U$ and $\tilde U$ are the same, then for $f \in V$, the unique solution of the Galerkin equations
\[(2.8)\quad \langle SRf, g\rangle = \langle Sf, g\rangle, \quad g \in \tilde U,\]
defines a Galerkin reconstruction.
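In finite dimensions, the construction behind Theorem 2.4 and Corollary 2.5 reduces to one matrix inversion: assemble $B = (\langle Sf_i, g_j\rangle)$ and read off the trial-space coefficients of $Rf$ from $(\langle Sf, g_i\rangle)_i\, B^{-1}$. The sketch below uses illustrative trial/test bases and a smoothing stand-in for $S$ (in our sampling model $S$ would be $S_{\Gamma,\delta}$), with inner products evaluated as Riemann sums on a grid.

```python
import numpy as np

# Quadrature grid standing in for the continuum; all inner products below
# are Riemann sums <u, v> ~ sum u(t) v(t) dt.
t = np.linspace(-5, 5, 2001)
dt = t[1] - t[0]

# Illustrative trial basis (Gaussian bumps), test basis (indicator bumps),
# and a stand-in operator S given by a narrow Gaussian smoothing.
trial = [np.exp(-(t - k) ** 2) for k in range(-2, 3)]
test = [(np.abs(t - k) <= 0.5).astype(float) for k in range(-2, 3)]
blur = np.exp(-((np.arange(-50, 51) * dt) ** 2) / 0.02)
blur /= blur.sum()
S = lambda u: np.convolve(u, blur, mode="same")

# B = (<S f_i, g_j>) as in the proof of Theorem 2.4; the coefficients of
# R f in the trial basis are (<S f, g_i>)_i B^{-1}.
B = np.array([[np.sum(S(fi) * gj) * dt for gj in test] for fi in trial])
coeffs = lambda f_vals: np.array(
    [np.sum(S(f_vals) * gj) * dt for gj in test]) @ np.linalg.inv(B)

# Consistency check (2.3): R h = h for h in the trial space.
h = trial[2] + 0.5 * trial[0]
print(coeffs(h))  # ~ [0.5, 0, 1, 0, 0]
```

The check confirms that $R$ reproduces trial-space elements exactly, which is property (2.3); admissibility in this discrete model amounts to $B$ being well conditioned.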
In the Hilbert space setting, we can establish the following result for least squares solutions.
Corollary 2.6.
Let $V$ be a Hilbert space, let $U$ and $\tilde U$ be linear subspaces of $V$, and let $S$ be admissible for the pair $(U, \tilde U)$. If $U$ and $\tilde U$ are finite-dimensional, then the least squares solution of the Galerkin equations (2.8),
\[Rf := \operatorname*{argmin}_{h \in U}\ \sup_{g \in \tilde U,\, \|g\| \le 1} |\langle S(h - f), g\rangle|, \quad f \in V,\]
defines a sub-Galerkin reconstruction with bound $D_3 \le 1$.

The above conclusion on least squares solutions with $\tilde U = U$ has been established by Adcock, Gataric and Hansen for non-uniform sampling [1, 2].

3. Sampling and Reconstruction in $V_{K,p}$
To consider sampling and reconstruction in $V_{K,p}$, we always assume that the kernel $K$ of the space $V_{K,p}$ in (1.1) satisfies
\[(3.1)\quad \|K\|_W := \max\Big\{\sup_{x \in \mathbb{R}^d} \|K(x, \cdot)\|,\ \sup_{y \in \mathbb{R}^d} \|K(\cdot, y)\|\Big\} < \infty\]
and
\[(3.2)\quad \lim_{\delta \to 0} \|\omega_\delta(K)\|_W = 0,\]
where
\[\omega_\delta(K)(x, y) := \sup_{|x'|, |y'| \le \delta} |K(x + x', y + y') - K(x, y)|.\]
Under the above hypothesis, the integral operator $T$ in (1.2) is a bounded operator on $L^p$,
\[\|Tf\|_p \le \|K\|_W \|f\|_p, \quad f \in L^p.\]
More importantly, its range space $V_{K,p}$ is a reproducing kernel space [25]. In this section, we consider admissibility of the pre-reconstruction operator $S_{\Gamma,\delta}$ in (1.3) and the unique Galerkin reconstruction associated with it.

3.1. Admissibility, stability and samplability.
To discuss the admissibility, we introduce the residue $E(U, F)$ of signals in a linear space $U \subset L^p$ outside a measurable set $F$,
\[E(U, F) := \sup_{0 \ne f \in U} \frac{\|f\|_{L^p(\mathbb{R}^d \setminus F)}}{\|f\|_p},\]
where $\|\cdot\|_{L^p(E)}$ is the $p$-norm on a measurable set $E$. The reader may refer to [1, 21, 22] for some applications of residues of bandlimited signals.

Theorem 3.1.
Let $V_{K,p}$ and $S_{\Gamma,\delta}$ be as in (1.1) and (1.3) respectively. Assume that $U \subset V_{K,p}$ and $\tilde U \subset L^{p/(p-1)}$. If
\[(3.3)\quad \sup_{g \in \tilde U,\, \|g\|_{p/(p-1)} \le 1} |\langle f, g\rangle| \ge D \|f\|_p, \quad f \in U,\]
for some constant $D$ satisfying
\[(3.4)\quad r := D^{-1}\Big(E(U, B(\Gamma,\delta))\, \|K\|_W + \|\omega_\delta(K)\|_W \big(\|K\|_W + \|\omega_\delta(K)\|_W\big)\Big) < 1,\]
then $S_{\Gamma,\delta}$ is admissible for the pair $(U, \tilde U)$.

Given a sampling set $\Gamma$, we say that the sampling scheme
\[(3.5)\quad U \ni f \longmapsto \{f(\gamma_n),\ \gamma_n \in \Gamma\}\]
has weighted $\ell^p$-stability on $U$ if there exist positive constants $C_1$, $C_2$ and $\delta$ such that
\[C_1 \|f\|_p \le \Big(\sum_{\gamma_n \in \Gamma} |I_n|\, |f(\gamma_n)|^p\Big)^{1/p} \le C_2 \|f\|_p, \quad f \in U,\]
if $1 \le p < \infty$, and
\[C_1 \|f\|_\infty \le \sup_{\gamma_n \in \Gamma} |f(\gamma_n)| \le C_2 \|f\|_\infty, \quad f \in U,\]
if $p = \infty$, where $\{I_n \subset B(\gamma_n, \delta),\ \gamma_n \in \Gamma\}$ is a disjoint covering of the $\delta$-neighborhood $B(\Gamma, \delta)$ of the sampling set $\Gamma$. Weighted stability of a sampling scheme implies unique determination. It is an important concept for robust signal reconstruction; see [5, 6, 9, 12, 25, 33, 34, 35, 37] and references therein. The following result connects the weighted $\ell^p$-stability of a sampling scheme with the admissibility of a pre-reconstruction operator.

Theorem 3.2.
Let $V_{K,p}$ and $S_{\Gamma,\delta}$ be as in (1.1) and (1.3) respectively. Assume that $U \subset V_{K,p}$ and $\tilde U \subset L^{p/(p-1)}$. If $S_{\Gamma,\delta}$ is admissible for the pair $(U, \tilde U)$, then the sampling scheme (3.5) on $\Gamma$ has weighted $\ell^p$-stability on $U$.

By the regularity assumption (3.2) on the reproducing kernel $K$, the second requirement (3.4) in Theorem 3.1 is satisfied if $\delta$ is sufficiently small and $B(\Gamma, \delta)$ is the whole Euclidean space $\mathbb{R}^d$. For the case that $B(\Gamma, \delta)$ contains an open domain $F$ but not necessarily the whole space $\mathbb{R}^d$, we obtain the following samplability result from Theorems 3.1 and 3.2.

Corollary 3.3.
Let $U \subset V_{K,p}$ and $D$ be as in Theorem 3.1. Assume that $F$ is an open domain satisfying $E(U, F)\, \|K\|_W < D$. If $\Gamma$ is a sampling set with $B(\Gamma, \delta) \supset F$ for some sufficiently small $\delta > 0$, then signals in $U$ are uniquely determined by their samples taken on $\Gamma$.

The samplability of various signals is well studied; see, e.g., [2, 13, 19] for band-limited signals, [5, 37] for signals in a shift-invariant space,
[32, 33] for signals with finite rate of innovation, and [20, 25] for signals in a reproducing kernel space.

To prove Theorem 3.1, we need the following lemma.
Lemma 3.4.
Let $V_{K,p}$ and $S_{\Gamma,\delta}$ be as in (1.1) and (1.3) respectively. Then
\[\|S_{\Gamma,\delta} f\|_p \le \big(\|K\|_W + \|\omega_\delta(K)\|_W\big)\big(1 + \|\omega_\delta(K)\|_W\big) \|f\|_p, \quad f \in V_{K,p}.\]

Proof.
Let $\{I_n\}$ be the disjoint covering of $B(\Gamma, \delta)$ in (1.3). For $f \in V_{K,p}$, write
\begin{align*}
S_{\Gamma,\delta} f(x) &= \sum_n \int_{I_n} \int_{\mathbb{R}^d} K(x, \gamma_n) K(\gamma_n, z) f(z)\, dz\, dy \\
&= \sum_n \int_{I_n} \int_{\mathbb{R}^d} \Big\{ K(x,y) K(y,z) + \big(K(x,\gamma_n) - K(x,y)\big) K(y,z) + K(x,y)\big(K(\gamma_n,z) - K(y,z)\big) \\
&\qquad\qquad + \big(K(x,\gamma_n) - K(x,y)\big)\big(K(\gamma_n,z) - K(y,z)\big) \Big\} f(z)\, dz\, dy \\
&=: I + II + III + IV. \tag{3.6}
\end{align*}
Observe that
\[\|I\|_p = \Big\| \int_{B(\Gamma,\delta)} K(\cdot, y) f(y)\, dy \Big\|_p \le \|K\|_W \|f\|_p,\]
\[\|II\|_p \le \Big\| \int_{\mathbb{R}^d} \omega_\delta(K)(\cdot, y)\, |f(y)|\, dy \Big\|_p \le \|\omega_\delta(K)\|_W \|f\|_p,\]
\[\|III\|_p \le \Big\| \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} |K(\cdot, y)|\, \omega_\delta(K)(y, z)\, |f(z)|\, dz\, dy \Big\|_p \le \|K\|_W \|\omega_\delta(K)\|_W \|f\|_p,\]
and
\[\|IV\|_p \le \Big\| \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} \omega_\delta(K)(\cdot, y)\, \omega_\delta(K)(y, z)\, |f(z)|\, dz\, dy \Big\|_p \le \|\omega_\delta(K)\|_W^2 \|f\|_p.\]
Combining the above four estimates with (3.6) completes the proof. $\Box$
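The weighted $\ell^p$-stability in (3.5), which Theorem 3.2 derives from the estimates above, can be probed numerically. The following sketch does this for $p = 2$ with a bandlimited test signal and dense jittered samples; the signal model, the sampling density, and the jitter level are all illustrative choices, not quantities from the paper.

```python
import numpy as np

# Empirical check of weighted l^2-stability (3.5): for a bandlimited test
# signal and dense jittered samples, the weighted sample norm
# (sum_n |I_n| |f(gamma_n)|^2)^(1/2) should be comparable to ||f||_2.
rng = np.random.default_rng(2)
ks = np.arange(-20, 21)
c = rng.standard_normal(ks.size)
f = lambda s: np.sinc(s[:, None] - ks[None, :]) @ c   # bandlimited signal

t = np.linspace(-40, 40, 16001)                       # quadrature grid for ||f||_2
dt = t[1] - t[0]
gamma = np.sort(np.arange(-40, 40, 0.25) + 0.05 * (2 * rng.random(320) - 1))
weights = np.gradient(gamma)                          # |I_n| ~ local gap size

l2_norm = np.sqrt(np.sum(f(t) ** 2) * dt)
sample_norm = np.sqrt(np.sum(weights * f(gamma) ** 2))
ratio = sample_norm / l2_norm
print(ratio)   # comparable to 1 at this oversampling rate
```

At four samples per unit length, the weighted sample norm is essentially a Riemann sum of $|f|^2$, so the two-sided bounds $C_1, C_2$ are close to $1$; sparser or more irregular sampling would widen them.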
We finish this subsection with proofs of Theorems 3.1 and 3.2.
Proof of Theorem 3.1.
The upper bound estimate (2.2) for the operator $S_{\Gamma,\delta}$ follows immediately from Lemma 3.4. Define
\[T^* g(x) := \int_{\mathbb{R}^d} K(y, x)\, g(y)\, dy, \quad g \in L^{p/(p-1)}.\]
For $f \in U$ and $g \in \tilde U \subset L^{p/(p-1)}$ with $\|g\|_{p/(p-1)} \le 1$, we obtain
\begin{align*}
|\langle S_{\Gamma,\delta} f, g\rangle - \langle f, g\rangle| &\le \Big| \int_{\mathbb{R}^d \setminus B(\Gamma,\delta)} f(x)\, T^* g(x)\, dx \Big| + \Big| \sum_n \int_{I_n} f(\gamma_n)(T^* g)(\gamma_n) - f(x)(T^* g)(x)\, dx \Big| \\
&\le \|K\|_W \|f\|_{L^p(\mathbb{R}^d \setminus B(\Gamma,\delta))} + \|\omega_\delta(K)\|_W \big(\|K\|_W + \|\omega_\delta(K)\|_W\big) \|f\|_p, \tag{3.7}
\end{align*}
where $\{I_n\}$ is the disjoint covering of $B(\Gamma, \delta)$ in (1.3). This together with (3.3) and (3.4) proves the lower bound estimate (2.1) for the operator $S_{\Gamma,\delta}$. $\Box$

Proof of Theorem 3.2.
Take $f \in U$. Following the argument used in Lemma 3.4, we obtain
\[\big(\|K\|_W + \|\omega_\delta(K)\|_W\big)^{-1} \|S_{\Gamma,\delta} f\|_p \le \Big(\sum_n |I_n|\, |f(\gamma_n)|^p\Big)^{1/p} \le \big(1 + \|\omega_\delta(K)\|_W\big) \|f\|_p\]
for $1 \le p < \infty$, and
\[\big(\|K\|_W + \|\omega_\delta(K)\|_W\big)^{-1} \|S_{\Gamma,\delta} f\|_\infty \le \sup_n |f(\gamma_n)| \le \|f\|_\infty\]
for $p = \infty$. The above two estimates together with the admissibility of the operator $S_{\Gamma,\delta}$ complete the proof. $\Box$

3.2. Galerkin reconstruction.
To consider the Galerkin reconstruction associated with the operator $S_{\Gamma,\delta}$ on the reproducing kernel space $V_{K,p}$, we introduce the oblique projection for a pair $(U, \tilde U)$ of Banach spaces.

Definition 3.5.
Given $U \subset V_{K,p}$ and $\tilde U \subset L^{p/(p-1)}$, a bounded operator $P_{U,\tilde U} : V_{K,p} \to U$ is said to be an oblique projection for the pair $(U, \tilde U)$ if
\[(3.8)\quad P_{U,\tilde U}\, h = h, \quad h \in U,\]
and
\[(3.9)\quad \langle P_{U,\tilde U} f, g\rangle = \langle f, g\rangle, \quad f \in V_{K,p},\ g \in \tilde U.\]
In the Hilbert space setting, an oblique projection $P_{U,\tilde U}$ exists when the cosine of the subspace angle between $U$ and $\tilde U^\perp$ is positive [3, 9, 12, 36]. Following the argument used in Theorem 2.4, we can show that if $U$ and $\tilde U$ have the same dimension and satisfy the first requirement (3.3) of Theorem 3.1, then there is an oblique projection $P_{U,\tilde U}$ for the pair $(U, \tilde U)$.

Theorem 3.6.
Let $V_{K,p}$ and $S_{\Gamma,\delta}$ be as in (1.1) and (1.3) respectively. Assume that $U \subset V_{K,p}$ and $\tilde U \subset L^{p/(p-1)}$ satisfy (3.3) and (3.4), and that an oblique projection $P_{U,\tilde U}$ associated with the pair $(U, \tilde U)$ exists. Then the Galerkin equations
\[(3.10)\quad \langle S_{\Gamma,\delta} h, g\rangle = \langle S_{\Gamma,\delta} f, g\rangle, \quad g \in \tilde U,\]
have a unique solution $h \in U$ for every $f \in V_{K,p}$. Moreover, the mapping $f \to h$ defines a Galerkin reconstruction.

To solve the Galerkin equations (3.10), we need exponential convergence of the iterative approximation-projection algorithm (1.5). The algorithm (1.5) has been demonstrated to be efficient for reconstructing various signals. The reader may refer to [13, 35] for band-limited signals, [4, 6] for signals in a shift-invariant space, and [25] for signals in a reproducing kernel space.
Lemma 3.7.
Let $V_{K,p}$, $S_{\Gamma,\delta}$, $U$, $\tilde U$ and $P_{U,\tilde U}$ be as in Theorem 3.6, and let $r \in (0, 1)$ be as in (3.4). Then for any $g_0 \in U$, the sequence $g_m$, $m \ge 0$, in the iterative algorithm (1.5) converges to some $g_\infty \in U$,
\[(3.11)\quad \|g_m - g_\infty\|_p \le \frac{r^{m+1}}{1 - r}\, \|g_0\|_p, \quad m \ge 0.\]
Moreover, if $g_0 = P_{U,\tilde U} S_{\Gamma,\delta}\, h + \tilde g$ for some $h, \tilde g \in U$, then
\[(3.12)\quad \|g_\infty - h\|_p \le \frac{\|\tilde g\|_p}{1 - r}.\]

Proof.
Combining (3.3), (3.7) and (3.9), we obtain
\begin{align*}
\|P_{U,\tilde U} S_{\Gamma,\delta} f - f\|_p &\le D^{-1} \sup_{g \in \tilde U,\, \|g\|_{p/(p-1)} \le 1} |\langle P_{U,\tilde U} S_{\Gamma,\delta} f - f, g\rangle| \\
&= D^{-1} \sup_{g \in \tilde U,\, \|g\|_{p/(p-1)} \le 1} |\langle S_{\Gamma,\delta} f - f, g\rangle| \le r \|f\|_p, \quad f \in U. \tag{3.13}
\end{align*}
Observe from (1.5) that
\[g_{m+1} - g_m = (I - P_{U,\tilde U} S_{\Gamma,\delta})(g_m - g_{m-1}), \quad m \ge 1.\]
This together with (3.13) proves (3.11).

Now we prove (3.12). Taking the limit in (1.5) leads to the following consistency condition
\[(3.14)\quad P_{U,\tilde U} S_{\Gamma,\delta}\, g_\infty = g_0.\]
Replacing $g_0$ in (3.14) by $P_{U,\tilde U} S_{\Gamma,\delta}\, h + \tilde g$ gives
\[P_{U,\tilde U} S_{\Gamma,\delta}(g_\infty - h) = \tilde g.\]
This together with (3.13) completes the proof. $\Box$
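The contraction mechanism behind Lemma 3.7 can be seen in a finite-dimensional surrogate: on coefficient vectors, the composition $P_{U,\tilde U} S_{\Gamma,\delta}$ acts as a matrix $M$ with $\|I - M\| = r < 1$, so the iteration contracts geometrically toward the fixed point of (3.14). The matrix below is a random stand-in; its size, perturbation scale, and seed are arbitrary.

```python
import numpy as np

# Finite-dimensional surrogate for (1.5): M stands in for the action of
# P_{U,U~} S_{Gamma,delta} on coefficient vectors, with ||I - M|| = r < 1
# (cf. (3.13)), so g_{m+1} = g_m - M g_m + g_0 contracts geometrically.
rng = np.random.default_rng(0)
n = 21
M = np.eye(n) + 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)
r = np.linalg.norm(np.eye(n) - M, 2)
assert r < 1                          # contraction condition, cf. (3.4)

g0 = rng.standard_normal(n)
g_star = np.linalg.solve(M, g0)       # fixed point: M g_star = g0, cf. (3.14)
g = g0.copy()
errs = []
for m in range(60):
    errs.append(np.linalg.norm(g - g_star))
    g = g - M @ g + g0
print(errs[0], errs[-1])              # geometric decay at rate about r
```

The recorded errors decay like $r^m$, mirroring the bound (3.11); if $r \ge 1$ the same loop can diverge, which is exactly why condition (3.4) is needed.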
Proof of Theorem 3.6.
Take $f \in V_{K,p}$, set $g_0 = P_{U,\tilde U} S_{\Gamma,\delta} f$, and let $g_\infty \in U$ be the limit of $g_m$, $m \ge 0$, in the iterative algorithm (1.5). The existence of such a limit follows from Lemma 3.7. Taking the limit in (1.5) leads to
\[(3.15)\quad P_{U,\tilde U} S_{\Gamma,\delta}\, f = P_{U,\tilde U} S_{\Gamma,\delta}\, g_\infty.\]
Then for any $g \in \tilde U$,
\[(3.16)\quad \langle S_{\Gamma,\delta}\, g_\infty, g\rangle = \langle P_{U,\tilde U} S_{\Gamma,\delta}\, g_\infty, g\rangle = \langle P_{U,\tilde U} S_{\Gamma,\delta}\, f, g\rangle = \langle S_{\Gamma,\delta}\, f, g\rangle\]
by (3.9) and (3.15). This proves that $g_\infty$ is a solution of the Galerkin equations (3.10).

Next, we show that $g_\infty$ is the unique solution of the Galerkin equations (3.10). Let $h \in U$ be another solution. Then
\[\langle P_{U,\tilde U} S_{\Gamma,\delta}(h - g_\infty), g\rangle = \langle S_{\Gamma,\delta}(h - g_\infty), g\rangle = 0.\]
This together with (3.3) implies that
\[P_{U,\tilde U} S_{\Gamma,\delta}(h - g_\infty) = 0.\]
Recall from (3.13) that $P_{U,\tilde U} S_{\Gamma,\delta}$ is invertible on $U$. Then $h = g_\infty$ and the uniqueness follows.

Observe that any $f \in U$ satisfies the Galerkin equations (3.10). This together with (3.16) proves that the unique solution of the Galerkin equations (3.10) defines a Galerkin reconstruction. $\Box$

We finish this section with a remark on the iterative approximation-projection algorithm (1.5).
Remark 3.8.
Given $\delta > 0$, a sampling set $\Gamma$ and probability measures $\mu_n$ supported on $I_n$, we define
\[\tilde S_{\Gamma,\delta} f(x) = \sum_{\gamma_n \in \Gamma} |I_n|\, f(\gamma_n) \int_{I_n} K(x, y)\, d\mu_n(y), \quad f \in V_{K,p},\]
where $\{I_n \subset B(\gamma_n, \delta),\ \gamma_n \in \Gamma\}$ is a disjoint covering of $B(\Gamma, \delta)$. The operator $\tilde S_{\Gamma,\delta}$ just defined becomes the sampling operator $S_{\Gamma,\delta}$ in (1.3) when the $\mu_n$ are point measures supported on $\gamma_n$, and the sampling operator
\[S_{\Gamma,\delta} f(x) = \sum_{\gamma_n \in \Gamma} f(\gamma_n) \int_{I_n} K(x, y)\, dy, \quad f \in V_{K,p},\]
when the $\mu_n$ are normalized Lebesgue measures supported on $I_n$. Following the argument used in Theorem 3.1 and Lemma 3.7, we can show that the approximation-projection algorithm (1.5) with $S_{\Gamma,\delta}$ replaced by $\tilde S_{\Gamma,\delta}$ has exponential convergence if
\[D^{-1}\Big(E(U, B(\Gamma,\delta))\, \|K\|_W + \|\omega_\delta(K)\|_W \big(\|K\|_W + \|\omega_\delta(K)\|_W\big)\Big) < 1;\]
cf. the second requirement (3.4) in Theorem 3.1.

4. Sampling signals with finite rate of innovation
A signal with finite rate of innovation (FRI) has finitely many degrees of freedom per unit of time [11, 24, 27, 32, 33, 38]. Define the Wiener amalgam space by
\[W := \Big\{\phi :\ \|\phi\|_W := \sum_{k \in \mathbb{Z}}\ \sup_{0 \le x \le 1} |\phi(x + k)| < \infty\Big\}.\]
It is observed in [32] that many FRI signals live in a space of the form
\[(4.1)\quad V(\Phi) := \Big\{\sum_{i \in \mathbb{Z}} c_i \phi_i(\cdot - i) :\ \sum_{i \in \mathbb{Z}} |c_i| < \infty\Big\},\]
where the generator $\Phi := (\phi_i)_{i \in \mathbb{Z}}$ satisfies
\[(4.2)\quad \|\Phi\|_W := \big\|\sup_{i \in \mathbb{Z}} |\phi_i|\big\|_W < \infty \quad \text{and} \quad \lim_{\delta \to 0} \big\|\sup_{i \in \mathbb{Z}} \omega_\delta(\phi_i)\big\|_W = 0.\]
In this section, we consider Galerkin reconstruction of signals in the finite-dimensional spaces
\[(4.3)\quad V_{0,L}(\Phi) = \Big\{\sum_{i=-L}^{L} c_i \phi_i(\cdot - i),\ \sum_{i=-L}^{L} |c_i| < \infty\Big\}, \quad L \ge 1.\]

4.1. Reproducing kernel spaces.
For $\Phi := (\phi_i)_{i \in \mathbb{Z}}$ and $\tilde\Phi := (\tilde\phi_j)_{j \in \mathbb{Z}}$ satisfying (4.2), define their correlation matrix by
\[A_{\Phi,\tilde\Phi} := \big(\langle \phi_i(\cdot - i), \tilde\phi_j(\cdot - j)\rangle\big)_{i,j \in \mathbb{Z}}.\]
In this subsection, we consider when $V(\Phi)$ and $V(\tilde\Phi)$ in (4.1) are range spaces of idempotent integral operators with kernels satisfying (3.1) and (3.2).

Theorem 4.1.
Let $\Phi$ and $\tilde\Phi$ satisfy (4.2). If the correlation matrix $A_{\Phi,\tilde\Phi}$ has bounded inverse on $\ell^1$, then $V(\Phi) = V_{K,1}$ and $V(\tilde\Phi) = V_{K^*,1}$ for some kernel $K$ satisfying (3.1) and (3.2), where $K^*(x, y) := K(y, x)$, $x, y \in \mathbb{R}$.
Let $\mathcal{C}$ contain all infinite matrices $A := (a_{ij})_{i,j \in \mathbb{Z}}$ with
\[\|A\|_{\mathcal{C}} := \sum_{k \in \mathbb{Z}} \Big(\sup_{i - j = k} |a_{ij}|\Big) < \infty.\]
To prove Theorem 4.1, we recall Wiener's lemma for the
Baskakov-Gohberg-Sjöstrand class $\mathcal{C}$; see [8, 16, 18, 29, 30, 31] and references therein.

Lemma 4.2. If $A \in \mathcal{C}$ has bounded inverse on $\ell^1$, then its inverse $A^{-1}$ belongs to $\mathcal{C}$ too.

Proof of Theorem 4.1. By direct calculation, we have
\[\|A_{\Phi,\tilde\Phi}\|_{\mathcal{C}} \le \|\Phi\|_W \|\tilde\Phi\|_W.\]
Thus the inverse of the correlation matrix $A_{\Phi,\tilde\Phi}$ belongs to the Baskakov-Gohberg-Sjöstrand class by Lemma 4.2. Write $(A_{\Phi,\tilde\Phi})^{-1} = (b_{ij})_{i,j \in \mathbb{Z}}$. One may verify that the kernel defined by
\[(4.4)\quad K_{\Phi,\tilde\Phi}(x, y) := \sum_{i,j \in \mathbb{Z}} \phi_i(x - i)\, b_{ji}\, \tilde\phi_j(y - j)\]
satisfies all the requirements of the theorem. $\Box$

4.2. Admissibility and Galerkin reconstruction.
Given a sampling set $\Gamma = \{\gamma_n\}_{n=1}^{N}$ ordered as $\gamma_1 < \gamma_2 < \cdots < \gamma_N$, define
\[(4.5)\quad S_{\Phi,\tilde\Phi,\Gamma} f(x) := \sum_{n=1}^{N} \frac{\gamma_{n+1} - \gamma_{n-1}}{2}\, f(\gamma_n)\, K_{\Phi,\tilde\Phi}(x, \gamma_n), \quad f \in V(\Phi),\]
and
\[(4.6)\quad A_{\Phi,\tilde\Phi,\Gamma} := \Big(\sum_{n=1}^{N} \frac{\gamma_{n+1} - \gamma_{n-1}}{2}\, \phi_i(\gamma_n - i)\, \tilde\phi_j(\gamma_n - j)\Big)_{-L \le i,j \le L}, \quad L \ge 1,\]
where $\gamma_0 = \gamma_1$, $\gamma_{N+1} = \gamma_N$, and the kernel $K_{\Phi,\tilde\Phi}$ is given in (4.4). In this subsection, we investigate admissibility of the operator $S_{\Phi,\tilde\Phi,\Gamma}$ and its corresponding Galerkin reconstruction; cf. Corollary 2.5 and Theorems 3.1 and 3.6.

Theorem 4.3.
Let $\Phi$ and $\tilde\Phi$ satisfy (4.2). Assume that the correlation matrix $A_{\Phi,\tilde\Phi}$ has bounded inverse on $\ell^1$. Then the following statements are equivalent:
(i) The $(2L+1) \times (2L+1)$ matrix $A_{\Phi,\tilde\Phi,\Gamma}$ in (4.6) is nonsingular.
(ii) $S_{\Phi,\tilde\Phi,\Gamma}$ is admissible for the pair $(V_{0,L}(\Phi), V_{0,L}(\tilde\Phi))$.
(iii) For any $f \in V(\Phi)$, the Galerkin equations
\[(4.7)\quad \langle S_{\Phi,\tilde\Phi,\Gamma}\, h, g\rangle = \langle S_{\Phi,\tilde\Phi,\Gamma}\, f, g\rangle, \quad g \in V_{0,L}(\tilde\Phi),\]
have a unique solution $h$ in $V_{0,L}(\Phi)$.
(iv) For any $g \in V(\tilde\Phi)$, the dual Galerkin equations
\[\langle S_{\Phi,\tilde\Phi,\Gamma}\, f, \tilde h\rangle = \langle S_{\Phi,\tilde\Phi,\Gamma}\, f, g\rangle, \quad f \in V_{0,L}(\Phi),\]
have a unique solution $\tilde h$ in $V_{0,L}(\tilde\Phi)$.

Proof. For $h = \sum_{i=-L}^{L} c_i \phi_i(\cdot - i) \in V_{0,L}(\Phi)$ and $g = \sum_{j=-L}^{L} d_j \tilde\phi_j(\cdot - j) \in V_{0,L}(\tilde\Phi)$, we obtain
\begin{align*}
\langle S_{\Phi,\tilde\Phi,\Gamma}\, h, g\rangle &= \sum_{i,j=-L}^{L} \Big(\sum_{n=1}^{N} \frac{\gamma_{n+1} - \gamma_{n-1}}{2}\, \phi_i(\gamma_n - i)\, \langle K_{\Phi,\tilde\Phi}(t, \gamma_n), \tilde\phi_j(t - j)\rangle\Big) c_i d_j \\
&= \sum_{i,j=-L}^{L} \Big(\sum_{n=1}^{N} \frac{\gamma_{n+1} - \gamma_{n-1}}{2}\, \phi_i(\gamma_n - i)\, \tilde\phi_j(\gamma_n - j)\Big) c_i d_j = c^T A_{\Phi,\tilde\Phi,\Gamma}\, d, \tag{4.8}
\end{align*}
where $c = (c_i)_{-L \le i \le L}$ and $d = (d_j)_{-L \le j \le L}$. By the invertibility assumption on $A_{\Phi,\tilde\Phi}$, $\{\phi_i(\cdot - i),\ -L \le i \le L\}$ and $\{\tilde\phi_i(\cdot - i),\ -L \le i \le L\}$ are Riesz bases of $V_{0,L}(\Phi)$ and $V_{0,L}(\tilde\Phi)$ respectively. This together with (4.8) proves the desired equivalent statements. $\Box$

4.3. Oblique projection and iterative approximation-projection algorithm.
In this subsection, we first discuss the existence and uniqueness of the oblique projection for the pair $(V_{0,L}(\Phi), V_{0,L}(\tilde\Phi))$.

Theorem 4.4.
Let $L \ge 1$, and let $\Phi$ and $\tilde\Phi$ satisfy (4.2). Assume that the correlation matrix $A_{\Phi,\tilde\Phi}$ has bounded inverse on $\ell^1$. Then the principal submatrix
\[(4.9)\quad A_{\Phi,\tilde\Phi,L} := \big(\langle \phi_i(\cdot - i), \tilde\phi_j(\cdot - j)\rangle\big)_{-L \le i,j \le L}\]
of the correlation matrix $A_{\Phi,\tilde\Phi}$ is nonsingular if and only if there exists a unique oblique projection for the pair $(V_{0,L}(\Phi), V_{0,L}(\tilde\Phi))$. Moreover, the oblique projection can be defined by
\[(4.10)\quad P_{\Phi,\tilde\Phi,L}\, f := \sum_{-L \le i,j \le L} \langle f, \tilde\phi_i(\cdot - i)\rangle\, \tilde b_{ij}\, \phi_j(\cdot - j), \quad f \in V(\Phi),\]
where $(A_{\Phi,\tilde\Phi,L})^{-1} = (\tilde b_{ij})_{-L \le i,j \le L}$.
Proof.
The sufficiency is obvious. Now we prove the necessity. Suppose, on the contrary, that $A_{\Phi,\tilde\Phi,L}$ in (4.9) is singular. Take a nonzero vector $e = (e_i)_{-L \le i \le L}$ in the null space of $(A_{\Phi,\tilde\Phi,L})^T$ and a nonzero linear functional $J$ on $V(\Phi)$ such that $J(h) = 0$ for all $h \in V_{0,L}(\Phi)$. Define
\[Q(f) := J(f) \sum_{-L \le i \le L} e_i \phi_i(\cdot - i), \quad f \in V(\Phi).\]
Then $Q$ is a nonzero linear operator from $V(\Phi)$ to $V_{0,L}(\Phi)$,
\[Qh = 0, \quad h \in V_{0,L}(\Phi),\]
and
\[\langle Qf, g\rangle = J(f) \sum_{-L \le i,j \le L} e_i \langle \phi_i(\cdot - i), \tilde\phi_j(\cdot - j)\rangle\, d_j = 0,\]
where $g = \sum_{-L \le j \le L} d_j \tilde\phi_j(\cdot - j) \in V_{0,L}(\tilde\Phi)$. This contradicts the uniqueness of oblique projections. $\Box$

We next examine the exponential convergence of an iterative algorithm for the recovery of signals with finite rate of innovation. Replacing $P_{U,\tilde U}$ and $S_{\Gamma,\delta}$ in the iterative algorithm (1.5) by $P_{\Phi,\tilde\Phi,L}$ and $S_{\Phi,\tilde\Phi,\Gamma}$ respectively, it becomes
\[(4.11)\quad g_{m+1} = g_m - \sum_{n=1}^{N} \sum_{i,j=-L}^{L} \frac{\gamma_{n+1} - \gamma_{n-1}}{2}\, g_m(\gamma_n)\, \tilde\phi_i(\gamma_n - i)\, \tilde b_{ij}\, \phi_j(\cdot - j) + g_0, \quad m \ge 0,\]
with $g_0 \in V_{0,L}(\Phi)$.

Theorem 4.5.
Let $\Phi$ and $\tilde\Phi$ satisfy (4.2). Assume that $A_{\Phi,\tilde\Phi,L}$ is nonsingular. If
\[(4.12)\quad \|A_{\Phi,\tilde\Phi,\Gamma}(A_{\Phi,\tilde\Phi,L})^{-1} - I\| < 1,\]
then the iterative algorithm (4.11) has exponential convergence. Moreover, it recovers the original signal $h \in V_{0,L}(\Phi)$ when
\[g_0 = \sum_{n=1}^{N} \sum_{i,j=-L}^{L} \frac{\gamma_{n+1} - \gamma_{n-1}}{2}\, h(\gamma_n)\, \tilde\phi_i(\gamma_n - i)\, \tilde b_{ij}\, \phi_j(\cdot - j).\]

Proof.
Write $g_m = \sum_{-L \le i \le L} c_m(i)\, \phi_i(\cdot - i)$ and set $c_m = (c_m(i))_{-L \le i \le L}$. Then we can reformulate the iterative algorithm (4.11) as
\[c_{m+1}^T = c_m^T - c_m^T A_{\Phi,\tilde\Phi,\Gamma}(A_{\Phi,\tilde\Phi,L})^{-1} + c_0^T, \quad m \ge 0.\]
This together with (4.12) proves the desired conclusions. $\Box$

5. Numerical Simulation
In this section, we present several examples to illustrate our Galerkin reconstruction of signals with finite rate of innovation. Let $\Theta := \{\theta_i\}$ be either $\Theta_O := \{0\}$ (the identically zero set), or $\Theta_I$ with $\theta_i$ randomly selected in $[-0.2, 0.2]$. Set $\Phi_0 = \{\phi(\cdot - \theta_i)\}_{i \in \mathbb{Z}}$, where the generating function $\phi$ is either (i) the sinc function $\mathrm{sinc}(t) := \frac{\sin \pi t}{\pi t}$, (ii) the Gaussian function $\mathrm{gauss}(t)$, or (iii) the $B$-spline $\mathrm{spline}(t)$; see Figure 1 for examples of signals in $V(\Phi_0)$. In our numerical simulations, reconstructed signals live in the space

Figure 1.
Above are bandlimited signals $x(\mathrm{sinc},0) = \sum_i \alpha_i\, \mathrm{sinc}(t - i)$ with $(1 + |i|)\alpha_i \in [-1, 1]$ randomly selected (left), and $x(\mathrm{sinc},1) = \sum_i \beta_i\, \mathrm{sinc}(t - i)$ with $\beta_i = (1 + |i|)^{-1} \cos(\pi i/8)$ (right). Below are signals $x(\mathrm{sinc},2) = \sum_i \alpha_i\, \mathrm{sinc}(t - i - \theta_i)$ (left) and $x(\mathrm{sinc},3) = \sum_i \beta_i\, \mathrm{sinc}(t - i - \theta_i)$ with $\theta_i \in [-0.2, 0.2]$ randomly selected (right).

\[V_{0,L}(\Phi_0) = \Big\{\sum_{i=-L}^{L} c_i \phi(t - i - \theta_i) :\ \sum_{i=-L}^{L} |c_i| < \infty\Big\}, \quad L \ge 1,\]
and the sampling schemes are
• Nonuniform sampling on $\Gamma_N := \{\gamma_k,\ |k| \le L+2\}$, where $\gamma_{-L-2} = -L-2$ and the gaps $\gamma_k - \gamma_{k-1}$, $|k| \le L+2$, are randomly selected.
• Jittered sampling on $\Gamma_J := \{\gamma_k := k + \delta_k,\ |k| \le L+2\}$, where $\delta_k \in [-0.1, 0.1]$ are randomly selected.
• Adaptive sampling on $\Gamma_C := \{\gamma_k \in [-L-2, L+2]\}$ of a bounded signal $x \in V(\Phi_0)$ via the crossing time encoding machine (C-TEM), where $x(t) \ne \|x\|_\infty \sin(\pi t)$ for all $t \in [-L-2, L+2]$ except $t = \gamma_k$ for some $k$; see Figure 2 [14, 17, 23].

Figure 2.
Above is the signal $x(\mathrm{sinc},0)$ in Figure 1 and the crossing signal $\|x(\mathrm{sinc},0)\|_\infty\sin(\pi t)$ on $[-L-2,L+2]$, and below is the sampling data of $x(\mathrm{sinc},0)$ on the sampling set $\Gamma_C\subset[-L-2,L+2]$, where $L=30$.

To reconstruct signals via the Galerkin method, we take $\tilde\Phi=\{\tilde\phi\}$ with $\tilde\phi=\chi_{[-1/2,1/2]}$. Then the equation (4.7) to determine the Galerkin reconstruction
$$G_{\Phi_\theta,\tilde\Phi,\Gamma}f:=\sum_{i=-L}^{L}c_i\,\phi(\cdot-i-\theta_i)\in V_{0,L}(\Phi_\theta)$$
can be reformulated as follows:
$$\sum_{i=-L}^{L}\Big(\sum_{n=1}^{N}\frac{\gamma_{n+1}-\gamma_{n-1}}{2}\,\phi(\gamma_n-i-\theta_i)\,\tilde\phi(\gamma_n-j)\Big)c_i=\sum_{n=1}^{N}\frac{\gamma_{n+1}-\gamma_{n-1}}{2}\,f(\gamma_n)\,\tilde\phi(\gamma_n-j),\quad -L\le j\le L,\eqno(5.1)$$
where $f\in V(\Phi_\theta)$ and $\Gamma:=\{\gamma_n\}_{n=1}^{N}$ is either the nonuniform sampling set $\Gamma_N$, the jittered sampling set $\Gamma_J$, or the adaptive C-TEM sampling set $\Gamma_C$. Considering the bandlimited signal $x(\mathrm{sinc},0)$ described in Figure 1, we present some numerical results for its pre-reconstruction in $V(\Phi)$ and its Galerkin reconstruction in $V_{0,L}(\Phi)$ in Figure 3. We see that a pre-reconstruction may provide a reasonable approximation, while a Galerkin reconstruction can recover the original signal almost perfectly in the sampling interval.

For $\Phi_\theta=\{\phi(\cdot-i-\theta_i)\}$, let signals $x(\phi,l)\in V(\Phi_\theta)$, $0\le l\le 3$, be as $x(\mathrm{sinc},l)$ in Figure 1 with the sinc function replaced by the function $\phi$. In Figure 4, we illustrate their best approximations in $V_{0,L}(\Phi_\theta)$ and the solutions of the Galerkin system (5.1) with $f$ replaced by $x(\phi,l)$, $0\le l\le 3$. For $x(\phi,l)\in V(\Phi_\theta)$, its Galerkin reconstruction in $V_{0,L}(\Phi_\theta)$ almost matches its best approximation in $V_{0,L}(\Phi_\theta)$, except near the boundary of the sampling interval. The boundary effect is visible especially when $\phi$ has slow decay at infinity.

Given signals $x(\phi,l)$, $0\le l\le 3$, let $y_L(\phi,l)$ be their best approximations in $V_{0,L}(\Phi_\theta)$, and denote by $e(\phi,l)=\|x(\phi,l)-y_L(\phi,l)\|$ their best approximation error in $V_{0,L}(\Phi_\theta)$. For $\Gamma=\Gamma_N$, $\Gamma_J$ or $\Gamma_C$, set $\epsilon_\Gamma(\phi,l)=\|z_L(\Gamma,\phi,l)-y_L(\phi,l)\|$, where $z_L(\Gamma,\phi,l)$ is obtained by solving the Galerkin system (5.1) with $f$ replaced by $x(\phi,l)$. For signals $x(\phi,l)$, $0\le l\le 3$, and sampling sets $\Gamma=\Gamma_N,\Gamma_J$ and $\Gamma_C$, the Galerkin reconstruction (5.1) provides a quasi-optimal approximation in $V_{0,L}(\Phi_\theta)$, and the quasi-optimal constant in Theorem 2.3 is well behaved,
$$\frac{\|z_L(\Gamma,\phi,l)-x(\phi,l)\|}{\|y_L(\phi,l)-x(\phi,l)\|}\le 1+\frac{\epsilon_\Gamma(\phi,l)}{e(\phi,l)}\le 2,$$
see Table 1 for numerical results with abbreviated notations.
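The linear system (5.1) is a small dense system that can be assembled directly from the sampling data. The sketch below shows one possible NumPy assembly; the helper name `galerkin_matrix_and_rhs`, the vectorized generators, and the treatment of the first and last nodes of $\Gamma$ as padding for the weights $(\gamma_{n+1}-\gamma_{n-1})/2$ are our illustrative assumptions, not code from the paper.

```python
import numpy as np

def galerkin_matrix_and_rhs(gamma, samples, L, theta, phi, phi_tilde):
    """Assemble the (2L+1) x (2L+1) Galerkin system of (5.1).

    gamma     : sorted sample locations; first/last entries are padding so
                the weights (gamma[n+1] - gamma[n-1]) / 2 are defined
    samples   : f(gamma_n) at the interior nodes (length len(gamma) - 2)
    theta     : shifts theta_i, indexed i = -L, ..., L
    phi, phi_tilde : vectorized trial/test generators
    """
    g = np.asarray(gamma, dtype=float)
    w = 0.5 * (g[2:] - g[:-2])            # weights (gamma_{n+1}-gamma_{n-1})/2
    nodes = g[1:-1]                        # interior sample locations gamma_n
    f = np.asarray(samples, dtype=float)
    i = np.arange(-L, L + 1)
    # Phi[n, i] = phi(gamma_n - i - theta_i), Psi[n, j] = phi_tilde(gamma_n - j)
    Phi = phi(nodes[:, None] - i[None, :] - np.asarray(theta)[None, :])
    Psi = phi_tilde(nodes[:, None] - i[None, :])
    A = (Psi * w[:, None]).T @ Phi         # A[j, i] = sum_n w_n phi(...) psi(...)
    b = (Psi * w[:, None]).T @ f           # b[j]    = sum_n w_n f(gamma_n) psi(...)
    return A, b
```

The coefficients are then `c = np.linalg.solve(A, b)`. As a sanity check, with $\phi=\tilde\phi=\chi_{[-1/2,1/2)}$, zero shifts, and integer sample points, the matrix reduces to a weighted identity and the solved coefficients reproduce the samples.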
Figure 3. On the top left is the difference between the signal $x(\mathrm{sinc},0)$ in Figure 1 and its pre-reconstructed signal $S_{\Phi,\tilde\Phi,\Gamma_N}x(\mathrm{sinc},0)$, and on the top right is the difference between $x(\mathrm{sinc},0)$ and its Galerkin reconstruction $G_{\Phi,\tilde\Phi,\Gamma_N}x(\mathrm{sinc},0)$, associated with nonuniform sampling. In the middle are $x(\mathrm{sinc},0)-S_{\Phi,\tilde\Phi,\Gamma_J}x(\mathrm{sinc},0)$ (left) and $x(\mathrm{sinc},0)-G_{\Phi,\tilde\Phi,\Gamma_J}x(\mathrm{sinc},0)$ (right) associated with jittered sampling, and at the bottom are $x(\mathrm{sinc},0)-S_{\Phi,\tilde\Phi,\Gamma_C}x(\mathrm{sinc},0)$ (left) and $x(\mathrm{sinc},0)-G_{\Phi,\tilde\Phi,\Gamma_C}x(\mathrm{sinc},0)$ (right) associated with adaptive C-TEM sampling.
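The pre-reconstruction $S_{\Gamma,\delta}f=\sum_n |I_n| f(\gamma_n)K(\cdot,\gamma_n)$ compared in Figure 3 can be sketched as follows. The function name, the midpoint-gap choice of the weights $|I_n|$, and the sinc kernel used in the test are our assumptions for illustration, not the paper's code.

```python
import numpy as np

def pre_reconstruct(x, gamma, samples, kernel):
    """Evaluate S f(x) = sum_n |I_n| f(gamma_n) K(x, gamma_n),
    with |I_n| taken as the midpoint-gap weights (gamma_{n+1}-gamma_{n-1})/2
    and the first/last entries of gamma treated as padding."""
    g = np.asarray(gamma, dtype=float)
    w = 0.5 * (g[2:] - g[:-2])
    nodes = g[1:-1]
    f = np.asarray(samples, dtype=float)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    # kernel(x, y) must broadcast; result[m] = sum_n w_n f_n K(x_m, gamma_n)
    return kernel(x[:, None], nodes[None, :]) @ (w * f)
```

For unit-spaced samples of a bandlimited signal and the kernel $K(x,y)=\mathrm{sinc}(x-y)$, this reduces to the classical cardinal series, so it reproduces such signals at the sample points.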
Numerical stability of the Galerkin reconstruction (5.1) is reflected by the condition number $\mathrm{cond}_{\Gamma,\Theta}(\phi)$ of the square matrix
$$A_{\Phi,\tilde\Phi,\Gamma}=\Big(\sum_{n=1}^{N}\frac{\gamma_{n+1}-\gamma_{n-1}}{2}\,\phi(\gamma_n-i-\theta_i)\,\tilde\phi(\gamma_n-j)\Big)_{-L\le i,j\le L}.$$

Figure 4.
Listed are differences between best approximations of signals $x(\phi,0)$ in $V_{0,30}(\Phi_\theta)$ and their Galerkin reconstructions associated with operators $S_{\Phi,\tilde\Phi,\Gamma}$, where on the top, $\phi=\mathrm{sinc}$, $\Gamma=\Gamma_N$ (left) and $\Gamma=\Gamma_J$ (right), while on the bottom $\Gamma=\Gamma_N$, $\phi=\mathrm{gauss}$ (left) and $\phi=\mathrm{spline}$ (right).

Some numerical results for the condition numbers $\mathrm{cond}_{\Gamma,\Theta}(\phi)$ with $\Gamma=\Gamma_N$ or $\Gamma_J$, and $\Theta=\Theta_O$ or $\Theta_I$, are presented in Table 2 with abbreviated notations. For a robust (sub-)Galerkin reconstruction, the generating function $\tilde\phi$ of the test space $V_{0,L}(\tilde\Phi)$ should be chosen so that the corresponding matrix $A_{\Phi,\tilde\Phi,\Gamma}$ is well-conditioned, cf. Theorem 2.3. We conclude this section with two more remarks.

Remark 5.1.
The iterative approximation-projection algorithm (4.11) could perform better in solving the Galerkin equations (5.1), especially when the matrices $A_{\Phi,\tilde\Phi,\Gamma}$ have large condition numbers, which is the case when the sampling set $\Gamma$ and/or the shift set $\Theta$ are not chosen appropriately.

Remark 5.2.
For the admissibility of the pre-reconstruction operator $S_{\Gamma,\delta}$, the test space $\tilde U$ must have its dimension larger than or equal
to that of the reconstruction space $U$. For $U=V_{0,L}(\Phi_\theta)$ and $\tilde U=V_{0,\tilde L}(\tilde\Phi)$ with $\tilde L\ge L$, least squares solutions of the linear system (5.1), with $-L\le j\le L$ replaced by $-\tilde L\le j\le \tilde L$, define a sub-Galerkin reconstruction $\sum_{i=-L}^{L}c_i\,\phi(\cdot-i-\theta_i)\in V_{0,L}(\Phi_\theta)$ by Corollary 2.6, where $f\in V(\Phi_\theta)$ and $\Gamma:=\Gamma_N$, $\Gamma_J$ or $\Gamma_C$. Our numerical simulations show that these sub-Galerkin reconstructions for different $\tilde L\ge L$ have comparable approximation errors.

Table 1. Quasi-optimality of Galerkin reconstructions for bandlimited/Gauss/spline signals

  L                  10       15       20       25       30
  e(sinc,0)        0.2176   0.1711   0.1388   0.1166   0.1024
  ε_N(sinc,0)      0.0795   0.0668   0.0197   0.0201   0.0294
  ε_J(sinc,0)      0.0770   0.0668   0.0201   0.0214   0.0290
  ε_C(sinc,0)      0.0789   0.0715   0.0239   0.0263   0.0325
  e(sinc,1)        0.2600   0.2124   0.1816   0.1457   0.1303
  ε_N(sinc,1)      0.0344   0.0809   0.0370   0.0294   0.0431
  ε_J(sinc,1)      0.0353   0.0806   0.0372   0.0301   0.0433
  ε_C(sinc,1)      0.0363   0.0831   0.0379   0.0319   0.0442
  e(sinc,2)        0.2095   0.1703   0.1365   0.1167   0.1007
  ε_N(sinc,2)      0.0619   0.0618   0.0256   0.0163   0.0281
  ε_J(sinc,2)      0.0596   0.0618   0.0260   0.0177   0.0275
  ε_C(sinc,2)      0.0608   0.0664   0.0284   0.0226   0.0308
  e(sinc,3)        0.2655   0.2180   0.1863   0.1477   0.1322
  ε_N(sinc,3)      0.0461   0.0810   0.0374   0.0258   0.0406
  ε_J(sinc,3)      0.0446   0.0809   0.0375   0.0265   0.0401
  ε_C(sinc,3)      0.0474   0.0837   0.0392   0.0298   0.0418
  e(gauss,0)       0.2055   0.1682   0.1398   0.1250   0.1086
  ε_N(gauss,0)     0.0437   0.0515   0.0270   0.0158   0.0093
  ε_J(gauss,0)     0.0439   0.0523   0.0259   0.0160   0.0096
  ε_C(gauss,0)     0.0433   0.0527   0.0270   0.0181   0.0108
  e(spline,0)      0.1482   0.1325   0.1110   0.0924   0.0664
  ε_N(spline,0)    0.0405   0.0298   0.0204   0.0266   0.0176
  ε_J(spline,0)    0.0403   0.0299   0.0204   0.0281   0.0184
  ε_C(spline,0)    0.0407   0.0292   0.0209   0.0279   0.0181

Acknowledgement
The authors thank Dr. B. Adcock for his comments and suggestions. The project is partially supported by the National Natural Science Foundation of China (Nos. 11201094 and 11161014), the Guangxi Natural Science Foundation (2014GXNSFBA118012), the Program for Innovative Research Team of Guilin University of Electronic Technology, and the National Science Foundation (DMS-1109063 and DMS-1412413).

Table 2. Stability of Galerkin reconstructions for nonuniform/jittered sampling

  L                    10       15       20       25       30
  cond_{N,O}(sinc)    1.2059   1.2367   1.3458   1.4273   1.2904
  cond_{N,I}(sinc)    1.9190   1.8946   1.9828   2.0635   2.0421
  cond_{N,O}(gauss)   3.0162   2.7000   2.7908   3.3314   2.8362
  cond_{N,I}(gauss)   3.2850   3.1447   3.1421   4.0283   3.4391
  cond_{N,O}(spline)  3.7677   3.7534   3.0534   3.1400   4.1708
  cond_{N,I}(spline)  4.4768   5.2417   3.3507   3.5354   5.0292
  cond_{J,O}(sinc)    1.3737   1.4164   1.4105   1.4149   1.3763
  cond_{J,I}(sinc)    1.9723   1.9351   2.3328   2.2037   2.1744
  cond_{J,O}(gauss)   2.7066   2.7074   2.6936   2.6957   2.7190
  cond_{J,I}(gauss)   3.0847   3.1591   3.0696   3.0197   3.0878
  cond_{J,O}(spline)  3.1052   3.2109   3.2218   3.3257   3.2331
  cond_{J,I}(spline)  3.5570   3.7388   3.7140   3.9172   4.1830
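Condition numbers of this kind can be explored with a short script. The jitter interval, the generator choices, and the midpoint-gap weights below are our assumptions, so the printed value only indicates the order of magnitude of $\mathrm{cond}(A_{\Phi,\tilde\Phi,\Gamma})$, not the table entries.

```python
import numpy as np

# Condition number of a Galerkin matrix for a jittered sampling set
# (illustrative assumptions only: jitter in [-0.1, 0.1], sinc trial
# generator, box test generator, zero shifts theta_i = 0).
rng = np.random.default_rng(0)
L = 10
gamma = np.sort(np.arange(-L - 2, L + 3)
                + rng.uniform(-0.1, 0.1, 2 * L + 5))
w = 0.5 * (gamma[2:] - gamma[:-2])       # weights (gamma_{n+1}-gamma_{n-1})/2
nodes = gamma[1:-1]                       # interior sample locations
i = np.arange(-L, L + 1)
phi = np.sinc                                                  # trial generator
phi_tilde = lambda t: ((t >= -0.5) & (t < 0.5)).astype(float)  # test generator
Phi = phi(nodes[:, None] - i[None, :])
Psi = phi_tilde(nodes[:, None] - i[None, :])
A = (Psi * w[:, None]).T @ Phi
print(np.linalg.cond(A))   # modest for well-spread samples
```

A large condition number here signals a poorly chosen sampling set or test generator, which is exactly the situation where the iterative algorithm of Remark 5.1 is preferable to a direct solve.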
References

[1] B. Adcock, M. Gataric and A. C. Hansen, On stable reconstruction from univariate nonuniform Fourier measurements, arXiv:1310.7820.
[2] B. Adcock, M. Gataric and A. C. Hansen, Weighted frames of exponentials and stable recovery of multidimensional functions from nonuniform Fourier samples, arXiv:1405.3111.
[3] B. Adcock, A. C. Hansen and C. Poon, Beyond consistent reconstructions: optimality and sharp bounds for generalized sampling, and application to the uniform resampling problem, SIAM J. Math. Anal., (2013), 3114–3131.
[4] A. Aldroubi and H. Feichtinger, Exact iterative reconstruction algorithm for multivariate irregularly sampled functions in spline-like spaces: the L^p theory, Proc. Amer. Math. Soc., (1998), 2677–2686.
[5] A. Aldroubi and K. Gröchenig, Nonuniform sampling and reconstruction in shift-invariant spaces, SIAM Rev., (2001), 585–620.
[6] A. Aldroubi, Q. Sun and W.-S. Tang, Non-uniform average sampling and reconstruction in multiply generated shift-invariant spaces, Constr. Approx., (2004), 173–189.
[7] A. Aldroubi, Q. Sun and W.-S. Tang, Convolution, average sampling, and a Calderon resolution of the identity for shift-invariant spaces, J. Fourier Anal. Appl., (2005), 215–244.
[8] A. G. Baskakov, Wiener's theorem and asymptotic estimates for elements of inverse matrices, Funktsional. Anal. i Prilozhen., (1990), 64–65; translation in Funct. Anal. Appl., (1990), 222–224.
[9] P. Berger and K. Gröchenig, Sampling and reconstruction in different subspaces by using oblique projections, arXiv:1312.1717.
[10] J. G. Christensen, Sampling in reproducing kernel Banach spaces on Lie groups, J. Approx. Theory, (2012), 179–203.
[11] P. L. Dragotti, M. Vetterli and T. Blu, Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang–Fix, IEEE Trans. Signal Process., (2007), 1741–1757.
[12] Y. C. Eldar and T. Werther, General framework for consistent sampling in Hilbert spaces, Int. J. Wavelets Multiresolut. Inf. Process., (2005), 347–359.
[13] H. G. Feichtinger and K. Gröchenig, Iterative reconstruction of multivariate band-limited functions from irregular sampling values, SIAM J. Math. Anal., (1992), 244–261.
[14] H. G. Feichtinger, J. C. Principe, J. L. Romero, A. A. Singh and G. A. Alexander, Approximate reconstruction of bandlimited functions for the integrate and fire sampler, Adv. Comput. Math., (2012), 67–78.
[15] A. G. Garcia and A. Portal, Sampling in reproducing kernel Banach spaces, Mediterr. J. Math., (2013), 1401–1417.
[16] I. Gohberg, M. A. Kaashoek and H. J. Woerdeman, The band method for positive and strictly contractive extension problems: an alternative version and new applications, Integral Equations Operator Theory, (1989), 343–382.
[17] D. Gontier and M. Vetterli, Sampling based on timing: time encoding machines on shift-invariant subspaces, Appl. Comput. Harmon. Anal., (2014), 63–78.
[18] K. Gröchenig, Wiener's lemma: theme and variations, an introduction to spectral invariance and its applications, in Four Short Courses on Harmonic Analysis: Wavelets, Frames, Time-Frequency Methods, and Applications to Signal and Image Analysis, edited by P. Massopust and B. Forster, Birkhäuser, Boston, 2010.
[19] K. Gröchenig, Reconstruction algorithms in irregular sampling, Math. Comp., (1992), 181–194.
[20] D. Han, M. Z. Nashed and Q. Sun, Sampling expansions in reproducing kernel Hilbert and Banach spaces, Numer. Funct. Anal. Optim., (2009), 971–987.
[21] J. A. Hogan and J. D. Lakey, Duration and Bandwidth Limiting: Prolate Functions, Sampling, and Applications, Birkhäuser, 2012.
[22] P. Jaming, A. Karoui, R. Kerman and S. Spektor, Approximation of almost time and band limited functions I: Hermite expansion, arXiv:1407.1293.
[23] A. A. Lazar and L. T. Toth, Perfect recovery and sensitivity analysis of time encoded bandlimited signals, IEEE Trans. Circuits Syst., (2004), 2060–2073.
[24] M. Mishali, Y. C. Eldar and A. J. Elron, Xampling: signal acquisition and processing in union of subspaces, IEEE Trans. Signal Process., (2011), 4719–4734.
[25] M. Z. Nashed and Q. Sun, Sampling and reconstruction of signals in a reproducing kernel subspace of L^p(R^d), J. Funct. Anal., (2010), 2422–2452.
[26] M. Z. Nashed and G. G. Walter, General sampling theorems for functions in reproducing kernel Hilbert spaces, Math. Control Signals Systems, (1991), 363–390.
[27] H. Pan, T. Blu and P. L. Dragotti, Sampling curves with finite rate of innovation, IEEE Trans. Signal Process., (2014), 458–471.
[28] C. E. Shannon, Communication in the presence of noise, Proc. IRE, (1949), 10–21.
[29] J. Sjöstrand, Wiener type algebra of pseudodifferential operators, Centre de Mathématiques, École Polytechnique, Palaiseau, France, Séminaire 1994–1995, December 1994.
[30] Q. Sun, Wiener's lemma for infinite matrices, Trans. Amer. Math. Soc., (2007), 3099–3123.
[31] Q. Sun, Wiener's lemma for infinite matrices II, Constr. Approx., (2011), 209–235.
[32] Q. Sun, Frames in spaces with finite rate of innovation, Adv. Comput. Math., (2008), 301–329.
[33] Q. Sun, Non-uniform average sampling and reconstruction for signals with finite rate of innovations, SIAM J. Math. Anal., (2006), 1389–1422.
[34] Q. Sun and J. Xian, Rate of innovation for (non-)periodic signals and optimal lower stability bound for filtering, J. Fourier Anal. Appl., (2014), 119–134.
[35] W. Sun and X. Zhou, Reconstruction of bandlimited signals from local averages, IEEE Trans. Inf. Theory, (2002), 2955–2963.
[36] W.-S. Tang, Oblique projections, biorthogonal Riesz bases and multiwavelets in Hilbert spaces, Proc. Amer. Math. Soc., (1999), 463–473.
[37] M. Unser, Sampling – 50 years after Shannon, Proc. IEEE, (2000), 569–587.
[38] M. Vetterli, P. Marziliano and T. Blu, Sampling signals with finite rate of innovation, IEEE Trans. Signal Process., (2002), 1417–1428.
[39] J. M. Whittaker, Interpolatory Function Theory, Cambridge University Press, London, 1935.
Cheng: Department of Mathematics, University of Central Florida,Orlando, Florida 32816, USA
E-mail address : [email protected] Jiang: School of Mathematics and Computational Science, GuilinUniversity of Electronic Technology, Guilin, Guangxi 541004, China.
E-mail address : [email protected] Sun: Department of Mathematics, University of Central Florida,Orlando, Florida 32816, USA
E-mail address :