Multichannel reconstruction from nonuniform samples with application to image recovery
Dong Cheng ∗ and Kit Ian Kou † Department of Mathematics, Faculty of Science and Technology, University of Macau, Macao, China
Abstract
The multichannel trigonometric reconstruction from uniform samples was proposed recently. It not only makes use of multichannel information about the signal but is also capable of generating various kinds of interpolation formulas according to the types and amounts of the collected samples. This paper presents the theory of multichannel interpolation from nonuniform samples. Two distinct models of nonuniform sampling patterns are considered, namely recurrent and generic nonuniform sampling. Each model involves two types of samples: nonuniform samples of the observed signal and of its derivative. Numerical examples and a quantitative error analysis are provided to demonstrate the effectiveness of the proposed algorithms. Additionally, the application of the proposed algorithms to recovering highly corrupted images is investigated. In comparison with the median filter and the correction operation treatment, our approach produces superior results with lower errors.
Keywords: Interpolation, nonuniform sampling, FFT, trigonometric polynomial, error analysis, derivative, image recovery.
Mathematics Subject Classification (2010): 42A15, 94A12, 65T50, 94A08.
1 Introduction

Sampling and reconstruction are fundamental tools in data processing and communication systems. Classical uniform sampling [1] is an effective tool for recovering signals and has been used in many applications. However, there are various instances in which the reconstruction of signals from their nonuniform samples is required, such as computed tomography [2], magnetic resonance imaging [3] and radio astronomy [4]. Numerous approaches have been proposed in the literature to reconstruct bandlimited signals from nonuniform samples [5, 6, 7, 8]. A widely known nonuniform sampling theorem [1, 9] may be stated as follows. Let $\{t_n\}_{n \in \mathbb{Z}}$ be a sequence of real numbers such that $\left|t_n - \frac{n\pi}{\sigma}\right| < \frac{\pi}{4\sigma}$; then a $\sigma$-bandlimited function can be reconstructed by
$$f(t) = \sum_{n=-\infty}^{\infty} f(t_n)\, S_n(t), \qquad (1.1)$$
where
$$S_n(t) = \frac{G(t)}{G'(t_n)(t - t_n)}, \qquad G(t) = (t - t_0) \prod_{k=1}^{\infty} \left(1 - \frac{t}{t_k}\right)\left(1 - \frac{t}{t_{-k}}\right),$$
and (1.1) converges uniformly on any compact subset of $\mathbb{R}$. The series (1.1) is an extension of Lagrange interpolation. Unlike the uniform sampling case, (1.1) contains infinitely many terms, and the interpolating functions $S_n(t)$ involve complicated components. These factors make the exact reconstruction of a bandlimited function from nonuniform samples difficult. To give a simpler approximation for reconstructing a bandlimited function from nonuniform samples, the authors of [10] proposed a new kind of sinc interpolation method: they restricted $S_n(t)$ in (1.1) to be of the form $\mathrm{sinc}[\sigma(t - \tilde{t}_n)]$, where $\tilde{t}_n = nT + \zeta_n$ and $\zeta_n$ is a sequence of random variables independent of $G(t)$. This restriction guarantees that the interpolating functions consist only of translations of the sinc function, just as in most cases of uniform interpolation [11, 12, 13, 14].
While this restriction strategy simplifies the reconstruction problem, it inevitably introduces error. To control the error, several methods for determining suitable $\tilde{t}_n$ were analyzed in [10]. Following a similar idea, sinc interpolation for nonuniform samples in the fractional Fourier domain was studied in [15].

In real applications only finitely many samples, albeit possibly in large amounts, are given in a bounded region, and interpolating by sinc functions or any other bandlimited functions has some limitations. On the one hand, bandlimited interpolating functions cannot be time-limited, by the uncertainty principle, so an approximation error is introduced. On the other hand, the interpolating functions $S_n(t)$ for nonuniform samples are complicated and in general admit no closed form. Therefore, treating a finite amount of data as samples of a periodic function is a convenient and feasible choice [16]. Sines and cosines are the classical periodic functions and have wide applications in modern science; it is no exaggeration to say that trigonometry pervades the area of signal processing. The system $\{e^{\mathrm{i}nt} : n \in \mathbb{Z}\}$ is orthogonal and complete in the space of square integrable functions on the unit circle $\mathbb{T}$, i.e., $L^2(\mathbb{T})$. Moreover, these functions possess elegant symmetries and concise frequency meanings. These desirable properties of trigonometric functions make interpolation much simpler and more effective [16, 17]. In fact, as early as 1841, Cauchy proved a sampling interpolation theorem for trigonometric polynomials [18], which may be recognized as the headstream of sampling theory [19]. Cauchy's result states that if $f(t) = \sum_{|n| \le M} c_n e^{2\pi \mathrm{i} n t}$, then $f$ can be written as a sum of its sampled values $f\left(\frac{k}{2M+1}\right)$, $0 \le k \le 2M$, each multiplied by an interpolating function. That is,
$$f(t) = \frac{1}{2M+1} \sum_{k=0}^{2M} f\!\left(\frac{k}{2M+1}\right) (-1)^{k}\, \frac{\sin \pi(2M+1)t}{\sin \pi\left(t - \frac{k}{2M+1}\right)}.$$
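Cauchy's formula is easy to verify numerically. The sketch below (our own illustrative code; the degree-2 test polynomial and all names are ours) evaluates the interpolation kernel in the $(-1)^k$ form above and checks that a degree-$M$ trigonometric polynomial is reproduced exactly:

```python
import numpy as np

def cauchy_interpolate(samples, t):
    """Evaluate Cauchy's trigonometric interpolation at points t, given the
    2M+1 samples f(k/(2M+1)) of a degree-M trigonometric polynomial."""
    n = len(samples)                              # n = 2M + 1 nodes
    t = np.atleast_1d(np.asarray(t, dtype=float))
    k = np.arange(n)
    # kernel in the (-1)^k form: sin(pi*n*t) / (n*sin(pi*(t - k/n)))
    num = np.sin(np.pi * n * t)[:, None] * (-1.0) ** k[None, :]
    den = n * np.sin(np.pi * (t[:, None] - k[None, :] / n))
    with np.errstate(divide="ignore", invalid="ignore"):
        w = np.where(np.abs(den) < 1e-12, 1.0, num / den)  # weight -> 1 at nodes
    return (w * np.asarray(samples)[None, :]).sum(axis=1)

# degree-2 trigonometric polynomial with period 1
f = lambda t: 1.0 + 0.5 * np.cos(2 * np.pi * t) - 0.3 * np.sin(4 * np.pi * t)
M = 2
tk = np.arange(2 * M + 1) / (2 * M + 1)
t = np.linspace(0, 1, 200, endpoint=False)
err = np.max(np.abs(cauchy_interpolate(f(tk), t) - f(t)))
```

For a bandlimited (trigonometric polynomial) input the residual `err` is at machine-precision level, reflecting the exactness of the formula.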
Several studies have been devoted to the problem of interpolating finite-length samples by trigonometric functions or the discrete Fourier transform. In a series of papers [20, 21, 22], the sinc interpolation of discrete periodic signals was extensively discussed; although referred to as sinc interpolation, the resulting interpolating functions are trigonometric. In [23], the authors decomposed a periodic signal in a basis of shifted and scaled versions of a generating function, and an error analysis for the approximation method was also addressed. A generalized trigonometric interpolation was considered in [17] to give a good approximation of non-smooth functions. Recently, nonuniform sampling theorems for trigonometric polynomials were presented [16, 24]. Selva [25] proposed an FFT-based interpolation of nonuniform samples; however, this method is valid only for nonuniform samples lying on a regular grid rather than for non-uniformly distributed data in the general sense.

In all of the above-mentioned interpolation methods for finite-length discrete data, only samples of the original function are processed. As an extension of trigonometric interpolation, a multichannel interpolation of finite-length samples was suggested in [26]. This novel method makes good use of multifaceted information about the function (such as derivatives and the Hilbert transform) and is capable of generating various useful interpolation formulas by selecting suitable parameters according to the types and amount of collected data. In addition, it can be used to approximate some integral transforms (such as the Hilbert transform). A fast algorithm based on the FFT makes multichannel interpolation more effective and stable. However, only the cases of uniform sampling were considered in [26]; there is a need to extend multichannel interpolation so that non-uniformly distributed data can be processed.

The purpose of this paper is to establish the theory of multichannel interpolation for non-uniformly distributed data.
We will consider two kinds of nonuniform sampling patterns: recurrent and generic nonuniform sampling. Each kind of nonuniform sampling involves two types of samples: samples of the function itself and samples of its derivative. Four nonuniform interpolation formulas will be analyzed, and closed-form expressions for all interpolating functions are derived. Some examples are also demonstrated. We show that a trigonometric polynomial (also called a periodic bandlimited function) of finite order can be exactly reconstructed by the proposed interpolation formulas provided that the total number of samples is large enough. The error of reconstructing non-bandlimited square integrable functions is also analyzed. Concretely, the contributions of this paper may be summarized as follows:

1. We propose four types of interpolation formulas for non-uniformly distributed data. The proposed formulas involve not only samples of $f$ but also samples of $f'$, where $f$ is the function to be reconstructed. If the given data are sampled from a periodic bandlimited function, then we arrive at a perfect reconstruction provided that the amount of data is larger than the bandwidth.

2. We analyze the error that arises in reconstructing a non-bandlimited function by the proposed formulas. In particular, we compare the performance of these formulas in reconstructing square integrable functions (not necessarily bandlimited).

3. Applying the proposed interpolation formulas, we develop algorithms for the recovery of damaged pixels which are non-uniformly located in a degraded image. The algorithms perform well and can be implemented efficiently. Thus they could serve as good pre-processing methods for more sophisticated approaches (such as deep learning) in the image recovery problem.

This paper is organized as follows. In Section 2 some preparatory knowledge of Fourier series and multichannel interpolation is reviewed.
Sections 3 and 4 formulate four types of interpolation formulas for non-uniformly distributed data. Numerical examples and error analysis are presented in Section 5. The application of the proposed interpolation method to image recovery is shown in Section 6. Finally, conclusions are drawn in Section 7.

2 Preliminaries

Without loss of generality, we consider functions defined on the unit circle $\mathbb{T}$. Let $L^2(\mathbb{T})$ be the totality of square integrable functions defined on $\mathbb{T}$. It is known that $L^2(\mathbb{T})$ is a Hilbert space equipped with the inner product
$$(f, h) := \frac{1}{2\pi} \int_{\mathbb{T}} f(t)\, \overline{h(t)}\, dt, \qquad \forall f, h \in L^2(\mathbb{T}).$$
Every $f \in L^2(\mathbb{T})$ can be expanded as
$$f(t) = \sum_{n \in \mathbb{Z}} a(n)\, e^{\mathrm{i}nt},$$
where the Fourier series converges to $f$ in the $L^2$ norm. The general version of Parseval's identity is of the form
$$(f, h) = \sum_{n \in \mathbb{Z}} a(n)\, \overline{b(n)},$$
where $\{a(n)\}$ and $\{b(n)\}$ are the Fourier coefficients of $f$ and $h$ respectively. The convolution theorem manifests as
$$(f * h)(t) := \frac{1}{2\pi} \int_{\mathbb{T}} f(s)\, h(t - s)\, ds = \sum_{n \in \mathbb{Z}} a(n)\, b(n)\, e^{\mathrm{i}nt}.$$
The circular Hilbert transform [27, 28] is a useful tool in harmonic analysis and signal processing. It is defined by the singular integral
$$\mathcal{H} f(t) := \frac{1}{2\pi}\, \mathrm{p.v.} \int_{\mathbb{T}} f(s) \cot\left(\frac{t - s}{2}\right) ds = \sum_{n \in \mathbb{Z}} \left(-\mathrm{i}\, \mathrm{sgn}(n)\right) a(n)\, e^{\mathrm{i}nt},$$
where $\mathrm{sgn}$ is the signum function, taking the values $1$, $-1$ or $0$ for $n > 0$, $n < 0$ or $n = 0$ respectively. From the definition, we see that the Hilbert transform of trigonometric functions is simple and straightforward to compute. Thus trigonometry-based interpolation can also be used effectively to approximate the Hilbert transform.

Multichannel interpolation, proposed in [26], concerns the reconstruction of finite order trigonometric polynomials. To maintain consistent terminology with the classical case, in what follows a finite order trigonometric polynomial is called a periodic bandlimited function, or briefly a bandlimited function.
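The Fourier-multiplier form of the circular Hilbert transform given above translates directly into an FFT implementation on a uniform grid; a minimal sketch (the function name and test signal are ours):

```python
import numpy as np

def circular_hilbert(x):
    """Circular Hilbert transform of uniform samples of a periodic signal,
    implemented through the Fourier multiplier -i*sgn(n)."""
    n = len(x)
    a = np.fft.fft(x)
    freqs = np.fft.fftfreq(n, d=1.0 / n)      # integer frequencies n
    mult = -1j * np.sign(freqs)
    return np.real(np.fft.ifft(mult * a))

# H(cos(n t)) = sin(n t) for n > 0
t = 2 * np.pi * np.arange(64) / 64
err = np.max(np.abs(circular_hilbert(np.cos(3 * t)) - np.sin(3 * t)))
```

Since $\mathcal{H}$ acts diagonally on the trigonometric basis, the transform of a bandlimited signal is computed exactly (up to rounding) in $O(n \log n)$ operations.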
Let $N = (N_1, N_2) \in \mathbb{Z}^2$. In the sequel we denote by $B_N$ the totality of bandlimited functions of the form
$$f(t) = \sum_{n \in I_N} a(n)\, e^{\mathrm{i}nt}, \qquad I_N = \{n : N_1 \le n \le N_2\}.$$
The bandwidth of $f$ is defined as the cardinality of $I_N$, denoted by $\mu(I_N)$.

For $1 \le m \le M$, let
$$h_m(t) = \sum_{n \in \mathbb{Z}} b_m(n)\, e^{\mathrm{i}nt}, \qquad (2.1)$$
$$g_m(t) = (f * h_m)(t) = \frac{1}{2\pi} \int_{\mathbb{T}} f(s)\, h_m(t - s)\, ds.$$
Suppose that $\frac{N_2 - N_1 + 1}{M} = K \in \mathbb{N}^+$. We cut $I_N$ into pieces as $I_N = \bigcup_{j=1}^{M} I_j$, where $I_j = \{n : N_1 + (j-1)K \le n \le N_1 + jK - 1\}$. Multichannel interpolation states that a bandlimited function $f \in B_N$ can be reconstructed from the samples of $g_m$, namely
$$f(t) = \frac{1}{K} \sum_{m=1}^{M} \sum_{p=0}^{K-1} g_m\!\left(\frac{2\pi p}{K}\right) y_m\!\left(t - \frac{2\pi p}{K}\right), \qquad (2.2)$$
provided that the $M \times M$ matrix $H_n = [b_m(n + jK - K)]_{jm}$ is invertible for every $n \in I_1$. Here the interpolating functions are constructed from the elements of $H_n^{-1}$. We denote the inverse matrix by
$$H_n^{-1} = \begin{bmatrix} q_{11}(n) & q_{12}(n) & \cdots & q_{1M}(n) \\ q_{21}(n) & q_{22}(n) & \cdots & q_{2M}(n) \\ \vdots & \vdots & & \vdots \\ q_{M1}(n) & q_{M2}(n) & \cdots & q_{MM}(n) \end{bmatrix}.$$
The interpolating function $y_m$ for $1 \le m \le M$ is given by
$$y_m(t) = \sum_{n \in I_N} r_m(n)\, e^{\mathrm{i}nt},$$
where
$$r_m(n) = \begin{cases} q_{mj}(n + K - jK), & \text{if } n \in I_j,\ j = 1, 2, \cdots, M, \\ 0, & \text{if } n \notin I_N. \end{cases}$$
In the following sections, using the powerful technique of multichannel interpolation, we present four types of nonuniform interpolation formulas. Since similar concepts are involved in the following parts, several notations may appear repeatedly with minor differences. We particularly remark that a notation can have different meanings in different parts.
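To make the mechanism of (2.2) concrete, the following sketch works through a hypothetical two-channel instance with $b_1(n) = 1$ (so $g_1 = f$) and $b_2(n) = \mathrm{i}n$ (so $g_2 = f'$); all parameter values are illustrative. Recovering the Fourier coefficients of $f$ by a length-$K$ DFT in $p$ followed by an $M \times M$ solve for each $n \in I_1$ is equivalent to evaluating (2.2):

```python
import numpy as np

# Hypothetical two-channel instance of (2.2): b_1(n) = 1 (g_1 = f) and
# b_2(n) = i*n (g_2 = f'); K uniform samples of each channel.
N1, N2, M = -4, 3, 2
K = (N2 - N1 + 1) // M                      # samples per channel
rng = np.random.default_rng(0)
ns = np.arange(N1, N2 + 1)
a = rng.normal(size=ns.size) + 1j * rng.normal(size=ns.size)

f  = lambda t: sum(c * np.exp(1j * n * t) for c, n in zip(a, ns))
fp = lambda t: sum(c * 1j * n * np.exp(1j * n * t) for c, n in zip(a, ns))

tp = 2 * np.pi * np.arange(K) / K
g = np.array([f(tp), fp(tp)])               # g[m, p] = g_m(2*pi*p/K)

a_rec = np.zeros(ns.size, dtype=complex)
for i, nu in enumerate(range(N1, N1 + K)):  # nu runs over I_1
    ghat = g @ np.exp(-1j * nu * tp) / K    # DFT in p of each channel
    # H_nu[j, m] = b_m(nu + j*K): the small M x M matrix of (2.2)
    H = np.array([[1.0, 1j * (nu + j * K)] for j in range(M)])
    c = np.linalg.solve(H.T, ghat)          # yields a(nu + j*K), j = 0..M-1
    for j in range(M):
        a_rec[i + j * K] = c[j]

err = np.max(np.abs(a_rec - a))
```

The per-$\nu$ decoupling into $M \times M$ systems is exactly what makes the FFT-based implementation of multichannel interpolation fast and stable.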
3 Interpolation from recurrent nonuniform samples

Recurrent nonuniform sampling often arises in the time-interleaved analog-to-digital conversion process [29, 10]. For recurrent nonuniform sampling, a classical result that must be mentioned is Papoulis' generalized sampling expansion (GSE) [30]. The differences between the GSE and multichannel interpolation are mainly as follows:

• The GSE involves an infinite summation and is applied to recovering functions defined on the whole real line; truncation is therefore inevitable in practice. Multichannel interpolation reconstructs a finite length function from a finite number of samples.

• There is an FFT-based fast algorithm for implementing multichannel interpolation, whereas the implementation of the GSE is more complicated.

• Multichannel interpolation can be extended to the generic nonuniform sampling case (see Section 4). However, to the authors' knowledge, there is no generic nonuniform sampling formula based on the GSE.

In the following two subsections, based on the multichannel interpolation technique, we derive two interpolation formulas associated with recurrent nonuniform samples: one involves the derivative of the function and the other does not. Throughout Section 3, let $m \in \mathbb{N}^+$ and $t_p = \frac{2\pi p}{m}$ for $p = 0, 1, \ldots, m-1$.

By setting $b_1(n) = 1$ and $b_2(n) = e^{\mathrm{i}n\alpha}$ with $0 < \alpha < \frac{2\pi}{m}$ in (2.1) and applying multichannel interpolation, it is easy to obtain the interpolation formula for recurrent non-uniformly distributed data:
$$T_1(f, m, \alpha, t) = \sum_{p=0}^{m-1} f(t_p)\, y_{1,\alpha}(t - t_p) + f(\alpha + t_p)\, y_{2,\alpha}(t - t_p). \qquad (3.1)$$
The resulting interpolating functions are
$$y_{1,\alpha}(t) = \frac{\left(e^{\mathrm{i}mt} - 1\right)\left(e^{\mathrm{i}(m\alpha + N_1 t)} - e^{\mathrm{i}(m + N_1)t}\right)}{m\left(e^{\mathrm{i}t} - 1\right)\left(e^{\mathrm{i}m\alpha} - 1\right)}, \qquad (3.2)$$
$$y_{2,\alpha}(t) = \frac{e^{\mathrm{i}N_1 t}\left(e^{\mathrm{i}mt} - 1\right)\left(e^{\mathrm{i}m\alpha} - e^{\mathrm{i}mt}\right)e^{\mathrm{i}(1 - m - N_1)\alpha}}{m\left(e^{\mathrm{i}m\alpha} - 1\right)\left(e^{\mathrm{i}\alpha} - e^{\mathrm{i}t}\right)}. \qquad (3.3)$$
It is noted that for $\alpha = \frac{\pi}{m}$ the formula (3.1) reduces to the uniform sampling interpolation. Another fact is that if $m$ is larger than half the bandwidth of $f$, then the reconstruction is exact. Most often one has no need to compute the interpolating functions explicitly, since $y_{i,\alpha}(t)$ in (3.2) and (3.3) can be implemented efficiently by the FFT [26].

Importantly, the interpolation consistency holds for the formula (3.1). Namely, the following identities hold:
$$y_{1,\alpha}(t_q - t_p) = \delta_{pq}, \qquad y_{1,\alpha}(t_q - t_p + \alpha) = 0, \qquad (3.4)$$
$$y_{2,\alpha}(t_q - t_p) = 0, \qquad y_{2,\alpha}(t_q - t_p + \alpha) = \delta_{pq}, \qquad (3.5)$$
where $p, q = 0, 1, \ldots, m-1$ and $\delta_{pq} = 1$ for $p = q$, $\delta_{pq} = 0$ otherwise. Formulas (3.4) and (3.5) follow by direct computation from the interpolating functions (3.2) and (3.3). A concrete example is depicted in Figure 1. Here $m = 4$, $\alpha = \frac{\pi}{2m}$, and the original function is given by
$$f(t) = 0.1\, t(t - 2\pi)\left(0.01t^{2} + 0.05t + \cos(3\sin t)\right), \qquad t \in [0, 2\pi). \qquad (3.6)$$

Figure 1: Illustration of interpolation for recurrent nonuniform samples and its consistency. The blue line is the original function. The red dash-dot line is the interpolated result for the given data points.

We see that the red dash-dot line passes through all the red circles.

In this part we consider a kind of recurrent multichannel interpolation which involves nonuniform samples and derivatives. Let $b_1(n) = e^{\mathrm{i}n\alpha}$ and $b_2(n) = \mathrm{i}n$; then we have the matrix
$$H_n = \begin{bmatrix} e^{\mathrm{i}n\alpha} & \mathrm{i}n \\ e^{\mathrm{i}(n+m)\alpha} & \mathrm{i}(n+m) \end{bmatrix} \qquad \text{for } n \in I_1.$$
It is easy to get its inverse as
$$H_n^{-1} = \begin{bmatrix} \dfrac{e^{-\mathrm{i}n\alpha}(m+n)}{m + n - n e^{\mathrm{i}m\alpha}} & \dfrac{-n\, e^{-\mathrm{i}n\alpha}}{m + n - n e^{\mathrm{i}m\alpha}} \\[2mm] \dfrac{\mathrm{i}\, e^{\mathrm{i}m\alpha}}{m + n - n e^{\mathrm{i}m\alpha}} & \dfrac{-\mathrm{i}}{m + n - n e^{\mathrm{i}m\alpha}} \end{bmatrix}.$$

Figure 2: Illustration of interpolation for recurrent nonuniform samples of a function and its derivative. The blue line is the original function. The red dash-dot line is the interpolated result for the given data points.
Set
$$v_{1,n,\alpha}(t) := \frac{e^{-\mathrm{i}n\alpha}(m+n)}{m + n - n e^{\mathrm{i}m\alpha}}\, e^{\mathrm{i}nt} - \frac{n\, e^{-\mathrm{i}n\alpha}}{m + n - n e^{\mathrm{i}m\alpha}}\, e^{\mathrm{i}(n+m)t} = \frac{\left(m - n\left(e^{\mathrm{i}mt} - 1\right)\right) e^{\mathrm{i}n(t - \alpha)}}{m + n - n e^{\mathrm{i}m\alpha}},$$
$$v_{2,n,\alpha}(t) := \frac{\mathrm{i}\, e^{\mathrm{i}m\alpha}\, e^{\mathrm{i}nt}}{m + n - n e^{\mathrm{i}m\alpha}} - \frac{\mathrm{i}\, e^{\mathrm{i}(n+m)t}}{m + n - n e^{\mathrm{i}m\alpha}} = \frac{\mathrm{i}\left(e^{\mathrm{i}(m\alpha + nt)} - e^{\mathrm{i}(m+n)t}\right)}{m + n - n e^{\mathrm{i}m\alpha}}.$$
It should be noted that if $\alpha = \frac{\pi}{m}$, then the denominator in $v_{i,n,\alpha}(t)$, i.e., $m + n - n e^{\mathrm{i}m\alpha} = m + 2n$, vanishes for $n = -\frac{m}{2}$. Otherwise, we have the interpolating functions
$$y_{k,\alpha}(t - t_p) = \frac{1}{m} \sum_{n=N_1}^{N_1+m-1} v_{k,n,\alpha}(t)\, e^{-\mathrm{i}n \frac{2\pi p}{m}}, \qquad k = 1, 2.$$
Moreover, a bandlimited function $f \in B_N$ can be exactly reconstructed by
$$T_2(f, m, \alpha, t) := \sum_{p=0}^{m-1} f(\alpha + t_p)\, y_{1,\alpha}(t - t_p) + f'(t_p)\, y_{2,\alpha}(t - t_p), \qquad (3.7)$$
provided that $m \ge \frac{\mu(I_N)}{2}$. As mentioned in [26], the interpolating functions $y_{k,\alpha}(t - t_p)$ for $k = 1, 2$ can be calculated by taking the FFT of $v_{k,n,\alpha}(t)$ with respect to $n$. When $\alpha = 0$, the formula (3.7) reduces to a kind of multichannel interpolation for the uniformly distributed data $\{f(t_p)\}$, $\{f'(t_p)\}$:
$$T_2(f, m, 0, t) = \sum_{p=0}^{m-1} f(t_p)\, y_{1,0}(t - t_p) + f'(t_p)\, y_{2,0}(t - t_p),$$
where
$$y_{1,0}(t) = \frac{e^{\mathrm{i}N_1 t}\left(e^{\mathrm{i}mt} - 1\right)^{2}\left((N_1 + m) - (N_1 + m - 1)\, e^{\mathrm{i}t}\right)}{m^{2}\left(1 - e^{\mathrm{i}t}\right)^{2}},$$
$$y_{2,0}(t) = \frac{\mathrm{i}\, e^{\mathrm{i}N_1 t}\left(2e^{\mathrm{i}mt} - e^{2\mathrm{i}mt} - 1\right)}{m^{2}\left(e^{\mathrm{i}t} - 1\right)}.$$
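The two-channel recurrent formula (3.7) can be sketched numerically as follows (our own illustrative code; $m$, $N_1$ and $\alpha$ are arbitrary admissible choices, and the loop realizes $\frac{1}{m}\sum_n v_{k,n,\alpha}(t)\, F_k(n)$ with $F_k(n)$ the length-$m$ DFT of the samples):

```python
import numpy as np

# Sketch of (3.7): reconstruct f in B_N from f(alpha + t_p) and f'(t_p),
# with t_p = 2*pi*p/m.  Illustrative parameters.
m, N1 = 4, -4                      # I_N = {N1, ..., N1 + 2m - 1}
alpha = np.pi / (2 * m)            # 0 < alpha < 2*pi/m, alpha != pi/m
rng = np.random.default_rng(1)
ns = np.arange(N1, N1 + 2 * m)
a = rng.normal(size=2 * m) + 1j * rng.normal(size=2 * m)

f  = lambda t: sum(c * np.exp(1j * n * t) for c, n in zip(a, ns))
fp = lambda t: sum(c * 1j * n * np.exp(1j * n * t) for c, n in zip(a, ns))

tp = 2 * np.pi * np.arange(m) / m
s1, s2 = f(alpha + tp), fp(tp)     # the two channels of recurrent samples

def reconstruct(t):
    t = np.asarray(t, dtype=complex)
    out = np.zeros_like(t)
    for n in range(N1, N1 + m):    # n runs over I_1
        den = m + n - n * np.exp(1j * m * alpha)
        v1 = (m - n * (np.exp(1j * m * t) - 1)) * np.exp(1j * n * (t - alpha)) / den
        v2 = 1j * (np.exp(1j * (m * alpha + n * t)) - np.exp(1j * (m + n) * t)) / den
        F1 = np.sum(s1 * np.exp(-1j * n * tp))   # DFT of channel 1 at n
        F2 = np.sum(s2 * np.exp(-1j * n * tp))   # DFT of channel 2 at n
        out += (v1 * F1 + v2 * F2) / m
    return out

t = np.linspace(0, 2 * np.pi, 101)
err = np.max(np.abs(reconstruct(t) - f(t)))
```

Since $\mu(I_N) = 2m$ here, the condition $m \ge \mu(I_N)/2$ is met with equality and the reconstruction is exact up to rounding.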
We illustrate the interpolation formula (3.7) in Figure 2 for recurrent non-uniformly distributed data of the function $f(t)$ given by (3.6). The red circles represent the samples of $f(t)$, and the reconstructed function (red dash-dot line) passes through all the red circles. Moreover, the blue line and the red dash-dot line have the same slope at the particular positions shown by black asterisks, and the $t$-coordinates of the red circles and black asterisks are interlaced and bunched.

4 Interpolation from generic nonuniform samples

Although referred to as nonuniform, the recurrent nonuniform sampling case imposes restrictions on the locations of the samples: their distribution is, to some extent, still regular. Moreover, as mentioned in [31], recurrent nonuniform samples can be regarded as a combination of several mutually delayed sequences of uniform samples. In this section we consider a more general interpolation formula which is applicable to generic non-uniformly distributed data. Thanks to the finite summation in (2.2), it is possible to consider a specific case. Let $M = \mu(I_N)$, $I_1 = \{N_1\}$ and $K = 1$ in (2.2); then we construct the matrix
$$H = \begin{bmatrix} b_1(N_1) & b_2(N_1) & \cdots & b_M(N_1) \\ b_1(N_1+1) & b_2(N_1+1) & \cdots & b_M(N_1+1) \\ \vdots & \vdots & \ddots & \vdots \\ b_1(N_1+M-1) & b_2(N_1+M-1) & \cdots & b_M(N_1+M-1) \end{bmatrix}.$$
Under this setting, one may derive various nonuniform sampling interpolation formulas provided that $H$ is invertible. The key points are how to determine whether $H$ is invertible and how to calculate its inverse. Unlike $H_n$ in the normal case, $H$ is a large complex-valued matrix, in general with a high condition number. Therefore, in order to achieve a stable reconstruction, it is not feasible to compute the inverse of $H$ by numerical methods. Let $0 \le t_1 < t_2 < \cdots < t_M < 2\pi$ and $b_p(n) = e^{\mathrm{i}nt_p}$ for $1 \le p \le M$. We have the following matrix:
$$H = \begin{bmatrix} e^{\mathrm{i}N_1 t_1} & e^{\mathrm{i}N_1 t_2} & \cdots & e^{\mathrm{i}N_1 t_M} \\ e^{\mathrm{i}(N_1+1)t_1} & e^{\mathrm{i}(N_1+1)t_2} & \cdots & e^{\mathrm{i}(N_1+1)t_M} \\ \vdots & \vdots & \ddots & \vdots \\ e^{\mathrm{i}(N_1+M-1)t_1} & e^{\mathrm{i}(N_1+M-1)t_2} & \cdots & e^{\mathrm{i}(N_1+M-1)t_M} \end{bmatrix}.$$
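For this choice of $b_p$ the matrix $H$ is of Vandermonde type, so distinct nodes guarantee invertibility. The sketch below (our own illustrative code, with a jittered grid of nodes) recovers the coefficients of a bandlimited $f$ by solving the system directly and then checks the result against the closed-form trigonometric Lagrange functions $h_p$ derived in this section:

```python
import numpy as np

# With b_p(n) = e^{i n t_p}, the samples f(t_p) of f in B_N satisfy
# s = H^T a, where a holds the Fourier coefficients.  Illustrative parameters.
rng = np.random.default_rng(2)
M, N1 = 7, -3                                      # I_N = {-3, ..., 3}
ns = np.arange(N1, N1 + M)
a = rng.normal(size=M) + 1j * rng.normal(size=M)
f = lambda t: sum(c * np.exp(1j * n * t) for c, n in zip(a, ns))
# jittered grid: distinct, well-separated nonuniform nodes
tk = 2 * np.pi * (np.arange(M) + 0.5 * rng.uniform(size=M)) / M

H = np.exp(1j * np.outer(ns, tk))                  # H[k, p] = e^{i ns[k] t_p}
a_rec = np.linalg.solve(H.T, f(tk))                # invert the Vandermonde system
err_coef = np.max(np.abs(a_rec - a))

def h(p, t):
    """Closed-form trigonometric Lagrange interpolating function h_p(t)."""
    num = np.prod([np.sin((t - ts) / 2) for s, ts in enumerate(tk) if s != p], axis=0)
    den = np.prod([np.sin((tk[p] - ts) / 2) for s, ts in enumerate(tk) if s != p])
    return (np.exp(1j * N1 * (t - tk[p]))
            * np.exp(1j * (M - 1) * (t - tk[p]) / 2) * num / den)

t = np.linspace(0, 2 * np.pi, 100)
rec = sum(f(tk[p]) * h(p, t) for p in range(M))
err_interp = np.max(np.abs(rec - f(t)))
```

Both routes agree with the original function to machine precision, though for large $M$ or clustered nodes the direct solve degrades, which is precisely why the closed form is valuable.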
It is easy to show that the determinant of $H$ is
$$\det H = e^{\mathrm{i}N_1(t_1 + t_2 + \cdots + t_M)} \prod_{1 \le p < q \le M} \left(e^{\mathrm{i}t_q} - e^{\mathrm{i}t_p}\right),$$
which is nonzero whenever the nodes are distinct. Consequently, a bandlimited function $f \in B_N$ with $\mu(I_N) = M$ can be exactly reconstructed from the nonuniform samples $\{f(t_p)\}_{p=1}^{M}$ by
$$f(t) = \sum_{p=1}^{M} f(t_p)\, h_p(t). \qquad (4.1)$$
Computing the inverse of $H$ analytically through its Vandermonde structure, one finds the closed form of the interpolating functions:
$$h_p(t) = e^{\mathrm{i}N_1(t - t_p)}\, e^{\mathrm{i}\frac{(M-1)(t - t_p)}{2}}\, \frac{\prod_{s=1,\, s \ne p}^{M} \sin\left(\frac{t - t_s}{2}\right)}{\prod_{s=1,\, s \ne p}^{M} \sin\left(\frac{t_p - t_s}{2}\right)}.$$
This formula is consistent with the result presented in [16] upon selecting specific values of the parameters $N_1$ and $M$. In comparison with the proof of this result in [16], the proposed derivation is simpler and easier to follow.

We illustrate the interpolation formula (4.1) in Figure 3 for nonuniform samples of the function $f(t)$ given by (3.6). The red circles represent the randomly selected nonuniform samples of $f(t)$, and the reconstructed function (red dash-dot line) passes through all the red circles. For the case $t_p = \frac{2\pi(p-1)}{M}$, $1 \le p \le M$, the formula (4.1) reduces to the uniform sampling interpolation given in [26].

The fact that a bandlimited function can be reconstructed from the values of the function and its derivative is well known [30]. However, the samples involved in such theorems are uniformly distributed. Let $t_1, t_2, \ldots, t_m$ be arbitrary non-uniformly spaced points in $[0, 2\pi)$, and suppose that $f \in B_N$ with $\mu(I_N) \le M = 2m$. The question is whether $f$ can be perfectly reconstructed from the samples of itself and its first derivative, i.e., $\{f(t_p), f'(t_p)\}_{p=1}^{m}$. This is tantamount to asking whether $H$ is invertible.
Here $H = [v_{kj}]$ is an $M$-th order square matrix with
$$v_{kj} := \begin{cases} e^{\mathrm{i}(N_1 + k - 1)t_p}, & j = 2p - 1, \\ \mathrm{i}(N_1 + k - 1)\, e^{\mathrm{i}(N_1 + k - 1)t_p}, & j = 2p. \end{cases}$$
The answer is affirmative. In this subsection we derive the main result of the current paper: interpolation for non-uniformly distributed samples of a function and its derivative. The interpolating functions are presented in closed form, and the error of reconstructing a non-bandlimited function by the proposed formula will be discussed in the next section.

Let $\widetilde{H} = [\widetilde{v}_{kj}]$ with
$$\widetilde{v}_{kj} := \begin{cases} e^{\mathrm{i}(k-1)t_p}, & j = 2p - 1, \\ (N_1 + k - 1)\, e^{\mathrm{i}(k-1)t_p}, & j = 2p. \end{cases}$$
It is easy to see that $\det H = \mathrm{i}^{m}\, e^{2\mathrm{i}N_1(t_1 + t_2 + \cdots + t_m)} \det \widetilde{H}$. Note that $\det \widetilde{H}$ is a function of $t_1, t_2, \ldots, t_m$. The following lemma gives a recursive relation for $\det \widetilde{H}$.

Lemma 4.1 Let $\widetilde{H}$ be given as above. Then its determinant satisfies the recursive relation
$$\det \widetilde{H}(t_1, t_2, \ldots, t_m) = e^{\mathrm{i}t_1} \prod_{p > 1} \left(e^{\mathrm{i}t_p} - e^{\mathrm{i}t_1}\right)^{4} \det \widetilde{H}(t_2, \ldots, t_m). \qquad (4.3)$$

Proof. Applying column operations to $\widetilde{H}$ (subtracting $N_1$ times each odd column from the adjacent even column), it follows that $\det \widetilde{H}$ equals the determinant of the matrix $C$ with entries
$$C_{kj} = \begin{cases} e^{\mathrm{i}(k-1)t_p}, & j = 2p - 1, \\ (k-1)\, e^{\mathrm{i}(k-1)t_p}, & j = 2p. \end{cases} \qquad (4.4)$$
Subtracting $e^{\mathrm{i}t_1}$ times row $(k-1)$ from row $k$ for $k = 2m, 2m-1, \cdots, 2$ successively, we remove the first column without changing the determinant; in the resulting matrix the columns associated with the nodes $t_2, \ldots, t_m$ contain the factors $x_k = e^{\mathrm{i}t_k} - e^{\mathrm{i}t_1}$ for $k = 2, 3, \cdots, m$.
Subtracting the multiple $\frac{e^{\mathrm{i}t_1}}{x_k}$ of column $(2k-2)$ from column $(2k-1)$ and extracting $x_k$ from columns $(2k-2)$ and $(2k-1)$ for $k = 2, 3, \cdots, m$ successively, we reach
$$\det \widetilde{H} = x_2^{2}\, x_3^{2} \cdots x_m^{2}\, \det \widetilde{H}^{(1)}, \qquad (4.5)$$
where $\widetilde{H}^{(1)}$ has the same structure with the powers of $e^{\mathrm{i}t_1}$ as its first column. For $\widetilde{H}^{(1)}$, extracting $e^{\mathrm{i}t_1}$ from the first column and subtracting $e^{\mathrm{i}t_1}$ times row $(k-1)$ from row $k$ for $k = 2m-1, 2m-2, \cdots, 2$ successively, we remove the first column of $\widetilde{H}^{(1)}$ and obtain
$$\det \widetilde{H}^{(1)} = e^{\mathrm{i}t_1} \det \widetilde{H}^{(2)}. \qquad (4.6)$$
Repeating the same column operations on $\widetilde{H}^{(2)}$, i.e., subtracting the appropriate multiples of columns and extracting $x_k$ from the two columns associated with $t_k$ for $k = 2, 3, \cdots, m$ successively, we get
$$\det \widetilde{H}^{(2)} = x_2^{2}\, x_3^{2} \cdots x_m^{2}\, \det \widetilde{H}(t_2, \ldots, t_m). \qquad (4.7)$$
Then the recursive relation (4.3) follows from (4.5), (4.6) and (4.7). The proof is complete. □

Since
$$\det \widetilde{H}(t_m) = \det \begin{bmatrix} 1 & N_1 \\ e^{\mathrm{i}t_m} & (N_1 + 1)\, e^{\mathrm{i}t_m} \end{bmatrix} = e^{\mathrm{i}t_m},$$
by induction we conclude that
$$\det \widetilde{H}(t_1, t_2, \ldots, t_m) = e^{\mathrm{i}(t_1 + t_2 + \cdots + t_m)} \prod_{1 \le p < q \le m} \left(e^{\mathrm{i}t_q} - e^{\mathrm{i}t_p}\right)^{4},$$
which is nonzero for distinct nodes, so $H$ is invertible. Let $0 \le t_1 < t_2 < \cdots < t_m < 2\pi$ be non-uniformly distributed points and suppose that $f \in B_N$ with $\mu(I_N) \le M = 2m$. Then $f$ can be exactly recovered by the interpolation formula
$$T(f, m, t) = \sum_{p=1}^{m} f(t_p)\, \psi_p(t) + f'(t_p)\, \phi_p(t). \qquad (4.10)$$
To derive the closed-form expressions of $\psi_p$ and $\phi_p$, the direct approach would be to compute the inverse of $H$. This is, as discussed earlier, not a feasible approach.
Fortunately, the interpolating functions can be computed tactfully by introducing some auxiliary matrices. Firstly, we need to compute the cofactor matrix of $C$ defined by (4.4). Constructing an auxiliary matrix $A$ by substituting the second column of $C$ with $[1, e^{\mathrm{i}t}, \cdots, e^{\mathrm{i}(2m-1)t}]^{T}$ brings convenience to the computation:
$$A = \begin{bmatrix} 1 & 1 & 1 & 0 & \cdots & 1 & 0 \\ e^{\mathrm{i}t_1} & e^{\mathrm{i}t} & e^{\mathrm{i}t_2} & e^{\mathrm{i}t_2} & \cdots & e^{\mathrm{i}t_m} & e^{\mathrm{i}t_m} \\ \vdots & \vdots & \vdots & \vdots & & \vdots & \vdots \\ e^{\mathrm{i}(2m-1)t_1} & e^{\mathrm{i}(2m-1)t} & e^{\mathrm{i}(2m-1)t_2} & (2m-1)e^{\mathrm{i}(2m-1)t_2} & \cdots & e^{\mathrm{i}(2m-1)t_m} & (2m-1)e^{\mathrm{i}(2m-1)t_m} \end{bmatrix}.$$
On the one hand, by arguments similar to the computation of $\det \widetilde{H}$, we get
$$\det A(t, t_1, \cdots, t_m) = \left(e^{\mathrm{i}t} - e^{\mathrm{i}t_1}\right) \prod_{s=2}^{m} \left(e^{\mathrm{i}t} - e^{\mathrm{i}t_s}\right)^{2} \prod_{q=2}^{m} \left(e^{\mathrm{i}t_q} - e^{\mathrm{i}t_1}\right)^{2} \det \widetilde{H}(t_2, \ldots, t_m). \qquad (4.11)$$
On the other hand, the cofactor expansion of $\det A$ along the second column gives
$$\det A(t, t_1, \cdots, t_m) = \sum_{k=1}^{2m} C_{k2}(t_1, \cdots, t_m)\, e^{\mathrm{i}(k-1)t}, \qquad (4.12)$$
where $C_{k2}$ is the $(k, 2)$ cofactor of $C$. By comparing the coefficients of $e^{\mathrm{i}(k-1)t}$ in (4.11) and (4.12), we obtain the expression of $C_{k2}$ for $k = 1, 2, \cdots, 2m$. For example,
$$C_{12}(t_1, \cdots, t_m) = -e^{\mathrm{i}(t_1 + 2t_2 + \cdots + 2t_m)} \prod_{q=2}^{m} \left(e^{\mathrm{i}t_q} - e^{\mathrm{i}t_1}\right)^{2} \det \widetilde{H}(t_2, \ldots, t_m) = -e^{\mathrm{i}(t_1 + 3t_2 + \cdots + 3t_m)} \prod_{q=2}^{m} \left(e^{\mathrm{i}t_q} - e^{\mathrm{i}t_1}\right)^{2} \prod_{2 \le p < q \le m} \left(e^{\mathrm{i}t_q} - e^{\mathrm{i}t_p}\right)^{4}.$$
Proceeding in this way, all cofactors of $C$, and hence the interpolating functions $\psi_p$ and $\phi_p$, are obtained in closed form.

5 Numerical examples and error analysis

For the generic nonuniform patterns, the sampling instants are generated by
$$\tilde{t}_n = (n-1)\frac{2\pi}{N} + \zeta_n, \qquad n = 1, 2, \ldots, N, \qquad (5.1)$$
$$\tilde{t}_n = (n-1)\frac{4\pi}{N} + \eta_n, \qquad n = 1, 2, \ldots, N, \qquad (5.2)$$
where $\zeta_n$ and $\eta_n$ are i.i.d. sequences of random variables uniformly distributed on $\left(0, \frac{2\pi}{N}\right)$ and $\left(0, \frac{4\pi}{N}\right)$ respectively. To give a more comprehensive presentation of GN1 and GN2, we repeat each generic nonuniform sampling experiment 100 times. Accordingly, $\delta_1$ and $\delta_2$ of GN1 and GN2 are averaged over these 100 experiments, and the corresponding variances are also provided. Some results for reconstructing $f$ and $\mathcal{H}f$ are depicted in Figures 6 and 7.
Visually there is not much difference among the results reconstructed by the different formulas, provided that the same number of samples is used. Roughly, the following conclusions can be drawn from the numerical results.

1. If the same amount of data is employed to reconstruct $f$ (or $\mathcal{H}f$), the fluctuations of the RMSE caused by the different data types and data distribution patterns are not significant. In other words, the amount of data is the chief factor affecting the performance of the reconstruction.

2. The more grid points the data is distributed on, the better the reconstruction performs; this can be seen by comparing the reconstructed results of RN2 and GN2. In addition, the more even the data distribution is, the better the reconstruction performs; this can be seen by comparing the reconstructed results of GN1 and U1, or GN2 and U2.

3. In general, reconstructing a function from its own samples performs slightly better than a reconstruction involving other types of data. This can be seen from the reconstructed results of GN1 and GN2.

Table 1: Reconstruction results using the different interpolation formulas. For each total number of samples (36, 54, 72 and 108), the patterns RN1, GN1 and U1 use samples of $f$ only, while RN2, GN2 and U2 split the samples evenly between $f$ and $f'$; the errors $\delta_1$ and $\delta_2$ are reported for each pattern, together with their variances over the 100 repetitions for GN1 and GN2.

The last two conclusions are certainly based on the premise that the same amount of data is used for the reconstruction. An additional observation is that $\delta_1$ and $\delta_2$ are nearly equal in each experiment, since the Fourier coefficients of $f$ and $\mathcal{H}f$ have the same absolute value for all $n \in \mathbb{Z} \setminus \{0\}$.
In the previous subsection we presented the reconstruction errors of the proposed interpolation formulas experimentally. In this part we give the error estimates analytically; these are very important for the reliability of the reconstruction methods.

We denote by $f_\tau(t) = f(t - \tau)$ the shifted version of $f$. Let $\mathcal{T}_N$ be the reconstruction operator corresponding to any one of the aforementioned interpolation formulas, where $N$ indicates the location of the Fourier coefficients of the reconstructed function $\mathcal{T}_N f$. It is easy to see that
$$\mathcal{T}_N f_\tau(t) = \sum_{p=1}^{m} f(t_p - \tau)\, \psi_p(t) + f'(t_p - \tau)\, \phi_p(t), \qquad (5.3)$$
$$\mathcal{T}_N f(t - \tau) = \sum_{p=1}^{m} f(t_p)\, \psi_p(t - \tau) + f'(t_p)\, \phi_p(t - \tau).$$

Figure 6: Reconstructing $f$ by (a) GN1 with 54 total samples, (b) GN2 with 72 total samples, (c) GN1 with 72 total samples, (d) RN1 with 54 total samples, (e) RN2 with 72 total samples, (f) RN1 with 72 total samples.

Thus $\mathcal{T}_N f(t - \tau) \ne \mathcal{T}_N f_\tau(t)$. Not just GN2: most of the other interpolation formulas are not shift-invariant in general. Therefore the MSE defined by
$$\varsigma(f, N, \tau) = \left\|f_\tau - \mathcal{T}_N f_\tau\right\|^{2} = \frac{1}{2\pi} \int_{\mathbb{T}} \left|f_\tau(t) - \mathcal{T}_N f_\tau(t)\right|^{2} dt$$
is not independent of $\tau$. There is no doubt that $\varsigma(f, N, \tau)$ is $2\pi$-periodic in $\tau$. Note that the time shift $\tau$ can be viewed as the phase difference between $f$ and $f_\tau$, and the exact phase of a function or signal is generally unknown in most practical applications [23]. Hence we compute the averaged error
$$\varepsilon(f, N) = \sqrt{\frac{1}{2\pi} \int_{0}^{2\pi} \varsigma(f, N, \tau)\, d\tau}.$$
Figure 7: Reconstructing $\mathcal{H}f$ by (a) RN1 with 64 total samples, (b) GN1 with 64 total samples, (c) U1 with 64 total samples, (d) RN2 with 64 total samples, (e) GN2 with 64 total samples, (f) U2 with 64 total samples.

As can be seen from the previous section, the derivation of GN2 is more arduous than that of the others; in the following we derive the expression of the averaged error for GN2. From (5.3), (4.8) and (4.9), we rewrite $\mathcal{T}_N f_\tau(t)$ as
$$\sum_{k=1}^{2m} e^{\mathrm{i}(N_1 + k - 1)t} \sum_{p=1}^{m} \left[ f(t_p - \tau)\, w_{2p-1}(k) + f'(t_p - \tau)\, w_{2p}(k) \right].$$
It is noted that $N$ is equal to $\{N_1, \ldots, N_1 + 2m - 1\}$ in the above formula. Applying Parseval's identity, we have
$$\frac{1}{2\pi} \int_{\mathbb{T}} \overline{f_\tau(t)}\, \mathcal{T}_N f_\tau(t)\, dt = \sum_{n \in I_N} \overline{a(n)}\, e^{\mathrm{i}n\tau} \sum_{p=1}^{m} \left[ f(t_p - \tau)\, w_{2p-1}(n - N_1 + 1) + f'(t_p - \tau)\, w_{2p}(n - N_1 + 1) \right]. \qquad (5.4)$$
Similarly,
$$\|f_\tau\|^{2} = \sum_{n \in \mathbb{Z}} |a(n)|^{2}, \qquad (5.5)$$
$$\|\mathcal{T}_N f_\tau\|^{2} = \sum_{k=1}^{2m} \sum_{p=1}^{m} \sum_{q=1}^{m} \sum_{j=1}^{4} D_j(p, q, \tau)\, E_j(p, q, k), \qquad (5.6)$$
where
$$\begin{aligned} D_1(p,q,\tau) &= f(t_p - \tau)\,\overline{f(t_q - \tau)}, & E_1(p,q,k) &= w_{2p-1}(k)\,\overline{w_{2q-1}(k)}; \\ D_2(p,q,\tau) &= f(t_p - \tau)\,\overline{f'(t_q - \tau)}, & E_2(p,q,k) &= w_{2p-1}(k)\,\overline{w_{2q}(k)}; \\ D_3(p,q,\tau) &= f'(t_p - \tau)\,\overline{f(t_q - \tau)}, & E_3(p,q,k) &= w_{2p}(k)\,\overline{w_{2q-1}(k)}; \\ D_4(p,q,\tau) &= f'(t_p - \tau)\,\overline{f'(t_q - \tau)}, & E_4(p,q,k) &= w_{2p}(k)\,\overline{w_{2q}(k)}. \end{aligned}$$
To simplify (5.4), (5.5) and (5.6), we need the following identities:
$$\frac{1}{2\pi} \int_{\mathbb{T}} f(t_p - \tau)\, e^{\mathrm{i}n\tau}\, d\tau = a(n)\, e^{\mathrm{i}nt_p}, \qquad \frac{1}{2\pi} \int_{\mathbb{T}} f'(t_p - \tau)\, e^{\mathrm{i}n\tau}\, d\tau = \mathrm{i}n\, a(n)\, e^{\mathrm{i}nt_p},$$
$$\frac{1}{2\pi} \int_{\mathbb{T}} f(t_p - \tau)\,\overline{f(t_q - \tau)}\, d\tau = \sum_{n \in \mathbb{Z}} |a(n)|^{2}\, e^{\mathrm{i}n(t_p - t_q)},$$
$$\frac{1}{2\pi} \int_{\mathbb{T}} f(t_p - \tau)\,\overline{f'(t_q - \tau)}\, d\tau = -\mathrm{i} \sum_{n \in \mathbb{Z}} |a(n)|^{2}\, n\, e^{\mathrm{i}n(t_p - t_q)},$$
$$\frac{1}{2\pi} \int_{\mathbb{T}} f'(t_p - \tau)\,\overline{f'(t_q - \tau)}\, d\tau = \sum_{n \in \mathbb{Z}} |a(n)|^{2}\, n^{2}\, e^{\mathrm{i}n(t_p - t_q)}.$$
Integrating both sides of (5.4) and (5.6) on $\mathbb{T}$ with respect to $\tau$ and making use of the above identities, we get that
$$\frac{1}{(2\pi)^2}\int_{\mathbb{T}} d\tau \int_{\mathbb{T}} f_\tau(t)\,\overline{\mathcal{T}_N f_\tau(t)}\, dt = \sum_{n\in I_N} |a(n)|^2\, \overline{\sum_{p=1}^{m} \Big( e^{int_p}\, w_{2p-1}(n-N_1+1) + i n\, e^{int_p}\, w_{2p}(n-N_1+1) \Big)}$$
and
$$\frac{1}{2\pi}\int_{\mathbb{T}} \|\mathcal{T}_N f_\tau\|^2\, d\tau = \sum_{k=1}^{2m} \sum_{n\in\mathbb{Z}} |a(n)|^2 \sum_{1\le p,q\le m} \Big( e^{in(t_p-t_q)} E_1(p,q,k) - i n\, e^{in(t_p-t_q)} E_2(p,q,k) + i n\, e^{in(t_p-t_q)} E_3(p,q,k) + n^2\, e^{in(t_p-t_q)} E_4(p,q,k) \Big)$$
$$= \sum_{k=1}^{2m} \sum_{n\in\mathbb{Z}} |a(n)|^2 \Bigg| \sum_{p=1}^{m} \Big( e^{int_p}\, w_{2p-1}(k) + i n\, e^{int_p}\, w_{2p}(k) \Big) \Bigg|^2.$$
From the definition of $w_j(k)$, for any $n\in I_N$,
$$\sum_{p=1}^{m} \Big( e^{int_p}\, w_{2p-1}(n-N_1+1) + i n\, e^{int_p}\, w_{2p}(n-N_1+1) \Big) = 1, \qquad \sum_{k=1}^{2m} \Bigg| \sum_{p=1}^{m} \Big( e^{int_p}\, w_{2p-1}(k) + i n\, e^{int_p}\, w_{2p}(k) \Big) \Bigg|^2 = 1.$$
It follows that
$$\frac{1}{(2\pi)^2}\int_{\mathbb{T}} d\tau \int_{\mathbb{T}} f_\tau(t)\,\overline{\mathcal{T}_N f_\tau(t)}\, dt = \frac{1}{(2\pi)^2}\int_{\mathbb{T}} d\tau \int_{\mathbb{T}} \overline{f_\tau(t)}\,\mathcal{T}_N f_\tau(t)\, dt = \sum_{n\in I_N} |a(n)|^2$$
and
$$\frac{1}{2\pi}\int_{\mathbb{T}} \|\mathcal{T}_N f_\tau\|^2\, d\tau = \sum_{n\in I_N} |a(n)|^2 + \sum_{n\notin I_N} |a(n)|^2 \sum_{k=1}^{2m} \Bigg| \sum_{p=1}^{m} \Big( e^{int_p}\, w_{2p-1}(k) + i n\, e^{int_p}\, w_{2p}(k) \Big) \Bigg|^2.$$
Therefore the square of the averaged error for GN2 is given by
$$\varepsilon^2(\mathrm{GN2},f,N) = \frac{1}{2\pi}\int_{\mathbb{T}} \|f_\tau\|^2\, d\tau - \frac{1}{(2\pi)^2}\int_{\mathbb{T}} d\tau \int_{\mathbb{T}} f_\tau(t)\,\overline{\mathcal{T}_N f_\tau(t)}\, dt - \frac{1}{(2\pi)^2}\int_{\mathbb{T}} d\tau \int_{\mathbb{T}} \overline{f_\tau(t)}\,\mathcal{T}_N f_\tau(t)\, dt + \frac{1}{2\pi}\int_{\mathbb{T}} \|\mathcal{T}_N f_\tau\|^2\, d\tau = \sum_{n\notin I_N} |a(n)|^2\, Er(\mathrm{GN2},N,n),$$
where
$$Er(\mathrm{GN2},N,n) = 1 + \sum_{k=1}^{2m} \Bigg| \sum_{p=1}^{m} \Big( e^{int_p}\, w_{2p-1}(k) + i n\, e^{int_p}\, w_{2p}(k) \Big) \Bigg|^2.$$
Similarly, we can get the averaged errors for GN1 and RN2 respectively as
$$\varepsilon^2(\mathrm{GN1},f,N) = \sum_{n\notin I_N} |a(n)|^2\, Er(\mathrm{GN1},N,n), \qquad \varepsilon^2(\mathrm{RN2},f,N) = \sum_{n\notin I_N} |a(n)|^2\, Er(\mathrm{RN2},N,n),$$
where
$$Er(\mathrm{GN1},N,n) = 1 + \sum_{k=1}^{M} \Bigg| \sum_{p=1}^{M} e^{int_p}\, z_p(k) \Bigg|^2$$
and
$$Er(\mathrm{RN2},N,n) = 1 + \Bigg| \frac{(2m+n-k_n m)\, e^{i(k_n-1)m\alpha} - n\, e^{im\alpha}}{(m+n-k_n m) - (n+m-k_n m)\, e^{im\alpha}} \Bigg|^2 + \Bigg| \frac{n - (m+n-k_n m)\, e^{i(k_n-1)m\alpha}}{(m+n-k_n m) - (n+m-k_n m)\, e^{im\alpha}} \Bigg|^2,$$
with $k_n = \mathrm{fix}\big(\frac{n-N_1}{m}\big) + 1$, where $\mathrm{fix}(x)$ rounds $x$ to the nearest integer toward zero. Note that the other interpolation formulas can be subsumed in the above three cases, so we obtain the averaged errors of all six aforementioned formulas. The sequences $Er(\mathrm{U1},N,n)$, $Er(\mathrm{GN1},N,n)$, ..., $Er(\mathrm{RN2},N,n)$ are depicted graphically in Figure 8. Here $N = (N_1, N_2)$ is chosen so that $m = 32$ and $M = 64$. For GN1 and GN2, the nonuniform grids are randomly generated by (5.1) and (5.2) respectively. The domain of each $Er(N,n)$ plotted in Figure 8 is set as $\{N_2+1 \le n \le N_2 + 3\mu(I_N)\}$. The theoretical error analysis is in accord with the numerical results, which underpins the conclusions drawn in the previous subsection.

Figure 8: Illustration of $\log Er(N,n)$ for U1, GN1, RN1, U2, GN2, RN2 respectively.

The proposed interpolation method involves non-uniformly spaced multichannel samples. There are notable existing interpolation methods involving nonuniform or multichannel samples, and Table 2 compares them with the proposed one. Among the numerous sampling and interpolation methods, we present only several typical types in Table 2; the representative references listed here are far from complete.

Table 2: Comparison of several existing interpolation methods.

Method | Untruncated implementation | Applicable to nonuniform samples | Applicable to multichannel samples | Closed form of interpolating functions
Proposed method | yes | yes | yes | yes
Single-channel interpolation by FFT [25] | yes | yes (a) | N/A | N/A
GSE [30, 14] | N/A | yes (b) | yes | yes
Classical nonuniform sampling on the real line [1, 9] | N/A | yes | N/A | N/A
Single-channel nonuniform trigonometric interpolation [16] | yes | yes | N/A | yes

(a) The nonuniform samples in [25] have to be located in a regular grid.
(b) The distribution of nonuniform samples in GSE is recurrent.

In the previous sections, we dealt with techniques for reconstructing a continuous function from different types of discrete samples. In this section, we introduce a simple application of the proposed interpolation formulas to image recovery.
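The toward-zero rounding $\mathrm{fix}(x)$ and the block index $k_n$ can be illustrated with a short sketch; the values of $N_1$ and $m$ below are illustrative choices only.

```python
import numpy as np

# fix(x) rounds toward zero, unlike floor; the two differ for negative x.
assert np.fix(2.7) == 2.0
assert np.fix(-2.7) == -2.0          # floor would give -3.0

def block_index(n, N1, m):
    """k_n = fix((n - N1)/m) + 1, as used in Er(RN2, N, n)."""
    return int(np.fix((n - N1) / m)) + 1

# illustrative values only: with N1 = -32 and m = 32, the frequencies
# n = N1, ..., N1 + 2m - 1 fall into blocks 1 and 2
assert block_index(-32, -32, 32) == 1
assert block_index(31, -32, 32) == 2
```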
To begin, consider Figure 9 (b), which is severely degraded because of damaged pixels. Suppose that the damaged pixels are non-uniformly located. The goal of this part is to recover the missing pixels via interpolation.

Since the proposed formulas are one-dimensional, we first compute the interpolation result for each row of the image, and then apply the same operations to each column. As distantly separated image regions are virtually irrelevant to one another, we treat the reconstruction problem locally. In the following, the test image is Lena, degraded by wiping out randomly selected pixels; see Figure 9 (b). Each row of the image is divided into equal parts, and repeating the interpolation process over the image pieces produced by this division yields values for all the missing pixels. Applying the same operations to each column gives another reconstructed result. The dividing treatment has an additional benefit: it makes the computational complexity linear in the size of the image.

It is natural to average the two reconstructed results. Besides, we need to convert the interpolation result into unsigned 8-bit integer type. A direct way for such a conversion is based on
$$Z(I_{xy}) = \begin{cases} 255, & \text{if } I_{xy} \ge 255, \\ 0, & \text{if } I_{xy} \le 0, \\ \operatorname{round}(I_{xy}), & \text{if } 0 < I_{xy} < 255, \end{cases}$$
where $I_{xy}$ is the intensity value at location $(x,y)$. For a more elaborate conversion, we introduce a correction for the values produced by interpolation.
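The conversion $Z$ can be implemented directly; a minimal sketch (note that NumPy's rounding uses the half-to-even convention, which may differ from the paper's rounding at exact half-integers):

```python
import numpy as np

# Direct implementation of Z: clamp to the 8-bit range [0, 255] and
# round values strictly inside it.
def to_uint8(I):
    I = np.asarray(I, dtype=float)
    return np.clip(np.round(I), 0, 255).astype(np.uint8)

out = to_uint8([-3.2, 0.4, 200.6, 300.0])
assert list(out) == [0, 0, 201, 255]
```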
Let $\Lambda_{xy}$ be the neighborhood centered on $(x,y)$; the correction is defined as
$$\hat{I}_{xy} = \begin{cases} \max\{ I_{x'y'} : (x',y') \in \Lambda_{xy} \setminus \{(x,y)\} \}, & \text{if } I_{xy} = \max\{ I_{x'y'} : (x',y') \in \Lambda_{xy} \}, \\ \min\{ I_{x'y'} : (x',y') \in \Lambda_{xy} \setminus \{(x,y)\} \}, & \text{if } I_{xy} = \min\{ I_{x'y'} : (x',y') \in \Lambda_{xy} \}, \\ I_{xy}, & \text{otherwise}, \end{cases}$$
where $I_{xy}$ and $\hat{I}_{xy}$ are the intensity values at location $(x,y)$ before and after correction respectively. From the definition, this correction is certain to converge after finitely many iterations; in practice, fortunately, it generally converges within a few iterations.

Figure 9: (a) Ideal original image Lena. (b) Degraded image. (c) Reconstructed image by GN1 + CRT. (d) Reconstructed image by GN2 + CRT. (e) Reconstructed image by MED + CRT. (f) Reconstructed image by CRT + MED.

Note that the damaged pixels can also be viewed as impulse noise (also called salt-and-pepper noise) in an image. It is known that the median filter, a very useful order-statistic filter in image processing, is particularly effective in the reduction of impulse noise [32, 33]. Basically, to perform median filtering at $(x,y)$ is to determine the median of the pixel values in $\Lambda_{xy}$ and assign that median to $(x,y)$ in the filtered image.

We are now in a position to compare the performance of the interpolation methods and median filtering on the problem of restoring damaged pixels. Specifically, we consider three methods: GN1, GN2 and the median filter (MED for short). In general, GN2 presupposes differentiability since it involves derivatives, and it would be stretching a point to describe a digital image as a set of samples of a smooth (differentiable) function.
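A minimal sketch of one pass of the correction operation, assuming a 3-by-3 neighborhood $\Lambda_{xy}$ and skipping border pixels (both assumptions on our part):

```python
import numpy as np

def correct_once(I):
    """One pass of the min/max correction: a pixel equal to the max
    (or min) of its 3x3 neighborhood is replaced by the max (or min)
    of the neighborhood excluding the pixel itself."""
    I = I.astype(float)
    out = I.copy()
    H, W = I.shape
    for x in range(1, H - 1):
        for y in range(1, W - 1):
            nb = I[x-1:x+2, y-1:y+2].ravel()
            others = np.delete(nb, 4)       # neighborhood minus the center
            if I[x, y] == nb.max():
                out[x, y] = others.max()
            elif I[x, y] == nb.min():
                out[x, y] = others.min()
    return out

img = np.array([[10, 10, 10],
                [10, 255, 10],
                [10, 10, 10]], dtype=float)
assert correct_once(img)[1, 1] == 10.0      # the outlier is pulled back
```

Iterating `correct_once` until the image stops changing realizes the finite-convergence behavior noted above.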
Nevertheless, introducing differences (also called derivatives in some literature, without ambiguity) of the original digital image could help preserve more useful information in the reconstructed image. The experimental results are shown in Figure 9 and Table 3; here CRT denotes the correction operation. We use the relative mean square error (RMSE) $\delta$, peak signal-to-noise ratio (PSNR) $\rho$ and correlation coefficient (CC) $\gamma$ to measure the quality of the reconstructed images. They are defined respectively as
$$\delta(\mathcal{I},\mathcal{I}_r) = \frac{\|\mathcal{I}-\mathcal{I}_r\|_F^2}{\|\mathcal{I}\|_F^2}, \qquad \rho(\mathcal{I},\mathcal{I}_r) = 10 \log_{10}\left( \frac{255^2 \times L_1 \times L_2}{\|\mathcal{I}-\mathcal{I}_r\|_F^2} \right), \qquad \gamma(\mathcal{I},\mathcal{I}_r) = \frac{\sum_{i,j} (\mathcal{I}(i,j)-\overline{\mathcal{I}})(\mathcal{I}_r(i,j)-\overline{\mathcal{I}_r})}{\|\mathcal{I}-\overline{\mathcal{I}}\|_F\, \|\mathcal{I}_r-\overline{\mathcal{I}_r}\|_F},$$
where $\mathcal{I}$, $\mathcal{I}_r$ denote the original and reconstructed images respectively, $\overline{\mathcal{I}}$, $\overline{\mathcal{I}_r}$ denote their averaged pixel values, $\|\cdot\|_F$ denotes the Frobenius norm, and $L_1$ and $L_2$ are the numbers of rows and columns of $\mathcal{I}$.

From Table 3, we conclude that the interpolation-based method GN1 performs significantly better than the median filtering method. If some information about the gradient of the original image is available, the performance of image recovery can be improved further by GN2. These conclusions are also reflected visually in Figure 9.

It is noted that we consider the image recovery problem only from the viewpoint of a digital image degraded by simply wiping out some pixel values. Moreover, the material on recovery methods developed in this section is far from exhaustive. Even so, the nonuniform-interpolation-based image recovery methods perform well and are easily implemented. It is conceivable that these methods could be integrated into more comprehensive image recovery approaches. Such further explorations, although important in image processing, are beyond the scope of this paper.
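The three quality measures defined above translate directly into code; a sketch assuming 8-bit images (peak value 255) and Frobenius norms as in the definitions:

```python
import numpy as np

# Relative MSE, PSNR, and correlation coefficient for image pairs.
def rmse(I, Ir):
    return np.linalg.norm(I - Ir) ** 2 / np.linalg.norm(I) ** 2

def psnr(I, Ir):
    L1, L2 = I.shape
    return 10 * np.log10(255.0 ** 2 * L1 * L2 / np.linalg.norm(I - Ir) ** 2)

def cc(I, Ir):
    a, b = I - I.mean(), Ir - Ir.mean()
    return np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b))

I = np.arange(16, dtype=float).reshape(4, 4)
assert rmse(I, np.zeros_like(I)) == 1.0      # all pixels lost
assert abs(cc(I, 2 * I + 5) - 1.0) < 1e-12   # CC is scale/shift invariant
assert 48 < psnr(I, I + 1.0) < 49
```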
Several interpolation formulas associated with non-uniformly distributed data have been presented. If the signal to be reconstructed is bandlimited, then the entire signal can be reconstructed by sampling it with a total number of samples larger than the corresponding bandwidth. For the case of a non-bandlimited signal, quantitative error analysis of the reconstruction is also provided. It has been shown that introducing derivative samples of the function can improve the reconstruction result significantly. As an application, several nonuniform-interpolation-based algorithms for recovering a certain kind of corrupted image are demonstrated, with satisfactory performance.

References

[1] A. I. Zayed, Advances in Shannon's Sampling Theory. CRC Press, 1993.
[2] E. Y. Sidky and X. Pan, "Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization," Physics in Medicine & Biology, vol. 53, no. 17, p. 4777, 2008.
[3] Z.-P. Liang and P. C. Lauterbur, Principles of Magnetic Resonance Imaging: A Signal Processing Perspective. SPIE Optical Engineering Press, 2000.
[4] B. F. Burke and F. Graham-Smith, An Introduction to Radio Astronomy. Cambridge University Press, 2009.
[5] J. Yen, "On nonuniform sampling of bandwidth-limited signals," IRE Transactions on Circuit Theory, vol. 3, no. 4, pp. 251–257, 1956.
[6] K. Yao and J. Thomas, "On some stability and interpolatory properties of nonuniform sampling expansions," IEEE Transactions on Circuit Theory, vol. 14, no. 4, pp. 404–408, 1967.
[7] A. J. Jerri, "The Shannon sampling theorem - its various extensions and applications: a tutorial review," Proceedings of the IEEE, vol. 65, no. 11, pp. 1565–1596, 1977.
[8] H. G. Feichtinger and K. Gröchenig, "Irregular sampling theorems and series expansions of band-limited functions," Journal of Mathematical Analysis and Applications, vol. 167, no. 2, pp. 530–556, 1992.
[9] K.
Seip, "An irregular sampling theorem for functions bandlimited in a generalized sense," SIAM J. Appl. Math., vol. 47, no. 5, pp. 1112–1116, 1987.
[10] S. Maymon and A. V. Oppenheim, "Sinc interpolation of nonuniform samples," IEEE Trans. Signal Process., vol. 59, no. 10, pp. 4745–4758, Oct. 2011.
[11] J. Higgins, G. Schmeisser, and J. Voss, "The sampling theorem and several equivalent results in analysis," J. Comput. Anal. Appl., vol. 2, no. 4, pp. 333–371, 2000.
[12] Y. L. Liu, K. I. Kou, and I. T. Ho, "New sampling formulae for non-bandlimited signals associated with linear canonical transform and nonlinear Fourier atoms," Signal Process., vol. 90, no. 3, pp. 933–945, 2010.
[13] D. Cheng and K. I. Kou, "Novel sampling formulas associated with quaternionic prolate spheroidal wave functions," Adv. Appl. Clifford Algebr., vol. 27, no. 4, pp. 2961–2983, 2017.
[14] ——, "Generalized sampling expansions associated with quaternion Fourier transform," Math. Meth. Appl. Sci., vol. 41, no. 11, pp. 4021–4032, 2018.
[15] L. Xu, F. Zhang, and R. Tao, "Randomized nonuniform sampling and reconstruction in fractional Fourier domain," Signal Process.
[16] E. Margolis and Y. C. Eldar, "Nonuniform sampling of periodic bandlimited signals," IEEE Trans. Signal Process., vol. 56, no. 7, pp. 2728–2745, 2008.
[17] M. A. Navascués, S. Jha, A. K. B. Chand, and M. V. Sebastián, "Generalized trigonometric interpolation," Journal of Computational and Applied Mathematics.
[18] A. L. Cauchy, "Mémoire sur diverses formules d'analyse," Compte Rendu (Paris), vol. 12, pp. 283–298, 1841.
[19] J. J. Benedetto and P. J. Ferreira, Modern Sampling Theory: Mathematics and Applications. Springer Science & Business Media, 2012.
[20] T. Schanze, "Sinc interpolation of discrete periodic signals," IEEE Trans. Signal Process., vol. 43, no. 6, pp. 1502–1503, 1995.
[21] F. Candocia and J. C. Principe, "Comments on "Sinc interpolation of discrete periodic signals"," IEEE Trans. Signal Process., vol. 46, no. 7, pp. 2044–2047, 1998.
[22] S. R. Dooley and A. K. Nandi, "Notes on the interpolation of discrete periodic signals using sinc function related approaches," IEEE Trans.
Signal Process., vol. 48, no. 4, pp. 1201–1203, 2000.
[23] M. Jacob, T. Blu, and M. Unser, "Sampling of periodic signals: A quantitative error analysis," IEEE Trans. Signal Process., vol. 50, no. 5, pp. 1153–1159, 2002.
[24] L. Xiao and W. Sun, "Sampling theorems for signals periodic in the linear canonical transform domain," Opt. Commun., vol. 290, pp. 14–18, 2013.
[25] J. Selva, "FFT interpolation from nonuniform samples lying in a regular grid," IEEE Trans. Signal Process., vol. 63, no. 11, pp. 2826–2834, June 2015.
[26] D. Cheng and K. I. Kou, "Multichannel interpolation for periodic signals via FFT, error analysis and image scaling," arXiv preprint arXiv:1802.10291, 2018.
[27] F. W. King, Hilbert Transforms. Cambridge University Press, 2009.
[28] Y. Mo, T. Qian, W. Mai, and Q. Chen, "The AFD methods to compute Hilbert transform," Appl. Math. Lett., vol. 45, pp. 18–24, 2015.
[29] T. Strohmer and J. Tanner, "Fast reconstruction methods for bandlimited functions from periodic nonuniform sampling," SIAM J. Numer. Anal., vol. 44, no. 3, pp. 1073–1094, 2006.
[30] A. Papoulis, "Generalized sampling expansion," IEEE Trans. Circuits Syst., vol. 24, no. 11, pp. 652–654, 1977.
[31] P. Sommen and K. Janse, "On the relationship between uniform and recurrent nonuniform discrete-time sampling schemes," IEEE Trans. Signal Process., vol. 56, no. 10, pp. 5147–5156, 2008.
[32] M. Petrou and C. Petrou, Image Processing: The Fundamentals, 2nd ed. Oxford: John Wiley & Sons, 2010.
[33] R. C. Gonzalez and R. E. Woods, Digital Image Processing.

We can compute the function values of $h_p(t)$ by taking an $N$-point fast Fourier transform of $\{z_p(k)\}_k$ through zero padding. By applying some trigonometric identities, the interpolating function $h_p(t)$ can be simplified into a simpler form. By a few basic calculations,
$$e^{it} - e^{it_s} = \cos t - \cos t_s + i(\sin t - \sin t_s) = 2i \sin\left(\frac{t-t_s}{2}\right) e^{i\frac{t+t_s}{2}}.$$
Therefore
$$\prod_{s=1, s\neq p}^{M} \left( e^{it} - e^{it_s} \right) = (2i)^{M-1}\, e^{i\frac{M-1}{2}t} \prod_{s=1, s\neq p}^{M} \sin\left(\frac{t-t_s}{2}\right) e^{i\frac{t_s}{2}}. \quad (4.2)$$
Note that the left-hand side of (4.2) is a trigonometric polynomial with respect to $t$. Expanding the product, we have
$$\prod_{s=1, s\neq p}^{M} \left( e^{it} - e^{it_s} \right) = \sum_{k=1}^{M} \beta_k\, e^{i(k-1)t},$$
where
$$\beta_k = (-1)^{M-k} \sum_{\substack{1 \le s_1 < \cdots < s_{M-k} \le M \\ s_j \neq p}} e^{i(t_{s_1} + \cdots + t_{s_{M-k}})}.$$
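The expansion above, together with the zero-padded FFT evaluation mentioned before (4.2), can be checked numerically. In the sketch below, the node values are arbitrary, and the polynomial coefficients are obtained from the roots $e^{it_s}$ via `numpy.poly`:

```python
import numpy as np

# Check: the coefficients beta_k of prod_{s != p} (z - z_s), z = e^{it},
# are signed elementary symmetric sums of the remaining nodes, and the
# polynomial can be evaluated on a uniform grid by a zero-padded IFFT.
M, p, N = 5, 2, 64                          # p is 1-based as in the text
t = np.sort(np.random.default_rng(1).uniform(0, 2 * np.pi, M))
zs = np.delete(np.exp(1j * t), p - 1)       # drop the p-th node

# np.poly returns coefficients from z^{M-1} down to z^0; reverse so that
# beta[k] multiplies e^{ikt}
beta = np.poly(zs)[::-1]

# evaluate sum_k beta[k] e^{ikt} at t_j = 2*pi*j/N via zero-padded IFFT
grid = 2 * np.pi * np.arange(N) / N
vals_fft = N * np.fft.ifft(beta, n=N)       # zero padding to length N
vals_direct = np.array([np.prod(np.exp(1j * tj) - zs) for tj in grid])
assert np.allclose(vals_fft, vals_direct)
```

The leading coefficient equals 1, matching $\beta_M = 1$ in the expansion; the factor $N$ compensates for NumPy's $1/N$ normalization of the inverse FFT.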