Super-resolution of periodic signals from short sequences of samples
Marek W. Rupniewski
Institute of Electronic Systems, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland
Email: [email protected]
ABSTRACT
Reconstruction of undersampled periodic signals of unknown period is an important signal processing operation. It is an especially difficult operation when the sequences of samples are short and no information on the inter-sequence time distances is given. For such a case, there exist algorithms that allow for approximation of the sampled signal. However, these algorithms require bandlimitedness of the signal or noiseless samples. In this paper we propose an algorithm that relaxes these requirements: it does not require the signal to be bandlimited, and it can cope with additive noise in the samples. The algorithm is illustrated and validated with real data.
Index Terms — signal reconstruction, signal sampling, nonuniform sampling, point cloud approximation
1. INTRODUCTION
Periodic signal reconstruction from a finite sequence of samples (a sample train) taken at a given sampling rate is an important signal processing operation needed in many applications in diverse areas such as communications, remote sensing, system testing, and characterization. If the period T of the sampled signal is known and the signal is band-limited, then a single sample train allows for perfect reconstruction of the signal, provided that the train is long enough [1–3]. In such a case, the reconstruction reduces to solving a set of linear equations. If period T is not known but the sampling period τ is much smaller than T, then one can get a reasonable approximation to the sampled signal by interpolation. This is how digital scopes usually reconstruct the sampled signals. The situation becomes much more complicated when τ and T are of the same order of magnitude. In such a case, we deal with undersampled signals, and the approximation methods for this case are often called super-resolution methods. Undersampling might be due to purely economic reasons, since high-speed sampling systems are relatively expensive, or it can result from systems designed or configured for a low-frequency application having to recover unanticipated high-frequency signals [4]. The simplest super-resolution technique is based on the stroboscopic effect. To take advantage of this effect, one needs a precise estimate of period T. This crucial estimation is usually performed in the time domain [4–7] or in the frequency domain [8]. In both approaches, a long train of samples is required to get a reasonable period estimate. If the nature of the sampling process or the sampling hardware allows for acquisition of short sample trains only, then one can still achieve super-resolution by using multiple such trains, even if the inter-train time distances are not known.
This is possible if the sampled signal has a known band limit [9, 10], if the starting times of the trains are distributed uniformly (in the probabilistic sense) when considered modulo T [11, 12], or if the ratio τ/T is irrational [13]. The algorithms for signal reconstruction proposed by the author in [11, 12] require noiseless samples. In this paper we propose an algorithm which can deal with trains of noisy samples. The algorithm is based on the ideas introduced in [11].

The next section explains the relationship between periodic signals and the probabilistic distribution of sample trains. Section 3 proposes an algorithm for reconstruction of periodic signals from a finite number of trains of noisy samples. In Section 4, we present the results of a proof-of-concept experiment designed to verify the proposed algorithm. The paper is concluded in Section 5.
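The stroboscopic mechanism mentioned above can be illustrated with a short simulation. The sketch below (not from the paper; T, τ, and the train length are illustrative values) shows that the sample phases kτ mod T of a T-periodic signal fill the interval [0, T) densely when τ/T is irrational, even though τ > T, which is what equivalent-time (stroboscopic) reconstruction exploits once T is known precisely:

```python
import numpy as np

# Illustrative sketch of the stroboscopic effect behind equivalent-time
# sampling.  A T-periodic signal sampled every tau seconds yields sample
# phases k*tau mod T; for irrational tau/T these phases sweep the whole
# period [0, T) even though tau > T undersamples the signal.
T = 1.0                               # signal period (assumed)
tau = 1.0 + np.sqrt(2) / 10           # sampling period, tau > T, tau/T irrational
k = np.arange(2000)                   # sample indices of one long train
phases = np.sort((k * tau) % T)       # sample times folded into one period
max_gap = np.max(np.diff(phases))     # largest stretch of [0, T) left unseen
```

Note that folding the samples back into one period requires a precise estimate of T, which is why the period-estimation step is crucial; the algorithm proposed in this paper avoids the need for long trains in that step.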
2. DISTRIBUTION OF TRAINS OF SAMPLES
A train of samples of signal s is henceforth denoted by s_{d,τ}(t), i.e.,

s_{d,τ}(t) = [s(t), s(t + τ), ..., s(t + (d − 1)τ)] ∈ R^d, (1)

where d is the length of the train, t is its starting time, and τ is the inter-sample distance (sampling period). Figure 1 shows an example of a periodic signal and its 3-sample trains.

Fig. 1. A T-periodic signal and its two sample trains of length 3, taken with sampling period τ ≈ . T (the sequences start at t_1 and t_2, respectively).

If s is a continuous periodic function of variable t, then so is the above mapping s_{d,τ}. In this case, the image of s_{d,τ} is a closed curve. If we treat time instant t in (1) as a random variable distributed uniformly on the interval [0, T), then the resulting vector s_{d,τ}(t) becomes a multivariate C-valued random variable henceforth denoted by S_{d,τ}. In [11] it is shown that the probabilistic distribution of S_{d,τ} determines signal s up to a time shift provided that: function s_{d,τ} restricted to the interval [0, T) is a one-to-one mapping, the time derivative of s_{d,τ} does not vanish anywhere, and τ < T. For the completeness of this paper we rephrase the reconstruction algorithm presented in [11] as Algorithm 1. There and henceforth, f_C is the probability density function (PDF) of S_{d,τ}; C is the closed curve formed by the trains of samples, i.e., C is the image of function s_{d,τ}; L is the length of C; π_m is the projection onto the m-th coordinate, i.e., π_m([x_1, ..., x_d]) = x_m; f ∘ g means a composition of the functions, i.e., f ∘ g(t) = f(g(t)); and f ≐ g means that functions f and g are equal up to a time shift, i.e., there exists t_0 such that f(t) = g(t + t_0) for all t ∈ R.

Algorithm 1
Signal reconstruction from the probabilistic distribution of its trains of samples (noiseless case) [11]
Input: sampling period τ, PDF f_C and its support C.
Output: signal s (up to a time shift) and its period T.

1. Take any arc-length parametrization of curve C and extend it to an L-periodic function q: R → C.
2. Set q := q ∘ r, where function r: R → R is given by the equation

x = ∫_0^{r(x)} f_C(q(u)) du,

so that the reparametrized q is 1-periodic.
3. Take any integers 1 ≤ k < l ≤ d and find x_0 ∈ (0, 1) such that

π_k ∘ q(x + (l − k)x_0) = π_l ∘ q(x)  ∀x ∈ R. (2)

4. If x_0 < 1/2, then T = τ/x_0 and s_{d,τ}(t) ≐ q(t/T). If x_0 > 1/2, then T = τ/(1 − x_0) and s_{d,τ}(t) ≐ q(−t/T).
5. s(t) ≐ (1/d) ∑_{k=1}^{d} π_k ∘ s_{d,τ}(t − (k − 1)τ).

Fig. 2. A point cloud formed by a large number of noisy 3-sample sequences of the signal presented in Fig. 1 (for bounded noise the points lie inside a pipe of bounded diameter; in the absence of noise the points would form a closed curve).

We model the presence of additive noise in the samples by assuming that each acquired train of samples takes the following form:

s_{d,τ}(t) + η ∈ R^d, (3)

where t is a random variable distributed uniformly in the interval [0, T), and η = [η_1, ..., η_d] is a multivariate random variable. Henceforth, we assume that random variables η_1, ..., η_d are independent, identically distributed, and of zero mean value. Noisy trains (3) no longer lie on the closed curve C, because the noise scatters these points around the curve. However, if the noise variance is small, the noisy trains have to fall into a small neighbourhood of C, i.e., curve C forms a thread along which the probability distribution of noisy trains (3) is concentrated. Figure 2 illustrates such a situation by showing a cloud of trains of noisy samples of the periodic signal presented in Fig. 1.

Let f and f_η denote the probability density functions of a noisy train of samples (3) and of the noise itself, respectively. Function f is a convolution of functions f_C and f_η, i.e.,

f(p) = ∫_C f_C(q) f_η(p − q) dq. (4)

Therefore, if η is of zero mean and of small variance, then for each point p ∈ C,

f(p) ≈ C_d × f_C(p), (5)

where C_d is a constant which depends on dimension d only. The smaller the variance of η, the more accurate approximation (5) is.
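The construction of trains (1) and the noise model (3) can be simulated with a short sketch. The sinusoidal stand-in signal, noise level, and parameter values below are illustrative assumptions, not the paper's experimental setup (which uses a chirp-like signal):

```python
import numpy as np

rng = np.random.default_rng(0)
T, tau, d, n, sigma = 1.0, 0.42, 3, 5000, 0.01   # illustrative values

def s(t):
    """A T-periodic stand-in signal (the paper uses a chirp-like signal)."""
    t = np.asarray(t, dtype=float) % T
    return np.sin(2 * np.pi * t) + 0.3 * np.sin(6 * np.pi * t)

# Noiseless trains (1): row i is s_{d,tau}(t_i) = [s(t_i), ..., s(t_i+(d-1)tau)].
t0 = rng.uniform(0.0, T, size=n)                  # t uniform on [0, T)
clean = s(t0[:, None] + tau * np.arange(d))       # points on the closed curve C

# Noisy trains (3): i.i.d. zero-mean noise added to every coordinate.
cloud = clean + rng.normal(0.0, sigma, size=(n, d))

# For small noise variance the cloud concentrates in a thin tube around C.
tube_radius = np.linalg.norm(cloud - clean, axis=1).max()
```

Because s is T-periodic, the map t ↦ s_{d,τ}(t) is T-periodic as well, so the noiseless rows all lie on a closed curve in R^d; the noisy rows form the kind of point cloud shown in Fig. 2.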
3. THE RECONSTRUCTION ALGORITHM
The trains of samples used for the reconstruction of a signal are henceforth treated as points in a d-dimensional space, and they are denoted by

p_1, ..., p_n ∈ R^d. (6)

The outline of the proposed reconstruction algorithm is presented below as Algorithm 2. The following subsections explain in detail the three stages of the outline.

Algorithm 2
Signal reconstruction from a finite number of trains of noisy samples (outline)
Input: sampling period τ, trains of samples (6).
Output: estimates ŝ and T̂ of signal s and its period T, respectively.

1. Approximate points (6) with a closed smooth curve Ĉ.
2. Estimate a PDF f̂_C along curve Ĉ with the help of (5).
3. Compute ŝ and T̂ as the output of an appropriately adapted Algorithm 1 applied to τ, f̂_C, and Ĉ.

3.1. Curve approximation

The problem of recovering a curve from its noisy samples (a cloud of points) appears in several applications, such as Computed Axial Tomography, Coordinate-Measuring Machine measurements, and Magnetic Resonance Imaging. There exist various strategies to solve this problem, based on methods of mathematical analysis [14, 15], geometry [16, 17], and statistics [18–21]. The approximation algorithm presented in [21] is especially well suited for the first stage of Algorithm 2 because it can cope with closed curves, it works in Euclidean space of arbitrary dimension, and it is relatively simple. The algorithm of [21] has a parameter R which can be adjusted for a given or expected noise variance. The output of the algorithm of [21] comprises sequences of points, where each sequence represents the nodes of a polygonal chain which approximates a curve. If the first and the last points of an output sequence coincide, then the corresponding polygonal chain represents a closed curve. An output sequence which does not represent a closed curve, as well as more than one output sequence, signals that either the algorithm parameter R is too big, or the number of points n is not big enough, or that the underlying curve has some self-intersections. If the algorithm of [21] produces only one output sequence

p̃_1, ..., p̃_M ∈ R^d, (7)

and if this sequence represents a closed polygonal path, i.e., p̃_M = p̃_1, then what remains to complete the first stage of Algorithm 2 is to interpolate the nodes of the path with a smooth curve.
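The curve-approximation stage can be illustrated with a crude neighbourhood-averaging stand-in for the algorithm of [21] (the actual algorithm is more involved): points within distance R of a seed location are averaged into a denoised node of the polygonal chain. The circular test curve, the analytic seed placement, and all parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
R, n, M = 0.15, 4000, 33                      # illustrative parameters

# Noisy cloud around a unit circle (a closed curve in R^2).
ang = rng.uniform(0.0, 2 * np.pi, n)
cloud = np.c_[np.cos(ang), np.sin(ang)] + rng.normal(0.0, 0.03, (n, 2))

# Seeds spread along the curve; here placed analytically for simplicity
# (the algorithm of [21] derives node locations from the data itself).
phi = np.linspace(0.0, 2 * np.pi, M)
seeds = np.c_[np.cos(phi), np.sin(phi)]

# Each node is the mean of the cloud points in the R-neighbourhood of a seed.
nodes = np.array([cloud[np.linalg.norm(cloud - p, axis=1) < R].mean(axis=0)
                  for p in seeds])
nodes[-1] = nodes[0]                          # close the chain: p~_M = p~_1

# The denoised nodes should lie close to the underlying unit circle.
radial_err = np.abs(np.linalg.norm(nodes, axis=1) - 1.0).max()
```

The closing assignment mirrors the closedness check p̃_M = p̃_1 described above; a chain that fails to close would signal a poorly chosen R or too few points.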
Such interpolation can be achieved with periodic cubic splines [22]. For the needs of the next subsection, we denote the resulting smooth curve parametrization by q̂: [0, L̂) → R^d. Without loss of generality, we henceforth assume that q̂(0) = p̃_1 and that q̂ is already an arc-length parametrization, i.e., ‖q̂′‖ = 1 and L̂ is the length of curve Ĉ, which is the image of q̂.

3.2. Density estimation

Let t̃_1, ..., t̃_M denote the preimages of points (7) under q̂, i.e.,

q̂(t̃_i) = p̃_i,  i = 1, ..., M. (8)

In order to estimate the density along curve Ĉ, for each point t̃_i we compute the number c_i of points (6) that lie in the R-neighbourhood of p̃_i, where R is the same as described in the previous subsection. We obtain f̂_C by linear interpolation of the values c_i, followed by division of the resulting function by a factor which makes the interpolated function a PDF, i.e., by a factor which makes the integral of the so-obtained estimate f̂_C unitary. This factor can be expressed as

∑_{i=1}^{M−1} (t̃_{i+1} − t̃_i)(c_i + c_{i+1})/2. (9)

3.3. The final stage

In the final stage of Algorithm 2 we refer to Algorithm 1. However, Algorithm 1 has to be adapted to the fact that f̂_C and q̂ are estimates only, and they could make Equation (2) unsolvable for every choice of 1 ≤ k < l ≤ d. Therefore, we look for a least-squares solution to the set of Equations (2), i.e., we find x_0 by minimizing a functional F in the interval (0, 1), where

F(x_0) = ∑_{1≤k<l≤d} ∫_0^1 (π_k ∘ q(x + (l − k)x_0) − π_l ∘ q(x))^2 dx, (10)

with q denoting the reparametrized curve obtained in step 2 of Algorithm 1.

4. PROOF OF CONCEPT EXPERIMENTS

To verify the proposed algorithm, we designed the following experiments. A chirp-like T-periodic signal presented in Fig. 1 was generated with a function generator and recorded with a digital scope. The sampling period τ of the scope was set to around . T. The signal peak-to-peak value was around and the quantization step of the scope was Δ = 0.02 V.
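Before turning to the measurement procedure, the normalization used in the density-estimation stage deserves a brief illustration: the factor (9) is the trapezoidal integral of the piecewise-linear interpolant of the counts c_i, so dividing by it makes f̂_C integrate to one. The node positions and counts below are arbitrary illustrative values, not data from the experiment.

```python
import numpy as np

# Illustrative nodes t~_1 < ... < t~_M and neighbourhood counts c_1..c_M.
t_nodes = np.array([0.0, 0.7, 1.6, 2.4, 3.1])
counts = np.array([12.0, 30.0, 18.0, 25.0, 12.0])

# Factor (9): trapezoidal integral of the linear interpolant of the counts.
factor = np.sum((t_nodes[1:] - t_nodes[:-1]) * (counts[:-1] + counts[1:]) / 2)

def f_C_hat(t):
    """Density estimate: interpolated counts divided by the factor (9)."""
    return np.interp(t, t_nodes, counts) / factor

# Check: the normalized estimate integrates to one over the parameter range.
grid = np.linspace(t_nodes[0], t_nodes[-1], 100001)
vals = f_C_hat(grid)
integral = np.sum((grid[1:] - grid[:-1]) * (vals[1:] + vals[:-1]) / 2)
```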
For a fixed value of sample train length d and a fixed number n, we picked at random n d-sample trains from the long sample sequence recorded with the scope. These sequences, treated as points (6), together with the sampling period τ, made up the input data for Algorithm 2, in which we set R = 5Δ (the parameter is needed at the first two stages of the algorithm). The reconstructed signal period T̂ and signal ŝ were then compared to the known period T and a reference signal s_ref, respectively. The reference signal was obtained by recording the averaged (by the scope) signal with the same digital scope, but with the sampling period set to τ_ref < τ/ . To assess the quality of reconstruction, the following reconstruction error measures were computed:

ε_T = |T − T̂|, (11)

ε = min_{t_0 ∈ [0,T)} √(∫_0^T (s_ref(t − t_0) − ŝ(t T̂/T))^2 dt), (12)

ε_∞ = min_{t_0 ∈ [0,T)} max_{t ∈ [0,T)} |s_ref(t − t_0) − ŝ(t T̂/T)| (13)

(the argument of the reconstructed signal ŝ in (12) and (13) is scaled so as to compare two periodic signals of the same period). For each pair of the considered values of parameters d and n, the experiment was repeated times. The resulting errors are presented as box-plots in Figs. 3–5.

Fig. 3. Signal reconstruction root-mean-square errors (12) as a function of n, for d = 3 and d = 5.

The figures show that for the signal considered in the experiment, super-resolution signal reconstruction has been achieved. The period estimate error is three orders of magnitude lower than the sampling period τ. Both the root-mean-square and maximum errors of signal reconstruction reach the level of the quantization step Δ of the recording of the reference signal. Not surprisingly, the experiment showed that the quality of signal reconstruction rises with the amount of input data. The reason for this is two-fold.
First, the number of cloud points affects the quality of curve reconstruction from that cloud (the first stage of the algorithm). A quantitative study of this phenomenon, and of the role of the dimension of the space and the amplitude of the noise in the process of curve reconstruction from a point cloud, can be found in [21]. Second, the amount of input data affects the density estimation along the reconstructed curve. We expect that the root-mean-square error of such estimation is similar to that of a histogram-based estimate, i.e., it falls at rate n^{−1/3} [23].

Fig. 4. Signal reconstruction maximum errors (13) as a function of n, for d = 3 and d = 5.

Fig. 5. Period estimation errors (11) as a function of n, for d = 3 and d = 5.

5. CONCLUSION

We have proposed an algorithm for signal reconstruction from short trains of its noisy samples. The algorithm allows for achieving super-resolution reconstruction because it allows the sampling period τ to be relatively big with respect to the signal period T (the algorithm requires only τ < T). We performed a series of experiments which showed that the reconstructed signal is close to the reference signal, and that the larger the number of trains is, the smaller the reconstruction errors are. We believe that the ability of the algorithm to cope with noisy and non-continuous recordings (multiple trains of samples) will make it a valuable tool in a number of applications.

6. REFERENCES

[1] T. Schanze, "Sinc interpolation of discrete periodic signals," IEEE Transactions on Signal Processing, vol. 43, no. 6, pp. 1502–1503, Jun 1995.
[2] F. Candocia and J. C. Principe, "Comments on sinc interpolation of discrete periodic signals," IEEE Transactions on Signal Processing, vol. 46, no. 7, pp. 2044–2047, Jul 1998.
[3] S. R. Dooley and A. K. Nandi, "Notes on the interpolation of discrete periodic signals using sinc function related approaches," IEEE Transactions on Signal Processing, vol. 48, no. 4, pp. 1201–1203, Apr 2000.
[4] C.
Rader, "Recovery of undersampled periodic waveforms," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 25, no. 3, pp. 242–249, Jun 1977.
[5] A. J. Silva, "Reconstruction of undersampled periodic signals," NASA STI/Recon Technical Report N, vol. 87, Jan. 1986.
[6] D. Bhatta, J. W. Wells, and A. Chatterjee, "Time domain characterization and test of high speed signals using incoherent sub-sampling," in , Nov 2011, pp. 21–26.
[7] N. Tzou, D. Bhatta, S. W. Hsiao, H. W. Choi, and A. Chatterjee, "Low-cost wideband periodic signal reconstruction using incoherent undersampling and back-end cost optimization," in , Nov 2012, pp. 1–10.
[8] H. Choi, A. V. Gomes, and A. Chatterjee, "Signal acquisition of high-speed periodic signals using incoherent sub-sampling and back-end signal reconstruction algorithms," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 19, no. 7, pp. 1125–1135, Jul 2011.
[9] P. Vandewalle, L. Sbaiz, J. Vandewalle, and M. Vetterli, "How to take advantage of aliasing in bandlimited signals," in , May 2004, vol. 3, pp. iii-948–51.
[10] P. Vandewalle, L. Sbaiz, J. Vandewalle, and M. Vetterli, "Super-resolution from unregistered and totally aliased signals using subspace methods," IEEE Transactions on Signal Processing, vol. 55, no. 7, pp. 3687–3703, July 2007.
[11] M. W. Rupniewski, "Triggerless random interleaved sampling," in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 5605–5609.
[12] M. W. Rupniewski, "Reconstruction of periodic signals from asynchronous trains of samples," submitted to IEEE Signal Processing Letters, 2020.
[13] M. W. Rupniewski, "Period and signal reconstruction from the curve of trains of samples," submitted to ICASSP 2021.
[14] Lian Fang and David C. Gossard, "Multidimensional curve fitting to unorganized data points by nonlinear minimization," Computer-Aided Design, vol. 27, no. 1, pp. 48–58, 1995.
[15] A.A.
Goshtasby, "Grouping and parameterizing irregularly spaced points for curve fitting," ACM Transactions on Graphics, vol. 19, no. 3, pp. 185–203, 2000.
[16] H. Pottmann and T. Randrup, "Rotational and helical surface approximation for reverse engineering," Computing (Vienna/New York), vol. 60, no. 4, pp. 307–322, 1998.
[17] Siu-Wing Cheng, Stefan Funke, Mordecai Golin, Piyush Kumar, Sheung-Hung Poon, and Edgar Ramos, "Curve reconstruction from noisy samples," Computational Geometry, vol. 31, no. 1–2, pp. 63–100, 2005.
[18] Trevor Hastie and Werner Stuetzle, "Principal curves," Journal of the American Statistical Association, vol. 84, no. 406, pp. 502–516, 1989.
[19] D. Levin, "The approximation power of moving least-squares," Mathematics of Computation, vol. 67, no. 224, pp. 1517–1531, 1998.
[20] I.-K. Lee, "Curve reconstruction from unorganized points," Computer Aided Geometric Design, vol. 17, no. 2, pp. 161–177, 2000.
[21] M. W. Rupniewski, "Curve reconstruction from noisy and unordered samples," in ICPRAM 2014 - Proceedings of the 3rd International Conference on Pattern Recognition Applications and Methods, 2014, pp. 183–188.
[22] Richard H. Bartels, John C. Beatty, and Brian A. Barsky, An Introduction to Splines for Use in Computer Graphics & Geometric Modeling, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1987.
[23] Larry Wasserman,