Estimation of roughness measurement bias originating from background subtraction
D Nečas
RG Plasma Technologies, CEITEC, Masaryk University, Kamenice 5, 625 00 Brno, Czech Republic
CEITEC, Brno University of Technology, Purkyňova 123, 612 00 Brno, Czech Republic
E-mail: [email protected]
P Klapetek
Czech Metrology Institute, Okružní 31, 638 00 Brno, Czech Republic
E-mail: [email protected]
M Valtr
Czech Metrology Institute, Okružní 31, 638 00 Brno, Czech Republic
CEITEC, Brno University of Technology, Purkyňova 123, 612 00 Brno, Czech Republic
E-mail: [email protected]
Abstract.
When measuring the roughness of rough surfaces, the limited sizes of scanned areas lead to its systematic underestimation. Levelling by polynomials and other filtering used in real-world processing of atomic force microscopy data increases this bias considerably. Here a framework is developed providing explicit expressions for the bias of squared mean square roughness in the case of levelling by fitting a model background function using linear least squares. The framework is then applied to polynomial levelling, for both one-dimensional and two-dimensional data processing, and basic models of surface autocorrelation function, Gaussian and exponential. Several other common scenarios are covered as well, including median levelling, intermediate Gaussian–exponential autocorrelation model and frequency space filtering. Application of the results to other quantities, such as Rq, Sq, Ra and Sa, is discussed. The results are summarized in overview plots covering a range of autocorrelation functions and polynomial degrees, which allow graphical estimation of the bias.
Keywords: Scanning probe microscopy; data processing; roughness; bias; levelling; autocorrelation

arXiv: [physics.data-an]
1. Introduction
Surface roughness is a ubiquitous phenomenon which influences many interactions of an object with the outer world—mechanical [1–3], optical [4–6], chemical [7], biological [8], and others. Its influence is particularly large in the nanoscience and nanotechnology fields, where object sizes are comparable to characteristic dimensions of roughness (height and/or lateral) which arise naturally during deposition and processing of materials.

Whether roughness is considered a defect to be minimized or a potentially useful property to be optimized, it must be measured. Scanning Probe Microscopy (SPM) techniques, such as Atomic Force Microscopy (AFM), allow direct measurement of nanoscale roughness, while optical techniques allow its characterisation in the frequency domain [9]. Larger-scale roughness can be measured by profilometry techniques, of which mechanical profilometry is in some sense analogous to SPM.

Two approaches to measurement should be distinguished. In an industrial context reproducibility is key and thus the focus is on procedures and parameters defined by standards [10–13]. It may be of less concern if these parameters are those occurring in theoretical models or if they correspond to parameters of a hypothetical random process. On the other hand, in basic research the instruments, methods and samples are frequently all non-standard. Simultaneously, it is important to estimate parameters that correspond to theoretical descriptions. This can be either because they are themselves interesting, for instance in determining the universality class for a growth mechanism [9, 14], or because they appear in physical theories describing interactions with rough surfaces. Probably the most interesting parameter is the squared mean square roughness σ², which directly appears in optics—together with similar quadratic quantities such as spectral densities of spatial frequencies and (cross)correlation functions [4, 15, 16].
Here we will approach roughness from this second standpoint.

Surface roughness is never measured using data from an infinitely large region with infinite resolution. The resolution is always finite—in contact scanning methods (AFM or profilometry) limited by the finite probe size, in optical methods by the finite wavelength. The measurement area is also always finite and seldom even encompasses the entire sample, in particular in direct measurements. In fact, in AFM we regularly measure tiny fractions of the surface—and instead of rigorous statistical justification for representativeness of the results we just have hope that no evil forces conspired to plant non-representative surface regions under the probe.

Still, conceptually, the statistical character of roughness parameters is acknowledged [9, 17]. We imagine an infinite ensemble of surfaces (possibly infinite themselves), usually corresponding formally to a random process, which may or may not be wide-sense stationary. Measurement of non-stationary fractal surfaces in an interval of scales in which they do exhibit self-affinity adds its own set of difficulties [9, 18]. Here we will focus on roughness generated by stationary processes—and estimation of their parameters using a finite measurement of one realization. In particular, we will study the consequences of finite measurement area and levelling/background subtraction.

The squared mean square roughness σ², estimated from a profile of length L by
$$\hat\sigma^2 = \frac{1}{L}\int_0^L z^2(x)\,\mathrm{d}x \qquad (1)$$
is itself a random variable, as we denote with a hat. It has a dispersion, which is possibly large [9]. Definition (1) corresponds to mean square roughness Rq for profiles [10, 11, 13] and Sq for images [19] defined by roughness measurement standards, with the subtle conceptual difference discussed above.

The estimate is also biased. Heights z(x) entering (1) are levelled to have zero mean value.
This alone introduces bias, which is well known and discussed for correlated data in classical signal processing textbooks [9, 20–22]. However, subtraction of the mean value is rarely the only preprocessing applied to topographical data before roughness evaluation. Often local defects are removed first, although this may be unnecessary as evaluation algorithms for irregular regions allow excluding arbitrary image parts from processing [23]. Almost universally the mean plane is subtracted to correct tilt—and frequently not just a plane but a higher order polynomial to correct scanner bow (or sample warping) [17]. Misaligned scan lines need to be aligned for 2D processing, although not necessarily for line-by-line evaluation. Furthermore, any of a number of specific form removal methods can be utilized, from frequency-space filtering to wavelet processing to subtraction of specific geometrical shapes such as a sphere.

Some of these steps remove background arising from measurement imperfections (scanner bow), some remove the real base shape of the measured object. Often they remove both to some degree—and by intentionally removing certain degrees of freedom they also always inadvertently remove a part of the roughness. For instance, in the case of the mean value, we subtract it because the measured surface height h(x) is not the roughness signal z(x). It is offset by some background, in this case a constant base height B:
$$h(x) = z(x) + B. \qquad (2)$$
The background B is non-random (at least from the roughness standpoint), but unknown. We estimate it as the mean value of the heights
$$\hat B = \frac{1}{L}\int_0^L h(x)\,\mathrm{d}x \qquad (3)$$
because the expected value of h is E[h] = B. However, the subtraction of B̂ instead of the true B removes not just B but also a part of the roughness. Although the expected value of z is zero, the mean value of z(x) over a finite interval is a random variable, not zero—yet we make it zero anyway.

This is illustrated in figure 1 for a second-order polynomial background B.
A second-order polynomial background was added to an ‘ideal’ rough signal. Then a polynomial background was fitted and removed. We seldom know the exact background type and each choice levels the surface differently. Furthermore, even if the correct degree is chosen, the levelled surface differs from the original ideal one.
Figure 1.
General scheme of data distortion by background removal. The true topography and true background (here bow) combine in the measured data. The exact background type is not known and must be chosen from a set of models fitted to the data, here polynomials of degrees 1, 2 and 3. The fitting is greedy and subtracts not just the true background, but also roughness components which randomly match it. The corrected data are then missing these components.
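The distortion sketched in figure 1 is easy to reproduce numerically. The following is an illustrative sketch only; the signal generator and all parameter values are ours, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 2000)

# 'Ideal' correlated rough signal: moving average of white noise
noise = rng.standard_normal(x.size + 50)
z = np.convolve(noise, np.ones(51) / 51, mode="valid")
z -= z.mean()

B = 5.0 * (x - 0.5)**2      # true second-order background (bow)
h = z + B                   # measured height, as in eq. (2)

# Greedy fit: even the correct degree removes some roughness,
# because the fit also captures roughness components matching it
coeffs = np.polyfit(x, h, 2)
z_corr = h - np.polyval(coeffs, x)

# The corrected data systematically underestimate the roughness
print(np.mean(z**2), np.mean(z_corr**2))
```

Since the fitted polynomial is an orthogonal (least squares) projection, the corrected mean square roughness can only be smaller than that of the ideal signal.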
As already noted above, real AFM or profilometry data are sets of discrete values z_k, not continuous functions z(x). Integrals such as (1) or (3) are approximated by summations, for instance
$$\hat\sigma^2 = \frac{1}{N}\sum_{k=1}^{N} z_k^2. \qquad (4)$$
There is a certain arbitrariness in the correspondence between the region [0, L] and the set of points where heights z_k were measured. If we state that z_k were obtained in the centres of sampling steps (or pixel centres for images), the ‘measured region’ covers z_k and extends a further half-step to each side. Formula (4) then becomes the midpoint quadrature rule [24] with only second-order error and approximates the integral well as long as the sampling step ∆ = L/N is small compared to the autocorrelation length, ∆ ≪ T. Since this work focuses on the effect of finite measurement area, i.e. loss of low-frequency information, we will not dwell on the loss of high-frequency information and will assume the sampling is sufficiently fine. Therefore, continuous functions will be used in the following analysis (instead of sets of discrete sampled values z_k).
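The discrete estimate (4) itself is a one-liner; a minimal sketch (the function name is ours):

```python
import numpy as np

def sigma2_hat(z):
    # Discrete estimate (4): mean of the squared heights z_k. The samples
    # are assumed already levelled and finely spaced (Delta << T), so that
    # the sum approximates the integral by the midpoint rule.
    z = np.asarray(z, dtype=float)
    return np.mean(z * z)
```

For example, `sigma2_hat([1.0, -1.0, 2.0, -2.0])` evaluates to 2.5.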
2. Mean value subtraction
The root mean square roughness σ is estimated from heights z in a finite-size region [0, L] as
$$\hat\sigma^2 = \frac{1}{L}\int_0^L [z(x) - \hat\mu]^2\,\mathrm{d}x, \quad\text{where}\quad \hat\mu = \frac{1}{L}\int_0^L z(x)\,\mathrm{d}x. \qquad (5)$$
For simplicity, we will consider one-dimensional (1D) data here.

The estimate of σ² is not always presented with explicit subtraction of μ̂. Instead we simply say that ‘the mean value of heights is zero’. However, this confounds two distinct statements:
• the expected value of the roughness signal z is zero, E[z] = 0, and
• the mean value of measured data is made zero by preprocessing.
The second is the source of bias since μ̂ is a random variable, not identically equal to zero even when its expected value is: E[μ̂] = 0.

We now briefly reproduce the classical result for the bias caused by mean value subtraction [9, 20–22]. The derivation provides an outline for how the more complex cases will be treated in section 3. Expanding the square in (5) gives
$$\hat\sigma^2 = \frac{1}{L}\int_0^L z^2(x)\,\mathrm{d}x - \hat\mu^2. \qquad (6)$$
We would like to know the expected value of the estimate E[σ̂²]. The expected value of the first term on the right hand side of (6) is σ². Hence the second term gives the bias—which is always negative. Writing
$$\hat\mu^2 = \frac{1}{L}\int_0^L z(x)\,\mathrm{d}x \times \frac{1}{L}\int_0^L z(x')\,\mathrm{d}x' = \frac{1}{L^2}\int_0^L\!\!\int_0^L z(x)z(x')\,\mathrm{d}x'\,\mathrm{d}x \qquad (7)$$
and using the coordinate transformation (x, x') = (v, u + v) we obtain
$$\hat\mu^2 = \frac{1}{L^2}\int_0^L\!\!\int_0^{L-u} z(v)z(v+u)\,\mathrm{d}v\,\mathrm{d}u + \frac{1}{L^2}\int_0^L\!\!\int_u^L z(v)z(v-u)\,\mathrm{d}v\,\mathrm{d}u. \qquad (8)$$
Expected value calculation can be interchanged with integration. The expected value of either integrand is the autocorrelation function (ACF) of the signal
$$G(t) = \mathrm{E}[z(x)z(x+t)] = \mathrm{E}[z(x)z(x-t)]. \qquad (9)$$
Note that G(0) = E[z²(x)] = σ².
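The ACF (9) can be estimated from one levelled, sampled profile by averaging products of heights a fixed lag apart. This is a simplified sketch (normalisation and edge-handling conventions differ between texts; the function name is ours):

```python
import numpy as np

def acf_estimate(z, max_lag):
    # Estimate G at lags 0..max_lag (in units of the sampling step) by
    # averaging z(v) z(v+u) over all available pairs, as in eq. (9).
    z = np.asarray(z, dtype=float)
    return np.array([np.mean(z[:len(z) - u] * z[u:])
                     for u in range(max_lag + 1)])
```

The zero-lag value equals the (biased) estimate of σ², illustrating G(0) = σ².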
Substituting this result into (6) gives the classic final expression [20, 21]
$$\mathrm{E}[\hat\sigma^2] = \sigma^2 - \frac{2}{L}\int_0^L \left(1 - \frac{t}{L}\right) G(t)\,\mathrm{d}t = \sigma^2 - 2\int_0^1 (1 - t)\,G(Lt)\,\mathrm{d}t. \qquad (10)$$
Since the bias is always negative and proportional to σ², it is convenient to introduce the relative bias β,
$$\mathrm{E}[\hat\sigma^2] = \sigma^2 (1 - \beta), \qquad (11)$$
to simplify notation. If we know β, replacing σ̂² with σ̂²/(1 − β) corrects the bias.

In order to see how the bias typically behaves, we evaluate it for a simple parabolic model of ACF, G(t) = σ²(1 − t²/T²) for t < T and zero otherwise. Then for L ≥ T
$$\beta = \frac{4}{3}\alpha\left(1 - \frac{3\alpha}{8}\right) \approx \frac{4}{3}\alpha \quad\text{and}\quad \beta = \frac{\pi}{2}\alpha^2\left(1 - \frac{32\alpha}{15\pi} + \frac{\alpha^2}{3\pi}\right) \approx \frac{\pi}{2}\alpha^2 \qquad (12)$$
in one and two dimensions, respectively. The approximations hold for α = T/L small.

The numerical factors such as 4/3 and π/2 change somewhat with the exact form of the ACF, but generally are of the same order of magnitude. The relative bias of σ̂² due to finite-area effects thus behaves approximately as α and α² for 1D and 2D data, respectively. More generally, it behaves like α^D where D denotes the dimension [9]. It does not depend on the number of measured values N (provided it is sufficiently large). Increasing N without making the measurement area larger is of no help and neither is Bessel's correction
$$\hat\sigma^2 = \frac{1}{N-1}\sum_{k=1}^{N} z_k^2, \qquad (13)$$
which replaces N with N − 1: the required correction factor is not 1 − 1/N, but akin to 1 − cα or 1 − cα², where c is some constant of order of unity.

Almost all roughness measurements involve correlated data. If we measure with such a large sampling step that the height values are uncorrelated we lose all spatial information about the roughness. This is rarely desirable—and also rarely possible, since in scanning methods the feedback loop then cannot keep up with the surface topography, whereas in optical methods this usually means averaging too large regions of the surface in one pixel.
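The relative bias defined by (10) and (11) is easy to evaluate numerically for any ACF model. A sketch, checked against the parabolic-ACF result of (12) (coefficients rederived here; the function name is ours):

```python
import numpy as np
from scipy.integrate import quad

def beta_mean(acf, sigma2, L):
    # Relative bias after mean value subtraction, eqs. (10)-(11):
    # beta = (2 / sigma^2) * integral_0^1 (1 - t) G(L t) dt
    val, _ = quad(lambda t: (1.0 - t) * acf(L * t), 0.0, 1.0)
    return 2.0 * val / sigma2

# Parabolic ACF model: G(t) = sigma^2 (1 - t^2/T^2) for t < T, else 0
T, L = 1.0, 10.0
G = lambda t: max(0.0, 1.0 - (t / T)**2)
alpha = T / L
beta = beta_mean(G, 1.0, L)
```

The numeric value agrees with the 1D closed form (4/3)α(1 − 3α/8) for L ≥ T.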
3. Real-world background subtraction
Background subtraction methods used for SPM are much more complex than mere mean value subtraction [17, 25–27]. In order to evaluate roughness correctly, we must take into account which degrees of freedom or spatial frequencies would contribute to the desired result, but were removed by preprocessing. This is not trivial to start with and certainly not helped by AFM data processing software, which can apply plane levelling or row alignment automatically, possibly without the user even noticing (depending on the software and settings). And, of course, no AFM software currently attempts to estimate the resulting bias.

It is common to process AFM image data row by row because roughness properties can often be determined more reliably in the direction of the fast scanning axis [9, 17, 23, 28]. This means that levelling is applied to individual image rows instead of (or in addition to) the entire image. The operation of mutual alignment of scan lines is colloquially referred to as ‘flatten’ [25–27]. However, we explicitly call it scan line correction for clarity.

Results for individual rows may then be summed or averaged. The result for each row is biased as if we processed 1D data, not 2D. The same holds if any row-wise preprocessing is applied, such as removal of the mean value from each individual row. It is, therefore, quite rare that the bias corresponds to the 2D case, even for image data.
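Row-wise levelling of the kind described above can be sketched as follows (a minimal illustration; function and parameter names are ours):

```python
import numpy as np

def align_rows(image, method="mean"):
    # Scan line correction: shift each fast-axis row independently so that
    # its mean (or median) becomes zero. Roughness evaluated afterwards is
    # then biased as in the 1D case, with the row length as the relevant L.
    image = np.asarray(image, dtype=float)
    if method == "median":
        shifts = np.median(image, axis=1, keepdims=True)
    else:
        shifts = image.mean(axis=1, keepdims=True)
    return image - shifts
```

Because each row is treated independently, the removed degrees of freedom scale with the number of rows, which is why row-wise processing dominates the bias even for image data.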
Removal of tilt, bow or higher order polynomial backgrounds has two basic steps: fitting a background function B(x) to the data using the linear least squares method, and subtraction of the fitted (estimated) background B̂(x). This section presents a general treatment of such levelling. The background function is linear in its parameters,
$$B(x) = \sum_j a_j \varphi_j(x) = \sum_j a_j \frac{\partial B(x)}{\partial a_j}, \qquad (14)$$
where φ_j are basis functions (for instance powers of x) and a_j the corresponding coefficients—fitting parameters. The fit minimises the residual sum of squares, therefore
$$\frac{\partial}{\partial a_j}\int_0^L [z(x) - \hat B(x)]^2\,\mathrm{d}x = 0. \qquad (15)$$
These two relations allow expanding the expression for σ̂² as follows:
$$\hat\sigma^2 = \frac{1}{L}\int_0^L [z(x) - \hat B(x)]^2\,\mathrm{d}x = \frac{1}{L}\int_0^L z^2(x)\,\mathrm{d}x - \frac{1}{L}\int_0^L \hat B^2(x)\,\mathrm{d}x. \qquad (16)$$
Again, the second term gives the bias.

The linear fit corresponds to an orthogonal projection onto the linear function subspace spanned by φ_j. It can, therefore, be assumed without loss of generality that the φ_j are orthonormal—and we will do so in order to simplify notation. Some sets of φ_j naturally come as orthonormal, for instance sines and cosines in frequency-space filtering, and this holds also for some wavelet bases. If required, any set of linearly independent basis functions can be made orthonormal by an orthogonalization process such as Gram–Schmidt, followed by normalization. In the case of polynomial backgrounds, orthonormal polynomials can be directly chosen as the basis φ_j.

For orthonormal φ_j the estimated coefficients are simple scalar products
$$\hat a_j = \int_0^L z(x)\,\varphi_j(x)\,\mathrm{d}x \qquad (17)$$
and thus
$$\int_0^L \hat B^2(x)\,\mathrm{d}x = \sum_j \hat a_j^2 = \sum_j \int_0^L z(x)\varphi_j(x)\,\mathrm{d}x \int_0^L z(x')\varphi_j(x')\,\mathrm{d}x'. \qquad (18)$$
Note that these â_j are not the best estimators of the coefficients—the problem dual to ours, i.e. linear fitting of correlated data, has a more complex solution [29].
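A discrete analogue of the projection (14)–(17) can be sketched with an orthonormalized power basis; here QR factorization plays the role of Gram–Schmidt (names are ours, and the integrals are replaced by sums over sample points):

```python
import numpy as np

def level_polynomial(z, degree):
    # Polynomial background subtraction as an orthogonal projection.
    # Basis: powers of x made orthonormal (in the discrete sense) by QR;
    # the span, and hence the levelling, is the same as for plain powers.
    z = np.asarray(z, dtype=float)
    x = np.linspace(0.0, 1.0, z.size)
    V = np.vander(x, degree + 1, increasing=True)   # 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)                          # orthonormal columns
    a = Q.T @ z                                     # coefficients as in (17)
    return z - Q @ a                                # subtract fitted background
```

Any background lying exactly in the polynomial span is removed completely, and the residual always has zero mean because the constant function is in the span.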
However, (17) corresponds to the levelling methods used in practice.

Transforming this expression in the same manner as (7) results in
$$\int_0^L \hat B^2(x)\,\mathrm{d}x = \sum_j 2\int_0^L\!\!\int_0^{L-u} z(v)z(v+u)\,\varphi_j(v)\varphi_j(v+u)\,\mathrm{d}v\,\mathrm{d}u. \qquad (19)$$
In the calculation of the expected value we note that the φ_j are not realizations of a random process and can be factored out,
$$\mathrm{E}[z(v)z(v+u)\varphi_j(v)\varphi_j(v+u)] = \mathrm{E}[z(v)z(v+u)]\,\varphi_j(v)\varphi_j(v+u), \qquad (20)$$
giving
$$\mathrm{E}[\hat\sigma^2] = \sigma^2 - 2^D \int_{[0,1]^D} G(tL)\,C(t)\,\mathrm{d}t. \qquad (21)$$
In each dimension we sum two integrals as in (8), so each dimension gives a factor 2. In D dimensions the intervals and integrals are D-dimensional, the interval [0,
1] stands for [0, 1]^D, etc. Function C is determined entirely by the set of orthonormal basis functions and the interval:
$$C(t) = \sum_j \int_0^{L(1-t)} \varphi_j(v)\,\varphi_j(v + Lt)\,\mathrm{d}v = \sum_j c_j(t). \qquad (22)$$
By considering the single constant basis function φ₀ = 1/√L we recover the mean value subtraction formulae from section 2.

Since roughness is evaluated under the assumption ‘the mean value of z is zero’, the basis always includes the constant function. If we subtract some other type of background, there are two possibilities. Either this already ensures zero mean value and then the linear span indeed includes constant functions. Or it does not and we must subtract the mean value afterwards to make it zero. However, the constant function is then independent and can simply be added to the basis, merging the two steps.

Function C(t) is a curious characteristic of the background removal method. Although it is evaluated using a concrete orthonormal basis φ_j in (22), it does not depend on the choice of the basis—this can be easily seen if we express φ_j in a different basis. The function describes the subtraction of the projection onto the linear function space spanned by the φ_j. For instance, it is immaterial whether we actually fit orthonormal polynomials or just plain powers x^j during background subtraction because their linear span is the same.

In this manner C(t) characterizes the correlations in an entire linear subspace of functions. On an infinite interval it has perhaps a clearer interpretation. In such a case we can express φ_j using the Fourier transform
$$\varphi_j(v) = \int_{-\infty}^{\infty} \exp(-2\pi\mathrm{i}\,\xi v)\,\Phi_j(\xi)\,\mathrm{d}\xi. \qquad (23)$$
Substituting it into formula (22) gives, according to the correlation theorem [30],
$$C(t) = \sum_j \int_{-\infty}^{\infty} \exp(-2\pi\mathrm{i}\,\xi t)\,|\Phi_j(\xi)|^2\,\mathrm{d}\xi, \qquad (24)$$
where |Φ_j|² is the spectral density of φ_j. Therefore, C(t) is the Fourier transform of
$$W(\xi) = \sum_j |\Phi_j(\xi)|^2, \qquad (25)$$
which is the total spectral density of the orthonormal basis, i.e.
in some sense the spectral density of the linear subspace. This is a useful intuition which can be transferred to finite intervals, even though the formulae from this paragraph do not hold exactly there, polynomials are not a useful basis on infinite intervals, etc.

Table 1.
Polynomials d_j corresponding to individual Legendre polynomials according to (27) and used to construct expressions for specific polynomial background removal types. The argument of d_j is x = t². j d j ( x )0 01 12 2 − x − x + 10 x − x + 55 x − x − x + 181 x − x + 126 x − x + 461 x − x + 1176 x − x − x + 1001 x − x + 6126 x − x + 1716 x − x + 1946 x − x + 23451 x − x + 22737 x − x

Table 2.
Polynomials C_j describing 1D polynomial background removal of degree j. j C j ( t )0 1 − t − t + 2 t − t + 12 t − t − t + 40 t − t + 20 t − t + 100 t − t + 200 t − t − t + 210 t − t + 1080 t − t + 252 t − t + 392 t − t + 4200 t − t + 3528 t − t − t + 672 t − t + 13200 t − t + 26208 t − t + 3432 t − t + 1080 t − t + 35640 t − t + 137592 t − t + 61776 t − t

An orthonormal polynomial basis on the interval [0, L] is formed by shifted and scaled Legendre polynomials P_j [31]:
$$\varphi_j(x) = \sqrt{\frac{2j+1}{L}}\,P_j\!\left(\frac{2x}{L} - 1\right). \qquad (26)$$
Evaluation of the integrals (22) leads to
$$c_j(t) = \left(j + \tfrac12\right)\int_{-1}^{1-2t} P_j(x)\,P_j(x + 2t)\,\mathrm{d}x = 1 - t - 2t(1 - t^2)\,d_j(t^2), \qquad (27)$$
where the functions d_j are listed in table 1 for polynomial degrees up to 8 (these and other tedious integrals were evaluated symbolically in Maxima [32]). Polynomials c_j depend on the specific choice of orthonormal basis. Polynomials C_j, which are obtained by summing them up to a specific degree according to (22), depend only on the linear span covered by the basis. They are listed for reference in table 2.

In order to obtain concrete expressions for the bias, we still need to specify the

Table 3.
1D polynomials for Gaussian ACF. j g j ( α ) p j ( α )0 1 11 1 − α − α α − α + 4 α − α − α − α − α + 24 α − α α + 96 α + 336 α − α + 84 α − α + 336 α − α − α − α − α − α − α + 224 α − α + 3360 α − α Table 4.
1D polynomials for exponential ACF. j e j ( α ) q j ( α )0 1 1 − α α + 6 α − α + 6 α α + 96 α + 240 α + 240 α − α + 24 α − α α + 390 α + 2760 α + 11160 α + 25200 α + 25200 α − α + 60 α − α + 25200 α

form of the ACF. Two common models are Gaussian and exponential [9, 17]:
$$G_{\mathrm{Gauss}}(x) = \sigma^2 \exp(-x^2/T^2) \quad\text{and}\quad G_{\mathrm{exp}}(x) = \sigma^2 \exp(-|x|/T). \qquad (28)$$
For G_Gauss the relative bias resulting from the subtraction of a polynomial of degree 0 (i.e. the mean value) is
$$\beta_0 = \sqrt{\pi}\,\alpha\,\mathrm{erf}(1/\alpha) + \alpha^2 \exp(-1/\alpha^2) - \alpha^2. \qquad (29)$$
More generally
$$\beta_j = (j+1)\left[\sqrt{\pi}\,\alpha\,\mathrm{erf}(1/\alpha) + \alpha^2 \exp(-1/\alpha^2)\,g_j(\alpha) - \alpha^2 p_j(\alpha)\right], \qquad (30)$$
where g_j(α) and p_j(α) are polynomials listed in table 3. The leading-order approximation for small α is
$$\beta_j \approx (j+1)\,\alpha\left[\sqrt{\pi} - (j+1)\,\alpha + \frac{j(j+1)(j+2)}{6}\,\alpha^3\right]. \qquad (31)$$
For the exponential ACF we obtain
$$\beta_0 = 2\alpha^2 \exp(-1/\alpha) + 2\alpha(1 - \alpha) \qquad (32)$$
and more generally
$$\beta_j = 2(j+1)\left[\alpha\,q_j(\alpha) + (-1)^j \alpha^2 \exp(-1/\alpha)\,e_j(\alpha)\right], \qquad (33)$$
where q_j(α) and e_j(α) are polynomials listed in table 4. The leading-order approximation for small α is
$$\beta_j \approx 2(j+1)\,\alpha\left[1 - (j+1)\,\alpha + j(j+1)(j+2)\,\alpha^3\right]. \qquad (34)$$

As an example, numerical results for the bias of σ̂² are plotted in figure 2 for the Gaussian ACF. The ‘true’ signals were generated by cutting segments from very long

Figure 2.
Numerical results for polynomial background removal from profiles with Gaussian ACF: estimated σ̂² divided by the true σ² for several polynomial degrees (upper) and the corrected estimate σ̂²/(1 − β) (lower).

frequency-space synthesized data. The figure also includes results for median and 30% trimmed mean levelling, which are rather similar to mean value subtraction.

The effectiveness of correcting the estimated σ̂² by dividing by 1 − β is evident. There are small residual differences even for the true signal, stemming from it being still finite, albeit very long, and thus random. Furthermore, the corrections start to break down for large α. This is an effect of discretization.

Finally we note that the leading-order approximations (31) and (34) would not be changed by putting exp(−1/α²) = exp(−1/α) = 1 − erf(1/α) = 0. In fact, since these terms are exponentially small in 1/α, this change does not influence any terms in the series expansions in powers of α for α →
0. The same approximation (and conclusion) follows from writing
$$\int_{[0,1]} G(tL)\,C(t)\,\mathrm{d}t = \int_0^\infty G(tL)\,C(t)\,\mathrm{d}t - \int_1^\infty G(tL)\,C(t)\,\mathrm{d}t \qquad (35)$$
in (21) and disregarding the second term, which is exponentially small compared to the first. The integral to infinity is generally much easier to evaluate, in particular in higher dimensions, and allows obtaining formulae for small α [9].

Two-dimensional orthonormal polynomials on [0, L₁] × [0, L₂] can be constructed as separable, i.e. products of the 1D polynomials (26):
$$\varphi_{j_1,j_2}(x_1, x_2) = \varphi_{j_1}(x_1)\,\varphi_{j_2}(x_2). \qquad (36)$$
Clearly then
$$\int_0^{L_1}\!\!\int_0^{L_2} \varphi_{j_1,j_2}(x_1, x_2)\,\varphi_{k_1,k_2}(x_1, x_2)\,\mathrm{d}x_1\,\mathrm{d}x_2 = \delta_{j_1,k_1}\,\delta_{j_2,k_2} \qquad (37)$$
and other 2D expressions, such as the integrals in (22), reduce to products of 1D expressions in a similar manner.

Usually the total degree of the polynomial is limited, leading to the following basis function sets (on a square, L₁ = L₂ = L):
• constant: φ₀(x₁)φ₀(x₂) = 1/L,
• plane levelling: adding φ₁(x₁)φ₀(x₂) and φ₀(x₁)φ₁(x₂),
• quadratic levelling: adding φ₂(x₁)φ₀(x₂), φ₁(x₁)φ₁(x₂) and φ₀(x₁)φ₂(x₂),
• cubic levelling: adding the four cubic basis functions,
• etc.
Evaluation of the integral (22) then results in functions C(t₁, t₂) for 2D polynomial levelling.

However, there are other common choices for the set of polynomials. Frequently the maximum degrees of x₁ and x₂ are chosen separately, in particular when the image is not square or there are other reasons for using different levelling along the two axes. Enumeration of all reasonable two-dimensional C(t₁, t₂) is not feasible. Therefore, we instead describe a procedure for their construction:
(i) Take the set of 2D terms x₁^{j₁} x₂^{j₂} which define the polynomial background.
(ii) For each term look up the corresponding d_{j₁} and d_{j₂} in table 1.
(iii) For each term calculate the polynomials c_{j₁} and c_{j₂} according to (27).
Multiply the two polynomials.
(iv) Sum the results for all terms.

This procedure is applicable if the set of degrees is convex, i.e. under the following condition: if x₁^{j₁} x₂^{j₂} is included then x₁^{j₁′} x₂^{j₂′} is included too for all degrees j₁′ ≤ j₁ and j₂′ ≤ j₂. Otherwise the Legendre polynomials would not have the same linear span as the monomials. This condition is satisfied by all practical background subtraction methods in AFM.

For Gaussian ACF (28) and limited total degree we obtain the leading terms for small α
$$\beta_j \approx \frac{(j+1)(j+2)}{2}\,\alpha^2\left[\pi - \frac{2(2j+3)\sqrt{\pi}}{3}\,\alpha + \frac{j^2+3j+3}{3}\,\alpha^2\right]. \qquad (38)$$

Table 5.
Polynomials C r j describing 2D polynomial background removal in theradially symmetric case. j C r j ( t )0 π/ − t + t /
21 3 π/ − t + 7 t / t / − t π − t + 13 t + 56 t / − t − t / t /
33 5 π − t + 35 t + 72 t − t − t / t / t / − t π/ − t + 155 t / t / − t − t / t / t / − t − t / t /
55 21 π/ − t + 301 t / t / − t − t / t + 8320 t / − t − t / t + 2048 t / − t /

The second term in the brackets must be small compared to 1 for the leading-order approximation to be valid. Unfortunately, the Gaussian ACF may be the only interesting case for which β has a closed form expression, because the Gaussian is the only separable radially symmetric function.

For other radially symmetric ACF, G(t₁L, t₂L) = G(tL), i.e. isotropic roughness, we can obtain leading-order terms using (35) and a transformation to polar coordinates t₁ = t cos ω and t₂ = t sin ω. As in one dimension, this results in an asymptotic series for β if G(x) decays faster than any power 1/xⁿ. The integral then becomes
$$\int_0^\infty G(tL)\left[\int_0^{\pi/2} C(t\cos\omega,\, t\sin\omega)\,\mathrm{d}\omega\right] t\,\mathrm{d}t = \int_0^\infty G(tL)\,C_r(t)\,t\,\mathrm{d}t, \qquad (39)$$
where the inner integral expressing C_r is elementary because C is a polynomial. Polynomials C_r are listed in table 5 for degrees up to 5 for reference.

The outer integral is of the same type as in the 1D case for the same G. For exponential ACF (28) this results in
$$\beta_j \approx (j+1)(j+2)\,\alpha^2\left[\pi - \frac{8(2j+3)}{3}\,\alpha + 2(j^2+3j+3)\,\alpha^2\right]. \qquad (40)$$

Gaussian and exponential ACF belong to a one-parametric class of simple classical ACF models, usually called power-exponential or intermediate Gaussian–exponential ACF [9, 33]. The parameter is the power in the exponent:
$$G_p(x) = \sigma^2 \exp[-(x/T)^p]. \qquad (41)$$
Clearly p = 1 and 2 correspond to exponential and Gaussian (28), and p ∈ [1, 2] interpolates between them. The leading-order bias for α → 0 can be evaluated as
$$\beta = \frac{2^D}{G(0)}\int_0^\infty G(tL)\,P(t)\,\mathrm{d}t, \qquad (42)$$
where P(t) is a polynomial—either C_j(t) in 1D (table 2) or t C_r(t) in 2D (table 5)—and G is given by (41).
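The angular reduction in (39) can be checked numerically for the simplest case of mean value subtraction, where C(t₁, t₂) = (1 − t₁)(1 − t₂) and the inner integral can be done by hand. The closed form used in the assertions below, π/2 − 2t + t²/2, is our own derivation (function names are ours):

```python
import numpy as np
from scipy.integrate import quad

def C_mean(t1, t2):
    # C(t1, t2) for 2D mean value subtraction: product of 1D factors (1 - t)
    return (1.0 - t1) * (1.0 - t2)

def C_r(t):
    # Inner angular integral of eq. (39):
    # C_r(t) = integral_0^{pi/2} C(t cos w, t sin w) dw
    val, _ = quad(lambda w: C_mean(t * np.cos(w), t * np.sin(w)),
                  0.0, np.pi / 2)
    return val
```

At t = 0 this reduces to π/2, the measure of the angular interval, as expected.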
Writing the polynomial as
$$P(t) = \sum_{k=0}^{K} a_k t^k, \qquad (43)$$
we need to evaluate
$$I = 2^D \int_0^\infty \exp\left[-\left(\frac{tL}{T}\right)^p\right] \sum_{k=0}^{K} a_k t^k\,\mathrm{d}t, \qquad (44)$$
which can be easily transformed (t/α = u^{1/p}) to
$$I = 2^D \sum_{k=0}^{K} \frac{a_k \alpha^{k+1}}{p} \int_0^\infty \mathrm{e}^{-u}\,u^{(k+1)/p - 1}\,\mathrm{d}u = \frac{2^D}{p} \sum_{k=0}^{K} a_k \alpha^{k+1}\,\Gamma\!\left(\frac{k+1}{p}\right), \qquad (45)$$
where Γ denotes the gamma function.

Coefficients a_k are given by the basis functions. For 1D polynomials the leading coefficients are
a₀ = j+1,  a₁ = −(j+1)²,  a₂ = 0,  and  a₃ = j(j+1)²(j+2)/6, (46)
whereas for 2D polynomials
a₀ = 0, (47)
a₁ = π(j+1)(j+2)/4, (48)
a₂ = −(j+1)(j+2)(2j+3)/3, (49)
a₃ = (j+1)(j+2)(j² + 3j + 3)/
12. (50)
Substituting them into (45) gives the leading order terms for the 1D bias
$$\beta_j \approx \frac{2(j+1)\,\alpha}{p}\left[\Gamma\!\left(\frac{1}{p}\right) - (j+1)\,\alpha\,\Gamma\!\left(\frac{2}{p}\right) + \frac{j(j+1)(j+2)}{6}\,\alpha^3\,\Gamma\!\left(\frac{4}{p}\right)\right] \qquad (51)$$
and for the 2D bias
$$\beta_j \approx \frac{(j+1)(j+2)\,\alpha^2}{p}\left[\pi\,\Gamma\!\left(\frac{2}{p}\right) - \frac{4(2j+3)}{3}\,\alpha\,\Gamma\!\left(\frac{3}{p}\right) + \frac{j^2+3j+3}{3}\,\alpha^2\,\Gamma\!\left(\frac{4}{p}\right)\right]. \qquad (52)$$
These expressions can be used to reproduce (31) and (38) with p = 2, (34) and (40) with p = 1, and similar expressions for ACF of the form (41) for other p.

Figure 3.
Relative bias β of σ̂² for 1D measurements or 2D measurements with 1D processing (plotted without the leading factor, i.e. as β/α). Line colour distinguishes the degree of the subtracted polynomial. Line type represents the ACF type—Gaussian (p = 2), exponential (p = 1) and intermediate types p = 1.
2, 1.4, 1.6 and 1.8.
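The graphical read-off from figure 3 can be cross-checked with the leading term of (31), which for a Gaussian ACF is (j + 1)√π α. A sketch, applied to the measurement scenario discussed in the text (20 µm scan lines, bow removal of degree 2, T ≈ 340 nm); the function name is ours and only the leading term is used, so the estimate is valid for small α:

```python
import numpy as np

def beta_leading_1d_gauss(j, alpha):
    # Leading term of eq. (31): relative bias of sigma^2 for 1D processing
    # with polynomial levelling of degree j and Gaussian ACF, small alpha.
    return (j + 1) * np.sqrt(np.pi) * alpha

alpha = 340e-9 / 20e-6      # T / L for the worked example
beta = beta_leading_1d_gauss(2, alpha)
print(alpha, beta)
```

The resulting relative bias is around 9 %, far from negligible for a seemingly innocuous measurement.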
Results of the preceding sections can be summarized in graphical form for a quick estimation of the bias in common measurement scenarios. It always begins with estimating the ratio α = T/L, usually by knowing L exactly and estimating T.

For 1D processing, figure 3 can then be used. It summarises the relative bias β for Gaussian and exponential ACF, as well as intermediate ACF types with a p step of 0.2. It was plotted using exact integrals and is, therefore, valid even for large α.

After choosing the corresponding curve according to polynomial degree and ACF type, one multiplies the value from figure 3 by α to obtain the relative bias β, and possibly further by σ² for an absolute number.

An example of bias estimation using figure 3:
(i) We measured a 20 × 20 µm AFM image and removed bow from each scan line.
(ii) This means 1D processing, L = 20 µm and a polynomial degree of 2.
(iii) We estimate the correlation length as T ≈
340 nm. The surface is locally smooth and roughness can be assumed not far from Gaussian.
(iv) This gives α ≈ 0.017.
(v) Reading off β/α ≈ 5.3 from figure 3, we estimate the relative bias of σ̂² as 5.3 × 0.017 ≈ 0.09, i.e. about 9 %.

The difference between the exponential and Gaussian curves is largest for α → 0, where it is given by the ratio 2/√π ≈ 1.13 of the leading terms in (31) and (34). However, the difference actually decreases for larger α thanks to the higher order

Figure 4.
Relative bias β of ˆ σ for 2D measurements with only 2D backgroundsubtraction (plotted without the leading factor, i.e. as β/α ). Line colour distinguishesthe degree of subtracted polynomial. Line type represents the ACF type—Gaussian( p = 2), exponential ( p = 1) and intermediate types p = 1 .
2, 1.4, 1.6 and 1.8. terms (up to a cross-over point). Furthermore, the curves for a large range of powers p remain quite close to Gaussian, even up to p = 1 .
6. Assuming a Gaussian ACF can,therefore, often give a reasonable estimate even if the ACF deviates from Gaussian.Independently on the ACF type, the bias is quite high. Even for
L/T in the range of hundreds, it remains at least a few percent and it becomes much larger as
L/T decreases. It is not difficult to find realistic scenarios in which it reaches 20, 30 or even 40 %.

For 2D processing figure 4 can be used instead of figure 3. Although even for most image data processing the bias is dominated by 1D processing, a quick check of the 2D levelling contribution is still useful. The factors in figure 4 must be multiplied by α² (instead of α) since the leading term is proportional to α² in 2D. Otherwise the estimation procedure remains unchanged. Note that the polynomial degree in figure 4 corresponds to the limited total degree. For other combinations of x and y degrees one can utilize the observation that the leading term is proportional to the number of coefficients fitted.

The dependence on ACF shape is evidently stronger in 2D than it is in 1D. The ratio of leading terms between exponential and Gaussian ACF is now 2. It still holds that the curves remain closer to Gaussian ACF for relatively large powers p. However, due to larger absolute differences, it is no longer reasonable to universally assume a Gaussian ACF.

Also, the proportionality to α² means that the bias remains quite low up to α around 0.05 (depending on polynomial degree). But once it becomes non-negligible, it grows rapidly. Keeping L/T sufficiently large can thus be a feasible strategy for avoiding bias caused by 2D data processing—in contrast to 1D processing.
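The whole graphical estimation procedure condenses into a few lines of code. The sketch below (function name and numbers are ours) reuses the worked example: degree-2 line levelling, Gaussian-like ACF and α ≈ 0.017, with the chart factor replaced by its small-α limit 3√π:

```python
import math

def relative_bias(chart_factor, alpha, dim):
    """Turn a factor read off figure 3 (dim=1) or figure 4 (dim=2) into
    the relative bias beta; the leading term scales as alpha**dim."""
    return chart_factor * alpha ** dim

# Worked example: 20 um scan lines, T ~ 340 nm, bow (degree 2) removed
# from each line, roughly Gaussian ACF.
alpha = 0.34 / 20.0                  # T/L ~ 0.017
factor = 3 * math.sqrt(math.pi)      # small-alpha limit for degree 2, Gaussian ACF
beta = relative_bias(factor, alpha, dim=1)
print(f"sigma^2 underestimated by ~{beta:.0%}")
```

For 2D levelling the only change is `dim=2`, since the chart factors of figure 4 multiply α² rather than α.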
Spatial frequency filtering removes (or suppresses) specific spatial frequencies. It is usually done in the frequency space utilizing the Fourier transform. It yields the best representation of the data using sines and cosines in the least-squares sense and thus lies within the same framework. The basis functions are orthonormal and come in pairs
\[
\varphi_j^{\cos}(x) = \sqrt{\frac{2}{L}}\,\cos\frac{2\pi j x}{L} \quad\text{and}\quad \varphi_j^{\sin}(x) = \sqrt{\frac{2}{L}}\,\sin\frac{2\pi j x}{L}. \qquad (53)
\]
Therefore, for one particular spatial frequency (22) becomes
\[
C_j(t) = \int_0^{L(1-t)} \left[\varphi_j^{\cos}(v)\,\varphi_j^{\cos}(v+Lt) + \varphi_j^{\sin}(v)\,\varphi_j^{\sin}(v+Lt)\right] \mathrm{d}v \qquad (54)
\]
which evaluates to
\[
C_j(t) = 2(1-t)\cos(2\pi j t). \qquad (55)
\]
Substituting this C_j expression into (21) leads to the bias
\[
\beta_j = \frac{4}{L}\int_0^L \bar{G}(x)\left(1-\frac{x}{L}\right)\cos\left(2\pi j\,\frac{x}{L}\right) \mathrm{d}x, \qquad (56)
\]
where \bar{G}(x) = G(x)/G(0) is the normalized ACF. Since the ACF is the Fourier transform of the spectral density of spatial frequencies, removing one frequency from the spectral density corresponds to removing one frequency component from the ACF.

However, expression (56) is not exactly the j-th Fourier coefficient of the ACF. It would be if \bar{G}(x) was non-zero only when x/L ≪ 1, allowing replacing the integral with
\[
\frac{2}{L}\int_{-L/2}^{L/2} \bar{G}(x)\cos\left(2\pi j\,\frac{x}{L}\right) \mathrm{d}x. \qquad (57)
\]
This corresponds to the case α → 0. The difference between (56) and the spectral density at frequency j is due to the limited length and for background removal, i.e. small frequencies j, it is of order α in 1D.

Since the data spectral density usually has a maximum at the zero frequency and then monotonically decreases, filtering of low frequencies is the background removal which most efficiently reduces σ̂², because it always takes the largest remaining component. Nevertheless, for the lowest frequencies the result is quite similar to the subtraction of polynomials.

Instead of the mean value, other quantities are sometimes subtracted during levelling, for instance the median or a trimmed mean. The motivation is that they are less sensitive to outliers. These operations are non-linear and thus outside the framework developed above.

For 1D data they are also inconsistent with the 'mean value of z is zero' assumption. Furthermore, if we subtracted the mean value afterwards it would nullify the effect of subtracting something else first. However, they can be meaningful for 2D data. When correcting misaligned scan lines, each is levelled individually using the non-linear operation. This effect survives subsequent subtraction of the mean value from the entire image—which, in fact, then frequently has very little effect.

The estimated σ̂² is again expressed by (5) if we replace µ̂ by the subtracted quantity, which will be denoted m̂ (for median). It is useful to write m̂ = µ̂ + δ̂, as both µ̂ and m̂ are location estimates, so their difference δ̂ is presumably small. This gives the expected value
\[
\mathrm{E}[\hat{\sigma}^2] = \sigma^2 - \mathrm{E}[\hat{\mu}^2] + \mathrm{E}[\hat{\delta}^2] \qquad (58)
\]
as all mixed terms cancel. Therefore, the negative bias is always slightly reduced compared to mean value subtraction and the expected difference is simply E[δ̂²]. Numerical results confirm this conclusion. Asymptotic expressions for E[δ̂²] are known for many distributions in the case of uncorrelated data.
For instance, for the median and a Gaussian distribution E[δ̂²] = (π/2 − 1)σ²/N, where N is the number of data values. More generally, N E[δ̂²] tends to a constant for N → ∞ if the probability density decays sufficiently fast.

For correlated data, N again has to be replaced with α^(−D). Numerical calculations give
\[
\mathrm{E}[\hat{\delta}^2] \approx \sigma^2 \alpha^D (p - q\alpha) \qquad (59)
\]
as a reasonable approximation in most cases, with p = 0.349 and q = 0.646 for 1D and Gaussian ACF, p = 0.293 and q = 0.436 for 2D and Gaussian ACF, and p = 0.156 and q = 0.385 for 2D and exponential ACF. The exception is the exponential ACF in 1D, for which
\[
\mathrm{E}[\hat{\delta}^2] \approx \sigma^2 \alpha^D p \exp(-q\alpha) \qquad (60)
\]
with p = 0.182 and q = 3.34 is more suitable for covering a wider α range. In both formulae p corresponds to the limit α → 0 and one can put q = 0 for a rough estimate.

Considering the small differences between the biases for mean and median levelling, a detailed analysis of trimmed means is unnecessary. The bias lies between the values for mean and median—and this is sufficient for its estimation.
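The uncorrelated-data limit quoted above, E[δ̂²] = (π/2 − 1)σ²/N for the median of Gaussian data, is easy to check by simulation. A sketch with sample sizes and trial counts of our own choosing:

```python
import math
import random
import statistics

def mean_sq_delta(n, trials, rng):
    """Monte Carlo estimate of E[(median - mean)^2] for n i.i.d.
    standard Gaussian values (uncorrelated data only)."""
    acc = 0.0
    for _ in range(trials):
        z = [rng.gauss(0.0, 1.0) for _ in range(n)]
        acc += (statistics.median(z) - statistics.fmean(z)) ** 2
    return acc / trials

rng = random.Random(42)
n = 101                            # odd n gives an unambiguous median
est = mean_sq_delta(n, 20000, rng)
print(est, (math.pi / 2 - 1) / n)  # the two values should be close
```

For correlated data N would be replaced by α^(−D) as in (59); checking that numerically requires generating surfaces with a prescribed ACF, which is beyond this sketch.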
4. Bias of other quantities
So far, we only considered the bias of σ̂². It has a linear definition, making it suitable for averaging, and it often arises naturally in physical calculations, for instance in optics. Replacing σ̂² with σ̂²/(1 − β) corrects its bias. However, there are many other quantities characterizing the extent or variance of heights of rough surfaces. We will consider unsquared σ and average roughness, denoted Ra (whether in 1D or 2D). Finally, we will introduce a single symbol for σ² to avoid confusing notation: s = σ².

It might seem that if we correct ŝ by dividing by 1 − β then corrected σ̂ should simply use √(1 − β). Unfortunately, this is only true in the limit α → 0. For finite α, correcting by 1/√(1 − β) does not result in an unbiased estimate. The reason is that ŝ has non-zero dispersion (proportional to α^(D/2) [9]) and the square root is a non-linear transformation. Since the square root is concave, Jensen's inequality [21, 34] states that √ŝ underestimates σ when ŝ itself is unbiased.

The Taylor expansion of √s around E[s] gives an expression in terms of the n-th central moments µ_n[ŝ] [35]
\[
\mathrm{E}[\hat{\sigma}] = \sigma\left[1 - \frac{\mu_2[\hat{s}]}{8s^2} + \frac{1}{16}\,\frac{\mu_3[\hat{s}]}{s^3} - \frac{5\mu_4[\hat{s}]}{128\,s^4} + \dots\right]. \qquad (61)
\]
Although the values entering (6) are correlated, the law of large numbers still means ŝ will tend to the normal distribution (the dispersion of heights z is obviously finite). Therefore, we can estimate µ_{2k+1}[ŝ] ≈ 0 and µ_{2k}[ŝ] ≈ Var[ŝ]^k (2k − 1)!!. The variance Var[ŝ] depends on the dimension, ACF type and levelling method and is, of course, a function of α. The leading term is [9]
\[
\mathrm{Var}[\hat{s}] \sim a s^2 \alpha^D, \qquad (62)
\]
where a is a constant for the given ACF. In general, (62) is again a series in α. Together with (61), this again gives a series expression
\[
\mathrm{E}[\hat{\sigma}] = \sigma\left(1 - a_1 \alpha^D - a_2 \alpha^{2D} - \dots\right) \qquad (63)
\]
for the biased mean value of σ̂. The bias is again negative and can be corrected by dividing by the term in parentheses.

One consequence of relation (62) which needs to be emphasized is that there is a difference between averaging M independent profiles and M correlated image rows. The bias β is the same in both cases if row-wise processing is applied. However, the variance of ŝ is reduced by the factor 1/M for independent scan lines, whereas for the image it is reduced only by T/(M∆y), where ∆y is the vertical sampling step. So both the variance and the bias originating from (62) are larger for correlated image rows.

Nevertheless, the bias following from the variance is largest for single profiles, where no averaging reduces it. Its magnitude is illustrated in figure 5 for this 1D case and Gaussian and exponential ACF.
Even in this case it does not exceed 1–2 % for reasonable L/T ratios, although it is somewhat higher for the Gaussian ACF. For other cases the bias is negligible since it is smaller by at least another order of magnitude.

Figure 5. Results of simulation showing the bias due to non-linearity not captured by the error propagation rule (top) and the ratio of correction factors for Ra and σ (bottom). Dashed lines correspond to exponential ACF, full lines to Gaussian.

Concerning Ra, the ratio Ra/σ is a constant for any particular distribution of heights. Therefore, in the limit α → 0 the same correction factor can be used for Ra as for σ. Of course, the main point of this work is that α cannot be considered zero. The distribution of heights changes somewhat by levelling, so we must ask how much the correction factors change with increasing α.

The results of numerical calculations are plotted in figure 5. Fortunately, the ratios of correction factors for Rq and Ra are close to unity, even though they are much larger for Gaussian ACF than for exponential. In 2D the effect can probably be safely disregarded; in 1D it may be useful to consider it, depending on the ACF form.
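The square-root non-linearity behind Jensen's inequality and the first correction term of (61) can be illustrated with a toy simulation, where ŝ is drawn as an unbiased, normally distributed estimate of s (the numbers here are ours, chosen only for illustration):

```python
import math
import random

rng = random.Random(7)
s_true = 4.0        # "true" mean square roughness s = sigma^2 (illustrative)
rel_sd = 0.1        # relative dispersion of the unbiased estimate s-hat

trials = 200000
acc = 0.0
for _ in range(trials):
    s_hat = rng.gauss(s_true, rel_sd * s_true)  # unbiased estimate of s
    acc += math.sqrt(s_hat)
mean_sqrt = acc / trials

sigma = math.sqrt(s_true)
taylor = sigma * (1.0 - rel_sd ** 2 / 8.0)  # first correction term of (61)
print(mean_sqrt, sigma, taylor)
```

The average of √ŝ falls below σ even though ŝ itself is unbiased, and the shortfall is captured by the −Var[ŝ]/(8s²) term.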
5. Experimental example
A realistic example illustrating the impact of profile length on measured roughness quantities is shown in figure 6. It was obtained by measuring a set of long profiles of a surface roughness standard from Edmund Optics based on electroformed nickel plates representing different surface finishes. Measurements were done using the Nanomeasuring and Nanopositioning Machine NMM1 [36] from the SIOS company. Combined with a custom built AFM head (used in contact mode here with PPP-CONTR cantilevers), the instrument can be used for measurements over areas as large as a centimetre. The measured profiles were approximately 1.2 mm long, approximately 1000× longer than the estimated correlation length of 12 µm. Data corresponding to shorter evaluation lengths were then obtained by cutting short segments from these profiles.

Figure 6. Measured roughness and its standard deviation as a function of profile length for a reference roughness sample with Ra of 50 nm, showing the bias evolution for shorter profile lengths and its dependence on the levelling.

The dependency of measured mean square roughness σ on α = T/L is plotted in figure 6 for each polynomial degree from 0 to 5 and, for illustration, for median levelling as well (even though it does not satisfy the zero-mean assumption). Overall, the dependencies resemble the theoretical curves, as illustrated for instance in figure 2 for Gaussian ACF. The decrease of σ for the smallest α is an artefact caused by levelling of the long base profile. The longer profiles cut from it were already of comparable lengths; a longer base profile would be necessary for a stable result.

Figure 6 also illustrates the standard deviation of measured σ as a function of α. According to the asymptotic estimates [9] it should not depend on the levelling for small α. This is confirmed, as up to α ≈ 0.02 the curves are indistinguishable. Considering contributions to measurement uncertainty, this random part predominates, at least for a single evaluation. However, it can be reduced by evaluating roughness multiple times, which is anyway recommended. In contrast, the bias is unaffected by repeated measurement because it is inherently tied to T/L.
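The cutting procedure used in the experiment is easy to mimic with synthetic data. The sketch below (all names and parameters are ours) generates a profile with exponential ACF as a stationary AR(1) process, cuts it into non-overlapping segments, levels each by its mean value and shows how the measured rms roughness drops for short segments:

```python
import math
import random

def ar1_profile(n, T, rng):
    """Profile with exponential ACF exp(-x/T), x in samples:
    a stationary AR(1) process with rho = exp(-1/T), unit variance."""
    rho = math.exp(-1.0 / T)
    s = math.sqrt(1.0 - rho * rho)
    z = [rng.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        z.append(rho * z[-1] + s * rng.gauss(0.0, 1.0))
    return z

def rms_mean_levelled(profile, seg_len):
    """Average rms over non-overlapping segments, each levelled by
    subtracting its own mean value (degree-0 levelling)."""
    total = 0.0
    count = 0
    for start in range(0, len(profile) - seg_len + 1, seg_len):
        seg = profile[start:start + seg_len]
        mu = sum(seg) / seg_len
        total += sum((v - mu) ** 2 for v in seg) / seg_len
        count += 1
    return math.sqrt(total / count)

rng = random.Random(3)
prof = ar1_profile(200000, T=20.0, rng=rng)
long_rms = rms_mean_levelled(prof, 20000)   # alpha = 0.001: bias negligible
short_rms = rms_mean_levelled(prof, 100)    # alpha = 0.2: beta ~ 2*alpha
print(long_rms, short_rms)
```

The short-segment estimate comes out visibly below the long-segment one, in line with the leading 2α bias of the exponential ACF under mean-value levelling.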
6. Conclusion
Bias caused by limited measurement area is a universal and mostly unavoidable effect skewing measured roughness values. While the effect itself is well known (mostly for simple mean value subtraction), the present work quantifies it for the levelling procedures used in practice. The main results are summarized in the overview plots in figures 3 and 4. Given the ratio α of correlation length to scan line length, they allow obtaining the relative negative bias β of σ̂². For Ra, Sa, Rq or Sq the bias is approximately half this value.

Acknowledgements
This work was supported by the EURAMET joint research project 'Six degrees of freedom' funded from the European Union's Seventh Framework Programme, ERA-NET Plus, under Grant Agreement No. 217257, and by the Ministry of Education, Youth and Sports of the Czech Republic under the project CEITEC 2020 (LQ1601).
References

[1] Xiaoke He, Weixuan Jiao, Chuan Wang, and Weidong Cao. Influence of surface roughness on the pump performance based on computational fluid dynamics. IEEE Access, 7:105331–105341, 2019.
[2] Pedram Alipour, Davood Toghraie, Arash Karimipour, and Mehdi Hajian. Modeling different structures in perturbed Poiseuille flow in a nanochannel by using of molecular dynamics simulation: Study the equilibrium. Physica A: Statistical Mechanics and its Applications, 515:13–30, 2019.
[3] T Pravinraj and R Patrikar. Modeling and characterization of surface roughness effect on fluid flow in a polydimethylsiloxane microchannel using a fractal based lattice Boltzmann method.
AIP Advances, 8:065112, 2018.
[4] Marcus Trost and Sven Schröder. Roughness and scatter in optical coatings. In Miloslav Ohlídal and Olaf Stenzel, editors, Optical Characterization of Thin Solid Films, volume 64 of Springer Series in Surface Sciences, pages 377–405. Springer, Cham, 2018.
[5] G Macias, M Alba, L F Marsal, and A Mihi. Surface roughness boosts the SERS performance of imprinted plasmonic architectures. Journal of Materials Chemistry C, 4:3970–3975, 2016.
[6] F Tan, T Li, N Wang, S K Lai, C Chung Tsoi, W Yu, and X Zhang. Rough gold films as broadband absorbers for plasmonic enhancement of TiO2 photocurrent over 400–800 nm. Scientific Reports, 6:33049, 2016.
[7] Yanjun Shen, Yongzhi Wang, Yang Yang, Qiang Sun, Tao Luo, and Huan Zhang. Influence of surface roughness and hydrophilicity on bonding strength of concrete-rock interface.
Construction and Building Materials, 213:156–166, 2019.
[8] Gülistan Koçer, Jeroen ter Schiphorst, Matthew Hendrikx, Hailu G. Kassa, Philippe Leclère, Albertus P. H. J. Schenning, and Pascal Jonkheijm. Light-responsive hierarchically structured liquid crystal polymer networks for harnessing cell adhesion and migration. Advanced Materials, 29:1606407, 2017.
[9] Yiping Zhao, Gwo-Ching Wang, and Toh-Ming Lu. Characterization of Amorphous and Crystalline Rough Surface – Principles and Applications, volume 37 of Experimental Methods in the Physical Sciences. Academic Press, San Diego, 2000.
[10] ISO 4287:1997. Geometrical product specification (GPS). Surface texture. Profile method. Terms, definitions and surface texture parameters, 1997.
[11] ASME B46.1 (2009). Surface texture (surface roughness, waviness, lay). Am. Soc. Mech. Eng.
[12] ISO 25178:2012. Geometric product specifications (GPS) – surface texture: Areal, 2012.
[13] ISO 19606:2017. Fine ceramics (advanced ceramics, advanced technical ceramics) – test method for surface roughness of fine ceramic films by atomic force microscopy, 2017.
[14] A.-L. Barabási and H. E. Stanley.
Fractal concepts in surface growth. Cambridge University Press, New York, 1995.
[15] David Nečas and Ivan Ohlídal. Consolidated series for efficient calculation of the reflection and transmission in rough multilayers. Opt. Express, 22:4499–4515, 2014.
[16] Martin Čermák, Jiří Vohánka, Ivan Ohlídal, and Daniel Franta. Optical quantities of multi-layer systems with randomly rough boundaries calculated using the exact approach of the Rayleigh–Rice theory. J. Mod. Opt., 65:1720–1736, 2018.
[17] Petr Klapetek. Quantitative Data Processing in Scanning Probe Microscopy. Elsevier, 2nd edition, 2018.
[18] Christopher A. Brown, Hans N. Hansen, Xiang Jane Jiang, François Blateyron, Johan Berglund, Nicola Senin, Tomasz Bartkowiak, Barnali Dixon, Gaëtan Le Goc, Yann Quinsat, W. James Stemp, Mary Kathryn Thompson, Peter S. Ungar, and E. Hassan Zahouani. Multiscale analyses and characterizations of surface topographies. CIRP Annals, 67:839–862, 2018.
[19] ISO 25178-2:2012. Geometrical product specifications (GPS) – surface texture: Areal – Part 2: Terms, definitions and surface texture parameters, 2012.
[20] Theodore W. Anderson.
The Statistical Analysis of Time Series. Wiley series in probability and mathematical statistics. John Wiley & Sons, New York, 1971.
[21] Venkatarama Krishnan and Kavitha Chandra. Probability and Random Processes. Wiley, 2nd edition, 2015.
[22] George E. P. Box, Gwilym M. Jenkins, Gregory C. Reinsel, and Greta M. Ljung. Time Series Analysis: Forecasting and Control. Wiley, 5th edition, 2015.
[23] David Nečas and Petr Klapetek. One-dimensional autocorrelation and power spectrum density functions of irregular regions. Ultramicroscopy, 124:13–19, 2013.
[24] Daniel Zwillinger. Handbook of Integration. Jones and Bartlett Publishers, London, 1992.
[25] K. Schouterden, B. M. Lairson, and M. H. Azarian. Optimal filtering of scanning probe microscope images for wear analysis of smooth surfaces. J. Vac. Sci. Technol. B, 14:3445–3451, 1998.
[26] Daniel P. Fogarty, Amanda L. Deering, Song Guo, Zhongqing Wei, Natalie A. Kautz, and S. Alex Kandel. Minimizing image-processing artifacts in scanning tunneling microscopy using linear-regression fitting. Rev. Sci. Instrum., 77:126104, 2006.
[27] Alejandro Gimeno, Pablo Ares, Ignacio Horcas, Adriana Gil, José M. Gómez-Rodríguez, Jaime Colchero, and Julio Gómez-Herrero. 'Flatten plus': a recent implementation in WSxM for biological research.
Bioinformatics, 31:2918–2920, 2015.
[28] Ph. Dumas, B. Bouffakhreddine, C. Amra, O. Vatel, E. Andre, G. Galindo, and F. Salvan. Quantitative microroughness analysis down to the nanometer scale. Europhysics Letters, 22:717–722, 1993.
[29] Jaechoul Lee and Robert Lund. Revisiting simple linear regression with autocorrelated errors. Biometrika, 91:240–245, 2004.
[30] Ronald N. Bracewell. The Fourier Transform & Its Applications. McGraw-Hill, Singapore, 1999.
[31] Milton Abramowitz and Irene A. Stegun. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. National Bureau of Standards, Washington, 1964.
[32] Maxima, a computer algebra system, version 5.42.1. http://maxima.sourceforge.net/, 2019.
[33] Giorgio Franceschetti and Daniele Riccio. Scattering, Natural Surfaces, and Fractals. Academic Press, London, 2007.
[34] J. L. W. V. Jensen. Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta Mathematica, 30:175–193, 1906. In French.
[35] S. M. Kendall and A. Stuart. The Advanced Theory of Statistics, Vol. 1: Distribution Theory. C. Griffin and Co., London, 1977.
[36] E. Manske, T. Hausotte, R. Mastylo, T. Machleidt, K.-H. Franke, and G. Jäger. New applications of the nanopositioning and nanomeasuring machine by using advanced tactile and non-tactile probes.