Estimation of fractal dimension and fractal curvatures from digital images
Evgeny Spodarev
Ulm University, Institute of Stochastics, 89069 Ulm, Germany
Peter Straka
University of New South Wales, School of Mathematics and Statistics, Sydney NSW 2052, Australia
Steffen Winter ∗ Karlsruhe Institute of Technology, Department of Mathematics, Kaiserstr. 89, 76128 Karlsruhe, Germany
Abstract
Most of the known methods for estimating the fractal dimension of fractal sets are based on the evaluation of a single geometric characteristic, e.g. the volume of its parallel sets. We propose a method involving the evaluation of several geometric characteristics, namely all the intrinsic volumes (i.e. volume, surface area, Euler characteristic etc.) of the parallel sets of a fractal. Motivated by recent results on their limiting behaviour, we use these functionals to estimate the fractal dimension of sets from digital images. Simultaneously, we also obtain estimates of the fractal curvatures of these sets, a fractal counterpart of intrinsic volumes, allowing a finer classification of fractal sets than by means of fractal dimension only. We show the consistency of our estimators and test them on some digital images of self-similar sets.
Keywords: fractal curvatures, fractal dimension, sausage method, box counting method, lacunarity, dimension estimation, (non)linear regression, Minkowski dimension, Minkowski content
Primary 28A80; Secondary 28A75, 62J05, 62J02, 62M15, 68U10, 94A08
1. Introduction
For the classification of fractal sets, it is common to examine their fractal dimension. Two major algorithms for estimating fractal dimensions are well known and extensively used: the box counting algorithm and the sausage method, whose name is due to the use of the ε-parallel sets

F_ε := { x ∈ R^d : d(x, F) ≤ ε }, ε ≥ 0, (1)

for the approximation of a given fractal set F ⊂ R^d (see e.g. [45]). The latter approach is related to the Minkowski dimension dim_M F, which is the number s ≥ 0 such that

vol_d(F_ε) ∼ c · ε^(d−s), as ε ↘ 0, (2)

for some constant c > 0. (Here and throughout, ∼ means that the quotient of left and right hand side converges to 1, vol_d(·) is the d-dimensional Lebesgue measure and d(x, F) := inf_{y∈F} |x − y|, where |·| is the Euclidean norm in R^d.) The Minkowski dimension is known to be equivalent to the box counting dimension for any bounded set F ⊂ R^d. Thus both approaches estimate the same
∗ Corresponding author
Email addresses: [email protected] (Evgeny Spodarev), [email protected] (Peter Straka), [email protected] (Steffen Winter)
Preprint submitted to Elsevier August 28, 2014

mathematical object. Beside these two most popular algorithms, many other computation methods such as e.g. the local dimension method, and various refinements of the two basic methods are available, see e.g. [10, 31, 33, 42, 46, 17] as well as the books [45, 18] and the references therein.

It has been observed in many applications that often the fractal dimension alone is not sufficient to distinguish or classify different fractal structures, and additional texture parameters have been suggested. One of the simplest and most prominent is the lacunarity, suggested by Mandelbrot in [29, 30]. For a set F ⊂ R^d with Minkowski dimension dim_M F = s it is defined as the reciprocal of its s-dimensional Minkowski content M^s(F) := lim_{ε↘0} ε^(s−d) vol_d(F_ε). That is, the lacunarity is essentially one over the constant c in the relation (2). In the sausage method, the y-intercept of the regression line is a reasonable estimator for log c and therefore one gets the lacunarity almost for free together with the estimate for the dimension s. Lacunarity can be understood as a measure of how fast the space around the fractal is occupied when the parallel set grows. It is able to distinguish fractal structures of equal dimension. Algorithms for determining lacunarity and related texture parameters have been proposed and analyzed, e.g. in [3, 47, 4, 5], and used successfully in very different fields such as pattern recognition [4], signal processing [32], DNA classification [9], the analysis of aggregation clusters in statistical mechanics [41] and of breast tumors in medicine [43], to mention just a few recent studies. Despite these many positive examples, one cannot hope that a single texture parameter like lacunarity will always be able to successfully distinguish or classify fractal structures of a given class.
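As an illustration of the sausage method just described, the following sketch (a hypothetical minimal implementation, not the code used in the paper) estimates the dimension and the constant c from a binary image; the parallel volumes vol_2(F_ε) are obtained via a Euclidean distance transform, and the test image and radii are assumptions of the example.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def sausage_estimate(img, radii):
    """Sausage method: regress log vol_d(F_eps) on log eps.

    Returns (s_hat, c_hat), where s_hat = d - slope estimates the
    Minkowski dimension and c_hat = exp(intercept) estimates the
    constant c in (2), whose reciprocal is related to the lacunarity.
    """
    # Distance of every pixel to the set F (zero on F itself).
    dist = distance_transform_edt(img == 0)
    vols = np.array([(dist <= eps).sum() for eps in radii], dtype=float)
    slope, intercept = np.polyfit(np.log(radii), np.log(vols), 1)
    d = img.ndim
    return d - slope, np.exp(intercept)

# Toy example: a filled 64x64 square (dimension 2) in a 256x256 image.
img = np.zeros((256, 256), dtype=bool)
img[96:160, 96:160] = True
s_hat, c_hat = sausage_estimate(img, radii=[1, 2, 3, 4])
```

For this non-fractal toy set the slope of log vol_2(F_ε) against log ε is close to 0, so the estimate is close to the true dimension 2; discretization keeps it slightly below.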
It is easy to construct sets with very different texture and visual appearance but the same dimension and lacunarity, from which the need for more texture parameters is evident.

In the present work, we suggest a whole vector of texture parameters for fractal sets based on the recently introduced concept of fractal curvatures [48], and we propose some methods to estimate these parameters separately or simultaneously from given binary images of a fractal set. Our approach may be viewed as a generalization of the sausage method: given a bounded subset F ⊂ R^d, we look at the parallel sets F_ε of F for small radii ε > 0. While the sausage method considers the behaviour of the volume of these parallel sets only, we propose to study the asymptotic behaviour of all (or at least several) total curvatures C_0(F_ε), ..., C_d(F_ε) as ε ↘ 0. Total curvatures (or intrinsic volumes) are important geometric characteristics and are defined for different classes of bounded sets A ⊂ R^d, e.g. convex and polyconvex sets, sets of positive reach and their unions, etc. They have the following interpretations: C_d(A) is the d-dimensional volume, C_{d−1}(A) is essentially the surface area and C_0(A) is the Euler characteristic of A. The remaining functionals have interpretations as integrals of mean curvature. For a set A ⊂ R^2 this means for instance that C_2(A), C_1(A) and C_0(A) are (up to normalization) area, boundary length and Euler characteristic of A, respectively. The intrinsic volumes can be determined simultaneously from binary images using e.g. the algorithms described in [20].

The proposed methods are based on the following ideas and definitions from [48]: for a fractal set F ⊂ R^d with (Minkowski) dimension dim_M F = s, the k-th total curvature C_k(F_ε) behaves typically like ε^(k−s) as ε ↘
0. Often, for instance for non-arithmetic self-similar sets, one has direct proportionality, that is, the limit

C_k(F) := lim_{ε↘0} ε^(s−k) C_k(F_ε) (3)

exists and is then called the k-th fractal curvature of F. In case the limit in (3) does not exist, e.g. for the Sierpinski gasket, C_k(F_ε) may still be of the order ε^(k−s). Then one often observes oscillations in the geometry and hence in the total curvatures C_k(F_ε) which do not vanish as ε ↘ 0. Instead C_k(F_ε) is asymptotic to some periodic function. This is the typical behaviour for instance for arithmetic self-similar sets. In this case one has C_k(F_ε) = Θ(ε^(k−s)) as ε ↘ 0, that is, the quotient |C_k(F_ε)| / ε^(k−s) is bounded from above and below by some constants. Moreover, the following average limit typically exists:

C̄_k(F) := lim_{δ↘0} (1/|log δ|) ∫_δ^1 ε^(s−k) C_k(F_ε) dε/ε, (4)

which is then called the k-th average fractal curvature of the set F. For k = d, the definitions (3) and (4) are just the well known Minkowski content and its averaged counterpart, which are thus naturally included in the framework of (average) fractal curvatures. We refer to Section 2 and the references therein for more details and results on fractal curvatures.

Based on these ideas, we propose two methods for estimating simultaneously the fractal dimension and the (average) fractal curvatures of a given set F ⊂ R^d from its digital approximations. The first method is based on a multivariate linear regression and tries to estimate simultaneously all (or at least several) fractal curvatures in (3) together with the dimension. This attempt only makes sense under the assumption that for F all the fractal curvatures exist which are to be included in the regression. Since in many situations this assumption is not satisfied, even for self-similar sets, we propose a second method, which tries to estimate averaged fractal curvatures instead. The second method is a more sophisticated quasi-linear regression inspired by a time series approach, with a linear drift and a truncated Fourier series as a seasonal part to model the oscillations in the geometry. It allows one to estimate the average fractal curvatures even when the limits in (3) do not exist. For this approach to be meaningful, the assumption on the existence of fractal curvatures is replaced by the much weaker assumption that the expression ε^(s−k) C_k(F_ε) is asymptotic to some (multiplicatively) periodic function p_k(ε) as ε ↘
0. This is what is typically observed in situations where some kind of self-similarity is present.

Roughly speaking, the estimation procedure in the first method is as follows: given a binary image of a fractal set F ⊂ R^d, we first measure the values of C_k(F_{ε_j}) for a set of dilation radii {ε_1, ..., ε_n} and all k ∈ {0, ..., d}. In this step, we employ the algorithm described in [25] and [20], which allows for a simultaneous computation of all intrinsic volumes in only one scan of each set F_{ε_j}. Second, we use the (d + 1) asymptotic relations

C_k(F_ε) ∼ C_k(F) ε^(k−s), as ε ↘ 0, (5)

implied by (3) for a linear regression. Multiplying by ε^(−k) and taking logarithms of the absolute values on both sides in (5), we get the relation log(ε^(−k) |C_k(F_ε)|) ∼ β_k − s log ε as ε ↘ 0, where β_k := log |C_k(F)|, which suggests to compare the expression log(ε^(−k) |C_k(F_ε)|) to the line β_k + sx in the variable x := −log ε. Similarly, by combining all the data, the set of vectors

{ (log |C_0(F_{ε_j})|, log(ε_j^(−1) |C_1(F_{ε_j})|), ..., log(ε_j^(−d) |C_d(F_{ε_j})|)) }_{j=1,...,n}

plotted against x_j = −log ε_j provides a point cloud in R^(d+2) that resembles a line, and a least squares fit will result in an estimate of the dimension s = dim_M F of the fractal set F, as well as of its fractal curvatures C_k(F), k = 0, ..., d. Deviations from the line are due to image discretization, measurement errors and the described geometric oscillations of the intrinsic volumes that may only vanish as ε ↘ 0. These errors are supposed to be random. They cannot be observed directly. Notice that this is the only source of randomness in this method, since the set F is deterministic. Due to these assumptions, statistical regression methodology can be used.

In the second method, the linear regression step is replaced by a quasi-linear regression, which can be interpreted as fitting a periodic function to the above point cloud. For details of both methods we refer to Section 3.

Under suitable assumptions on the covariance structure of the error in our models and for suitable choices of the radii ε_j we prove the weak consistency of our estimators as the number of observations n tends to ∞. Furthermore, we have implemented the algorithms and tested them on a number of self-similar sets.

The paper is organized as follows: In Section 2, some notions from fractal geometry are recalled and the relevant results on curvature measures and fractal curvatures are reviewed. In Section 3, we introduce the two methods for estimating the fractal dimension s(F) and the fractal curvatures C_k(F), k = 0, ..., d, of a fractal set F and discuss their asymptotic properties, based on a suitable model for the discretization errors. Section 4 is concerned with the implementation of the methods and some simulation results: for some self-similar sets in R^2 the fractal dimension and the fractal curvatures are estimated using the proposed methods and the results are compared to the (known) exact dimension and fractal curvatures as well as to estimates of the dimension provided by conventional methods.

2. Fractal dimension and fractal curvatures

In this section we provide some theoretical background required for the justification of our approach. First we recall a few facts on fractal dimensions and self-similar sets. Then we discuss curvature measures and review some recent results on fractal curvatures.
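For concreteness, the box counting algorithm recalled in the introduction can be sketched as follows (a hypothetical minimal implementation for binary images; the test image and grid sizes are assumptions of the example, not part of the paper):

```python
import numpy as np

def box_counting_dimension(img, box_sizes):
    """Estimate dim_B F: regress log N_delta(F) on log(1/delta)."""
    n = img.shape[0]
    counts = []
    for delta in box_sizes:
        # Partition the image into delta x delta boxes and count the
        # boxes that intersect the set (contain a foreground pixel).
        blocks = img[: n - n % delta, : n - n % delta].reshape(
            (n // delta, delta, n // delta, delta))
        counts.append(blocks.any(axis=(1, 3)).sum())
    # Slope of log N_delta versus log(1/delta) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)),
                          np.log(counts), 1)
    return slope

# Toy check: a straight line of pixels should have dimension about 1.
img = np.zeros((256, 256), dtype=bool)
img[128, :] = True
dim = box_counting_dimension(img, box_sizes=[1, 2, 4, 8, 16])
```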
Box counting dimension and Minkowski dimension..
For a bounded set F ⊂ R^d and ε > 0 consider the ε-parallel set F_ε from (1). The number

dim_M F := d − lim_{ε↘0} (log vol_d(F_ε)) / (log ε)

is called the Minkowski dimension of F, provided the limit exists (cf. [12]). It is well known that the Minkowski dimension of any set F coincides with its box dimension dim_B F (provided one of these numbers exists), which is defined by

dim_B F := lim_{δ↘0} (log N_δ(F)) / (−log δ).

Here N_δ(F) is the number of boxes in a δ-grid of R^d that intersect F. Moreover, dim_M F is always an upper bound for the Hausdorff dimension dim_H F of F. See e.g. [12] for more details on fractal dimensions and their properties and interrelations. Self-similar sets..
Let S_i : R^d → R^d, i = 1, ..., N, be contracting similarities. Denote the contraction ratio of S_i by r_i ∈ (0, 1). It is well known that for each system {S_1, ..., S_N} of similarities there exists a unique, non-empty, compact subset F of R^d satisfying the invariance relation S(F) = F, where S is the set mapping defined by

S(A) = ∪_{i=1}^N S_i(A), A ⊆ R^d.

F is called the self-similar set generated by the system {S_1, ..., S_N}. Moreover, the unique solution s of Σ_{i=1}^N r_i^s = 1 is called the similarity dimension of F.

The system {S_1, ..., S_N} is said to satisfy the open set condition (OSC) if there exists an open, non-empty, bounded subset O ⊂ R^d such that S_i O ⊆ O for i = 1, ..., N and S_i O ∩ S_j O = ∅ for all i ≠ j. The OSC ensures that the images S_1 F, ..., S_N F of F do not overlap too much. It is well known that for self-similar sets F satisfying OSC, all the different dimensions coincide, i.e. one has dim_M F = dim_H F = s, where s is the similarity dimension of F. For sets not satisfying OSC much less is known; still s is an upper bound for Hausdorff and Minkowski dimension, but these two may be strictly smaller than s and, furthermore, they might differ. In the sequel we shall assume that the self-similar sets satisfy OSC.

Let h > 0. A finite set of positive numbers {y_1, ..., y_N} is called h-arithmetic if h is the largest number such that y_i ∈ hZ for i = 1, ..., N. If no such number h exists for {y_1, ..., y_N}, the set is called non-arithmetic. We attribute these properties to the system {S_1, ..., S_N} or to F if the set {−log r_1, ..., −log r_N} has them. In this sense, each self-similar set F is either h-arithmetic for some h > 0 or non-arithmetic.

Curvature measures and intrinsic volumes.. We first recall the notion of curvature measures for polyconvex sets, as this class of sets includes the parallel sets of digitized sets and hence is a sufficiently general setting for the implementation of our algorithms. For a general definition of fractal curvatures we briefly discuss curvature measures on more general classes of sets in the next paragraph.

Let K^d denote the class of compact, convex subsets of R^d and R^d the class of sets that can be represented as finite unions of sets in K^d. R^d is called the convex ring, its elements are polyconvex sets. For sets K ∈ K^d, the volume vol_d of the ε-parallel sets of K is given by the so-called Steiner formula. For ε ≥
0, vol_d(K_ε) is a polynomial in ε:

vol_d(K_ε) = Σ_{k=0}^d ε^(d−k) κ_{d−k} C_k(K). (6)

The coefficients C_0(K), ..., C_d(K) are called the intrinsic volumes or total curvatures of K; κ_j denotes the j-dimensional volume of the unit ball in R^j.

The total curvatures are the total masses of certain measures called the curvature measures of K. They satisfy a local Steiner formula which is due to Federer [13]. Let p_K denote the metric projection onto the set K. For any Borel set A ⊂ R^d, the volume of the set K_ε ∩ p_K^(−1)(A) is again a polynomial in ε:

vol_d(K_ε ∩ p_K^(−1)(A)) = Σ_{k=0}^d ε^(d−k) κ_{d−k} C_k(K, A). (7)

The coefficients C_0(K, ·), ..., C_d(K, ·) are measures in the second argument. They are called the curvature measures of K. Their total masses are the total curvatures, C_k(K, R^d) = C_k(K). Curvature measures are additive, i.e. if M, K and M ∪ K ∈ K^d, then M ∩ K ∈ K^d and the curvature measures satisfy

C_k(M ∪ K, ·) = C_k(M, ·) + C_k(K, ·) − C_k(M ∩ K, ·), k = 0, ..., d.

This allows to extend curvature measures additively to R^d. Iterating the above formula gives the inclusion-exclusion formula for sets K_1, ..., K_m ∈ K^d such that their union K := K_1 ∪ ... ∪ K_m is in K^d:

C_k(K, ·) = Σ_{∅ ≠ I ⊂ {1,...,m}} (−1)^(|I|−1) C_k(∩_{i∈I} K_i, ·), k = 0, ..., d. (8)

Here |I| denotes the cardinality of the set I. Now, if the left hand side is not defined, i.e. if K is in R^d but not in K^d, take the right hand side as its definition. Groemer [19] has shown that this extension is well defined, i.e. that the left hand side is independent of the chosen representation of K by convex sets. In general, for K ∈ R^d the curvature measure C_k(K, ·) is a signed measure, k = 0, ..., d − 2. Denote by C_k^var(K, ·) its total variation measure and put C_k^var(K) := C_k^var(K, R^d), k = 0, ..., d. Sets with positive reach..
For X ⊂ R^d, let Unp(X) be the set of points y ∈ R^d which have a unique nearest point in X. Unp(X) consists of those points for which the metric projection onto X is well defined. The supremum over all radii ε ≥ 0 such that X_ε ⊂ Unp(X) is called the reach of X, reach(X), and X is said to have positive reach if reach(X) > 0. For any set K with positive reach, the local Steiner formula (7) holds for all ε such that 0 < ε < reach(K), which allows to define the curvature measures C_0(K, ·), ..., C_d(K, ·) of K just as before, see [13]. These curvature measures have similar properties; in particular they are additive, motion invariant and homogeneous of degree k (meaning C_k(rK) = r^k C_k(K) for r > 0).

For a compact set K ⊂ R^d, a radius ε > 0 is called regular, if ε is a regular value of the distance function of K in the sense of Morse theory, see [15]. According to [15], for d ≤ 3 and K ⊂ R^d a compact set, almost all ε > 0 are regular for K. Regularity of a radius ε for K implies that the boundary of K_ε is a Lipschitz manifold and the closed complement K̃_ε of K_ε has positive reach. Therefore, the curvature measures of K̃_ε are well defined in the sense of Federer, and the curvature measures of K_ε are then given by means of the following reflection principle (see [39]):

C_k(K_ε, ·) = (−1)^(d−k−1) C_k(K̃_ε, ·), k = 0, ..., d − 1.

As before, we denote by C_k(K_ε) := C_k(K_ε, R^d) the total masses of the measures C_k(K_ε, ·), which are also called the Lipschitz-Killing curvature measures of K_ε. We continue to use the term total curvatures for the C_k(K_ε). Recall that C_k^var(K_ε, ·) is the total variation measure of the (in general signed) measure C_k(K_ε, ·) and C_k^var(K_ε) its total mass. Scaling exponents and fractal curvatures..
For the definition of fractal curvatures for a set, it is necessary that sufficiently many of its close parallel sets admit curvature measures. For a compact set F ⊂ R^d, we assume that almost all ε > 0 are regular for F. (Recall that for d ≤ 3, this is always satisfied.) Then, for each k ∈ {0, ..., d}, the k-th curvature scaling exponent s_k = s_k(F) of the set F is defined by

s_k(F) := inf { t ∈ R : esslim_{ε↘0} ε^t C_k^var(F_ε) = 0 }, (9)

cf. e.g. [48, p. 13] or [34, eq. (1.5)]. The typical value of s_k(F) for a fractal set F of dimension dim_M F = s is s_k = s − k. Although this relation may fail for certain sets F, it is useful to concentrate on the following essential limit (avoiding the irregular ε) and call it the k-th fractal curvature of F in case it exists:

C_k(F) := esslim_{ε↘0} ε^(s−k) C_k(F_ε). (10)

In general, the limit in (10) does not exist. Even for self-similar sets, it often fails to exist. Therefore, the following Cesaro averaged version of the limit is considered, which has a better convergence behaviour. For k = 0, ..., d, the k-th average fractal curvature of F is the number

C̄_k(F) := lim_{δ↘0} (1/|log δ|) ∫_δ^1 ε^(s−k) C_k(F_ε) dε/ε, (11)

provided this limit exists. Note that if C_k(F) exists, then C̄_k(F) exists as well and both numbers coincide. The functionals C_k(F) and C̄_k(F) deserve to be called curvatures, since they share some of the desirable properties of total curvatures. In particular, they are motion-invariant and homogeneous, though in general C_k is of degree k + s_k, cf. [48]. As fractal curvatures are limits of classical total curvatures, they are expected to carry important geometric information about the fractal set F and therefore they are natural candidates to be considered as geometric indices or texture parameters. Remark 2.1.
The definition of the fractal curvatures here is slightly different to the one given in [48], where the exponent s_k is put in general instead of s − k. This slightly changed point of view (taken up e.g. in [50] and [49]) emphasizes the generic case. It may give a zero for some fractal curvature in the exceptional cases where the exponent s − k is not optimal. However, recent results in [34] show that these exceptional cases are rare and can be classified completely at least in R^1 and R^2. In general, one could define for each t ∈ R the t-dimensional k-th fractal curvature of a set F by lim_{ε↘0} ε^t C_k(F_ε) (just in the same way as for the t-dimensional Minkowski content of F). Then the two different definitions would be special cases (which often coincide) of this general notion. A similar remark applies to average fractal curvatures.

The fractal curvatures of self-similar sets.. In general it is difficult to determine the fractal dimension or the Minkowski content of a set exactly, and it is even more difficult for fractal curvatures. However, for self-similar sets some rigorous results have been established, which exemplify that the above definitions are reasonable and useful. So let now F be a self-similar set satisfying OSC. In [48], self-similar sets with polyconvex parallel sets have been considered. Polyconvexity is a rather restrictive condition, which ensures however that curvature measures are well defined for all parallel sets F_ε of F. Note also that polyconvexity is easy to check, as the following criterion holds, see [27]: F_ε is polyconvex for all ε > 0 if and only if F_ε is polyconvex for some ε > 0.

For such sets it was shown that for k = 0, ..., d, the expression ε^(s−k) C_k^var(F_ε) is uniformly bounded as ε ↘ 0, implying in particular that s − k is a general upper bound for the k-th scaling exponent s_k, that is, we have

s_k ≤ s − k. (12)

Moreover, a characterization was given of when (average) fractal curvatures exist for these sets:

Theorem 2.2. [48, Theorem 2.3.6 and Remark 4.1.5] Let F ⊂ R^d be a self-similar set generated by the system {S_1, ..., S_N} with contraction ratios r_i and similarity dimension s. Suppose that F satisfies the OSC and has polyconvex parallel sets. Then, for each k ∈ {0, ..., d}, there exists a bounded function p_k : (0, 1] → R such that ε^(s−k) C_k(F_ε) ∼ p_k(ε) as ε ↘ 0. Moreover, the following holds:

(i) If F is h-arithmetic (for some h > 0), then p_k can be chosen multiplicatively periodic with period h, i.e. p_k(hε) = p_k(ε), and C̄_k(F) exists.

(ii) If F is non-arithmetic, then p_k can be chosen constant, and C_k(F) exists.

The value of C̄_k(F) (and in the non-arithmetic case of C_k(F)) is given by the integral

(1/η) ∫_0^1 ε^(s−k−1) ( C_k(F_ε) − Σ_{i=1}^N 1_{(0, r_i]}(ε) C_k((S_i F)_ε) ) dε, (13)

where η = −Σ_{i=1}^N r_i^s log r_i.

Note that the last formula allows explicit (but rather tedious) calculations of C̄_k(F). These exact values will be compared with the values estimated from binary images of some fractals F in Section 4. It follows from [27, Lemma 3.2] that for an h-arithmetic self-similar set F the value of C̄_k(F) is equivalently given by

C̄_k(F) = (1/h̃) ∫_0^h̃ p_k(e^(−x)) dx, (14)

where h̃ = −log h.

Theorem 2.2 extends to a more general class of self-similar sets. The polyconvexity can be replaced by the weaker regularity assumption mentioned above, that almost all ε are regular for F. In this general situation, one needs to assume additionally that a rather technical curvature bound is satisfied. We refer to [50, 40] for further details.
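The classical Steiner formula (6), on which these notions are built, can be illustrated numerically. For a disk of radius r in the plane, C_2 = πr² (area), C_1 = πr (half the perimeter) and C_0 = 1, and (6) reproduces vol_2(K_ε) = π(r + ε)². The following sketch checks this (an illustration only; the disk values are standard facts, not taken from the paper):

```python
import math

def steiner_volume_2d(C0, C1, C2, eps):
    """vol_2(K_eps) via the Steiner formula (6) for d = 2.

    kappa_j is the volume of the j-dimensional unit ball:
    kappa_0 = 1, kappa_1 = 2, kappa_2 = pi.
    """
    kappa = [1.0, 2.0, math.pi]
    # sum over k of eps^(d-k) * kappa_{d-k} * C_k
    return (eps ** 2 * kappa[2] * C0
            + eps ** 1 * kappa[1] * C1
            + eps ** 0 * kappa[0] * C2)

# Disk of radius r: K_eps is a disk of radius r + eps.
r, eps = 2.0, 0.5
lhs = math.pi * (r + eps) ** 2
rhs = steiner_volume_2d(1.0, math.pi * r, math.pi * r ** 2, eps)
```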
In [50], analogous results have been established for random self-similar sets and in [26, 8] for self-conformal sets. For the cases k = d and k = d − 1 and arbitrary F ⊂ R^d, the assertions of Theorem 2.2 hold for any self-similar set satisfying OSC regardless of any polyconvexity or regularity assumption, see [16] for the case k = d and [37] for the case k = d − 1. In these two cases it has also been shown that C̄_k(F) (as well as C_k(F), if it exists) are strictly positive. Moreover, some deep connections between C_{d−1}(F) and the Minkowski content C_d(F) have been established in [37]. In particular, for any self-similar set F ⊂ R^d (with OSC) one has the equality s_{d−1} = s_d + 1 = s − (d − 1), provided s < d. Moreover, C_{d−1}(F) and C_d(F) coincide up to some normalisation constant:

C_{d−1}(F) = ((d − s)/2) C_d(F). (15)

The same relation holds for the average counterparts, see [37, Theorems 4.5 and 4.7]. In [38] it is shown that the relation (15) holds in fact for arbitrary bounded sets F ⊂ R^d with dim_M F < d, that is, whenever one of these two fractal curvatures exists (as a positive and finite number) then the other one exists as well and equation (15) holds.
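As a small numerical illustration of the notions above (an illustrative sketch, not code from the paper), the similarity dimension s can be computed by solving the Moran equation Σ_{i=1}^N r_i^s = 1. For the Sierpinski gasket (N = 3, r_i = 1/2) this gives s = log 3 / log 2:

```python
import math

def similarity_dimension(ratios, tol=1e-12):
    """Solve sum(r_i^s) = 1 for s by bisection.

    The left-hand side is strictly decreasing in s (all r_i < 1),
    so bisection on a bracketing interval converges.
    """
    f = lambda s: sum(r ** s for r in ratios) - 1.0
    lo, hi = 0.0, 1.0
    while f(hi) > 0:          # enlarge until sum(r_i^hi) < 1
        hi *= 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sierpinski gasket: three similarities with ratio 1/2.
s = similarity_dimension([0.5, 0.5, 0.5])
```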
3. Estimators of dimension and fractal curvatures
Let F ⊂ R^d be a fractal set satisfying the following assumptions:

(A1) The parallel sets F_ε of F are sufficiently regular for curvature measures C_0(F_ε, ·), ..., C_d(F_ε, ·) to be well defined for almost all ε > 0.

(A2) For each k = 0, ..., d, the expression C_k(F_ε) (as a function of ε) is either strictly positive or strictly negative.

(A3) The fractal curvatures C_0(F), ..., C_d(F) exist, i.e., for each k = 0, ..., d, the essential limit in (10) exists.

Assumption (A1) is a condition on the regularity of the parallel sets. The notion of fractal curvature does not make sense for F if the curvature measures of its parallel sets are not defined in some way. However, most sets that one can think of satisfy this assumption. As outlined in Section 2, in R^d, d ≤ 3, almost all ε > 0 are regular for any compact set F and therefore this condition is satisfied. In higher dimensions the condition may fail, but the construction of counterexamples is difficult. Therefore, from the point of view of applications, assumption (A1) imposes no restriction. Note in particular that the parallel sets of the digitized fractal images are always polyconvex, such that curvature measures are well defined for all ε > 0.

Single indices k can simply be excluded from the estimation when C_k(F_ε) fail to satisfy condition (A2), see Remark 3.1 below. We point out that this assumption is always satisfied for the two uppermost indices d − 1 and d: for any set F ⊂ R^d and any ε > 0, volume C_d(F_ε) and surface area C_{d−1}(F_ε) are always strictly positive. As the curvature measures C_k(F_ε, ·), k ≤ d − 2, are in general signed measures, the total curvatures C_k(F_ε) may change sign as ε ↘ 0, and (A2) is a genuine restriction for these indices.

Assumption (A3) implies that the k-th scaling exponent of F is bounded from above by s − k, compare Section 2, where it is also outlined that a rigorous mathematical proof of the existence of fractal curvatures has up to now only been given for non-arithmetic self-similar sets. Later on we shall replace (A3) by the weaker assumption (A3'), which ensures that the average fractal curvatures exist, a setting which includes in particular all self-similar sets. First method: Ordinary least squares..
Assumption (A3) implies that the asymptotic relation (5) holds for k = 0, ..., d, which (after taking absolute values, multiplying by ε^(−k) and taking logs) can be written as

log(ε^(−k) |C_k(F_ε)|) ∼ log |C_k(F)| − s log ε, as ε ↘ 0, (16)

for each k = 0, ..., d. This allows for a linear regression. Given a set of ε-values ε_1, ..., ε_n, we set

x_j := −log ε_j, (17)

y_j^k := log(ε_j^(−k) |C_k(F_{ε_j})|), k = 0, ..., d. (18)

Relation (16) suggests that if the radii ε_j are small enough, then the points (y_j^0, y_j^1, ..., y_j^d) ∈ R^(d+1), j = 1, ..., n, will lie close to a line (see Figure 1). Setting β_k := log |C_k(F)|, k = 0, ..., d, we expect

(y_j^0, y_j^1, ..., y_j^d)^T = (β_0, β_1, ..., β_d)^T + s x_j (1, ..., 1)^T + (δ_j^0, δ_j^1, ..., δ_j^d)^T, j = 1, ..., n,

whereas the discretisation and computation errors {δ_j^k : k = 0, ..., d, j = 1, ..., n} are correlated random variables with E δ_j^k = 0 and var δ_j^k ∈ (0, ∞) for all k = 0, ..., d and j = 1, ..., n. Notice that we may assume δ_j^k to be random since a fractal F and its parallel sets can be digitized in many different ways. To do that, it suffices to move F with respect to the digitization lattice of pixels (voxels) arbitrarily. The assumption E δ_j^k = 0 reflects the fact that fractal curvatures and dimension are motion invariant. The errors are correlated because parallel sets are monotone increasing with respect to inclusion: F_{ε_i} ⊂ F_{ε_j} if ε_i < ε_j.
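This common-slope least squares fit can be sketched in a few lines (illustrative code with synthetic noiseless data; a hypothetical minimal version, not the authors' implementation). Minimizing the squared deviations from the line with one shared slope gives the pooled estimator used below:

```python
import numpy as np

def common_slope_ols(x, y):
    """Fit y[k, j] = beta[k] + s * x[j] with one slope s shared by
    all rows k, by ordinary least squares.

    Returns (s_hat, beta_hat) with beta_hat[k] = mean(y[k]) - s_hat * mean(x).
    """
    xbar = x.mean()
    # Pooled covariance over all rows, divided by the number of rows
    # times the sum of squared deviations of x.
    s_hat = (y * (x - xbar)).sum() / (y.shape[0] * ((x - xbar) ** 2).sum())
    beta_hat = y.mean(axis=1) - xbar * s_hat
    return s_hat, beta_hat

# Synthetic data: d + 1 = 3 intercepts, true common slope s = 1.5.
x = np.linspace(1.0, 4.0, 20)            # x_j = -log(eps_j)
beta_true = np.array([0.2, -0.3, 1.0])
y = beta_true[:, None] + 1.5 * x[None, :]
s_hat, beta_hat = common_slope_ols(x, y)
```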
[Figure 1: The point cloud {(y_j^0, y_j^1, y_j^2)}_{j=1}^n for a (2-dimensional) 3000 × … image.]

To estimate the fractal dimension s and the quantities β_k, k = 0, ..., d (which encode the fractal curvatures), we fit a line of the form

y = (β_0, β_1, ..., β_d)^T + s x (1, ..., 1)^T
to the point cloud {(y_j^0, y_j^1, ..., y_j^d)}_{j=1}^n by the ordinary least squares method. That is, we find the values of s and (β_0, ..., β_d) for which the expression

e_n(β_0, ..., β_d, s) := (1/((d + 1)n)) Σ_{k=0}^d Σ_{j=1}^n (y_j^k − (β_k + s x_j))² (19)

is minimal. Standard least-squares calculations show that these uniquely defined values are

ŝ^(n) = Σ_{k=0}^d Σ_{j=1}^n y_j^k (x_j − x̄_n) / ((d + 1) n S_n²), (20)

β̂_k^(n) = ȳ_k^n − x̄_n ŝ^(n), k = 0, ..., d, (21)

where x̄_n := n^(−1) Σ_{j=1}^n x_j, S_n² := n^(−1) Σ_{j=1}^n (x_j − x̄_n)² and ȳ_k^n := n^(−1) Σ_{j=1}^n y_j^k. We propose ŝ^(n) as an estimator of the fractal dimension s and exp(β̂_k^(n)) as an estimator of |C_k(F)|, k = 0, ..., d. Remark 3.1.
If for some indices k ∈ {0, ..., d} assumption (A2) is not satisfied, one can simply abandon these indices and consider the least-squares estimators based on the remaining data. In general, for a subset J ⊆ {0, ..., d} of indices, one can consider the estimators

ŝ^(J,n) = Σ_{k∈J} Σ_{j=1}^n y_j^k (x_j − x̄_n) / (|J| n S_n²), (22)

β̂_k^(J,n) = ȳ_k^n − x̄_n ŝ^(J,n), k ∈ J, (23)

which minimize the sum

(1/(|J| n)) Σ_{k∈J} Σ_{j=1}^n (y_j^k − (β_k + s x_j))². (24)

Here |J| denotes the cardinality of the finite set J. If J = {d}, i.e. if only the volume C_d(F_ε) is considered, the estimators specialize to those provided by the sausage method. Hence the sausage method is included as a special case in our considerations. We use the notation ŝ^({d},n) and M̂^(n) := exp(β̂_d^({d},n)) for the sausage method estimators of the dimension s and the Minkowski content M(F) = C_d(F) of the set F.

Second method: Quasi-linear regression.. We assume now that F ⊂ R^d is a set satisfying assumptions (A1) and (A2) but not necessarily (A3). Instead, we assume the following:

(A3') For some h ∈ (0, 1) and each k = 0, ..., d, one has the asymptotic relation

ε^(s−k) C_k(F_ε) ∼ p_k(ε) as ε ↘ 0, (25)

where p_k : (0, ∞) → R is a bounded, (multiplicatively) periodic function with period h.

This assumption is on the one hand motivated by the known results for arithmetic self-similar sets, where the existence of such a periodic function was shown. On the other hand, condition (A3') ensures the existence of the average fractal curvatures as defined in (11). Indeed, C̄_k(F) is given in terms of the function p_k by (14).

We point out that assumption (A3') is strictly weaker than (A3). For any set F satisfying (A3), the relations (25) hold for the constant functions p_k ≡ C_k(F) (and any h > 0).

Given a set of ε-values ε_1, ..., ε_n, recall the notation x_j = −log ε_j and y_j^k = log(ε_j^(−k) |C_k(F_{ε_j})|), k = 0, ..., d, from (17) and (18). The relation (25) suggests that if the radii ε_j are sufficiently small, then the points (y_j^0, y_j^1, ..., y_j^d), j = 1, ..., n, will lie close to the graph of a function which is the sum of a linear and a periodic function (see Figure 1):

(y_j^0, ..., y_j^d)^T ≈ s x_j (1, ..., 1)^T + (log |p_0(ε_j)|, log |p_1(ε_j)|, ..., log |p_d(ε_j)|)^T, j = 1, ..., n.

For k = 0, ..., d, let the functions g_k : R → (0, ∞) and f_k : R → R be given by

g_k(x) = |p_k(e^(−x))| and f_k(x) = log g_k(x) − β_k, (26)

where

β_k := (1/h̃) ∫_0^h̃ log g_k(x) dx.

Note that the multiplicative periodicity of p_k (with period h = e^(−h̃)) implies that f_k (as well as g_k) is additively periodic with period h̃ >
$0$. The reason to subtract $\beta_k$ in (26) is to center $f_k$, that is, to have $\int_0^h f_k(x)\,dx = 0$. Observe that, in case assumption (A3) is satisfied, that is, when $p_k$ is a constant function, $\beta_k$ has the same meaning as before in the first method.

It is plausible to expect the following regression structure:
$$y_{ki} = \beta_k + s \cdot x_i + f_k(x_i) + \delta_{ki}, \quad i = 1, \dots, n, \qquad (27)$$
where $T_k(x) := \beta_k + s \cdot x$ is the polynomial part and $f_k(x)$ is the seasonal part of the above time series, whereas the errors $\delta_{kj}$ are assumed to be correlated random variables with $\mathbb{E}\,\delta_{kj} = 0$ and $\operatorname{var}\delta_{kj} \in (0,\infty)$ for $k = 0, \dots, d$ and $j = 1, \dots, n$. The seasonal part $f_k$ is assumed to be the finite Fourier series
$$f_k(x) = \sum_{j=1}^{m} \bigl(\tilde a_{kj} \cos(2\pi j x/h) + \tilde b_{kj} \sin(2\pi j x/h)\bigr)$$
for some $m \in \mathbb{N}$. Standard operations with trigonometric functions allow us to write $f_k$ in the form
$$f_k(x) = \sum_{j=1}^{m} b_{kj} \cos(\mu_j x + \varphi_{kj})$$
with $\mu_j = 2\pi j/h$ and some $b_{kj}, \varphi_{kj} \in \mathbb{R}$.

First we assume that the number $m$ and the period $h$ (and thus all $\mu_j$) are known. Standard methods of time series analysis (cf. e.g. [28, Chapter 9]) can be used to design the least squares estimators $\hat\beta_k^{(n)}, \hat s_k^{(n)}, \hat b_{kj}^{(n)}, \hat\varphi_{kj}^{(n)}$ of $\beta_k, s, b_{kj}, \varphi_{kj}$ so that
$$\hat f_k^{(n)}(x) = \sum_{j=1}^{m} \hat b_{kj}^{(n)} \cos\bigl(\mu_j x + \hat\varphi_{kj}^{(n)}\bigr) \qquad (28)$$
is an estimator of $f_k(x)$. Namely, regression (27) is interpreted as a linear regression with respect to the parameters $\beta_k$, $s$, $b_{kj}\cos\varphi_{kj}$, $b_{kj}\sin\varphi_{kj}$, and those are estimated by ordinary least squares. By the relation (14) and assumption (A2), we have
$$|\bar C_k(F)| = \exp\{\beta_k\}\, \frac{1}{h} \int_0^h \exp\{f_k(x)\}\,dx,$$
which allows for the following estimator of (the absolute value of) the $k$-th fractal curvature:
$$|\widehat C_k^{(n)}(F)| = \exp\{\hat\beta_k^{(n)}\}\, \frac{1}{h} \int_0^h \exp\{\hat f_k^{(n)}(x)\}\,dx, \quad k = 0, \dots, d. \qquad (29)$$
In most situations, the numbers $m$ and $h$ will be unknown.
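For illustration, the regression step with known $h$ and $m$ can be sketched as follows (a minimal Python sketch, not part of the paper; the helper names are ours, and the integral in (29) is approximated by a Riemann sum):

```python
import numpy as np

def seasonal_fit(x, y, h, m):
    """OLS fit of y ~ beta + s*x + sum_j [a_j cos(mu_j x) + g_j sin(mu_j x)],
    mu_j = 2*pi*j/h, i.e. regression (27) with a finite Fourier seasonal part."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cols = [np.ones_like(x), x]
    for j in range(1, m + 1):
        mu_j = 2 * np.pi * j / h
        cols += [np.cos(mu_j * x), np.sin(mu_j * x)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    def f_hat(t):
        """Estimated seasonal part, cf. (28)."""
        t = np.asarray(t, float)
        out = np.zeros_like(t)
        for j in range(1, m + 1):
            mu_j = 2 * np.pi * j / h
            out += coef[2 * j] * np.cos(mu_j * t) + coef[2 * j + 1] * np.sin(mu_j * t)
        return out

    return coef[0], coef[1], f_hat   # estimates of beta_k, s, f_k

def curvature_estimate(beta, f_hat, h, grid=1000):
    """Estimator (29): exp(beta) * (1/h) * integral_0^h exp(f_hat(x)) dx."""
    t = np.linspace(0.0, h, grid, endpoint=False)
    return np.exp(beta) * np.mean(np.exp(f_hat(t)))
```

On exact data $y = \beta_k + s x + f_k(x)$ the least-squares fit recovers the parameters up to machine precision, since the model is linear in the transformed parameters.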
They may be estimated as follows. First, the coefficients of the polynomial part are estimated by means of ordinary least squares from the linear regression equation (27), where $f_k(x_j) + \delta_{kj}$ are interpreted as new random errors. Then the estimated polynomial part is subtracted from the variables $y_{kj}$, and the period $h$ (and hence the frequencies $\mu_j$) are estimated by means of the periodogram of the resulting time series $\{\tilde y_{kj},\ j \in \mathbb{Z}\}$ without trend, see [28, Section 9.1] and Figure 2. Namely, if $\{\delta_{ki}\}$ is a stationary ARMA process, then the periodogram
$$I_n(t) = \frac{1}{2\pi n} \Bigl|\sum_{j=1}^{n} e^{-ijt}\, \tilde y_{kj}\Bigr|^2, \quad t \in [-\pi, \pi],$$
behaves on average as $\mathbb{E}\, I_n(t) = O(n)$ as $n \to \infty$ for $t = \pm\mu_j$ and $t = 0$, whereas it is bounded for all other $t$, cf. [28, relation (1.5), p. 204]. A more precise result can be found in [35, Theorem 5, p. 54] for a stationary mixing sequence $\{\delta_{ki}\}$:
$$\sup_{t \in J_n} I_n(t) = O(\log\log n) \quad \text{a.s.},$$
where $J_n = \bigl(\pm\mu_j - \log^a n/(2n),\ \pm\mu_j + \log^a n/(2n)\bigr)$ for some $a$ and $j = 1, \dots, m$. Thus, the positions ($t$-values) of high peaks of $I_n$ yield estimators for the $\mu_j$ (and hence $h$).

For the case of stationary regression errors $\{\delta_{ki}\}$, the book [35, p. 53–54] proposes the estimate
$$\hat h = 2\pi/\hat\mu \quad \text{with} \quad \hat\mu = \operatorname{argmax}_\mu \sum_{j=1}^{m} I_n(j\mu). \qquad (30)$$
We refer the reader to [35] for a discussion and comparison of other estimation methods for the frequency $\mu$. The estimated values of $h$ and $\mu_j$ should then be plugged into the formula (29). The value of $m$ should be taken reasonably large. Its order can be estimated by counting the number $l$ of high peaks of $I_n$ and setting $\hat m = \lfloor (l-1)/2 \rfloor$, where $\lfloor b \rfloor$ denotes the integral part of a real number $b$, cf. [28, p. 205].
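The periodogram step can be sketched as follows (our own minimal Python version for a unit-spaced series, standing in for the R routine used in the paper; the frequency grid and the single-peak search are simplistic):

```python
import numpy as np

def estimate_period(y, n_freq=4096):
    """Sketch of the periodogram step for a unit-spaced series y_1, ..., y_n:
    remove an OLS trend line, evaluate I_n(t) on a grid of t in (0, pi], and
    return h_hat = 2*pi/mu_hat from the highest peak."""
    y = np.asarray(y, float)
    n = y.size
    j = np.arange(1, n + 1, dtype=float)
    X = np.column_stack([np.ones(n), j])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef                           # the series "without trend"
    ts = np.linspace(0.05, np.pi, n_freq)          # candidate frequencies
    E = np.exp(-1j * np.outer(ts, j))
    I = np.abs(E @ resid) ** 2 / (2 * np.pi * n)   # periodogram I_n(t)
    mu_hat = ts[np.argmax(I)]
    return 2 * np.pi / mu_hat
```

For a series with a linear trend plus a single cosine of period $h$, the peak of $I_n$ sits at $\mu = 2\pi/h$, and the returned estimate recovers $h$ up to the grid resolution.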
Figure 2: The periodogram of the centered and rescaled data $(y_0, y_1, y_2)$ from the Sierpiński gasket. The data is $\log(2)$-periodic, hence the maxima occur at multiples of $1/\log(2)$ (vertical dashed lines). The periodogram was calculated using the R function spec.pgram(), with option pad=10.

The procedure of estimating the dimension $s$ and the (absolute values of the) average fractal curvatures $\bar C_k(F)$ from a given binary image $\tilde F$ of a fractal $F$ now runs as follows: Dilate $\tilde F$ by a ball $B_\varepsilon(0)$ for the given set of radii $\varepsilon \in \{\varepsilon_1, \dots, \varepsilon_n\}$ and measure the corresponding intrinsic volumes $|C_k(\tilde F_\varepsilon)|$, $k = 0, \dots, d$. Calculate the point cloud $\{(y_{0j}, y_{1j}, \dots, y_{dj})\}_{j=1}^{n}$ and apply the regression (27) either to the data sets $\{y_{kj}\}_{j=1}^{n}$ separately for each $k = 0, \dots, d$ or to the whole cloud as described above. This results in the estimators $\hat s_k^{(n)}, |\widehat C_k^{(n)}(F)|$, $k = 0, \dots, d$, in the first case and $\tilde s^{(n)}, |\widetilde C_k^{(n)}(F)|$, $k = 0, \dots, d$, in the second case of simultaneous regression.

Remark 3.2.
When using the $d+1$ separate regressions for estimating $s$ and $|\bar C_k(F)|$, $k = 0, \dots, d$, notice that the fractal dimension $s$ of $F$ does not depend on the order $k$ of the considered curvature measure. However, the estimator $\hat s_k^{(n)}$ does depend on $k = 0, \dots, d$ as a solution of (27). Hence, the estimation procedure for $s$ can be made more robust by setting $\hat s^{(n)}$ to be the empirical median of the sample $\{\hat s_k^{(n)},\ k = 0, \dots, d\}$.

The first method can be seen as a special case of the second one in which the seasonal part is assumed to be zero.
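The first-method estimators (22)–(23), together with the median robustification of Remark 3.2, can be sketched as follows (a minimal Python illustration; the function name and interface are of our choosing):

```python
import numpy as np

def first_method(x, Y):
    """Least-squares estimators (22)-(23): a common slope s for all rows of Y
    (one row per curvature order k in J) and one intercept beta_k per row.
    Also returns the median of the per-row slopes, cf. Remark 3.2."""
    x, Y = np.asarray(x, float), np.asarray(Y, float)
    n = x.size
    xc = x - x.mean()
    Sn2 = np.mean(xc ** 2)                            # empirical variance of the x_j
    s_hat = (Y @ xc).sum() / (n * Y.shape[0] * Sn2)   # joint slope, cf. (22)
    beta_hat = Y.mean(axis=1) - x.mean() * s_hat      # intercepts, cf. (23)
    s_rows = (Y @ xc) / (n * Sn2)                     # separate slope for each k
    return s_hat, beta_hat, np.median(s_rows)
```

On noiseless data $y_{kj} = \beta_k + s x_j$, all three outputs recover $s$ and the $\beta_k$ exactly.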
The main cause for the inaccuracy of the estimators of Section 3.1 is that a large amount of information is lost in the digitization procedure. The dilation radius cannot be taken arbitrarily small in practice, which would be necessary for the computation of a limit: the resolution of the digitized set determines a lower bound for the dilation radii. However, if we assume that the radii can be chosen arbitrarily small, i.e., if the resolution increases to infinity, then the weak consistency (together with rates of convergence for certain choices of the sequence of radii) of the above estimators can be shown under some rather mild assumptions on the covariance structure of the (random) discretization and computation errors. Here the choice of the radii cannot be completely arbitrary. For simplicity, we only consider monotone sequences. In the first method, some bound on the speed of decay of the radii will be sufficient, while in the second method we only allow for radii forming an arithmetic sequence on the logarithmic scale.
Weak consistency in the linear regression.
Consider the classical multivariate linear regression of full rank, i.e.
$$y_l = X_l \beta + \delta_l, \qquad (31)$$
with $y_l$ and $\delta_l$ being random vectors of dimension $l \in \mathbb{N}$, $X_l$ the deterministic regression matrix of size $l \times q$ and rank $q \le l$, and $\beta$ the $q$-dimensional vector of regression parameters. Note that $\operatorname{rank}(X_l) = q$ implies that the $q \times q$ matrix $X_l^\top X_l$ is positive definite and thus has strictly positive eigenvalues, which we denote by $\lambda_1, \dots, \lambda_q$. Let $\hat\beta^{(l)}$ be the least-squares estimator of $\beta$:
$$\hat\beta^{(l)} = \bigl(X_l^\top X_l\bigr)^{-1} X_l^\top y_l. \qquad (32)$$
Assume that the coordinates $\delta_{lj}$, $j = 1, \dots, l$, of $\delta_l$ are a sequence of (dependent) random variables with positive finite variance satisfying $\mathbb{E}(\delta_{lj}) = 0$. Assume that the covariance matrices $Q_l := \mathbb{E}(\delta_l \delta_l^\top)$ satisfy
$$0 < \inf_{l \in \mathbb{N}} \nu_{\min}(Q_l) \le \sup_{l \in \mathbb{N}} \nu_{\max}(Q_l) =: \nu^* < \infty, \qquad (33)$$
where $\nu_{\min}(Q_l)$ and $\nu_{\max}(Q_l)$ are the smallest and the largest eigenvalues of $Q_l$, respectively.

Lemma 3.3. ([11, Theorem 3.1]) The estimator $\hat\beta^{(l)}$ is weakly consistent for $\beta$ if and only if
$$\lambda_{\min}(X_l^\top X_l) := \min_{i=1,\dots,q} \lambda_i \to \infty \quad \text{as } l \to \infty.$$
The following result provides an estimate for the rate of convergence of $\hat\beta^{(l)}$ to $\beta$. We write $\operatorname{tr}(A)$ for the trace of a matrix $A$.

Lemma 3.4. For each $\varepsilon > 0$,
$$P\bigl(|\hat\beta^{(l)} - \beta| > \varepsilon\bigr) \le \frac{\nu^*}{\varepsilon^2}\, \frac{\operatorname{tr}(X_l^\top X_l)}{\bigl(\lambda_{\min}(X_l^\top X_l)\bigr)^2}. \qquad (34)$$
Proof.
It is easy to see that $\hat\beta^{(l)} - \beta = \bigl(X_l^\top X_l\bigr)^{-1} X_l^\top \delta_l$ and that the covariance matrix of $\hat\beta^{(l)} - \beta$ is given by
$$\operatorname{Cov}(\hat\beta^{(l)} - \beta) = \bigl(X_l^\top X_l\bigr)^{-1} X_l^\top Q_l X_l \bigl(X_l^\top X_l\bigr)^{-1}.$$
Using the Markov inequality, we get $P(|\hat\beta^{(l)} - \beta| > \varepsilon) \le \operatorname{tr}\bigl(\operatorname{Cov}(\hat\beta^{(l)} - \beta)\bigr)/\varepsilon^2$. Since the matrices $Q_l$ and $X_l^\top X_l$ are symmetric, they can be diagonalized, that is, there exist orthogonal matrices $C, B$ and diagonal matrices $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_q)$ and $N = \operatorname{diag}(\nu_1, \dots, \nu_l)$ such that $X_l^\top X_l = C^\top \Lambda C$ and $Q_l = B^\top N B$, respectively, where the eigenvalues $\lambda_i$ of $X_l^\top X_l$ and $\nu_j$ of $Q_l$ are strictly positive, since $X_l^\top X_l$ has full rank and due to assumption (33), respectively. Note that $\bigl(X_l^\top X_l\bigr)^{-1} = C^\top \Lambda^{-1} C$ with $\Lambda^{-1} = \operatorname{diag}(1/\lambda_1, \dots, 1/\lambda_q)$. Using the cyclic commutativity property of the trace, one gets after standard calculations that
$$\begin{aligned}
\operatorname{tr}\bigl(\operatorname{Cov}(\hat\beta^{(l)} - \beta)\bigr) &= \operatorname{tr}\bigl(\Lambda^{-2}\, C X_l^\top B^\top N B X_l C^\top\bigr) \le \max_{i=1,\dots,q} \lambda_i^{-2} \cdot \operatorname{tr}\bigl(C X_l^\top B^\top N B X_l C^\top\bigr) \\
&= \bigl(\lambda_{\min}(X_l^\top X_l)\bigr)^{-2} \operatorname{tr}\bigl(X_l X_l^\top B^\top N B\bigr) \le \bigl(\lambda_{\min}(X_l^\top X_l)\bigr)^{-2} \max_{i=1,\dots,l} \nu_i \cdot \operatorname{tr}\bigl(B X_l X_l^\top B^\top\bigr) \\
&\le \bigl(\lambda_{\min}(X_l^\top X_l)\bigr)^{-2}\, \nu^*\, \operatorname{tr}\bigl(X_l^\top X_l\bigr).
\end{aligned}$$
In the derivation we have used in particular that the matrices $C X_l^\top B^\top N B X_l C^\top$ and $B X_l X_l^\top B^\top$ are covariance matrices with nonnegative entries on the diagonal. This completes the proof of (34). □

Asymptotic normality.
Imposing additional assumptions on the dependence structure of the regression errors in (31), one can prove the asymptotic normality of the least squares estimator $\hat\beta^{(l)}$. For simplicity, we do it for $\delta_l = \sqrt{Q_l}\,\gamma_l$, where $\sqrt{Q_l}$ is the square root of the symmetric positive definite matrix $Q_l \in \mathbb{R}^{l \times l}$ and $\gamma_l = (\gamma_{l1}, \dots, \gamma_{ll})^\top$ is a random vector with iid coordinates $\gamma_{lj}$, $\mathbb{E}(\gamma_{lj}) = 0$, $\mathbb{E}(\gamma_{lj}^2) = 1$ for all $j = 1, \dots, l$. This corresponds to the case when, for each $l$, $(\delta_{lj})$ is a linear process with a finite range of dependence.

Theorem 3.5.
Under the assumptions on $Q_l$ in the paragraph above, it holds that
$$\frac{t^\top(\hat\beta^{(l)} - \beta)}{\sqrt{t^\top \bigl(X_l^\top X_l\bigr)^{-1} X_l^\top Q_l X_l \bigl(X_l^\top X_l\bigr)^{-1} t}} \xrightarrow{d} Z \sim N(0,1), \quad l \to \infty, \qquad (35)$$
for all $t \in \mathbb{R}^q \setminus \{0\}$ such that
$$\frac{\|A_l^\top t\|}{\|A_l^\top t\|_\infty} \to \infty, \quad l \to \infty, \qquad (36)$$
where $A_l = \bigl(X_l^\top X_l\bigr)^{-1} X_l^\top \sqrt{Q_l} \in \mathbb{R}^{q \times l}$, $\|\cdot\|$ and $\|\cdot\|_\infty$ are the Euclidean and the maximum norm in $\mathbb{R}^l$, respectively, and $\hat\beta^{(l)}$ is the estimator (32).

Proof. The result follows from the central limit theorem with an application of the Lindeberg condition. Define the vector
$$b_l = \frac{A_l^\top t}{\|A_l^\top t\|} \in \mathbb{R}^l$$
and write $b_{lj}$ for its $j$-th coordinate. Let $Y_{lj} := b_{lj} \gamma_{lj}$. Since
$$\|A_l^\top t\|^2 = t^\top \bigl(X_l^\top X_l\bigr)^{-1} X_l^\top Q_l X_l \bigl(X_l^\top X_l\bigr)^{-1} t,$$
we notice that $Y_l := \sum_{j=1}^{l} Y_{lj}$ equals the left hand side of (35). The doubly indexed sequence $\{Y_{lj} : 1 \le j \le l\}$ satisfies $\mathbb{E}\,Y_{lj} = 0$, $\mathbb{E}\,Y_{lj}^2 = b_{lj}^2$ and $\sum_{j=1}^{l} b_{lj}^2 = 1$ for all $j = 1, \dots, l$ and $l \in \mathbb{N}$. It also satisfies the Lindeberg condition (see e.g. [14]), which can be written in the following form: for each $\varepsilon >$
$0$,
$$\lim_{l \to \infty} \sum_{j=1}^{l} \mathbb{E}\bigl(Y_{lj}^2\, \mathbb{1}(|Y_{lj}| > \varepsilon)\bigr) = 0.$$
Indeed, employing condition (36), for any $\varepsilon > 0$,
$$\sum_{j=1}^{l} \mathbb{E}\bigl(Y_{lj}^2\, \mathbb{1}(|Y_{lj}| > \varepsilon)\bigr) = \sum_{j=1}^{l} b_{lj}^2\, \mathbb{E}\bigl(\gamma^2\, \mathbb{1}(\gamma^2 > \varepsilon^2/b_{lj}^2)\bigr) \le \mathbb{E}\Bigl(\gamma^2\, \mathbb{1}\bigl(\gamma^2 > \varepsilon^2 \|A_l^\top t\|^2/\|A_l^\top t\|_\infty^2\bigr)\Bigr) \cdot \sum_{j=1}^{l} b_{lj}^2 \to 0$$
as $l \to \infty$, due to the integrability of $\gamma^2$; here $\gamma$ denotes a random variable with the common distribution of the $\gamma_{lj}$. □

The following statement gives a sufficient condition for (36).
Corollary 3.6. If $Q_l = \sigma^2 I_l$ for some $\sigma > 0$, where $I_l$ is the identity matrix, then condition (36) in Theorem 3.5 is satisfied for all $t \ne 0$ provided that
$$\lambda_{\min}(X_l^\top X_l)/\|X_l\|_\infty^2 \to \infty, \quad l \to \infty, \qquad (37)$$
where $\|X_l\|_\infty = \max_{j=1,\dots,l} \sum_{k=1}^{q} |(X_l)_{jk}|$ is the maximum absolute row sum of the matrix $X_l$ and $\lambda_{\min}(X_l^\top X_l)$ denotes (as before) the smallest eigenvalue of $X_l^\top X_l$.

Proof. Suppose that $Q_l = \sigma^2 I_l$. Using the diagonalisation $(X_l^\top X_l)^{-1} = C^\top \Lambda^{-1} C$ from the proof of Lemma 3.4, the matrix $A_l A_l^\top$ (defined in Theorem 3.5) can be represented by $A_l A_l^\top = \sigma^2 (X_l^\top X_l)^{-1} = \sigma^2 C^\top \Lambda^{-1} C$. This yields
$$\|A_l^\top t\|^2 = t^\top A_l A_l^\top t = \sigma^2\, t^\top C^\top \Lambda^{-1} C t = \sigma^2 \sum_{i=1}^{q} \frac{b_i^2}{\lambda_i},$$
where $b = (b_1, \dots, b_q)^\top := Ct$. Moreover, we have $A_l^\top = \sigma X_l (X_l^\top X_l)^{-1} = \sigma X_l C^\top \Lambda^{-1} C$, and hence
$$\|A_l^\top t\|_\infty \le \sigma \|X_l\|_\infty \|C^\top \Lambda^{-1} C t\|_\infty \le c\, \sigma \|X_l\|_\infty \|C^\top \Lambda^{-1} C t\| = c\, \sigma \|X_l\|_\infty \sqrt{\sum_{i=1}^{q} \frac{b_i^2}{\lambda_i^2}}$$
for some constant $c >$
$0$, using the fact that all norms in finite-dimensional spaces are equivalent. Combining the two estimates, we arrive at the desired bound:
$$\frac{\|A_l^\top t\|^2}{\|A_l^\top t\|_\infty^2} \ge \frac{\sum_{i=1}^{q} b_i^2/\lambda_i}{c^2 \|X_l\|_\infty^2 \sum_{i=1}^{q} b_i^2/\lambda_i^2} \ge \frac{\min_{i=1,\dots,q} \lambda_i}{c^2 \|X_l\|_\infty^2},$$
since $\sum_{i=1}^{q} b_i^2/\lambda_i = \sum_{i=1}^{q} (b_i^2/\lambda_i^2)\,\lambda_i \ge \lambda_{\min}(X_l^\top X_l) \sum_{i=1}^{q} b_i^2/\lambda_i^2$. □

Remark 3.7. If $Q_l$ is known or can be consistently estimated from $m$ independent copies of $y_l$ by $\hat Q_{l,m} \xrightarrow{P} Q_l$ as $m \to \infty$ for each $l$, then the above theorem (together with a Slutsky argument) can be used in a standard way (see e.g. [7, Section 6.3.2, p. 398]) to construct an asymptotic confidence interval for the coordinates of $\beta$ as well as a large sample Wald test of the hypothesis $H_0 : \beta_j = \beta_{j,0}$ vs. $H_1 : \beta_j \ne \beta_{j,0}$ for a fixed $\beta_{j,0}$ and $j = 1, \dots, q$.

3.3. Asymptotics of the first method

Let $\delta_{kj}$, $k = 0, \dots, d$, $j \in \mathbb{N}$, be a sequence of (dependent) random variables with positive finite variance satisfying $\mathbb{E}(\delta_{kj}) = 0$ and assumption (33). Let $n \ge$
$2$. Since the measured intrinsic volumes show an almost periodic oscillatory behaviour with a “period” much larger than the step width of the radii, assuming no correlations would not be very reasonable. We suppose that the random variables $y_{kj}$ can be represented in the form
$$y_{kj} = \beta_k + x_j s + \delta_{kj}, \quad k = 0, \dots, d,\ j \in \mathbb{N}.$$
Restricting the consideration to the first $n$ observations (i.e. to the data derived from the first $n$ radii), this is more conveniently expressed in matrix form (31) with $l = n(d+1)$, $q = d+2$,
$$y_l := (y_{01}, \dots, y_{d1}, \dots, y_{0n}, \dots, y_{dn})^\top, \qquad \delta_l := (\delta_{01}, \dots, \delta_{d1}, \dots, \delta_{0n}, \dots, \delta_{dn})^\top,$$
$$X_l := \begin{pmatrix} I_{d+1} & x_1 \mathbf{1} \\ \vdots & \vdots \\ I_{d+1} & x_n \mathbf{1} \end{pmatrix}, \qquad \mathbf{1} := (1, \dots, 1)^\top \in \mathbb{R}^{d+1},$$
and $\beta := (\beta_0, \dots, \beta_d, s)^\top$. The vector $\beta$ contains the total curvatures and the fractal dimension to be estimated. In the sequel, we adapt the notation slightly and write $X_n$ (instead of $X_l$) for the regression matrix based on the first $n$ radii (of size $n(d+1) \times (d+2)$) and similarly $Q_n$ for the $n(d+1) \times n(d+1)$ matrix describing the covariance structure of the errors. Similarly, we will write $\hat\beta^{(n)} := (\hat\beta_0^{(n)}, \dots, \hat\beta_d^{(n)}, \hat s^{(n)})^\top$ for the corresponding least-squares estimator (32) of $\beta$ based on the first $n$ radii.

Theorem 3.8.
Let $F \subset \mathbb{R}^d$ be a set satisfying the assumptions (A1)–(A3). Let $\varepsilon_1 > \varepsilon_2 > \dots > 0$ be a decreasing sequence of radii satisfying the condition
$$\bar x_n^2/\tilde S_n = O(n^\mu) \quad \text{as } n \to \infty \qquad (38)$$
for some $\mu \in [0,1)$, where $x_j = -\log \varepsilon_j$, $j \in \mathbb{N}$, $\bar x_n = \frac{1}{n} \sum_{i=1}^{n} x_i$, and $\tilde S_n = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar x_n)^2$. Suppose that the covariance matrices $Q_n$ of the errors satisfy the assumption (33). Then, with the notation above, the sequence $\hat\beta^{(n)}$ of least-squares estimators of $\beta$ is weakly consistent, i.e., for each $\varepsilon > 0$,
$$P\bigl(|\hat\beta^{(n)} - \beta| > \varepsilon\bigr) \to 0 \quad \text{as } n \to \infty.$$
Proof.
By Lemma 3.3, $(\hat\beta^{(n)})$ is a consistent sequence of estimators of $\beta$ if and only if
$$\lim_{n \to \infty} \lambda_{\min}(X_n^\top X_n) = \infty,$$
where $\lambda_{\min}(X_n^\top X_n)$ denotes the smallest eigenvalue of $X_n^\top X_n$. We have
$$X_n^\top X_n = \begin{pmatrix} n I_{d+1} & n \bar x_n \mathbf{1} \\ n \bar x_n \mathbf{1}^\top & v_n \end{pmatrix} \in \mathbb{R}^{(d+2) \times (d+2)}$$
with $v_n := (d+1) \sum_{j=1}^{n} x_j^2$. Since $\operatorname{rank}(X_n) = d+2$, the symmetric matrix $X_n^\top X_n$ is positive definite, implying that all its eigenvalues are positive. Expanding the characteristic polynomial, one computes
$$\det(X_n^\top X_n - \lambda I) = (n - \lambda)^{d} \bigl(\lambda^2 - (n + v_n)\lambda + n^2 (d+1) \tilde S_n\bigr),$$
so the eigenvalues of $X_n^\top X_n$ are $\lambda_0^{(n)} = n$ (of multiplicity $d$) and
$$\lambda_{1/2}^{(n)} = \frac{n + v_n}{2} \pm \sqrt{\frac{(n + v_n)^2}{4} - n^2 (d+1) \tilde S_n}. \qquad (39)$$
Since, obviously, $\lambda_0^{(n)} \to \infty$ as $n \to \infty$ and $\lambda_1^{(n)} \ge \lambda_2^{(n)} >$
$0$, it suffices to show that $\lambda_2^{(n)} \to \infty$ as $n \to \infty$. We have
$$2\lambda_2^{(n)} = n + v_n - \sqrt{(n + v_n)^2 - 4 n^2 (d+1) \tilde S_n} = \frac{4 n^2 (d+1) \tilde S_n}{n + v_n + \sqrt{(n + v_n)^2 - 4 n^2 (d+1) \tilde S_n}} \ge \frac{2 n^2 (d+1) \tilde S_n}{n + v_n} = \frac{2 (d+1) n \tilde S_n}{1 + v_n/n}, \qquad (40)$$
where the inequality is due to the fact that the expression under the root is non-negative and not larger than $(n + v_n)^2$. Since $v_n = n(d+1)\tilde S_n + n(d+1)\bar x_n^2$, we obtain
$$\lambda_2^{(n)} \ge \frac{(d+1) n}{1/\tilde S_n + (d+1)\bigl(1 + \bar x_n^2/\tilde S_n\bigr)} \ge \frac{n}{1/\tilde S_n + 1 + \bar x_n^2/\tilde S_n} \longrightarrow \infty \qquad (41)$$
provided that
$$\frac{1}{\tilde S_n} + 1 + \frac{\bar x_n^2}{\tilde S_n} = O(n^\mu) \quad \text{as } n \to \infty,$$
for some $0 \le \mu <$
$1$. The last condition is satisfied due to assumption (38). □
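Both the eigenvalue formula (39) and the trace bound of Lemma 3.4 are easy to confirm numerically. The following sketch (our own; the parameter values and the random covariance are purely illustrative) assembles the stacked design matrix of the first method for $x_j = j$:

```python
import numpy as np

d, n = 2, 50
x = np.arange(1, n + 1, dtype=float)          # x_j = j
# stacked design: one (d+1) x (d+2) block per radius, identity plus x_j column
blocks = []
for xj in x:
    B = np.zeros((d + 1, d + 2))
    B[:, :d + 1] = np.eye(d + 1)
    B[:, d + 1] = xj
    blocks.append(B)
X = np.vstack(blocks)

G = X.T @ X
eigs = np.sort(np.linalg.eigvalsh(G))
v = (d + 1) * np.sum(x ** 2)                  # v_n
S = np.mean((x - x.mean()) ** 2)              # tilde S_n
disc = np.sqrt((n + v) ** 2 / 4 - n ** 2 * (d + 1) * S)
lam1 = (n + v) / 2 + disc                     # lambda_1^(n) of (39)
lam2 = (n + v) / 2 - disc                     # lambda_2^(n) of (39)

# sanity check of the trace bound of Lemma 3.4 for an arbitrary psd Q
rng = np.random.default_rng(0)
A = rng.standard_normal((X.shape[0], X.shape[0]))
Q = A @ A.T + 0.1 * np.eye(X.shape[0])
Ginv = np.linalg.inv(G)
cov = Ginv @ X.T @ Q @ X @ Ginv               # Cov(beta_hat - beta)
bound = np.linalg.eigvalsh(Q).max() * np.trace(G) / eigs[0] ** 2
```

The spectrum of $X_n^\top X_n$ should consist of $n$ (with multiplicity $d$) together with $\lambda_1^{(n)}$ and $\lambda_2^{(n)}$, and the trace of the covariance matrix should not exceed the bound from (34).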
Example 3.9.
Condition (38) is satisfied in particular for any arithmetic sequence of the form $x_j = a_0 + a \cdot j$, $j \in \mathbb{N}$, where $a_0 \ge 0$ and $a > 0$. Without loss of generality, we demonstrate this for $a_0 = 0$, $a = 1$, that is, $x_j = j$, $j \in \mathbb{N}$. In this case we have $\bar x_n = (n+1)/2$ and $\tilde S_n = (n^2 - 1)/12$, hence
$$\bar x_n^2/\tilde S_n = 3(n+1)/(n-1) \to 3 \quad \text{as } n \to \infty.$$
The condition (38) is thus satisfied with $\mu = 0$. This means that the estimator $\hat\beta^{(n)}$ is weakly consistent for a sequence of dilation radii $\varepsilon_j = e^{-a_0 - a \cdot j}$, $j \in \mathbb{N}$, $a_0 \ge 0$, $a > 0$.

Recall that the relation $f = \Theta(g)$ for $f, g : \mathbb{R} \to \mathbb{R}$ means that there exist constants $c_1, c_2 > 0$ such that $c_1 |g(x)| \le |f(x)| \le c_2 |g(x)|$ for all sufficiently large $x$.

Theorem 3.10.
Let $F \subset \mathbb{R}^d$ be a set satisfying the assumptions (A1)–(A3). Let $\varepsilon_1 > \varepsilon_2 > \dots > 0$ be a decreasing sequence of radii. Suppose that the covariance matrices $Q_n$ of the errors satisfy the assumption (33). Then, for any $\varepsilon > 0$,
$$P\bigl(|\hat\beta^{(n)} - \beta| > \varepsilon\bigr) \le \frac{4\nu^*}{\varepsilon^2 (d+1) n}\, \bigl(1 + \bar x_n^2 + \tilde S_n\bigr) \Bigl(\frac{1}{\tilde S_n} + (d+1)\Bigl(1 + \frac{\bar x_n^2}{\tilde S_n}\Bigr)\Bigr)^2. \qquad (42)$$
If there are constants $\gamma, \mu \ge 0$ such that the sequence of radii satisfies the conditions
$$\tilde S_n = \Theta(n^\gamma) \quad \text{and} \quad \bar x_n^2 = O(n^\mu) \quad \text{as } n \to \infty \qquad (43)$$
with $\alpha := 1 - \max\{\gamma, \mu\} - 2\max\{0, \mu - \gamma\} > 0$, then the sequence $\hat\beta^{(n)}$ of least-squares estimators of $\beta$ is weakly consistent with the rate of convergence
$$P\bigl(|\hat\beta^{(n)} - \beta| > \varepsilon\bigr) = O(n^{-\alpha}) \quad \text{as } n \to \infty \qquad (44)$$
for any $\varepsilon > 0$.

Proof. To show (42), we apply Lemma 3.4, compute $\operatorname{tr}(X_n^\top X_n)$ and estimate $\lambda_{\min}(X_n^\top X_n)$ from below. The trace can be read off directly from the matrix. Using that $v_n = n(d+1)\tilde S_n + n(d+1)\bar x_n^2$, we get
$$\operatorname{tr}(X_n^\top X_n) = n(d+1) + v_n = n(d+1)\bigl(1 + \bar x_n^2 + \tilde S_n\bigr).$$
For the eigenvalues of $X_n^\top X_n$, we claim that
$$\lambda_{\min}(X_n^\top X_n) \ge \lambda_2^{(n)}/2, \qquad (45)$$
where $\lambda_2^{(n)}$ is given in (39). Indeed, from the proof of Theorem 3.8, we have $\lambda_{\min}(X_n^\top X_n) = \min\{n, \lambda_2^{(n)}\}$, since $\lambda_1^{(n)} \ge \lambda_2^{(n)}$. To prove the claim, it therefore suffices to show that $n \ge \lambda_2^{(n)}/2$, i.e., $\lambda_2^{(n)}/n \le$
$2$. From (40) it is easily seen that
$$\frac{\lambda_2^{(n)}}{n} \le \frac{2 n (d+1) \tilde S_n}{v_n} = 2\, \frac{\sum_{j=1}^{n} x_j^2 - n \bar x_n^2}{\sum_{j=1}^{n} x_j^2} \le 2,$$
since the last numerator is clearly positive and not larger than the denominator. This proves (45). To complete the proof of (42), it suffices now to combine (45) with (41) to see that
$$\bigl(\lambda_{\min}(X_n^\top X_n)\bigr)^{-2} \le 4\bigl(\lambda_2^{(n)}\bigr)^{-2} \le \frac{4}{(d+1)^2 n^2} \Bigl(\frac{1}{\tilde S_n} + (d+1)\Bigl(1 + \frac{\bar x_n^2}{\tilde S_n}\Bigr)\Bigr)^2.$$
The relation (44) follows easily from (42) and (43). The expression in the first parentheses on the right-hand side of (42) is bounded from above (up to some constant) by $n^{\max\{\mu,\gamma\}}$, while the expression in the second parentheses is bounded up to a constant by $n^{\max\{0,\mu-\gamma\}}$. □

Example 3.11.
Condition (43) is satisfied for $x_j = \Theta(j^\delta)$, $\delta \in (0, 1/2)$, with $\gamma = \mu = 2\delta$ and $\alpha = 1 - 2\delta$. This means that the estimator $\hat\beta^{(n)}$ is weakly consistent for a sequence of dilation radii $\varepsilon_j = e^{-c j^\delta}$, $j \in \mathbb{N}$, $c > 0$, with the rate of convergence $O\bigl(n^{-(1-2\delta)}\bigr)$.

Unfortunately, Lemma 3.4 and Theorem 3.10 are not strong enough to provide a rate of convergence in the case of an arithmetic sequence $(x_j)$. For $x_j = j$, $j \in \mathbb{N}$, one has $\operatorname{tr}(X_n^\top X_n) = (d+1) n \bigl(6 + (n+1)(2n+1)\bigr)/6 = O(n^3)$ as $n \to \infty$, while it can be shown that $\lambda_{\min}(X_n^\top X_n) = \Theta(n)$ as $n \to \infty$, meaning that the right hand side in the estimate (34) still grows linearly as $n \to \infty$.

Corollary 3.12. Under the assumptions of Theorem 3.10, the estimators $\widehat{|C_k(F)|} = \exp(\hat\beta_k^{(n)})$, $k = 0, \dots, d$, of the (absolute values of) the fractal curvatures are weakly consistent, i.e., for any $\varepsilon > 0$,
$$P\Bigl(\bigl|\widehat{|C_k(F)|} - |C_k(F)|\bigr| > \varepsilon\Bigr) = O(n^{-\alpha}) \quad \text{as } n \to \infty, \quad k = 0, \dots, d.$$
Similarly, the estimator $\hat s^{(n)}$ of the dimension $s$ is weakly consistent with the same convergence rate.

Proof. Let $\delta >$
$0$. According to Taylor's theorem, for each $t$ with $|t| \le \delta$ there is a $\xi = \xi(t) \in [-\delta, \delta]$ such that $e^t = 1 + t + t^2 e^\xi/$
$2$. For $x, y \in \mathbb{R}$ such that $|x - y| \le \delta$ this gives $e^{x-y} = 1 + (x-y) + (x-y)^2 e^\xi/2$, i.e.
$$e^x - e^y = e^y \bigl((x-y) + (x-y)^2 e^\xi/2\bigr)$$
for some $\xi = \xi(x,y) \in [-\delta, \delta]$. Since $e^{-\delta} \le e^\xi \le e^\delta$, we infer that
$$e^y \bigl((x-y) + (x-y)^2 e^{-\delta}/2\bigr) \le e^x - e^y \le e^y \bigl((x-y) + (x-y)^2 e^{\delta}/2\bigr)$$
and thus
$$|e^x - e^y| \le e^y \max_{s \in \{-\delta, +\delta\}} \bigl|x - y + (x-y)^2 e^s/2\bigr| \le e^y |x - y| + (x-y)^2 e^{y+\delta}/2. \qquad (46)$$
Now set $x = \hat\beta_k^{(n)}$ and $y = \beta_k$ for brevity. Using the relation (46) we infer that, for any $\varepsilon > 0$,
$$\begin{aligned}
P(|e^x - e^y| > \varepsilon) &\le P\bigl(|e^x - e^y| > \varepsilon,\ |x-y| \le \delta\bigr) + P(|x-y| > \delta) \\
&\le P\bigl(e^y |x-y| > \varepsilon/2\bigr) + P\bigl(e^{y+\delta} (x-y)^2/2 > \varepsilon/2\bigr) + P(|x-y| > \delta) \\
&\le P\bigl(|x-y| > e^{-y}\varepsilon/2\bigr) + P\bigl(|x-y| > \sqrt{\varepsilon}\, e^{-(y+\delta)/2}\bigr) + P(|x-y| > \delta).
\end{aligned}$$
Now we apply the estimate (42) to each of the terms in the last sum. Noting that $|\hat\beta^{(n)} - \beta| \ge |\hat\beta_k^{(n)} - \beta_k|$, we obtain, for each $\varepsilon > 0$ and $\delta > 0$,
$$P\Bigl(|\exp(\hat\beta_k^{(n)}) - \exp(\beta_k)| > \varepsilon\Bigr) \le \frac{4 c_k \nu^*}{(d+1) n}\, \bigl(1 + \bar x_n^2 + \tilde S_n\bigr) \Bigl(\frac{1}{\tilde S_n} + (d+1)\Bigl(1 + \frac{\bar x_n^2}{\tilde S_n}\Bigr)\Bigr)^2,$$
where the constant $c_k := 4 e^{2\beta_k} \varepsilon^{-2} + e^{\beta_k + \delta} \varepsilon^{-1} + \delta^{-2}$ depends on $\beta_k$ (and the chosen $\delta$) but not on $n$. Now the claimed convergence rate follows from condition (43) in the same way as in the proof of (44) above. The convergence rate for the dimension estimator is just a reformulation of (44), taking into account that $s$ is the last coordinate of $\beta$ and $\hat s^{(n)}$ the last coordinate of $\hat\beta^{(n)}$. □

Note that the same consistency results hold for the estimators $\hat s^{(J,n)}$ and $\hat\beta_k^{(J,n)}$ for any subset $J \subseteq \{0, \dots, d\}$ such that assumption (A2) is satisfied for all $k \in J$, cf. Remark 3.1. In particular, this applies to the sausage method.
In this case, we can formulate the result in greater generality: the assumption (A1) is not needed (as the volume is always well defined), (A2) is always satisfied (as the volume is positive), and (A3) simplifies to the existence of the Minkowski content of $F$.

Corollary 3.13.
Let $F \subset \mathbb{R}^d$ be a set whose Minkowski content exists and let $\varepsilon_1 > \varepsilon_2 > \dots > 0$ be a decreasing sequence of radii. Suppose that the conditions (33) and (43) are satisfied. Then the sausage method estimators $\hat s^{(\{d\},n)}$ and $\hat M^{(n)}(F)$ are weakly consistent. More precisely, for each $\varepsilon > 0$,
$$P\bigl(|\hat s^{(\{d\},n)} - s| > \varepsilon\bigr) = O(n^{-\alpha}) \quad \text{and} \quad P\bigl(|\hat M^{(n)} - \mathcal{M}| > \varepsilon\bigr) = O(n^{-\alpha}) \quad \text{as } n \to \infty,$$
with $\alpha$ as in Theorem 3.10.

As a last result for the first method, we show the asymptotic normality of the estimators $\hat\beta^{(n)}$ using Corollary 3.6, for which stronger assumptions on the covariance structure of the errors (no correlation) are required. However, these assumptions are not realistic, since the $\delta_{ki}$ are clearly dependent, see Remark 3.15. A more general correlation structure $Q_n$ would require verifying the condition (36), which seems to be quite tedious.

Theorem 3.14. Let $F \subset \mathbb{R}^d$ be a set satisfying the assumptions (A1)–(A3). Assume that $Q_n = \sigma^2 I$ for some $\sigma > 0$, where $I$ is the identity matrix. Let $\varepsilon_1 > \varepsilon_2 > \dots > 0$ be a decreasing sequence of radii and suppose there are constants $\gamma, \mu \ge 0$ with $\max\{\mu, 2\mu - \gamma\} < 1$ such that
$$\tilde S_n = \Theta(n^\gamma) \quad \text{and} \quad x_n^2 = O(n^\mu) \quad \text{as } n \to \infty. \qquad (47)$$
Then, for each $t \in \mathbb{R}^{d+2} \setminus \{0\}$,
$$\frac{t^\top(\hat\beta^{(n)} - \beta)}{\sigma \sqrt{t^\top (X_n^\top X_n)^{-1} t}} \xrightarrow{d} Z \sim N(0,1) \quad \text{as } n \to \infty. \qquad (48)$$
Proof.
By Corollary 3.6, it suffices to show that $\lambda_{\min}(X_n^\top X_n)/\|X_n\|_\infty^2 \to \infty$ as $n \to \infty$. It is obvious from the monotonicity of the sequence $(x_i)_i$ that the maximal row sum of $X_n$ is $\|X_n\|_\infty = 1 + x_n$, and the assumptions imply $x_n \to \infty$. Combining (45) and (41), and noting that $\bar x_n \le x_n$ and $1 + x_n^2 \le (1 + x_n)^2$, we obtain
$$\frac{\lambda_{\min}(X_n^\top X_n)}{\|X_n\|_\infty^2} \ge \frac{\lambda_2^{(n)}}{2(1 + x_n)^2} \ge \frac{n}{2(1 + x_n)^2 \bigl(1/\tilde S_n + 1 + x_n^2/\tilde S_n\bigr)} \ge \frac{n}{2\bigl((1 + x_n)^2 + (1 + x_n)^4/\tilde S_n\bigr)}.$$
The last expression tends to $\infty$ as $n \to \infty$, since, by assumption (47), we have
$$n^{-1} \Bigl((1 + x_n)^2 + (1 + x_n)^4/\tilde S_n\Bigr) \le C_1 n^{\mu - 1} + C_2 n^{2\mu - \gamma - 1}$$
for some constants $C_1, C_2$, which tends to $0$, since $\max\{\mu, 2\mu - \gamma\} <$
$1$. This completes the proof. □
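As a quick Monte Carlo illustration of Theorem 3.14 (our own toy setup with $q = 2$, Gaussian errors and $Q_n = \sigma^2 I$; all parameter values are arbitrary), the standardized statistic in (48) should behave like a standard normal sample:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 100, 2000
x = np.arange(1, n + 1, dtype=float)
X = np.column_stack([np.ones(n), x])   # intercept and slope only (q = 2)
beta = np.array([0.4, 1.7])            # illustrative true parameters
sigma = 0.5
t = np.array([0.0, 1.0])               # test the slope coordinate
Ginv = np.linalg.inv(X.T @ X)
denom = sigma * np.sqrt(t @ Ginv @ t)

stats = np.empty(reps)
for r in range(reps):
    y = X @ beta + sigma * rng.standard_normal(n)
    bhat = Ginv @ (X.T @ y)            # least-squares estimator (32)
    stats[r] = t @ (bhat - beta) / denom
```

For Gaussian errors the statistic is in fact exactly standard normal for every $n$, so the sample mean and standard deviation of `stats` should lie close to $0$ and $1$.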
Remark 3.15.
To show strong consistency results, independence of the discretization and measurement errors $\delta_{kj}$ is required, cf. [11]. Unfortunately, this assumption is not realistic in our case, since the discretizations of $F_{\varepsilon_j}$ clearly depend on each other for various $j$. Additionally, the intrinsic volumes $C_k(F_{\varepsilon_j})$ are obviously dependent for different $j$ and $k$.

3.4. Asymptotics of the second method

Assume that the period $h > 0$ and $m \in \mathbb{N}$ are known. For simplicity, we prove weak consistency results only for each curvature measure $C_k$, $k = 0, \dots, d$, separately (separate regressions). So fix some $k \in \{0, \dots, d\}$. We fix the sequence of dilation radii to be arithmetic on the logarithmic scale, i.e., $x_i = a_0 + a \cdot i$, $i \in \mathbb{N}$, where $a_0 \ge 0$ and $a >$
$0$. Let $n \ge 2m + 2$. The regression (27) can be written in terms of the parameters
$$\beta = (\beta_k, s, \alpha_1, \dots, \alpha_m, \gamma_1, \dots, \gamma_m)^\top \qquad (49)$$
with $\alpha_j = b_{kj} \cos\varphi_{kj}$, $\gamma_j = b_{kj} \sin\varphi_{kj}$, $j = 1, \dots, m$, as
$$y_{ki} = \beta_k + s \cdot x_i + \sum_{j=1}^{m} \bigl(\alpha_j \cos(\mu j x_i) - \gamma_j \sin(\mu j x_i)\bigr) + \delta_{ki}, \quad i = 1, \dots, n,$$
with $\mu = 2\pi/h$ and dependent errors $\{\delta_{ki}\}$ of zero mean satisfying condition (33). This can be equivalently rewritten in the form (31) with $l = n$, $q = 2m + 2$, and the design matrix
$$X_n = \begin{pmatrix} 1 & x_1 & \cos(\mu x_1) & \dots & \cos(\mu m x_1) & -\sin(\mu x_1) & \dots & -\sin(\mu m x_1) \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & \cos(\mu x_n) & \dots & \cos(\mu m x_n) & -\sin(\mu x_n) & \dots & -\sin(\mu m x_n) \end{pmatrix}.$$
Lemma 3.16. Let $F \subset \mathbb{R}^d$ be a set satisfying the assumptions (A1), (A2) and (A3'). Assume that $x_i = a_0 + a \cdot i$, $i \in \mathbb{N}$, where $a_0 \ge 0$ and $a > 0$ are such that $aj/h \notin \mathbb{Z}$ for $j = 1, \dots, 2m$. Then, under the above conditions on the sequence of errors $\{\delta_{kj}\}$, the least squares estimator
$$\hat\beta^{(n)} = \bigl(\hat\beta_k^{(n)}, \hat s_k^{(n)}, \hat\alpha_{k,1}^{(n)}, \dots, \hat\alpha_{k,m}^{(n)}, \hat\gamma_{k,1}^{(n)}, \dots, \hat\gamma_{k,m}^{(n)}\bigr)^\top$$
in (32) of the parameter vector (49) is weakly consistent.

Proof. Without loss of generality, we only consider the case $a_0 = 0$, $a = 1$, that is, $x_i = i$, $i = 1, \dots, n$. (A constant $a_0 \ne 0$ can be incorporated in the parameters $\alpha_{k,j}$ and $\gamma_{k,j}$, and $a \ne 1$ can be included in the constant $\mu$, so that the same arguments as below work for slightly transformed parameters.) By Lemma 3.3, it suffices to show that $\lambda_{\min}(X_n^\top X_n) \to \infty$ as $n \to \infty$. We claim that it is in fact sufficient to show that
$$\operatorname{tr}\bigl((X_n^\top X_n)^{-1}\bigr) \to 0, \quad n \to \infty. \qquad (50)$$
Indeed, if $\lambda_1, \dots, \lambda_{2m+2}$ are the eigenvalues of $X_n^\top X_n$ (which are all strictly positive since $X_n^\top X_n$ is positive definite), then $1/\lambda_1, \dots$
$\dots, 1/\lambda_{2m+2}$ are the eigenvalues of $(X_n^\top X_n)^{-1}$, and we have
$$\lambda_{\min}(X_n^\top X_n) = \min_{j=1,\dots,2m+2} \lambda_j = \frac{1}{\max_j (1/\lambda_j)} \ge \frac{1}{\sum_j 1/\lambda_j} = \frac{1}{\operatorname{tr}\bigl((X_n^\top X_n)^{-1}\bigr)},$$
which tends to $+\infty$ as $n \to \infty$ if (50) holds. Recall now that, by Cramer's rule, $\operatorname{tr}\bigl((X_n^\top X_n)^{-1}\bigr)$ is given by
$$\operatorname{tr}\bigl((X_n^\top X_n)^{-1}\bigr) = \frac{1}{\det(X_n^\top X_n)} \sum_{j=1}^{2m+2} M_n^{j,j}, \qquad (51)$$
where $M_n^{j,j}$ is the $(j,j)$ minor of $X_n^\top X_n$, $j = 1, \dots, 2m+2$. In the sequel we will show that, for each $j = 1, \dots, 2m+2$,
$$M_n^{j,j} = O(n^{2m+3}) \quad \text{whereas} \quad \det(X_n^\top X_n) = \Theta(n^{2m+4}) \quad \text{as } n \to \infty, \qquad (52)$$
from which (50) follows at once.

The symmetric matrix $X_n^\top X_n =: (\xi_{jk})$ is given in block form as
$$X_n^\top X_n = \begin{pmatrix} A & V^\top & W^\top \\ V & D & F^\top \\ W & F & G \end{pmatrix},$$
where
$$A = \begin{pmatrix} n & \sum_{i=1}^{n} x_i \\ \sum_{i=1}^{n} x_i & \sum_{i=1}^{n} x_i^2 \end{pmatrix} = \begin{pmatrix} n & n(n+1)/2 \\ n(n+1)/2 & n(n+1)(2n+1)/6 \end{pmatrix} \in \mathbb{R}^{2 \times 2},$$
$$V = (v_{j,k}) \in \mathbb{R}^{m \times 2} \quad \text{with} \quad v_{j,1} = \sum_{i=1}^{n} \cos(\mu i j), \quad v_{j,2} = \sum_{i=1}^{n} i \cos(\mu i j), \quad j = 1, \dots, m,$$
$$W = (w_{j,k}) \in \mathbb{R}^{m \times 2} \quad \text{with} \quad w_{j,1} = -\sum_{i=1}^{n} \sin(\mu i j), \quad w_{j,2} = -\sum_{i=1}^{n} i \sin(\mu i j), \quad j = 1, \dots, m,$$
$$D = (d_{j,k}) \in \mathbb{R}^{m \times m} \quad \text{with} \quad d_{j,k} = \sum_{i=1}^{n} \cos(\mu i j) \cos(\mu i k), \quad j, k = 1, \dots, m,$$
$$F = (f_{j,k}) \in \mathbb{R}^{m \times m} \quad \text{with} \quad f_{j,k} = -\sum_{i=1}^{n} \sin(\mu i j) \cos(\mu i k), \quad j, k = 1, \dots, m,$$
and
$$G = (g_{j,k}) \in \mathbb{R}^{m \times m} \quad \text{with} \quad g_{j,k} = \sum_{i=1}^{n} \sin(\mu i j) \sin(\mu i k), \quad j, k = 1, \dots, m.$$
Since $j/h \notin \mathbb{Z}$, the sums in the coefficients $v_{j,1}$ and $w_{j,1}$ can be simplified as follows, cf. e.g. [28, p. 206]:
$$v_{j,1} = \frac{1}{2} \Bigl(\frac{\sin(\mu(n + \frac12) j)}{\sin(\mu j/2)} - 1\Bigr) \quad \text{and} \quad w_{j,1} = \frac{1}{2}\, \frac{\cos(\mu(n + \frac12) j) - \cos(\mu j/2)}{\sin(\mu j/2)}, \quad j = 1, \dots, m.$$
The condition $j/h \notin \mathbb{Z}$ also ensures that $\sin(\mu j/2) = \sin(\pi j/h) \ne 0$. Hence
$$|v_{j,1}| \le \frac{1}{|\sin(\mu j/2)|} \quad \text{and} \quad |w_{j,1}| \le \frac{1}{|\sin(\mu j/2)|}, \quad j = 1, \dots, m.$$
This means that the coefficients $v_{j,1}$ and $w_{j,1}$ are bounded from above and below by constants independent of $n$ for each $j = 1, \dots, m$. In fact, they are all bounded by the same constant
$$\kappa := \Bigl(\min_{j=1,\dots,2m} |\sin(\mu j/2)|\Bigr)^{-1}.$$
Using the relations $\cos x \cos y = \frac12(\cos(x+y) + \cos(x-y))$, $\sin x \cos y = \frac12(\sin(x+y) + \sin(x-y))$ and $\sin x \sin y = \frac12(\cos(x-y) - \cos(x+y))$ and the above formulas, one obtains analogously that the coefficients $d_{j,k}$, $f_{j,k}$ and $g_{j,k}$ are bounded from above and below by constants independent of $n$ whenever $j \ne k$, and for $f_{j,k}$ also in the case $j = k$. This is ensured by the fact that $j + k \le 2m$ and so, by the assumptions of the lemma, $(j-k)/h,\ (j+k)/h \notin \mathbb{Z}$.
In particular, f j,k = − n (cid:88) i =1 sin( µ i ( j + k )) − n (cid:88) i =1 sin( µ i ( j − k )) , and so for j = k the second sum on the right vanishes, while the first sum is absolutely boundedby κ (similarly as w j, ). Hence all entries of X (cid:62) n X n except those on the diagonal and in the secondrow and column are bounded absolutely by constant κ independent of n . On the diagonal, we havesimilarly as for v j, and w j, d j,j = 12 n (cid:88) i =1 cos(0) + 12 n (cid:88) i =1 cos( µ i (2 j )) = n (cid:18) sin( µ ( n + )2 j )sin( µ j ) − (cid:19) , j = 1 , . . . , m, and g j,j = 12 n (cid:88) i =1 cos(0) − n (cid:88) i =1 cos( cµ i (2 j )) = n − (cid:18) sin( µ ( n + )2 j )sin( µ j ) − (cid:19) , j = 1 , . . . , m. Hence, as n → ∞ , ξ j,j = Θ( n ) for j (cid:54) = 2 and ξ , = Θ( n ). For the second column of X (cid:62) n X n , we use[28, Lemma 2.3(a), p.220] (which states that for any sequence ( a n ) n of real numbers whose partialsums are bounded from above and below by a constant C , one has | (cid:80) ni =1 ia i | ≤ nC ) to concludethat | v j, | = (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) n (cid:88) i =1 i · cos( µ ij ) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ κn and | w j, | = (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) n (cid:88) i =1 i · sin( µ ij ) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ κn, for j = 1 , . . . , m . Hence, as n → ∞ , ξ j, = O ( n ) for j = 3 , , . . . , m , while ξ , = Θ( n ) and ξ , = Θ( n ).Having computed the order of growth of all coefficients of X (cid:62) n X n , it is now easily seen, thatdet( X (cid:62) n X n ) = Θ( n m +4 ) as n → ∞ . Indeed, we have for the product of the diagonal entries (cid:81) m +1 j =1 ξ j,j = Θ( n m +4 ) as n → ∞ and this product is the only term in the Leibnitz expansion ofdet( X (cid:62) n X n ) with this order of growth. 
All other terms are at most of the order of n m +3 as n → ∞ .Hence the order of growth cannot be reduced by cancellations with other terms.For the ( j, j ) minors M j,jn of X (cid:62) n X n we can argue similarly. If the j -th row and column aredeleted, in the remaining matrix the diagonal entries are still those with the maximal order of22rowth in each row and column. Hence the order of growth of the determinant M j,jn is boundedby the product of the orders of its diagonal entries, that is M j,jn = O ( n m +3 ) as n → ∞ for each j = 1 , . . . , m . (For j = 2, we even have M j,jn = O ( n m +1 ).) This completes the proof of (52) andthus of the weak consistency of the estimator ˆ β ( n ) as stated. (cid:3) Theorem 3.17.
Under the assumptions of Lemma 3.16, for any $k \in \{0, \dots, d\}$, the estimators $\hat s_k^{(n)}$ of $s$ and $|\hat C_k^{(n)}(F)|$ of $|\mathcal{C}_k(F)|$ are weakly consistent as $n \to \infty$.

Proof. By Lemma 3.16, the estimators $\hat\beta_k^{(n)}$, $\hat s_k^{(n)}$ and $\hat f_k^{(n)}(x)$ are weakly consistent as $n \to \infty$ in the regression model (27). More precisely, the estimators $\hat b_{kj}^{(n)}$ and $\hat\varphi_{kj}^{(n)}$ in (28) are given by
\[
\hat b_{kj}^{(n)} = \sqrt{ (\hat\alpha_j^{(n)})^2 + (\hat\gamma_j^{(n)})^2 }, \qquad
\hat\varphi_{kj}^{(n)} = \arctan\left( \frac{\hat\gamma_j^{(n)}}{\hat\alpha_j^{(n)}} \right), \qquad j = 1, \dots, m.
\]
We split the estimation error into two parts as follows: $|\hat C_k^{(n)}(F)| - |\mathcal{C}_k(F)| = I_{1,n} + I_{2,n}$, where by (29)
\[
I_{1,n} = \frac{\exp\{\hat\beta_k^{(n)}\}}{h} \int_0^h \Big( \exp\{\hat f_k^{(n)}(x)\} - \exp\{f_k(x)\} \Big) \, dx, \qquad
I_{2,n} = \frac{\exp\{\hat\beta_k^{(n)}\} - \exp\{\beta_k\}}{h} \int_0^h \exp\{f_k(x)\} \, dx.
\]
To see the convergence $I_{1,n} \xrightarrow{P} 0$ as $n \to \infty$, observe that the sequence $(\hat\beta_k^{(n)})_n$ converges to $\beta_k$ in probability as $n \to \infty$ and is thus bounded in probability. Furthermore, $\hat f_k^{(n)}(x) \xrightarrow{P} f_k(x)$ for any $x \in [0, h]$ as $n \to \infty$, and this convergence is uniform with respect to $x$, since
\[
\big| \hat f_k^{(n)}(x) - f_k(x) \big| \le \sum_{j=1}^m \Big( \big| \hat b_{kj}^{(n)} \cos \hat\varphi_{kj}^{(n)} - b_{kj} \cos \varphi_{kj} \big| + \big| \hat b_{kj}^{(n)} \sin \hat\varphi_{kj}^{(n)} - b_{kj} \sin \varphi_{kj} \big| \Big) =: \psi_n \xrightarrow{P} 0
\]
as $n \to \infty$, for any $x \in [0, h]$. Noting that, for each $x \in [0, h]$ and each $n \in \mathbb{N}$, there exists a number $\xi_n(x)$ between $0$ and $\hat f_k^{(n)}(x) - f_k(x)$ such that, by the mean value theorem,
\[
\big| e^{\hat f_k^{(n)}(x)} - e^{f_k(x)} \big| = e^{f_k(x)} \big| e^{\hat f_k^{(n)}(x) - f_k(x)} - 1 \big| = e^{f_k(x)} \, e^{\xi_n(x)} \, \big| \hat f_k^{(n)}(x) - f_k(x) \big|,
\]
and that $|\xi_n(x)| \le |\hat f_k^{(n)}(x) - f_k(x)| \le \psi_n$ for each $x \in [0, h]$, we conclude that
\[
|I_{1,n}| \le \frac{\exp\{\hat\beta_k^{(n)}\}}{h} \, e^{\psi_n} \psi_n \int_0^h \exp\{f_k(x)\} \, dx \xrightarrow{P} 0 \quad \text{as } n \to \infty.
\]
For the convergence of $I_{2,n}$ simply observe that, by the continuous mapping theorem, $|e^{\hat\beta_k^{(n)}} - e^{\beta_k}| \xrightarrow{P} 0$ as $n \to \infty$. □

Remark 3.18.
We believe that under the assumptions of Theorem 3.17, the estimators $\tilde s^{(n)}$, $|\tilde C_k^{(n)}(F)|$, $k = 0, \dots, d$, in the case of simultaneous regression are also weakly consistent as $n \to \infty$, and that this can be proved with essentially the same arguments as in the case of separate regression in the proof of Theorem 3.17. However, in view of the rather long and technical arguments in the 'easy' case of separate regression, we did not attempt to verify all the details.

Remark 3.19. The fractal curvature estimates $\exp(\hat\beta_k^{(n)})$ only rarely deviate significantly from the estimates obtained by the second method. Theoretically, since the $\hat\beta_k^{(n)}$ are now obtained through averaging over $\log |p_k(e^{-x})|$ plus some errors, an application of Jensen's inequality to the convex function $\exp$ suggests that the estimates of the absolute value of fractal curvature are now systematically too small; practically, however, this discrepancy between the first and the second method is not visible in the examples we consider in Section 4.

Let $m \in \mathbb{N}$ be fixed and $h > 0$ unknown. The weak consistency of the estimator $\hat\beta^{(n)}$ (after estimation and subtraction of the linear trend described in Section 3.1, i.e. setting formally $\beta_k = s = 0$), as well as of $\hat h$ estimated by (30), is proven under the assumption that $\{\delta_{ki}\}$ forms a stationary regular sequence with zero mean. Ergodicity of $\{\delta_{ki}\}$ together with further assumptions, such as e.g. the continuity of its spectral density $f_\delta$, implies the asymptotic normality of $\hat\mu_j$ and $\hat\beta^{(n)}$; see [21, Theorem 2]. A law of the iterated logarithm for $\hat\mu_j$ is given in [35, p. 57].

Now let $m$ be unknown. If $\{\delta_{ki}\}$ is a stationary Gaussian linear process with known positive spectral density $f_\delta$, then an a.s. consistent estimate of $m$ (as $n \to \infty$) is given in [35, Theorem 15, p. 75]. Its idea is to set $\hat m$ to be the smallest possible value of $m$ such that the log-likelihood of $\{\tilde y_{ki}\}$ decreases when gradually reducing $m$.
For $\{\delta_{ki}\}$ being an AR process with Gaussian innovations, see [35, p. 80]. If the parameters $h$ and $m$ are consistently estimated, then the consistency of the estimators of the fractal curvatures can be proven similarly as in Theorem 3.17.
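The growth of $\lambda_{\min}(X_n^\top X_n)$ used in the proof of Lemma 3.16 can also be observed numerically. The following sketch is our own illustration (the period $h = 7.3$ and detail level $m = 2$ are arbitrary choices satisfying the non-integrality conditions of the lemma, and we take $x_i = i$): it assembles a design matrix of the linearized model, with columns for the intercept, the linear trend and the cosine/sine terms with frequencies $\mu j = 2\pi j/h$, and checks that $\operatorname{tr}((X_n^\top X_n)^{-1})$ decreases as $n$ grows, so that $\lambda_{\min}(X_n^\top X_n) \ge 1/\operatorname{tr}((X_n^\top X_n)^{-1})$ diverges.

```python
import math

def design_matrix(n, h, m):
    """Design matrix of the linearized trigonometric regression:
    columns 1, i, cos(mu*i*j) and -sin(mu*i*j), j = 1..m, with mu = 2*pi/h."""
    mu = 2.0 * math.pi / h
    return [
        [1.0, float(i)]
        + [math.cos(mu * i * j) for j in range(1, m + 1)]
        + [-math.sin(mu * i * j) for j in range(1, m + 1)]
        for i in range(1, n + 1)
    ]

def trace_inverse_gram(X):
    """Compute tr((X^T X)^{-1}) by Gauss-Jordan inversion of the Gram matrix."""
    n, p = len(X), len(X[0])
    G = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    A = [G[r][:] + [1.0 if c == r else 0.0 for c in range(p)] for r in range(p)]  # augment with I
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))  # partial pivoting
        A[col], A[piv] = A[piv], A[col]
        d = A[col][col]
        A[col] = [v / d for v in A[col]]
        for r in range(p):
            if r != col:
                f = A[r][col]
                A[r] = [v - f * w for v, w in zip(A[r], A[col])]
    return sum(A[r][p + r] for r in range(p))  # trace of the inverse block

# tr((X_n^T X_n)^{-1}) shrinks with n, hence lambda_min(X_n^T X_n) grows without bound
t_small = trace_inverse_gram(design_matrix(200, 7.3, 2))
t_large = trace_inverse_gram(design_matrix(2000, 7.3, 2))
print(t_small, t_large)
```

The decrease of the trace mirrors the orders of growth derived above: all diagonal entries of the Gram matrix grow at least linearly in $n$, so every eigenvalue does as well.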
4. Numerical simulations and results
Binary images of fractals..
We assume that binary images consist of pixels which belong to the rectangular grid $\mathbb{Z}^2$, endowed with the Euclidean metric inherited from $\mathbb{R}^2$. This means that the distance between neighbouring pixels is 1, which we henceforth adopt as the unit of length. Pixels can assume the two values 0 (white) and 1 (black). A binary image is thus a map from the lattice $\mathbb{Z}^2$ to the set $\{0, 1\}$. We say that a binary image $\tilde F$ is a discretization of a subset $F \subset \mathbb{R}^2$ if, for any pixel $(k, l) \in \mathbb{Z}^2$, $\tilde F(k, l) = 1$ whenever the square $[k, k+1) \times [l, l+1)$ has non-empty intersection with $F$. Binary images of self-similar sets can easily be generated on a computer using iterated function systems; for algorithms see e.g. [6]. For the generation of the sample images in this paper we have used the free software Fractal Explorer [1]. We have generated binary images of three arithmetic and three non-arithmetic fractals on a 3000 by 3000 pixel canvas (see Figure 3).
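Such a binary image can be produced with a few lines of code. The following sketch is our own illustration, not the Fractal Explorer implementation used for the sample images (canvas size, iteration count and seed are arbitrary choices); it draws an approximation of the Sierpiński gasket by the well-known chaos game for its iterated function system:

```python
import random

def chaos_game_gasket(size=256, iterations=200000, seed=1):
    """Discretize the Sierpinski gasket on a size x size canvas:
    repeatedly jump halfway towards a randomly chosen vertex of a
    triangle and blacken the pixel that is hit."""
    random.seed(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    image = [[0] * size for _ in range(size)]
    x, y = 0.25, 0.25  # arbitrary starting point in the unit square
    for _ in range(iterations):
        vx, vy = random.choice(vertices)
        x, y = (x + vx) / 2.0, (y + vy) / 2.0  # contraction with ratio 1/2
        image[min(int(y * size), size - 1)][min(int(x * size), size - 1)] = 1
    return image

image = chaos_game_gasket()
print(sum(map(sum, image)))  # number of black pixels
```

Deterministic iteration of the IFS maps on pixel sets yields the discretization in the sense defined above; the chaos game merely fills in (almost all of) the same pixels with high probability when the number of iterations is large.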
Obtaining the data..
Let $\tilde F$ be a discretized fractal set. For $\varepsilon > 0$, we approximate the dilated set $F_\varepsilon$ by the dilated binary image $\tilde F_\varepsilon$, which we calculate as follows (cf. e.g. [44]): First, compute the distance transform of $\tilde F$,
\[
D_{\tilde F} : \mathbb{Z}^2 \to \mathbb{R}, \qquad p \mapsto d\big(p, \tilde F^{-1}(\{1\})\big),
\]
which records the distance of each pixel on the canvas to the nearest black pixel. Then $\tilde F_\varepsilon$ results from setting all pixels $p$ to black which satisfy $D_{\tilde F}(p) \le \varepsilon$.

We generated discretized dilated images $\tilde F_{\varepsilon_i}$ for a set of dilation radii $\varepsilon_i = e^{-x_i}$. The $x_i$ were evenly spaced with distance 0.02, ranging from around $-4.5$ to $-0.7$, which corresponds to radii $\varepsilon_i$ ranging from 87 down to about 2. For too large $\varepsilon > e^{4.5} \approx 90$ the scaling behaviour of the intrinsic volumes approached that of a full 2-dimensional set, and for too small $\varepsilon < e \approx 2.71$ the discretization errors were too large. We note that especially the choice of the largest dilation radius $\varepsilon$ needs to be adapted to each fractal $F$, since there is no good a priori choice: if $\varepsilon$ is too small, this will result in a shortage of data, whereas a too large $\varepsilon$ will distort the estimates. We note that there is a set $\{\sqrt{1/\pi}, \sqrt{2/\pi}, \sqrt{3/\pi}, \sqrt{4/\pi}, \dots\}$ of radii which is special in the sense that discrete and continuous disks with these radii have the same area. Stoyan [45] recommends this choice of radii for the sausage method, and it might also be considered for the methods discussed in this paper, especially if only a small set of data is to be collected due to computational limitations.

Figure 3: The sample fractals: (a) Sierpiński Gasket, (b) Sierpiński Carpet, (c) modified Sierpiński Carpet, (d) Triangle, (e) Cross set, (f) Supergasket. Sets (a)–(c) are arithmetic and sets (d)–(f) are non-arithmetic.

The next step is to measure the intrinsic volumes $C_k(F_{\varepsilon_j})$ for each $\varepsilon_j$ and each $k = 0, \dots, d$. We employ the algorithms described in [25] and [20], which determine for a fixed set $F_{\varepsilon_j}$ all functionals $C_k(F_{\varepsilon_j})$, $k = 0, \dots, d$, simultaneously. The relevant data set of $y_{ki}$-values is then determined according to equation (18).

The estimates..
We have implemented the simultaneous linear least squares regression estimators(LRE) from eqs. (22) and (23) and the simultaneous non-linear least squares regression estimators(NRE) of the second method as given by (27), which were then included in the software librariesof project
GeoStoch [2] of Ulm University. We applied both LRE and NRE to the data set of each fractal, regardless of whether it was an arithmetic or a non-arithmetic set. The resulting estimates for the fractal dimension and fractal curvatures are collected in Tables 1 and 2.

exact           1.585   1.893   1.893   1.588   1.794   1.893
box-counting    1.54    1.88    1.83    1.57    1.78    1.84

Table 1: Estimates of fractal dimension (columns in the order of Figure 3: Sierpiński Gasket, Sierpiński Carpet, modified Sierpiński Carpet, Triangle, Cross set, Supergasket). The first row contains the known exact dimension of each fractal, rounded to three decimals. Further rows contain the LRE and NRE estimates for $J = \{0, 1, 2\}$ and for the single orders $J = \{0\}$, $J = \{1\}$, $J = \{2\}$; the set $J$ describes the orders $k$ of intrinsic volumes $C_k$ used in the estimate. LRE and NRE refer to the first and second method from Section 3, respectively. For the method NRE, the period $h$ was estimated from the data, and the detail parameter was chosen as $m = 4$.

The results suggest that for dimension estimation, LRE and NRE perform equally well. The dimension estimates based on boundary length data only ($k = 1$) and Euler characteristic data only ($k = 0$) are less reliable than estimates based on the volume data ($k = 2$), which corresponds to the sausage method. The dimension estimates based on all three intrinsic volumes ($k \in \{0, 1, 2\}$), however, seem to be comparable in accuracy to the method "$k = 2$" and the standard box counting method, for which we used the free software FracLac [24].

Estimates of the fractal curvatures typically result in a relative error of around 10% to 20%. An exception is the 0-th curvature of the supergasket, which is rather dramatically overestimated. The problem are the pointed angles in this set, which lead to large discretization errors for the Euler characteristic.

We remind the reader that in both methods (NRE and LRE), fractal curvatures and fractal dimension are estimated simultaneously. In order to test the stability of the curvature estimates with respect to the dimension estimate, we have compared the outputs of NRE and LRE to their outputs conditional on a known fractal dimension $s$ (see Table 2). Noticeable differences were only found for the modified Sierpiński carpet. We interpret this as some evidence for the curvature estimates being reasonably stable with respect to errors in the dimension estimate.

Moreover, we noticed that for the arithmetic fractals the periodicity was by far more evident in the Euler characteristic than in the boundary length or area, which explains why the differences between the two methods are most apparent for the 0-th curvature estimate. This is consistent with the observation that the peaks in the periodograms of the time series of Euler characteristics are more pronounced than the peaks of the other time series, see Figure 2, making the Euler characteristic a useful data set for the estimation of the period of arithmetic fractals.

Finally, we remark that non-arithmetic fractals yield virtually the same output for both NRE and LRE models. Hence NRE should be preferred over LRE whenever there is some doubt about whether a self-similar fractal is arithmetic or not.

In the examples, we have included three different sets of equal dimension, namely the two carpets (b) and (c) and the supergasket (f), cf. Figure 3. The structure of the sets (b) and (c) is rather similar, while the set (f) looks very different. The differences in the geometry are also visible in the fractal curvatures. While the fractal curvatures of (b) and (c) only differ slightly, those of the set (f) take completely different values. One can easily distinguish (f) from the other two using any of the estimated fractal curvatures. The sets (b) and (c) are best distinguished by the estimated fractal Euler number, compare Table 2.
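In the special case $J = \{2\}$, i.e. the classical sausage method, the linear regression estimator reduces to an ordinary least squares fit of $\log C_2(F_\varepsilon)$ against $\log \varepsilon$. The following sketch is our own illustration on idealized, noise-free power-law data (the prefactor 0.9 and the radii grid are arbitrary; the full estimators of Section 3 additionally handle the periodic term and the other orders $k$):

```python
import math

def estimate_dimension(radii, volumes, d=2):
    """Sausage-method regression: log V(eps) = log c + (d - s) log eps,
    so the OLS slope b yields the dimension estimate s_hat = d - b."""
    xs = [math.log(e) for e in radii]
    ys = [math.log(v) for v in volumes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return d - slope

# Ideal scaling data for a set of dimension s = log 3 / log 2 (Sierpinski gasket)
s = math.log(3) / math.log(2)
radii = [math.exp(-0.02 * i) * 87.0 for i in range(200)]  # geometric grid of radii
volumes = [0.9 * eps ** (2.0 - s) for eps in radii]       # V(eps) = c * eps^(2 - s)
print(estimate_dimension(radii, volumes))  # ~1.585
```

On exact power-law data the fit recovers the dimension up to floating-point error; on real image data the estimate degrades in the ways discussed above, in particular through the choice of the radii range.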
5. Summary and outlook
We have introduced two methods for estimating the fractal dimension and the fractal curvatures of a given fractal set based on binary images. We have shown the consistency of our methods under suitable assumptions on the covariance structure of the errors and the choice of the radii. We have implemented and tested these methods on some toy examples of self-similar sets. While for the estimation of the fractal dimension our methods perform as well as the standard methods, such as box counting, we obtain at the same time estimates of the fractal curvatures, which we propose to use as additional geometric characteristics for image classification. The theoretical background provided by singular curvature theory is a strong argument for using these characteristics in favour of or in addition to other texture parameters suggested in the literature.

We point out that our consistency results only show that the suggested estimators indeed estimate the fractal curvatures if the resolution goes to infinity and the sequence of radii tends to zero in a suitable way. We make no claim about how well our estimators perform if the resolution is kept finite, that is, in any scenario found in practice. Also, we did not address at all the question of how well the implemented algorithms perform with respect to computational costs or running time.

                              exact       LRE       NRE       LRE       NRE
Sierpiński Gasket      k=0   -13197    -10868    -11389    -10848    -11386
                       k=1   117230    124471    124557    124235    123251
                       k=2   564100    572880    572845    571798    566781
Sierpiński Carpet      k=0   -58716    -47745    -58126    -45242    -54107
                       k=1   262770    363432    364825    344376    339293
                       k=2  4900200   5210885   5209660   4937666   4847744
mod. Sierpiński Carpet k=0   -50742    -41177    -47178    -35060    -36587
                       k=1   260960    363062    361158    309123    312815
                       k=2  4871275   5095192   5092666   4338213   4418800
Triangle               k=0    -9843     -8828       N/A     -8555     -8554
                       k=1   100416    104144       N/A    100919    100908
                       k=2   487649    495583       N/A    480237    480184
Cross set              k=0        ?    -17555       N/A    -19366    -19377
                       k=1        ?    381454       N/A    420805    420571
                       k=2        ?   3387112       N/A   3736520   3735336
Supergasket            k=0    -9388    -16261       N/A    -15677    -15681
                       k=1   159663    147590       N/A    142288    142156
                       k=2  2497116   2513942       N/A   2423634   2421933

Table 2: Estimates of the $k$-th fractal curvatures for $k = 0, 1, 2$ (row groups correspond to the fractals of Figure 3, in the order (a)–(f)). The first column contains the exact value of the corresponding fractal curvature, rounded to accuracy 1, which was available for all sets except the Cross set. Columns two and three contain the LRE and NRE estimates based on the knowledge of the true regression parameters $s$ (dimension) and $h$ (period). (Since non-arithmetic sets do not have a period, no values appear for those sets in column three.) For NRE, the detail level parameter $m$ was chosen as 4; this seemed reasonable as increasing $m$ further changed the estimates only very slightly. Columns four and five contain simultaneous LRE and NRE estimates of all curvatures, where the dimension $s$ and the period $h$ were also estimated as explained in Section 3.1.

We have implemented our methods in the most obvious way, computing the intrinsic volumes for each dilation radius separately, for which each time a scan of the whole image is necessary. This allowed us to use existing algorithms from the GeoStoch library [2] for this step. Probably, a lot of optimization is possible in the step of determining the intrinsic volumes of the parallel sets. It may be possible to obtain the curvature data of all parallel sets in a single scan of the image.

Notice that so far the theoretical foundations (that is, the existence of fractal curvatures) are laid for fractal sets exhibiting some form of self-similarity, including strictly self-similar sets [48, 50, 40], self-conformal sets [26, 8] and also some random self-similar fractals, as described in [50]. For fractal sets exhibiting some weaker form of self-similarity, similar results are expected to hold, and therefore the described methods may be used whenever some form of self-similarity is present. However, one should be aware that for general (random) fractals $F$ of dimension $s$, the (expected or almost sure) scaling exponents $s_k(F)$ might not necessarily be equal to $s - k$, or, if they are, that the fractal curvatures may not exist, not even the averaged versions. For the Brownian path in $\mathbb{R}^d$, $d \ge 2$, for instance, the fractal dimension is $s = 2$ (almost surely and in the mean) and the scaling exponents are $s_k = s - k$ for the volume ($k = d$) and the surface area ($k = d - 1$) for all dimensions $d > 2$, cf. [36, 37, 22]. For $d = 2$, however, the corresponding fractal curvatures are zero, because the correct scaling is $-\log \varepsilon$ for the area $C_2(F_\varepsilon)$ (almost surely and in the mean) and $\varepsilon \log \varepsilon$ for the perimeter $2 C_1(F_\varepsilon)$ as $\varepsilon \to 0$. Thus it may be worth checking in advance whether the assumption $s_k = s - k$ (implied by (A3) and (A3')) is satisfied for a given set and some $k$. For this purpose, simply a separate regression for the index $k$ (that is, with $J = \{k\}$ in the sense of Remark 3.1) can be carried out, and the estimate of the fractal dimension can be compared to the dimension estimate of the simultaneous regression or to the one of the sausage method ($J = \{d\}$). It is, for instance, not too difficult to check that the parallel sets of the Koch curve have Euler characteristic 1, which means $C_0(F_\varepsilon) = 1$ for each $\varepsilon > 0$. Hence a separate regression for $k = 0$ applied to an image of the Koch curve $F$ should find an estimate for $s_0(F)$ very close to 0. This is indeed what we observed. Also the violation of assumption (A2) can easily be checked from the data, and the relevant indices can be excluded from the estimation.

Acknowledgements.
The authors would like to thank Martina Zähle for stimulating discussions on fractals and geometric measure theory. During the work on this article, SW was supported by a DFG grant, project no. WI 3264/2-2.
References

[1] Fractal Explorer. Software, (last accessed: 28 Aug 2014).
[2] GeoStoch. Java library, University of Ulm. (last accessed: 28 Aug 2014).
[3] C. Allain and M. Cloitre, Characterizing the lacunarity of random and deterministic fractal sets, Phys. Rev. A (3) (1991), no. 6, 3552–3558.
[4] A. R. Backes, A new approach to estimate lacunarity of texture images, Pattern Recognition Letters (2013), no. 13, 1455–1461.
[5] A. R. Backes, P. C. Cortez, and J. J. de Mesquita Sa Junior, Color texture classification based on gravitational collapse, Pattern Recognition (2013), no. 6, 1628–1637.
[6] P. Barnsley, Fractals Everywhere, Morgan Kaufmann, 2000.
[7] P. J. Bickel and K. A. Doksum, Mathematical statistics. Basic ideas and selected topics, 2nd ed., vol. 1, Prentice Hall, New Jersey, 2001.
[8] T. Bohl, Fractal curvatures and Minkowski content of self-conformal sets, Preprint (2013), arXiv:1211.3421.
[9] C. Cattani and G. Pierro, On the fractal geometry of DNA by the binary image analysis, Bull. Math. Biol. (2013), no. 9, 1544–1570.
[10] C. D. Cutler and D. A. Dawson, Estimation of dimension for spatially distributed data and related limit theorems, Journal of Multivariate Analysis (1989), no. 1, 115–148.
[11] H. Drygas, Weak and strong consistency of the least squares estimators in regression models, Z. Wahrscheinlichkeitstheorie verw. Gebiete (1976), no. 2, 119–127.
[12] K. Falconer, Fractal Geometry: Mathematical Foundations and Applications, Wiley, 2003.
[13] H. Federer, Curvature Measures, Transactions of the American Mathematical Society (1959), no. 3, 418–491.
[14] W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 1, 3rd ed., Wiley, 1971.
[15] J. H. G. Fu, Tubular neighborhoods in Euclidean spaces, Duke Math. J. (1985), 1025–1046.
[16] D. Gatzouras, Lacunarity of self-similar and stochastically self-similar sets, Pac. J. Math. (2000), 397–410.
[17] T. Gneiting, H. Ševčíková, and D. B. Percival, Estimators of fractal dimension: assessing the roughness of time series and spatial data, Statist. Sci. (2012), no. 2, 247–277.
[18] J.-F. Gouyet, Physics and fractal structures, Masson, Paris, 1996.
[19] H. Groemer, On the extension of additive functionals on classes of convex sets, Pac. J. Math. (1978), 397–410.
[20] R. Guderlei, S. Klenk, J. Mayer, V. Schmidt, and E. Spodarev, Algorithms for the computation of Minkowski functionals of deterministic and random polyconvex sets, Image and Vision Computing (2007), 464–474.
[21] E. J. Hannan, The estimation of frequency, J. Appl. Probab. (1973), no. 3, 510–519.
[22] O. Honzl and J. Rataj, Almost sure asymptotic behaviour of the r-neighbourhood surface area of Brownian paths, Czechoslovak Math. J. (2012), no. 1, 67–75.
[23] J. E. Hutchinson, Fractals and self similarity, Indiana Univ. Math. J. (1981), 713–747.
[24] A. Karperien, FracLac, http://rsb.info.nih.gov/ij/plugins/fraclac/fraclac.html (last accessed: 28 Aug 2014).
[25] S. Klenk, V. Schmidt, and E. Spodarev, A new algorithmic approach to the computation of Minkowski functionals of polyconvex sets, Comp. Geom. Th. Appl. (2006), no. 3, 127–148.
[26] S. Kombrink, Fractal curvature measures and Minkowski content for limit sets of conformal function systems, PhD thesis, University of Bremen (2011).
[27] M. Llorente and S. Winter, A notion of Euler characteristic for fractals, Math. Nachr. (2007), no. 1-2, 152–170.
[28] J.-U. Löbus, Ökonometrie. Mathematische Theorie und Anwendungen, Vieweg, 2001.
[29] B. B. Mandelbrot, The fractal geometry of nature, W. H. Freeman and Co., San Francisco, Calif., 1982.
[30] B. B. Mandelbrot, Measures of fractal lacunarity: Minkowski content and alternatives, Fractal geometry and stochastics (Finsterbergen, 1994), Progr. Probab., vol. 37, Birkhäuser, Basel, 1995, pp. 15–42.
[31] F. Martínez-López, M. A. Cabrerizo-Vílchez, and R. Hidalgo-Álvarez, An improved method to estimate the fractal dimension of physical fractals based on the Hausdorff definition, Physica A: Statistical Mechanics and its Applications (2001), no. 3-4, 387–399.
[32] J. D. B. Nelson and N. G. Kingsbury, Fractal dimension, wavelet shrinkage and anomaly detection for mine hunting, IET Signal Process. (2012), no. 5, 484–493.
[33] R. E. Plotnick, R. H. Gardner, W. W. Hargrove, K. Prestegaard, and M. Perlmutter, Lacunarity analysis: A general technique for the analysis of spatial patterns, Physical Review E (1996), no. 5, 5461–5468.
[34] D. Pokorny and S. Winter, Scaling exponents of curvature measures, J. Fractal Geom. (2014), no. 2, 177–219.
[35] B. G. Quinn and E. J. Hannan, The estimation and tracking of frequency, Cambridge University Press, Cambridge, 2001.
[36] J. Rataj, V. Schmidt, and E. Spodarev, On the expected surface area of the Wiener sausage, Math. Nachr. (2009), no. 4, 591–603.
[37] J. Rataj and S. Winter, On volume and surface area of parallel sets, Indiana Univ. Math. J. (2010), no. 5, 1661–1686.
[38] J. Rataj and S. Winter, Characterization of Minkowski measurability in terms of surface area, J. Math. Anal. Appl. (2013), no. 1, 120–132.
[39] J. Rataj and M. Zähle, General normal cycles and Lipschitz manifolds of bounded curvature, Ann. Global Anal. Geom. (2005), 135–156.
[40] J. Rataj and M. Zähle, Curvature densities of self-similar sets, Indiana Univ. Math. J. (2012), no. 4, 1425–1449.
[41] S. Rodriguez-Romo and A. Sosa-Herrera, Lacunarity and multifractal analysis of the large DLA mass distribution, Phys. A (2013), no. 16, 3316–3328.
[42] K. Sandau and H. Kurz, Measuring fractal dimension and complexity: an alternative approach with an application, Journal of Microscopy (1997), no. 2, 164–176.
[43] F. Soares, F. Janela, M. Pereira, J. Seabra, and M. M. Freire, 3D lacunarity in multifractal analysis of breast tumor lesions in dynamic contrast-enhanced magnetic resonance imaging, IEEE Trans. Image Process. (2013), no. 11, 4422–4435.
[44] P. Soille, Morphological Image Analysis: Principles and Applications, 2nd ed., Springer, New York, 2003.
[45] D. Stoyan and H. Stoyan, Fractals, Random Shapes, and Point Fields: Methods of Geometrical Statistics, Wiley, 1994.
[46] C. Taylor and S. Taylor, Estimating the dimension of a fractal, J. R. Statist. Soc. B (1991), no. 2, 353–364.
[47] C. R. Tolle, T. R. McJunkin, and D. J. Gorsich, An efficient implementation of the gliding box lacunarity algorithm, Phys. D (2008), no. 3, 306–315.
[48] S. Winter, Curvature measures and fractals, Diss. Math. (2008), 1–66.
[49] S. Winter and M. Zähle, Fractal curvature measures of self-similar sets, Adv. Geom. (2013), no. 2, 229–244.
[50] M. Zähle, Lipschitz-Killing curvatures of self-similar random fractals, Trans. Amer. Math. Soc. 363