Algorithmic Fractal Dimensions in Geometric Measure Theory
Jack H. Lutz and Elvira Mayordomo
Abstract
The development of algorithmic fractal dimensions in this century has had many fruitful interactions with geometric measure theory, especially fractal geometry in Euclidean spaces. We survey these developments, with emphasis on connections with computable functions on the reals, recent uses of algorithmic dimensions in proving new theorems in classical (non-algorithmic) fractal geometry, and directions for future research.
In early 2000, classical Hausdorff dimension [32] was shown to admit a new characterization in terms of betting strategies called martingales [51]. This characterization enabled the development of various effective, i.e., algorithmic, versions of Hausdorff dimension obtained by imposing computability and complexity constraints on these martingales. These algorithmic versions included resource-bounded dimensions, which impose dimension structure on various complexity classes [52]; the (constructive) dimensions of infinite binary sequences, which interact usefully with algorithmic information theory [53]; and the finite-state dimensions of infinite binary sequences, which interact usefully with data compression and Borel normality [19]. Soon thereafter, classical packing dimension [96, 94] was shown to admit a new characterization in terms of martingales that is exactly dual to the martingale characterization of Hausdorff dimension [1]. This led immediately to the development of strong resource-bounded dimensions, strong (constructive) dimension, and
Jack H. Lutz
Department of Computer Science, Iowa State University, Ames, IA 50011 USA, e-mail: [email protected]
Elvira Mayordomo
Departamento de Informática e Ingeniería de Sistemas, Instituto de Investigación en Ingeniería de Aragón, Universidad de Zaragoza, 50018 Zaragoza, SPAIN, e-mail: [email protected]

strong finite-state dimension [1], which are all algorithmic versions of packing dimension. In the years since these developments, hundreds of research papers by many authors have deepened our understanding of these algorithmic dimensions.

Most work to date on effective dimensions has been carried out in the Cantor space, which consists of all infinite binary sequences. This is natural, because effective dimensions speak to many issues that were already being investigated in the Cantor space. However, the classical fractal dimensions from which these effective dimensions arose, Hausdorff dimension and packing dimension, are powerful quantitative tools of geometric measure theory that have been most useful in Euclidean spaces and other metric spaces that have far richer structures than the totally disconnected Cantor space.

This chapter surveys research results to date on algorithmic fractal dimensions in geometric measure theory, especially fractal geometry in Euclidean spaces. This is a small fraction of the existing body of work on algorithmic fractal dimensions, but it is substantial, and it includes some exciting new results.

It is natural to identify a real number with its binary expansion and to use this identification to define algorithmic dimensions in Euclidean spaces in terms of their counterparts in Cantor space. This approach works for some purposes, but it becomes a dead end when algorithmic dimensions are used in geometric measure theory and computable analysis.
The difficulty, first noted by Turing in his famous correction [98], is that many obviously computable functions on the reals (e.g., addition) are not computable if reals are represented by their binary expansions [100]. We thus take a principled approach from the beginning, developing algorithmic dimensions in Euclidean spaces in terms of the quantity K_r(x) introduced below, so that the theory can seamlessly advance to sophisticated applications.

Algorithmic dimension and strong algorithmic dimension are the most extensively investigated effective dimensions. One major reason for this is that these algorithmic dimensions were shown by the second author and others [68, 1, 57] to have characterizations in terms of Kolmogorov complexity, the central notion of algorithmic information theory. In Section 2 below we give a brief introduction to the Kolmogorov complexity K_r(x) of a point x in Euclidean space at a given precision r. In Section 3 we use this Kolmogorov complexity notion to develop the algorithmic dimension dim(x) and the strong algorithmic dimension Dim(x) of each point x in Euclidean space. This development supports the useful intuition that these dimensions are asymptotic measures of the density of algorithmic information in the point x. We discuss how these dimensions relate to the local dimensions that arise in the so-called thermodynamic formalism of fractal geometry; we discuss the history and terminology of algorithmic dimensions; we review the prima facie case that algorithmic dimensions are geometrically meaningful; and we discuss what is known about the circumstances in which algorithmic dimensions agree with their classical counterparts. We then discuss the authors' use of algorithmic dimensions to analyze self-similar fractals [57].
This analysis gives us a new, information-theoretic proof of the classical formula of Moran [73] for the Hausdorff dimensions of self-similar fractals in terms of the contraction ratios of the iterated function systems that generate them. This new proof gives a clear account of "where the dimension comes from" in the construction of such fractals. Section 3 concludes with a survey of the dimensions of points on lines in Euclidean spaces, a topic that has been surprisingly challenging until a very recent breakthrough by N. Lutz and Stull [63].

We survey interactive aspects of algorithmic fractal dimensions in Euclidean spaces in Section 4, starting with the mutual algorithmic dimensions developed by Case and the first author [13]. These dimensions, mdim(x : y) and Mdim(x : y), are analogous to the mutual information measures of Shannon information theory and algorithmic information theory. Intuitively, mdim(x : y) and Mdim(x : y) are asymptotic measures of the density of the algorithmic information shared by points x and y in Euclidean spaces. We survey the fundamental properties of these mutual dimensions, which are analogous to those of their information-theoretic analogs. The most important of these properties are those that govern how mutual dimensions are affected by functions on Euclidean spaces that are computable in the sense of computable analysis [100]. Specifically, we review the information processing inequalities of [13], which state that mdim(f(x) : y) ≤ mdim(x : y) and Mdim(f(x) : y) ≤ Mdim(x : y) hold for all computable Lipschitz functions f, i.e., that applying such a function f to a point x cannot increase the density of algorithmic information that it contains about a point y. We also survey the conditional dimensions dim(x | y) and Dim(x | y) recently developed by the first author and N. Lutz [56].
Roughly speaking, these conditional dimensions quantify the density of algorithmic information in x beyond what is already present in y.

It is rare for the theory of computing to be used to answer open questions in mathematical analysis whose statements do not involve computation or related aspects of logic. In Section 5 we survey exciting new developments that do exactly this. We first describe new characterizations by the first author and N. Lutz [56] of the classical Hausdorff and packing dimensions of arbitrary sets in Euclidean spaces in terms of the relativized dimensions of the individual points that belong to them. These characterizations are called point-to-set principles because they enable one to use a bound on the relativized dimension of a single, judiciously chosen point x in a set E in Euclidean space to prove a bound on the classical Hausdorff or packing dimension of the set E. We illustrate the power of the point-to-set principle by giving an overview of its use in the new, information-theoretic proof [56] of Davies's 1971 theorem stating that the Kakeya conjecture holds in the Euclidean plane [20]. We then discuss two very recent uses of the point-to-set principle to solve open problems in classical fractal geometry. These are N. Lutz and D. Stull's strengthened lower bounds on the Hausdorff dimensions of generalized Furstenberg sets [63] and N. Lutz's extension of the fractal intersection formulas for Hausdorff and packing dimensions in Euclidean spaces from Borel sets to arbitrary sets. These are, to the best of our knowledge, the first uses of algorithmic information theory to solve open problems in classical mathematical analysis.

We briefly survey promising directions for future research in Section 6.
These include extending the algorithmic analysis of self-similar fractals [57] to other classes of fractals, extending algorithmic dimensions to metric spaces other than Euclidean spaces, investigating algorithmic fractal dimensions that are more effective than constructive dimensions (e.g., polynomial-time or finite-state fractal dimensions) in fractal geometry, and extending algorithmic methods to rectifiability and other aspects of geometric measure theory that do not necessarily concern fractal geometry. In each of these we begin by describing an existing result that sheds light on the promise of further inquiry.

Overviews of algorithmic dimensions in Cantor space appear in [23, 69], though these are already out of date. Even prior to the development of algorithmic fractal dimensions, a rich network of relationships among gambling strategies, Hausdorff dimension, and Kolmogorov complexity was uncovered by research of Ryabko [79, 80, 81, 82], Staiger [89, 90, 91], and Cai and Hartmanis [11]. A brief account of this "prehistory" of algorithmic fractal dimensions appears in section 6 of [53].
Algorithmic information theory has most often been used in the set {0,1}* of all finite binary strings. The conditional Kolmogorov complexity (or conditional algorithmic information content) of a string x ∈ {0,1}* given a string y ∈ {0,1}* is

K(x | y) = min{ |π| : π ∈ {0,1}* and U(π, y) = x }.

Here U is a fixed universal Turing machine, and |π| is the length of a binary "program" π. Hence K(x | y) is the minimum number of bits required to specify x to U when y is provided as side information. We refer the reader to any of the standard texts [49, 23, 75, 87] for the history and intuition behind this notion, including its essential invariance with respect to the choice of the universal Turing machine U. The Kolmogorov complexity (or algorithmic information content) of a string x ∈ {0,1}* is then K(x) = K(x | λ), where λ is the empty string.

Routine binary encoding enables one to extend the definitions of K(x) and K(x | y) to situations where x and y range over other countable sets such as N, Q, N × Q, etc.

The key to "lifting" algorithmic information theory notions to Euclidean spaces is to define the Kolmogorov complexity of a set E ⊆ R^n to be

K(E) = min{ K(q) | q ∈ Q^n ∩ E }.    (1)

(Shen and Vereshchagin [88] used a very similar notion for a very different purpose.) Note that K(E) is the amount of information required to specify not the set E itself, but rather some rational point in E. In particular, this implies that E ⊆ F ⇒ K(E) ≥ K(F). Note also that, if E contains no rational point, then K(E) = ∞.

The Kolmogorov complexity of a point x ∈ R^n at precision r ∈ N is

K_r(x) = K(B_{2^{−r}}(x)),    (2)

where B_ε(x) is the open ball of radius ε about x. That is, K_r(x) is the number of bits required to specify some rational point q ∈ Q^n satisfying |q − x| < 2^{−r}, where |q − x| is the Euclidean distance of q − x from the origin.
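The quantity K_r(x) is uncomputable, but its intuition can be illustrated with a hedged sketch. Below, zlib-compressed length serves as a crude, machine-dependent stand-in for the Kolmogorov complexity of a rational approximation of x; this proxy and all names here are our own illustration, not the quantity K_r(x) itself.

```python
# Hedged illustration only: K is uncomputable, so zlib-compressed length
# is used as a crude upper-bound proxy for the complexity of a rational
# approximation of x.  This is an assumption for illustration, not K_r(x).
import os
import zlib
from fractions import Fraction

def truncate(x, r):
    """Integers m_i with |m_i / 2^r - x_i| <= 2^-(r+1) in each coordinate,
    i.e., the coordinates of a rational point very close to x."""
    return [round(xi * 2 ** r) for xi in x]

def proxy_K_r(x, r):
    """Compressed length, in bits, of the truncated coordinates of x."""
    desc = b"".join(m.to_bytes(r // 8 + 2, "big") for m in truncate(x, r))
    return 8 * len(zlib.compress(desc, 9))

r = 4096
# A computable point: its binary expansion is highly regular, so the
# density proxy_K_r(x)/r stays far below n = 2 (for the true K_r(x),
# the density tends to 0 for computable points).
x_simple = (Fraction(1, 3), Fraction(1, 7))
# A point built from random bits: the density is close to n = 2.
x_random = tuple(Fraction(int.from_bytes(os.urandom(r // 8), "big"), 2 ** r)
                 for _ in range(2))
print(proxy_K_r(x_simple, r) / r)   # small
print(proxy_K_r(x_random, r) / r)   # close to 2
```

The contrast between the two printed densities previews the dichotomy developed next: computable points carry vanishing information density, while random points carry the maximum density n.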
We now define the (constructive) dimension of a point x ∈ R^n to be

dim(x) = liminf_{r→∞} K_r(x)/r    (1)

and the strong (constructive) dimension of x to be

Dim(x) = limsup_{r→∞} K_r(x)/r.    (2)

We note that dim(x) and Dim(x) were originally defined in terms of algorithmic betting strategies called gales [53, 1]. The identities (1) and (2) were subsequent theorems proven in [57], refining very similar results in [68, 1]. These identities have been so convenient for work in Euclidean space that it is now natural to regard them as definitions.

Since K_r(x) is the amount of information required to specify a rational point that approximates x to within 2^{−r} (i.e., with r bits of precision), dim(x) and Dim(x) are intuitively the lower and upper asymptotic densities of information in the point x. This intuition is a good starting point, but the fact that dim(x) and Dim(x) are geometrically meaningful will only become evident in light of the mathematical consequences of (1) and (2) surveyed in this chapter.

It is an easy exercise to show that, for all x ∈ R^n,

0 ≤ dim(x) ≤ Dim(x) ≤ n.    (3)

If x is a computable point in R^n, then K_r(x) = o(r), so dim(x) = Dim(x) =
0. On the other hand, if x is a random point in R^n (i.e., a point that is algorithmically random in the sense of Martin-Löf [65]), then K_r(x) = nr − O(1), so dim(x) = Dim(x) = n. Hence the dimensions of points range between 0 and the dimension of the Euclidean space that they inhabit. In fact, for every real number α ∈ [0, n], the dimension level set

DIM^α = { x ∈ R^n | dim(x) = α }    (4)

and the strong dimension level set

DIM^α_str = { x ∈ R^n | Dim(x) = α }    (5)

are uncountable and dense in R^n [53, 1]. The dimensions dim(x) and Dim(x) can coincide, but they do not generally do so. In fact, the set DIM^0 ∩ DIM^n_str is a comeager (i.e., topologically large) subset of R^n [37].

Classical fractal geometry has local, or pointwise, dimensions that are useful, especially in connection with dynamical systems. Specifically, if ν is an outer measure on R^n, i.e., a function ν : P(R^n) → [0, ∞] satisfying ν(∅) =
0, monotonicity (E ⊆ F ⇒ ν(E) ≤ ν(F)), and countable subadditivity (E ⊆ ∪_{k=0}^∞ E_k ⇒ ν(E) ≤ ∑_{k=0}^∞ ν(E_k)), and if ν is locally finite (i.e., every x ∈ R^n has a neighborhood N with ν(N) < ∞), then the lower and upper local dimensions of ν at a point x ∈ R^n are

(dim_loc ν)(x) = liminf_{r→∞} log(1/ν(B_{2^{−r}}(x)))/r    (6)

and

(Dim_loc ν)(x) = limsup_{r→∞} log(1/ν(B_{2^{−r}}(x)))/r,    (7)

respectively, where log = log_2 [25].

Until very recently, no relationship was known between the dimensions dim(x) and Dim(x) and the local dimensions (6) and (7). However, N. Lutz recently observed that a very non-classical choice of the outer measure ν remedies this. For each E ⊆ R^n, let

κ(E) = 2^{−K(E)},    (8)

where K(E) is the quantity defined in Section 2. Then κ is easily seen to be an outer measure on R^n that is finite (i.e., κ(R^n) < ∞), hence certainly locally finite, whence the local dimensions dim_loc κ and Dim_loc κ are well defined. In fact we have the following.

Theorem 3.1. (N. Lutz [60]) For all x ∈ R^n, dim(x) = (dim_loc κ)(x) and Dim(x) = (Dim_loc κ)(x).
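Theorem 3.1 unwinds directly from the definitions (6) and (8): since κ(B_{2^{−r}}(x)) = 2^{−K(B_{2^{−r}}(x))} = 2^{−K_r(x)}, we have

```latex
(\dim_{\mathrm{loc}}\kappa)(x)
  \;=\; \liminf_{r\to\infty}\frac{\log\!\bigl(1/\kappa(B_{2^{-r}}(x))\bigr)}{r}
  \;=\; \liminf_{r\to\infty}\frac{K_r(x)}{r}
  \;=\; \dim(x),
```

and the second identity is the same computation with the limit inferior replaced by the limit superior.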
There is a direct conceptual path from the classical Hausdorff and packing dimensions to the dimensions of points defined in (1) and (2).

The Hausdorff dimension dim_H(E) of a set E ⊆ R^n was introduced by Hausdorff [32] before 1920 and is arguably the most important notion of fractal dimension. Its classical definition, which may be found in standard texts such as [93, 25, 7], involves covering the set E by families of sets with diameters vanishing in the limit. In all cases, 0 ≤ dim_H(E) ≤ n.

At the beginning of the present century, in order to formulate versions of Hausdorff dimension that would work in complexity classes and other algorithmic settings, the first author [52] gave a new characterization of Hausdorff dimension in terms of betting strategies, called gales, on which it is easy to impose computability and complexity conditions. Of particular interest here, he then defined the constructive dimension cdim(E) of a set E ⊆ R^n exactly like the gale characterization of dim_H(E), except that the gales were now required to be lower semicomputable [53]. He then defined the dimension dim(x) of a point x ∈ R^n to be the constructive dimension of its singleton, i.e., dim(x) = cdim({x}). The existence of a universal Turing machine made it immediately evident that constructive dimension has the absolute stability property that

cdim(E) = sup_{x∈E} dim(x)    (9)

for all E ⊆ R^n. Accordingly, constructive dimension has since been investigated pointwise. As noted earlier, the second author [68] then proved the characterization (1) as a theorem.

Two things should be noted about the preceding paragraph. First, these early papers were written entirely in terms of binary sequences, rather than points in Euclidean space. However, the most straightforward binary encoding of points bridges this gap.
(In this survey we freely use those results from Cantor space that do extend easily to Euclidean space.) Second, although the gale characterization is essential for polynomial time and many other stringent levels of effectivization, constructive dimension can be defined equivalently by effectivizing Hausdorff's original formulation [77].

In 2001, the first author conjectured that there should be a correspondence principle (a term that Bohr had used analogously in quantum mechanics) assuring us that for sufficiently simple sets E ⊆ R^n, the constructive and classical dimensions agree, i.e.,

cdim(E) = dim_H(E).    (10)

Hitchcock [34] confirmed this conjecture, proving that (10) holds for any set E ⊆ R^n that is a union of sets that are computably closed, i.e., that are Π^0_1 in Kleene's arithmetical hierarchy. (This means that (10) holds for all Σ^0_2 sets, and also for sets that are nonuniform unions of Π^0_1 sets.) Hitchcock also noted that this result is the best possible in the arithmetical hierarchy, because there are Π^0_2 sets E (e.g., E = {z}, where z is a Martin-Löf random point that is ∆^0_2) for which (10) fails. By (9) and (10) we have

dim_H(E) = sup_{x∈E} dim(x),    (11)

which is a very nonclassical, pointwise characterization of the classical Hausdorff dimensions of sets that are unions of Π^0_1 sets. Since most textbook examples of fractal sets are Π^0_1, (11) is a strong preliminary indication that the dimensions of points are geometrically meaningful.

The packing dimension dim_P(E) of a set E ⊆ R^n was introduced in the early 1980s by Tricot [96] and Sullivan [94]. Its original definition is a bit more involved than that of Hausdorff dimension [25, 7] and implies that dim_H(E) ≤ dim_P(E) ≤ n for all E ⊆ R^n.

After the development of constructive versions of Hausdorff dimension outlined above, Athreya, Hitchcock, and the authors [1] undertook an analogous development for packing dimension.
The gale characterization of dim_P(E) turns out to be exactly dual to that of dim_H(E), with just one limit superior replaced by a limit inferior. The strong constructive dimension cDim(E) of a set E ⊆ R^n is defined by requiring the gales to be lower semicomputable, and the strong dimension of a point x ∈ R^n is Dim(x) = cDim({x}). The absolute stability of strong constructive dimension,

cDim(E) = sup_{x∈E} Dim(x),    (12)

holds for all E ⊆ R^n, as does the Kolmogorov complexity characterization (2). All this was shown in [1], but a correspondence principle for strong constructive dimension was left open. In fact, Conidis [16] subsequently used a clever priority argument to construct a Π^0_1 set E ⊆ R^n for which cDim(E) ≠ dim_P(E). It is still not known whether some simple, logical definability criterion for E implies that cDim(E) = dim_P(E). Staiger's proof that regular ω-languages E satisfy this identity is an encouraging step in this direction [92].

The first application of algorithmic dimensions to fractal geometry was the authors' investigation of the dimensions of points in self-similar fractals [57]. We give a brief exposition of this work here, referring the reader to [57] for the many missing details.

Self-similar fractals are the most widely known and best understood classes of fractals [25]. Cantor's middle-third set, the von Koch curve, the Sierpinski triangle, and the Menger sponge are especially well known examples of self-similar fractals. Briefly, a self-similar fractal in a Euclidean space R^n is generated from an initial nonempty closed set D ⊆ R^n by an iterated function system (IFS), which is a finite list S = (S_0, S_1, . . . , S_{k−1}) of k ≥ 2 contracting similarities S_i : D → D. Each of these similarities S_i is coded by the symbol i in the alphabet Σ = {0, . . . , k − 1}, and each S_i has a contraction ratio c_i ∈ (0, 1).
The IFS S is required to satisfy Moran's open set condition [73], which says that there is a nonempty open set G ⊆ D whose images S_i(G), for i ∈ Σ, are disjoint subsets of G.

For example, the Sierpinski triangle is generated from the set D ⊆ R^2 consisting of the triangle with vertices v_0 = (0, 0), v_1 = (1, 0), and v_2 = (1/2, √3/2), together with this triangle's interior, by the IFS S = (S_0, S_1, S_2), where each S_i : D → D is defined by

S_i(p) = v_i + (p − v_i)/2

for p ∈ D. Note that Σ = {0, 1, 2} and c_0 = c_1 = c_2 = 1/2. Let G be the topological interior of D. Each infinite sequence T ∈ Σ^∞ codes a point S(T) ∈ D that is obtained by applying the similarities coded by the successive symbols in T in a canonical way. (See Figure 1.) The Sierpinski triangle is the attractor (or invariant set) of S and D, which consists of all points S(T) for T ∈ Σ^∞.

Fig. 1
A sequence T ∈ {0, 1, 2}^∞ codes a point S(T) in the Sierpinski triangle (from [57]).

The main objective of [57] was to relate the dimension and strong dimension of each point S(T) ∈ R^n in a self-similar fractal to the corresponding dimensions of the coding sequence T. As it turned out, the algorithmic dimensions in Σ^∞ had to be extended in order to achieve this.

The similarity dimension of an IFS S = (S_0, . . . , S_{k−1}) with contraction ratios c_0, . . . , c_{k−1} ∈ (0, 1) is the unique solution sdim(S) = s of the equation

∑_{i=0}^{k−1} c_i^s = 1.    (13)

The similarity probability measure of S is the probability measure on Σ that is implicit in (13), i.e., the function π_S : Σ → [0, 1] defined by

π_S(i) = c_i^{sdim(S)}    (14)

for each i ∈ Σ. If the contraction ratios of S are all the same, then π_S is the uniform probability measure on Σ, but this is not generally the case. We extend π_S to the domain Σ* by setting

π_S(w) = ∏_{m=0}^{|w|−1} π_S(w[m])    (15)

for each w ∈ Σ*. We define the Shannon S-self-information of each string w ∈ Σ* to be the quantity

ℓ_S(w) = log(1/π_S(w)).    (16)

Finally, we define the dimension of a sequence T ∈ Σ^∞ with respect to the IFS S to be

dim^S(T) = liminf_{j→∞} K(T[0..j]) / ℓ_S(T[0..j]).    (17)

Similarly, the strong dimension of T with respect to S is

Dim^S(T) = limsup_{j→∞} K(T[0..j]) / ℓ_S(T[0..j]).    (18)

The dimension (17) is a special case of an algorithmic Billingsley dimension [6, 99, 12]. These are treated more generally in [57].

A set F ⊆ R^n is a computably self-similar fractal if it is the attractor of some D and S as above such that the contracting similarities S_0, . . . , S_{k−1} are all computable in the sense of computable analysis.

The following theorem gives a complete analysis of the dimensions of points in computably self-similar fractals.

Theorem 3.2. (J.
Lutz and Mayordomo [57]) If F ⊆ R^n is a computably self-similar fractal and S is an IFS testifying to this fact, then, for all points x ∈ F and all coding sequences T ∈ Σ^∞ for x,

dim(x) = sdim(S) dim^S(T)    (19)

and

Dim(x) = sdim(S) Dim^S(T).    (20)

The proof of Theorem 3.2 is nontrivial. It combines some very strong coding properties of iterated function systems with some geometric Kolmogorov complexity arguments.

The following characterization of continuous functions on the reals is one of the oldest and most beautiful theorems of computable analysis.

Theorem 3.3. (Lacombe [45, 46]) A function f : R^n → R^m is continuous if and only if there is an oracle A ⊆ N relative to which f is computable.

Using Lacombe's theorem it is easy to derive the classical analysis of self-similar fractals (which need not be computably self-similar) from Theorem 3.2.
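The quantity sdim(S) appearing in Theorem 3.2 is easy to compute numerically from Moran's equation (13): the left-hand side is strictly decreasing in s (each c_i lies in (0, 1)), so bisection applies. The following sketch is our own illustration; the function name is ours, not from [57].

```python
# Numerically solve Moran's equation  sum_i c_i^s = 1  for the similarity
# dimension s = sdim(S), using bisection (the left-hand side is strictly
# decreasing in s because every contraction ratio c_i lies in (0, 1)).
import math

def similarity_dimension(ratios, tol=1e-12):
    """The unique s >= 0 with sum(c**s for c in ratios) == 1."""
    lo, hi = 0.0, 1.0
    # Grow the bracket until the left-hand side drops below 1.
    while sum(c ** hi for c in ratios) > 1:
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(c ** mid for c in ratios) > 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Sierpinski triangle: three similarities with contraction ratio 1/2,
# so (13) reads 3 * (1/2)^s = 1, giving s = log 3 / log 2.
print(similarity_dimension([0.5, 0.5, 0.5]), math.log(3, 2))
```

For Cantor's middle-third set (two similarities with ratio 1/3), the same routine returns log 2 / log 3, matching the classical value.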
Corollary 3.4. (Moran [73], Falconer [24]) For every self-similar fractal F ⊆ R^n and every IFS S that generates F,

dim_H(F) = dim_P(F) = sdim(S).    (21)

Proof.
Let F and S be as given. By Lacombe's theorem there is an oracle A ⊆ N relative to which S is computable. It follows by a theorem of Kamo and Kawamura [41] that the set F is Π^0_1 relative to A, whence the relativization of (11) tells us that

dim_H^A(F) = sup_{x∈F} dim^A(x).    (22)

We then have

dim_H(F) ≤ dim_P(F)
         = dim_P^A(F)
         ≤ cDim^A(F)
         = sup_{x∈F} Dim^A(x)
         = sup_{T∈Σ^∞} sdim(S) Dim^{S,A}(T)    (by (20))
         = sdim(S)
         = sup_{T∈Σ^∞} sdim(S) dim^{S,A}(T)
         = sup_{x∈F} dim^A(x)    (by (19))
         = dim_H^A(F)    (by (22))
         = dim_H(F),

where the two equalities involving sdim(S) hold because the suprema of dim^{S,A}(T) and Dim^{S,A}(T) over T ∈ Σ^∞ are both 1, so (21) holds.

Intuitively, Theorem 3.2 is stronger than its Corollary 3.4, because Theorem 3.2 gives a complete account of "where the dimension comes from".

The dimension level sets DIM^α and DIM^α_str defined in (4) and (5) have been the focus of several investigations. It was shown in [53, 1] that, for all 0 ≤ α ≤ n,

cdim(DIM^α) = dim_H(DIM^α) = α and cDim(DIM^α_str) = dim_P(DIM^α_str) = α.

Hitchcock, Terwijn, and the first author [33] investigated the complexities of these dimension level sets from the viewpoint of descriptive set theory. Following standard usage [74], we write Σ^0_k and Π^0_k for the classes at the kth level (k ∈ Z^+) of the Borel hierarchy of subsets of R^n. That is, Σ^0_1 is the class of all open subsets of R^n, each Π^0_k is the class of all complements of sets in Σ^0_k, and each Σ^0_{k+1} is the class of all countable unions of sets in Π^0_k. We also write Σ_k and Π_k for the classes of the kth level of Kleene's arithmetical hierarchy of subsets of R^n. That is, Σ_1 is the class of all computably open subsets of R^n, each Π_k is the class of all complements of sets in Σ_k, and each Σ_{k+1} is the class of all effective (computable) unions of sets in Π_k.

Recall that a real number α is ∆^0_2-computable if there is a computable function f : N → Q such that lim_{k→∞} f(k) = α.

The following facts were proven in [33].

1. DIM^0 is Π^0_2 but not Σ^0_2.
2. For all α ∈ (0, n], DIM^α is Π^0_3 (and Π_3 if α is ∆^0_2-computable) but not Σ^0_3.
3.
DIM^n_str is Π^0_3 and Π_3 but not Σ^0_3.
4. For all α ∈ [0, n), DIM^α_str is Π^0_4 (and Π_4 if α is ∆^0_2-computable) but not Σ^0_4.

Weihrauch and the first author [59] investigated the connectivity properties of sets of the form

DIM^I = ∪_{α∈I} DIM^α,

where I ⊆ [0, n] is an interval. After making the easy observation that each of the sets DIM^{[0,1)} and DIM^{(n−1,n]} is totally disconnected, they proved that each of the sets DIM^{[0,1]} and DIM^{[n−1,n]} is path-connected. These results are especially intriguing in the Euclidean plane, where they say that extending either of the sets DIM^{[0,1)} and DIM^{(1,2]} to include the level set DIM^1 transforms it from a totally disconnected set to a path-connected set. This suggests that DIM^1 is somehow a very special subset of R^2.

Turetsky [97] investigated this matter further and proved that DIM^1 is a connected set in R^n. He also proved that DIM^{[0,1)} ∪ DIM^{(1,2]} is not a path-connected subset of R^2.

Since effective dimension is a pointwise property, it is natural to study the dimension spectrum of a set E ⊆ R^n, i.e., the set sp(E) = {dim(x) | x ∈ E}. This study is far from obvious even for sets as apparently simple as straight lines. We review in this section the results obtained so far, mainly for the case of straight lines in R^2.

As noted in section 3.4, the set of points in R^2 of dimension exactly one is connected, while the set of points in R^2 with dimension less than 1 is totally disconnected. Therefore every line in R^2 contains a point of dimension 1. Despite the surprising fact that there are lines in every direction that contain no random points [55], the first author and N. Lutz have shown that almost every point on any line with random slope has dimension 2 [56].
Still, all these results leave open fundamental questions about the structure of the dimension spectra of lines, since they don't even rule out the possibility of a line having the singleton set {1} as its dimension spectrum.

Very recently this latest open question has been answered in the negative. N. Lutz and Stull [63] have proven the following general lower bound on the dimension of points on lines in R^2.

Theorem 3.5. (N. Lutz and Stull [63]) For all a, b, x ∈ R,

dim(x, ax + b) ≥ dim^{a,b}(x) + min{dim(a, b), dim^{a,b}(x)}.

In particular, for almost every x ∈ R,

dim(x, ax + b) = 1 + min{dim(a, b), 1}.

Taking x a Martin-Löf random real relative to (a, b), Theorem 3.5 gives us two points on the line, (0, b) and (x, ax + b), whose dimensions differ by at least one, so the dimension spectrum cannot be a singleton.

We briefly sketch here the main intuitions behind the (deep) proof of Theorem 3.5, which is fully based on algorithmic information theory. Theorem 3.5's aim is to connect dim(x, ax + b) with dim(a, b, x) (i.e., a dimension in R^2 with a dimension in R^3). Notice that in the case dim(a, b) ≤ dim^{a,b}(x), the theorem's conclusion is close to saying dim(x, ax + b) ≥ dim(a, b, x).

The proof is based on the property that, under the following two conditions,

(i) dim(a, b) is small, and
(ii) whenever ux + v = ax + b, either dim(u, v) is large or (u, v) is close to (a, b),

it holds that dim(x, ax + b) is close to dim(a, b, x). There is an extra ingredient to finish this intuition. While condition (ii) can be shown to hold in general, condition (i) can only be proven in a particular relativized world, whereas the conclusion of the theorem still holds for every oracle.

N. Lutz and Stull [62] have also shown that the dimension spectrum of a line is always infinite, proving the following two results.
The first theorem proves that if dim(a, b) = Dim(a, b), then the dimension spectrum of the corresponding line contains an interval of length one.

Theorem 3.6. (N. Lutz and Stull [62]) Let a, b ∈ R satisfy dim(a, b) = Dim(a, b). Then for every s ∈ [0, 1] there is a point x ∈ R such that

dim(x, ax + b) = s + min{dim(a, b), 1}.

The second result proves that all spectra of lines are infinite.
Theorem 3.7. (N. Lutz and Stull [62]) Let L_{a,b} be any line in R^2. Then the dimension spectrum sp(L_{a,b}) is infinite.

Just as the dimension of a point x in Euclidean space is the asymptotic density of the algorithmic information in x, the mutual dimension between two points x and y in Euclidean spaces is the asymptotic density of the algorithmic information shared by x and y. In this section, we survey this notion and the data processing inequalities, which estimate the effect of computable functions on mutual dimension. We also survey the related notion of conditional dimension.

The mutual (algorithmic) information between two rational points p ∈ Q^m and q ∈ Q^n is

I(p : q) = K(p) − K(p | q).

This notion, essentially due to Kolmogorov [44], is an analog of mutual entropy in Shannon information theory [86, 18, 49]. Intuitively, K(p | q) is the amount of information in p not contained in q, so I(p : q) is the amount of information in p that is also contained in q. It is well known [49] that, for all p ∈ Q^m and q ∈ Q^n,

I(p : q) ≈ K(p) + K(q) − K(p, q)    (1)

in the sense that the magnitude of the difference between its two sides is o(min{K(p), K(q)}). This fact is called symmetry of information, because it immediately implies that I(p : q) ≈ I(q : p).

The ideas in the rest of this section were introduced by Case and the first author [13]. In the spirit of (1) they defined the mutual information between sets E ⊆ R^m and F ⊆ R^n to be

I(E : F) = min{ I(p : q) | p ∈ Q^m ∩ E and q ∈ Q^n ∩ F }.

This is the amount of information that rational points p and q must share in order to be in E and F, respectively. Note that, for all E_1, E_2 ⊆ R^m and F_1, F_2 ⊆ R^n,

[(E_1 ⊆ E_2) and (F_1 ⊆ F_2)] ⇒ I(E_1 : F_1) ≥ I(E_2 : F_2).

The mutual information between two points x ∈ R^m and y ∈ R^n at precision r ∈ N is

I_r(x : y) = I(B_{2^{−r}}(x) : B_{2^{−r}}(y)).
This is the amount of information that rational approximations of x and y must share, merely due to their proximity (distance less than 2^{-r}) to x and y.

In analogy with the definitions of dim(x) and Dim(x), the lower and upper mutual dimensions between points x ∈ R^m and y ∈ R^n are

mdim(x:y) = liminf_{r→∞} I_r(x:y)/r     (2)

and

Mdim(x:y) = limsup_{r→∞} I_r(x:y)/r,     (3)

respectively.

The following theorem shows that the mutual dimensions mdim and Mdim have many of the properties that one should expect them to have. The proof is involved and includes a modest generalization of Levin's coding theorem [47, 48].

Theorem 4.1. (Case and J. Lutz [13]) For all x ∈ R^m and y ∈ R^n, the following hold.
1. mdim(x:y) ≤ min{dim(x), dim(y)}.
2. Mdim(x:y) ≤ min{Dim(x), Dim(y)}.
3. mdim(x:x) = dim(x).
4. Mdim(x:x) = Dim(x).
5. mdim(x:y) = mdim(y:x).
6. Mdim(x:y) = Mdim(y:x).
7. dim(x) + dim(y) − Dim(x, y) ≤ mdim(x:y) ≤ Dim(x) + Dim(y) − Dim(x, y).
8. dim(x) + dim(y) − dim(x, y) ≤ Mdim(x:y) ≤ Dim(x) + Dim(y) − dim(x, y).
9. If x and y are independently random, then Mdim(x:y) = 0.

The expressions dim(x, y) and Dim(x, y) in 7 and 8 above refer to the dimensions of the point (x, y) ∈ R^{m+n}. In 9 above, x and y are independently random if (x, y) is a Martin-Löf random point in R^{m+n}. More properties of mutual dimensions may be found in [13, 14].

The data processing inequality of Shannon information theory [18] says that, for any two probability spaces X and Y, any set Z, and any function f : X → Z,

I(f(X); Y) ≤ I(X; Y),     (4)

i.e., the induced probability space f(X), obtained by "processing the information in X through f", has no greater mutual entropy with Y than X has with Y. More succinctly, f(X) tells us no more about Y than X tells us about Y.
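Although K is uncomputable, the symmetry-of-information identity (1) can at least be illustrated numerically by substituting a real compressor for K. The following sketch is our own illustration, not from [13]; zlib's compressed length is only a crude, computable upper bound on Kolmogorov complexity, so the resulting numbers are heuristic.

```python
import random
import zlib

def C(s: bytes) -> int:
    """Compressed length in bits: a computable upper-bound proxy for K(s)."""
    return 8 * len(zlib.compress(s, 9))

def mutual_info_proxy(p: bytes, q: bytes) -> int:
    """Compression-based stand-in for I(p:q) ~ K(p) + K(q) - K(p, q)."""
    return C(p) + C(q) - C(p + q)

x = b"0110" * 500                       # a highly regular string
random.seed(0)                          # deterministic pseudo-random "noise"
r = bytes(random.getrandbits(8) for _ in range(2000))

# x shares essentially all of its information with itself, almost none with r.
print(mutual_info_proxy(x, x), mutual_info_proxy(x, r))
```

The first printed value is large (roughly C(x)) while the second is near zero, mirroring the fact that I(p:q) measures shared information; no compressor computes K, so this is a heuristic only.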
The data processing inequality of algorithmic information theory [49] says that, for any computable partial function f : {0,1}* → {0,1}*, there is a constant c_f ∈ N (essentially the number of bits in a program that computes f) such that, for all strings x ∈ dom f and y ∈ {0,1}*,

I(f(x):y) ≤ I(x:y) + c_f.     (5)

That is, modulo the constant c_f, f(x) contains no more information about y than x contains about y.

The data processing inequality for the mutual dimension mdim should say that every nice function f : R^m → R^n has the property that, for all x ∈ R^m and y ∈ R^k,

mdim(f(x):y) ≤ mdim(x:y).     (6)

But what should "nice" mean? A nice function certainly should be computable in the sense of computable analysis [10, 43, 100]. But this is not enough. For example, there is a function f : R → R^2 that is computable and space-filling in the sense that [0,1]^2 ⊆ range f [83, 17]. For such a function, choose x ∈ R such that dim(f(x)) = 2, and let y = f(x). Then

mdim(f(x):y) = mdim(y:y) = dim(y) = 2 > 1 ≥ dim(x) ≥ mdim(x:y),

so (6) fails.

Intuitively, the above failure of (6) occurs because the function f is extremely sensitive to its input, a property that "nice" functions do not have. A function f : R^m → R^n is Lipschitz if there is a real number c > 0 such that, for all x, x′ ∈ R^m,

|f(x) − f(x′)| ≤ c|x − x′|.

The following data processing inequalities show that computable Lipschitz functions are "nice".
Theorem 4.2. (Case and J. Lutz [13]) If f : R^m → R^n is computable and Lipschitz, then, for all x ∈ R^m and y ∈ R^k,

mdim(f(x):y) ≤ mdim(x:y) and Mdim(f(x):y) ≤ Mdim(x:y).

Several more theorems of this type, along with applications, appear in [13].
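The intuition behind Theorem 4.2 can be sketched in a short informal calculation. This is our paraphrase, not the actual proof in [13]; for simplicity it assumes that f is computable and Lipschitz with constant c ≤ 2^m for some fixed m ∈ N.

```latex
% Why a computable Lipschitz f cannot increase mutual dimension (informal).
\begin{align*}
  f\bigl(B_{2^{-r}}(x)\bigr) &\subseteq B_{2^{m-r}}\bigl(f(x)\bigr)
    &&\text{(Lipschitz condition, } c \le 2^{m}\text{)}\\
  I_{r-m}\bigl(f(x):y\bigr) &\le I_{r}(x:y) + O(1)
    &&\text{(computably map approximations of $x$ to approximations of $f(x)$)}\\
  \mathrm{mdim}\bigl(f(x):y\bigr)
    = \liminf_{r\to\infty}\frac{I_{r-m}(f(x):y)}{r-m}
    &\le \liminf_{r\to\infty}\frac{I_{r}(x:y)+O(1)}{r-m}
    = \mathrm{mdim}(x:y).
\end{align*}
```

The same shift-by-m argument with limsup in place of liminf gives the Mdim inequality.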
A comprehensive theory of the fractal dimensions of points in Euclidean spaces requires not only the dimensions dim(x) and Dim(x) and the mutual dimensions mdim(x:y) and Mdim(x:y), but also the conditional dimensions dim(x|y) and Dim(x|y) formulated by the first author and N. Lutz [56]. We briefly describe these formulations here.

The conditional Kolmogorov complexity K(p|q), defined for rational points p ∈ Q^m and q ∈ Q^n, is lifted to the conditional dimensions in the following four steps.
1. For x ∈ R^m, q ∈ Q^n, and r ∈ N, the conditional Kolmogorov complexity of x at precision r given q is

K̂_r(x|q) = min{K(p|q) | p ∈ Q^m ∩ B_{2^{-r}}(x)}.
2. For x ∈ R^m, y ∈ R^n, and r, s ∈ N, the conditional Kolmogorov complexity of x at precision r given y at precision s is

K_{r,s}(x|y) = max{K̂_r(x|q) | q ∈ Q^n ∩ B_{2^{-s}}(y)}.
3. For x ∈ R^m, y ∈ R^n, and r ∈ N, the conditional Kolmogorov complexity of x given y at precision r is K_r(x|y) = K_{r,r}(x|y).
4. For x ∈ R^m and y ∈ R^n, the lower and upper conditional dimensions of x given y are

dim(x|y) = liminf_{r→∞} K_r(x|y)/r and Dim(x|y) = limsup_{r→∞} K_r(x|y)/r,

respectively.

Steps 1, 2, and 4 of the above lifting are very much in the spirit of what has been done in sections 2, 3.1, and 4.1 above. Step 3 appears to be problematic, because using the same precision bound r for both x and y makes the definition seem arbitrary and "brittle". However, the following result shows that this is not the case.

Theorem 4.3. ([56]) Let s : N → N. If |s(r) − r| = o(r), then, for all x ∈ R^m and y ∈ R^n,

dim(x|y) = liminf_{r→∞} K_{r,s(r)}(x|y)/r and Dim(x|y) = limsup_{r→∞} K_{r,s(r)}(x|y)/r.

The following result is useful for many purposes.
Theorem 4.4. (chain rule for K_r) For all x ∈ R^m and y ∈ R^n,

K_r(x, y) = K_r(x|y) + K_r(y) + o(r).     (7)

An oracle for a point y ∈ R^n is a function g : N → Q^n such that, for all s ∈ N, |g(s) − y| ≤ 2^{-s}. The Kolmogorov complexity of a rational point p ∈ Q^m relative to a point y ∈ R^n is

K^y(p) = max{K^g(p) | g is an oracle for y},

where K^g(p) is the Kolmogorov complexity of p when the universal machine has access to the oracle g. The purpose of the maximum here is to prevent K^y(p) from using oracles g that code more than y into their behaviors. For x ∈ R^m and y ∈ R^n, the dimension dim^y(x) relative to y is defined from K^y(p) exactly as dim(x) was defined from K(p) in sections 2 and 3.1 above. The relativized strong dimension Dim^y(x) is defined analogously.

The following result captures the intuition that conditioning on a point y is a restricted form of oracle access to y.

Lemma 4.5. ([56]) For all x ∈ R^m and y ∈ R^n,

dim^y(x) ≤ dim(x|y) and Dim^y(x) ≤ Dim(x|y).

The remaining results in this section confirm that conditional dimensions have the correct information-theoretic relationships to dimensions and mutual dimensions.
Theorem 4.6. ([56]) For all x ∈ R^m and y ∈ R^n,

mdim(x:y) ≥ dim(x) − Dim(x|y) and Mdim(x:y) ≤ Dim(x) − dim(x|y).

Theorem 4.7. (chain rule for dimension [56]) For all x ∈ R^m and y ∈ R^n,

dim(x) + dim(y|x) ≤ dim(x, y) ≤ dim(x) + Dim(y|x) ≤ Dim(x, y) ≤ Dim(x) + Dim(y|x).

Many of the most challenging problems in geometric measure theory are problems of establishing lower bounds on the classical fractal dimensions dim_H(E) and dim_P(E) for sets E ⊆ R^n. Although such problems seem to involve global properties of the sets E and make no mention of algorithms, the dimensions of points have recently been used to prove new lower bound results for classical fractal dimensions. The key to these developments is the following pair of theorems of the first author and N. Lutz.

Theorem 5.1. (point-to-set principle for Hausdorff dimension [56]) For every E ⊆ R^n,

dim_H(E) = min_{A⊆N} sup_{x∈E} dim^A(x).

Theorem 5.2. (point-to-set principle for packing dimension [56]) For every E ⊆ R^n,

dim_P(E) = min_{A⊆N} sup_{x∈E} Dim^A(x).

The relativized dimensions dim^A(x) and Dim^A(x) here are defined by substituting K^A_r(x) for K_r(x) in the definitions of dim(x) and Dim(x).

It is to be emphasized that these two theorems completely characterize dim_H(E) and dim_P(E) for all sets E ⊆ R^n. These characterizations are called point-to-set principles because they enable one to use a lower bound on the relativized dimension of a single, judiciously chosen point x ∈ E to establish a lower bound on the classical dimension of the set E itself. More precisely, for example, Theorem 5.1 says that, in order to prove a lower bound dim_H(E) ≥ α, it suffices to show that, for every oracle A ⊆ N and every ε >
0, there is a point x ∈ E such that dim^A(x) ≥ α − ε. In some cases it can in fact be shown that, for every oracle A ⊆ N, there is a point x ∈ E such that dim^A(x) ≥ α. While the arbitrary oracle A is essential for the correctness of such proofs, the discussion below shows that its presence has not been burdensome in applications to date.

The first application of the point-to-set principle was not a new theorem, but rather a new, information-theoretic proof of an old theorem. We describe this proof here because it illustrates the intuitive power of the point-to-set principle.

A Kakeya set in R^n is a set K ⊆ R^n that contains a unit segment in every direction. Sometime before 1920, Besicovitch [4, 5] proved the then-surprising existence of Kakeya sets of Lebesgue measure 0 in R^n for all n ≥ 2, and this raised the question whether a Kakeya set in R^2 can have dimension less than 2 [20]. The famous Kakeya conjecture (in its most commonly stated form) asserts a negative answer to this and the analogous questions in higher dimensions. That is, the Kakeya conjecture says that every Kakeya set in a Euclidean space R^n has Hausdorff dimension n. This conjecture holds trivially for n = 1, and Davies [20] proved it for n =
2. The Kakeya conjecture remains an important open problem for n ≥ 3. The first author and N. Lutz [56] used the point-to-set principle to give a new, information-theoretic proof of Davies's theorem. This proof uses the following lower bound on the dimensions of points in a line y = mx + b.

Lemma 5.3. (J. Lutz and N. Lutz [56]) Let m ∈ [0, 1] and b ∈ R. For almost every x ∈ [0, 1],

dim(x, mx + b) ≥ liminf_{r→∞} [K_r(m, b, x) − K_r(b|m)]/r.     (3)

We do not prove this lemma here, but note that the proof relativizes, so the lemma holds relative to every oracle A ⊆ N.

To prove Davies's theorem, let K ⊆ R^2 be a Kakeya set. By the point-to-set principle, fix A ⊆ N such that

dim_H(K) = sup_{(x,y)∈K} dim^A(x, y).     (4)

Fix m ∈ [0, 1] such that

dim^A(m) = 1.     (5)

(This holds for any m that is random relative to A.) Since K is Kakeya, there is a unit segment L ⊆ K of slope m. Let (x_0, y_0) be the left endpoint of L, let q ∈ Q ∩ [x_0, x_0 + 1/2], and let L′ be the unit segment of slope m whose left endpoint is (x_0 − q, y_0). Then L′ crosses the y-axis at the point (0, b), where b = y_0 + m(q − x_0). By Lemma 5.3 (relativized to A), fix x ∈ [0, 1/2] such that

dim^{A,m,b}(x) = 1     (6)

and

dim^A(x, mx + b) ≥ liminf_{r→∞} [K^A_r(m, b, x) − K^A_r(b|m)]/r.     (7)

(Such x exists, because almost every x ∈ [0, 1/2] satisfies (6) and (7).)

In the language of section 5.1, our "judiciously chosen point" is (x + q, mx + b) ∈ L ⊆ K, and the point-to-set principle tells us that it suffices to prove that

dim^A(x + q, mx + b) = 2.     (8)

But this is now easy. Since q is rational, (7) and two applications of the chain rule for K_r (Theorem 4.4) tell us that

dim^A(x + q, mx + b) = dim^A(x, mx + b)
  ≥ liminf_{r→∞} [K^A_r(m, b, x) − K^A_r(b, m) + K^A_r(m)]/r
  = liminf_{r→∞} [K^A_r(x|b, m) + K^A_r(m)]/r
  ≥ liminf_{r→∞} K^{A,m,b}_r(x)/r + liminf_{r→∞} K^A_r(m)/r
  = dim^{A,m,b}(x) + dim^A(m),

whence (5) and (6) tell us that (8) holds.

This information-theoretic proof of Davies's theorem can be summarized in very intuitive terms: because K is Kakeya, it contains a unit segment L whose slope m has dimension 1 relative to A.
A rational shift of L to a unit segment L′ crosses the y-axis at some point b. Lemma 5.3 then gives us a point (x, mx + b) on L′ that has dimension 2 relative to A. The point on L from which (x, mx + b) was shifted lies in K and also has dimension 2 relative to A, so K has Hausdorff dimension 2.

The following two sections discuss recent uses of this method to prove new theorems in classical fractal geometry.
We now consider two fundamental, nontrivial, textbook theorems of fractal geometry. The first, over thirty years old and called the intersection formula, concerns the intersection of one fractal with a random translation of another fractal.
Theorem 5.4. (Kahane [40], Mattila [66, 67]) For all Borel sets E, F ⊆ R^n and almost every z ∈ R^n,

dim_H(E ∩ (F + z)) ≤ max{0, dim_H(E × F) − n}.

The second theorem, over sixty years old and called the product formula, concerns the product of two fractals.
Theorem 5.5. (Marstrand [64]) For all E ⊆ R^n and F ⊆ R^n,

dim_H(E × F) ≥ dim_H(E) + dim_H(F).

In a recent breakthrough, algorithmic dimension was used to prove the following extension of the intersection formula from Borel sets to all sets. We include the simple (given the machinery that we have developed) and instructive proof here.
Theorem 5.6. (N. Lutz [61]) For all sets E, F ⊆ R^n and almost every z ∈ R^n,

dim_H(E ∩ (F + z)) ≤ max{0, dim_H(E × F) − n}.     (9)

Proof.
Let E, F ⊆ R^n and z ∈ R^n. The theorem holds trivially if F + z is disjoint from E, so assume that it is not. By the point-to-set principle, fix an oracle A ⊆ N such that

dim_H(E × F) = sup_{(x,y)∈E×F} dim^A(x, y).     (10)

Let ε >
0. Since E ∩ (F + z) ≠ ∅, the point-to-set principle tells us that there is a point x ∈ E ∩ (F + z) satisfying

dim^{A,z}(x) > dim_H(E ∩ (F + z)) − ε.     (11)

Now (x, x − z) ∈ E × F, so (10), Theorem 3.5, Theorem 4.7, Lemma 4.5, and (11) tell us that

dim_H(E × F) ≥ dim^A(x, x − z)
  = dim^A(x, z)
  ≥ dim^A(z) + dim^A(x|z)
  ≥ dim^A(z) + dim^{A,z}(x)
  > dim^A(z) + dim_H(E ∩ (F + z)) − ε.

Since ε is arbitrary here, it follows that

dim_H(E ∩ (F + z)) ≤ dim_H(E × F) − dim^A(z).

Since almost every z ∈ R^n is Martin-Löf random relative to A and hence satisfies dim^A(z) = n, this affirms the theorem.

The paper [61] shows that the same method gives a new proof of the analog of Theorem 5.6 for packing dimension. This result was already known to hold for all sets E and F [26], but the new proof makes clear what a strong duality between Hausdorff and packing dimensions is at play in the intersection formulas.

The paper [61] also gives a new, algorithmic proof of the following known extension of Theorem 5.5.

Theorem 5.7. (Marstrand [64], Tricot [96]) For all E ⊆ R^m and F ⊆ R^n,

dim_H(E) + dim_H(F) ≤ dim_H(E × F) ≤ dim_H(E) + dim_P(F) ≤ dim_P(E × F) ≤ dim_P(E) + dim_P(F).

This new proof is much simpler than previously known proofs of Theorem 5.7, roughly as simple as previously known proofs of the restriction of Theorem 5.7 to Borel sets. The new proof is also quite natural, using the point-to-set principle to derive Theorem 5.7 from the formally similar Theorem 4.7.
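As a concrete instance of Theorem 5.7 (our illustration, not from [61]): for the middle-thirds Cantor set C, it is classical that dim_H(C) = dim_P(C) = log 2/log 3, so the chain of inequalities collapses and pins down both dimensions of C × C.

```python
import math

# For the middle-thirds Cantor set C, dim_H(C) = dim_P(C) = log 2 / log 3.
d = math.log(2) / math.log(3)

lower = d + d     # dim_H(C) + dim_H(C) <= dim_H(C x C)   (Theorem 5.7, left end)
upper = d + d     # dim_P(C x C) <= dim_P(C) + dim_P(C)   (Theorem 5.7, right end)

# The sandwich forces dim_H(C x C) = dim_P(C x C) = 2 log 2 / log 3.
assert lower == upper
print(2 * d)
```

This kind of collapse occurs whenever the Hausdorff and packing dimensions of the factors agree, which is exactly when the middle terms of the chain coincide.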
For α ∈ (0, 1], a plane set E ⊆ R^2 is said to be of Furstenberg type with parameter α or, more simply, α-Furstenberg, if, for every direction e ∈ S^1 (where S^1 is the unit circle in R^2), there is a line L_e in direction e such that dim_H(L_e ∩ E) ≥ α.

According to Wolff [101], the following well-known bound is probably due to Furstenberg and Katznelson.

Theorem 5.8.
For every α ∈ (0, 1], every α-Furstenberg set E ⊆ R^2 satisfies

dim_H(E) ≥ α + max{1/2, α}.

Note that every Kakeya set in the plane is 1-Furstenberg (since it contains a line segment, which has Hausdorff dimension 1, in every direction e ∈ S^1), so Davies's theorem follows from the case α = 1.

Molter and Rela [72] generalized α-Furstenberg sets in a natural way. For α, β ∈ (0, 1], a set E ⊆ R^2 is (α, β)-generalized Furstenberg if there is a set J ⊆ S^1 such that dim_H(J) ≥ β and, for every e ∈ J, there is a line L_e in direction e such that dim_H(L_e ∩ E) ≥ α. They then proved the following lower bound.

Theorem 5.9. (Molter and Rela [72]) For all α, β ∈ (0, 1], every (α, β)-generalized Furstenberg set E ⊆ R^2 satisfies

dim_H(E) ≥ α + max{β/2, α + β − 1}.

Note that every α-Furstenberg set is (α, 1)-generalized Furstenberg, so Theorem 5.8 follows from the case β = 1. N. Lutz and Stull [63] used algorithmic methods to prove the following improvement of this lower bound for α, β ∈ (0, 1) with β < 2α.

Theorem 5.10. (N. Lutz and Stull [63]) For all α, β ∈ (0, 1], every (α, β)-generalized Furstenberg set E ⊆ R^2 satisfies

dim_H(E) ≥ α + min{β, α}.

The proof of Theorem 5.10 uses the point-to-set principle and Theorem 3.5.
In previous sections we have analyzed the dimensions of points in self-similar fractals, but interesting natural examples require more elaborate concepts that combine self-similarity with random selection. In [31], Gu, Moser, and the authors began the more challenging task of analyzing the dimensions of points in random fractals. They focused on fractals that are randomly selected subfractals of a given self-similar fractal.

Let F ⊆ R^n be a computably self-similar fractal as defined in section 3.3, with S = (S_0, ..., S_{k−1}) the corresponding IFS and Σ = {0, ..., k − 1}. Recall that each point x ∈ F has a coding sequence T ∈ Σ^∞, meaning that x is obtained by applying the similarities coded by the successive symbols in T. We are interested in certain randomly selected subfractals of the fractal F.

The specification of a point in such a subfractal can be formulated as the outcome of an infinite two-player game between a selector that selects the subfractal and a coder that selects a point within the subfractal. Specifically, the selector selects r out of the k similarities, and this choice depends on the coder's earlier choices; that is, a selector is a function σ : Γ* → [Σ]^r, where [Σ]^r is the set of all r-element subsets of Σ, the alphabet is Γ = {0, ..., r − 1}, and each element of Γ* represents the coder's earlier history. A coder is a sequence U ∈ Γ^∞; that is, the coder selects a point in the subfractal by repeatedly choosing a similarity out of the r previously picked by the selector. Once a selector σ and a coder U have been chosen, the outcome of the selector-coder game is the point determined by the sequence σ ∗ U ∈ Σ^∞, which can be precisely defined by

(σ ∗ U)[t] = "the U[t]-th element of σ(U[0..t − 1])"

for all t ∈ N.

Each selector σ specifies (selects) the subfractal F_σ of F consisting of all points whose coding sequence T is an outcome of playing σ against some coder,

F_σ = {S(σ ∗ U) | U ∈ Γ^∞}.

The focus of [31] is on randomly selected subfractals of F, by which we mean subfractals F_σ of F for which the selector σ is random with respect to some probability measure. That is, we are interested in the case where the coder is playing a "game against nature". (In order to make precise the idea of an algorithmically random selector, each selector σ : Γ* → [Σ]^r is identified with its characteristic sequence χ_σ ∈ ([Σ]^r)^∞.)

Gu et al. determine the dimension spectra of a wide class of such randomly selected subfractals, showing that each such fractal has a dimension spectrum that is a closed interval whose endpoints can be computed or approximated from the parameters of the fractal. In general, the maximum of the spectrum is determined by the degree to which the coder can reinforce the randomness in the selector, while the minimum is determined by the degree to which the coder can cancel randomness in the selector. This randomness cancellation phenomenon has also arisen in other contexts, notably dimension spectra of random closed sets [2, 21] and of random translations of the Cantor set [22]. The main result of [31] concerns subfractals that are similarity random, that is, F_σ defined by a selector σ that is π̂_S-random. Here π̂_S is the natural extension of π_S, the similarity probability measure on Σ defined in Section 3.3.

Theorem 6.1.
[31] For every similarity random subfractal F_σ of F, the dimension spectrum sp(F_σ) is an interval satisfying

[s* (log(k − 1) − log(r − 1 + A(k − r)))/log(1/a), s*] ⊆ sp(F_σ) ⊆ [s* (log k − log r)/log(1/A), s*],

where s* = sdim(S), a = min{π_S(i) | i ∈ Σ}, and A = max{π_S(i) | i ∈ Σ}.

In particular, if all the contraction ratios of F have the same value c, then every similarity-random (i.e., uniformly random) subfractal F_σ of F has dimension spectrum

sp(F_σ) = [s* (1 − (log r)/(log k)), s*],

where s* = sdim(S) = (log k)/(log(1/c)).

Many challenging open questions remain concerning the analysis of the dimensions of points in more general versions of random fractals, both by extending the results of [31] to selectors that are random with respect to other probability measures and by considering generalizations such as self-affine fractals and fractals with randomly chosen contraction ratios.
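For toy parameters, the outcome σ ∗ U of the selector-coder game is straightforward to compute. The following sketch is our own illustration (the particular selector below is hypothetical), with k = 4, r = 2, Σ = {0, 1, 2, 3}, and Γ = {0, 1}:

```python
def outcome_prefix(selector, coder, t_max):
    """First t_max symbols of sigma * U in Sigma = {0, ..., k-1}:
    at stage t the selector offers an r-element subset of Sigma, depending
    on the coder's history U[0..t-1], and the coder's symbol U[t] picks
    the U[t]-th element of that offer (in increasing order)."""
    history = []
    out = []
    for t in range(t_max):
        offered = sorted(selector(tuple(history)))  # r-element subset of Sigma
        out.append(offered[coder[t]])               # the U[t]-th element
        history.append(coder[t])
    return out

# Toy selector over Sigma = {0, 1, 2, 3} with r = 2: it alternates its offer
# according to the parity of the coder's moves so far.
def sigma(history):
    return {0, 3} if sum(history) % 2 == 0 else {1, 2}

print(outcome_prefix(sigma, [0, 1, 1, 0], 4))
```

Playing all coders U ∈ Γ^∞ against a fixed σ enumerates exactly the coding sequences of the subfractal F_σ.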
While Euclidean space has a very well-behaved metric and a natural Borel measure µ (with which, for instance, s-dimensional Hausdorff measure on the real line coincides for s = 1), this is not the case for other metric spaces. Since both Hausdorff and packing dimension can be defined in any metric space, the second author has considered in [70] the extension of algorithmic dimension to a large class of separable metric spaces, the class of spaces with a computable nice cover. This extension includes an algorithmic-information characterization of constructive dimension, based on the concept of the Kolmogorov complexity of a point at a given precision, which extends the concept presented in section 2 for Euclidean space.
Resource-bounded dimension, introduced by the first author in [52], has been a very fruitful tool in the quantitative study of complexity classes; see [35, 54] for the main results. Many of the main complexity classes have a suitable resource bound for which the corresponding dimension is adequate for the class, since the whole class has maximal dimension value.

The development of resource-bounded dimension was based on a characterization of Hausdorff dimension in terms of betting strategies, with different complexity constraints imposed on those strategies to obtain the different resource-bounded dimensions. In contrast with the computability constraints introduced in section 3, many important resource bounds, such as polynomial time, do not have corresponding algorithmic-information characterizations (although more elaborate characterizations in terms of compression algorithms have been obtained in [50, 38]).

In fact, gambling under very low complexity constraints, namely finite-state computability, has been studied at least since the seventies [85, 27], and the corresponding effective dimension, finite-state dimension, was introduced by Dai, Lathrop, and the two authors [19], where finite-state dimension is characterized in terms of finite-state compression.

For the definition of resource-bounded dimension, a class of languages C is represented via characteristic sequences as a set of infinite binary sequences C ⊆ {0,1}^∞. Using binary representations, each language can in turn be seen as a real number in [0, 1], and resource-bounded dimension as a tool in Euclidean space. Resource-bounded dimension extends naturally to spaces Σ^∞ over other finite alphabets Σ, so the first question is whether the choice of alphabet is relevant for the study of Euclidean space.
A satisfactory answer is given in [36], where it is proven that polynomial-time dimension is invariant under base change; that is, for every base b and every set X ⊆ R, the set of base-b representations of the elements of X has a polynomial-time dimension that is independent of b.

Finite-state dimension is not invariant under base change, but its connections with number theory are deep. Borel introduced normal numbers in [8], defining a real number α to be Borel normal in base b if, for every finite sequence w of base-b digits, the asymptotic, empirical frequency of w in the base-b expansion of α is b^{−|w|}. There is a tight relationship between Borel normality and finite-state dimension: a real number is normal in base b if and only if its base-b representation is a sequence of finite-state dimension 1 [85, 9]. It is known [15, 84] that there are numbers that are normal in one base but not in another, so the failure of base-change invariance for finite-state dimension is a corollary of these results. Absolutely normal numbers are real numbers that are normal in every base, so they correspond to real numbers whose base-b representations have finite-state dimension 1 for every base b. This characterization has been used in very effective constructions of absolutely normal numbers [3, 58]. It is natural to ask whether there are real numbers for which the finite-state dimension of the base-b representation is strictly between 0 and 1 and does not depend on the base b.

This chapter's primary focus is the role of algorithmic fractal dimensions in fractal geometry. However, it should be noted that fractal geometry is only a part of geometric measure theory, and that algorithmic methods may shed light on many other aspects of geometric measure theory.

Many questions in geometric measure theory involve rectifiability [28]. The simplest case of this classical notion is the rectifiability of curves. A curve in R^n is a continuous function f : [0, 1] → R^n.
The length of a curve f is

length(f) = sup_a Σ_{i=0}^{k−1} |f(a_{i+1}) − f(a_i)|,

where the supremum is taken over all dissections a of [0, 1], i.e., all a = (a_0, ..., a_k) with 0 = a_0 < a_1 < ... < a_k =
1. Note that length(f) is the length of the actual path traced by f, which may "retrace" parts of its range. (In fact, there are computable curves f for which every computable curve g with the same range must do unboundedly many such retracings [30].) A curve f is rectifiable if length(f) < ∞.

Gu and the authors [29] posed the fanciful question, "Where can an infinitely small nanobot go?" Intuitively, the nanobot is the size of a Euclidean point, and its motion is algorithmic, so its trajectory must be a curve f : [0, 1] → R^n that is computable in the sense of computable analysis [100]. Moreover, the nanobot's trajectory f should be rectifiable. This last assumption, aside from being intuitively reasonable, prevents the question from being trivialized by space-filling curves [83, 17].

The above considerations translate our fanciful question about a nanobot into the following mathematical question: Which points in R^n (n ≥ 2) lie on rectifiable computable curves? Write BP(n) for the set of all points in R^n that lie on rectifiable computable curves. The objective of [29] was to characterize the elements of BP(n).

A few preliminary observations on the set BP(n) are in order here. Every computable point in R^n clearly lies in BP(n), so BP(n) is a dense subset of R^n. It is also easy to see that BP(n) is path-connected. On the other hand, the ranges of rectifiable curves have Hausdorff dimension 1 [25] and there are only countably many computable curves, so BP(n) is a countable union of sets of Hausdorff dimension 1 and hence has Hausdorff dimension 1. Since n ≥
2, this implies that most points in R^n do not lie on the beaten path BP(n).

For each rectifiable computable curve f, the set range f is a computably closed, i.e., Π^0_1, subset of R^n. By the preceding paragraph and Hitchcock's correspondence principle [34], it follows that cdim(BP(n)) =
1, whence every point x ∈ BP(n) satisfies dim(x) ≤
1. This is a necessary, but not sufficient, condition for membership in BP(n), because the complement of BP(n) contains points of arbitrarily low dimension [29]. Characterizing membership in BP(n) thus requires algorithmic methods that extend beyond fractal dimensions.

The "analyst's traveling salesman theorem" of geometric measure theory characterizes those subsets of Euclidean space that are contained in rectifiable curves. This celebrated theorem was proven for the plane by Jones [39] and extended to higher-dimensional Euclidean spaces by Okikiolu [76]. The main contribution of [29] is to formulate the notion of a computable Jones constriction, an algorithmic version of the infinitary data structure implicit in the analyst's traveling salesman theorem, and to prove the computable analyst's traveling salesman theorem, which says that a point in Euclidean space lies on the beaten path BP(n) if and only if it is "permitted" by some computable Jones constriction.

The computable analysis of points on rectifiable curves has continued in at least two different directions. In one direction, Rettinger and Zheng have shown (answering a question in [29]) that there are points in BP(n) that do not lie on any computable curve of computable length [78], and they extended this to obtain a four-level hierarchy of simple computable planar curves that are point-separable in the sense that the sets of points lying on curves of the four types are distinct [102]. In another direction, McNicholl [71] proved that there is a point on a computable arc (a set computably homeomorphic to [0, 1]) that does not lie in BP(n). In the same paper, McNicholl used a beautiful geometric priority argument to prove that there is a point on a computable curve of computable length that does not lie on any computable arc.

It is apparent from the above results that algorithmic methods will have a great deal more to say about rectifiability and other aspects of geometric measure theory.

Acknowledgements
The first author's work was supported in part by National Science Foundation research grants 1247051 and 1545028 and is based in part on lectures that he gave at the New Zealand Mathematical Research Institute Summer School on Mathematical Logic and Computability, January 9-14, 2017. He thanks Neil Lutz for useful discussions and Don Stull and Andrei Migunov for helpful comments on the exposition. The second author's research was supported in part by Spanish Government MEC Grants TIN2011-27479-C04-01 and TIN2016-80347-R and is based in part on lectures given during a research stay at the Institute for Mathematical Sciences at the National University of Singapore for the Program on Aspects of Computation, August 2017. We thank two anonymous reviewers for useful suggestions on the exposition.
References
1. Athreya, K.B., Hitchcock, J.M., Lutz, J.H., Mayordomo, E.: Effective strong dimension in algorithmic information and computational complexity. SIAM Journal on Computing (3), 671–705 (2007)
2. Barmpalias, G., Brodhead, P., Cenzer, D., Dashti, S., Weber, R.: Algorithmic randomness of closed sets. J. Log. Comput. (6), 1041–1062 (2007)
3. Becher, V., Heiber, P.A., Slaman, T.A.: A polynomial-time algorithm for computing absolutely normal numbers. Information and Computation, 1–9 (2013)
4. Besicovitch, A.S.: Sur deux questions d'intégrabilité des fonctions. Journal de la Société de physique et de mathématique de l'Université de Perm, 105–123 (1919)
5. Besicovitch, A.S.: On Kakeya's problem and a similar one. Mathematische Zeitschrift, 312–320 (1928)
6. Billingsley, P.: Hausdorff dimension in probability theory. Illinois J. Math, 187–209 (1960)
7. Bishop, C.J., Peres, Y.: Fractals in Probability and Analysis. Cambridge University Press (2017)
8. Borel, E.: Sur les probabilités dénombrables et leurs applications arithmétiques. Rendiconti del Circolo Matematico di Palermo (1), 247–271 (1909)
9. Bourke, C., Hitchcock, J.M., Vinodchandran, N.V.: Entropy rates and finite-state dimension. Theoretical Computer Science (3), 392–406 (2005)
10. Braverman, M., Cook, S.: Computing over the reals: Foundations for scientific computing. Notices of the AMS (3), 318–329 (2006)
11. Cai, J., Hartmanis, J.: On Hausdorff and topological dimensions of the Kolmogorov complexity of the real line. Journal of Computer and Systems Sciences, 605–619 (1994)
12. Cajar, H.: Billingsley dimension in probability spaces. Springer Lecture Notes in Mathematics (1981)
13. Case, A., Lutz, J.H.: Mutual dimension. ACM Transactions on Computation Theory
7, arti-cle no. 12 (2015)14. Case, A., Lutz, J.H.: Mutual dimension and random sequences. Theoretical Computer Sci-ence , 68–87 (2018)15. Cassels, J.W.S.: On a problem of Steinhaus about normal numbers. Colloquium Mathe-maticum , 95–101 (1959)16. Conidis, C.: Effective packing dimension of Π -classes. Proceedings of the American Math-ematical Society , 3655–3662 (2008)17. Couch, P.J., Daniel, B.D., McNicholl, T.H.: Computing space-filling curves. Theory Comput.Syst. (2), 370–386 (2012)18. Cover, T.M., Thomas, J.A.: Elements of Information Theory, 2nd edition. John Wiley &Sons, Inc., New York, N.Y. (2006)19. Dai, J.J., Lathrop, J.I., Lutz, J.H., Mayordomo, E.: Finite-state dimension. Theoretical Com-puter Science , 1–33 (2004)20. Davies, R.O.: Some remarks on the Kakeya problem. Proc. Cambridge Phil. Soc. , 417–421 (1971)lgorithmic Fractal Dimensions in Geometric Measure Theory 2921. Diamondstone, D., Kjos-Hanssen, B.: Members of random closed sets. In: K. Ambos-Spies,B. L¨owe, W. Merkle (eds.) CiE, Lecture Notes in Computer Science , vol. 5635, pp. 144–153.Springer (2009)22. Dougherty, R., Lutz, J.H., Mauldin, R.D., Teutsch, J.: Translating the Cantor set by a randomreal. Transactions of the American Mathematical Society , 3027–3041 (2014)23. Downey, R.G., Hirschfeldt, D.R.: Algorithmic randomness and complexity. Springer-Verlag(2010)24. Falconer, K.: Dimensions and measures of quasi self-similar sets. Proc. Amer. Math. Soc. , 543–554 (1989)25. Falconer, K.: Fractal Geometry: Mathematical Foundations and Applications, 3rd edition.John Wiley & Sons (2014)26. Falconer, K.J.: Sets with large intersection properties. Journal of the London MathematicalSociety (2), 267–280 (1994)27. Feder, M.: Gambling using a finite state machine. IEEE Transactions on Information Theory , 1459–1461 (1991)28. Federer, H.: Geometric Measure Theory. Springer-Verlag (1969)29. Gu, X., Lutz, J.H., Mayordomo, E.: Points on computable curves. 
In: 47th Annual IEEESymposium on Foundations of Computer Science, pp. 469–474. IEEE Computer SocietyPress (2006). Proceedings of FOCS 2006, Berkeley, CA, October 22–24, 200630. Gu, X., Lutz, J.H., Mayordomo, E.: Curves that must be retraced. Inform. and Comput. (6), 992–1006 (2011)31. Gu, X., Lutz, J.H., Mayordomo, E., Moser, P.: Dimension spectra of random subfractals ofself-similar fractals. Annals of Pure and Applied Logic (11), 1707–1726 (2014)32. Hausdorff, F.: Dimension und ¨außeres Maß. Math. Ann. , 157–179 (1919)33. Hitchcock, J., Lutz, J., Terwijn, S.: The arithmetical complexity of dimension and random-ness. ACM Transactions on Computational Logic (2007)34. Hitchcock, J.M.: Correspondence principles for effective dimensions. Theory of ComputingSystems , 559–571 (2005)35. Hitchcock, J.M., Lutz, J.H., Mayordomo, E.: The fractal geometry of complex-ity classes. SIGACT News Complexity Theory Column , 24–38 (2005), URL
36. Hitchcock, J.M., Mayordomo, E.: Base invariance of feasible dimension. Inform. Process.Lett. (14-16), 546–551 (2013)37. Hitchcock, J.M., Pavan, A.: Resource-bounded strong dimension versus resource-boundedcategory. Information Processing Letters , 377–381 (2005)38. Hitchcock, J.M., Vinodchandran, N.V.: Dimension, entropy rates, andcompression. J. Comput. Syst. Sci. (4), 760–782 (2006), URL
39. Jones, P.W.: Rectifiable sets and the traveling salesman problem. Inventions Mathematicae , 1–15 (1990)40. Kahane, J.P.: Sur la dimension des intersections. In: J.A. Barroso (ed.) Aspects of mathemat-ics and its applications,
North-Holland Mathematical Library , vol. 34, pp. 419–430. Elsevier(1986)41. Kamo, H., Kawamura, K.: Computability of self-similar sets. Mathematical Logic Quarterly , 23–30 (1999)42. Katz, N.H., Tao, T.: Some connections between Falconer’s distance set conjecture and setsof Furstenburg type. New York Journal of Mathematics , 149–187 (2001)43. Ko, K.I.: Complexity Theory of Real Functions. Progress in Theoretical Computer Science.Birkh¨auser, Boston (1991)44. Kolmogorov, A.N.: Three approaches to the quantitative definition of ‘information’. Prob-lems of Information Transmission , 1–7 (1965)45. Lacombe, D.: Extension de la notion de fonction r´ecursive aux fonctions d’une ou plusieursvariables r´eelles I. Comptes Rendus Acad´emie des Sciences Paris , 2478–2480 (1955).Th´eorie des fonctions0 Jack H. Lutz and Elvira Mayordomo46. Lacombe, D.: Extension de la notion de fonction r´ecursive aux fonctions d’une ou plusieursvariables r´eelles II. Comptes Rendus Acad´emie des Sciences Paris , 13–14 (1955).Th´eorie des fonctions47. Levin, L.A.: On the notion of a random sequence. Soviet Mathematics Doklady , 1413–1416 (1973)48. Levin, L.A.: Laws of information conservation (nongrowth) and aspects of the foundation ofprobability theory. Problems of Information Transmission , 206–210 (1974)49. Li, M., Vit´anyi, P.M.B.: An Introduction to Kolmogorov Complexity and its Applications.Springer-Verlag, Berlin (2008). Third Edition.50. L´opez-Vald´es, M., Mayordomo, E.: Dimension is compression. Theory of Computing Sys-tems , 95–112 (2013)51. Lutz, J.H.: Dimension in complexity classes. In: Proceedings of the 15th Annual IEEEConference on Computational Complexity, pp. 158–169 (2000)52. Lutz, J.H.: Dimension in complexity classes. SIAM Journal on Computing (5), 1236–1259(2003)53. Lutz, J.H.: The dimensions of individual strings and sequences. Information and Computa-tion (1), 49–79 (2003)54. Lutz, J.H.: Effective fractal dimensions. Mathematical Logic Quarterly (1), 62–72 (2005)55. 
Lutz, J.H., Lutz, N.: Lines missing every random point. Computability (2), 85–102 (2015)56. Lutz, J.H., Lutz, N.: Algorithmic information, plane Kakeya sets, and conditional dimension.ACM Transactions on Computation Theory (2018). Article 757. Lutz, J.H., Mayordomo, E.: Dimensions of points in self-similar fractals. SIAM J. Comput. (3), 1080–1112 (2008)58. Lutz, J.H., Mayordomo, E.: Computing absolutely normal numbers in nearly linear time.Tech. Rep. arXiv:1611.05911, arXiv.org (2016)59. Lutz, J.H., Weihrauch, K.: Connectivity properties of dimension level sets. MathematicalLogic Quarterly , 483–491 (2008)60. Lutz, N.: A note on pointwise dimensions. Tech. Rep. arXiv:1612.05849, arXiv.org (2016)61. Lutz, N.: Fractal intersections and products via algorithmic dimension. In: InternationalSymposium on Mathematical Foundations of Computer Science (MFCS) (2017)62. Lutz, N., Stull, D.: Dimension spectra of lines. In: J. Kari, F. Manea, I. Petre (eds.) UnveilingDynamics and Complexity, Lecture Notes in Computer Science , vol. 10307, pp. 304–314.Springer, Cham (2017). 13th Conference on Computability in Europe, CiE 2017, Turku,Finland, June 12-16, 201763. Lutz, N., Stull, D.M.: Bounding the dimension of points on a line. In: Theory and Applica-tions of Models of Computation (TAMC), pp. 425–439 (2017)64. Marstrand, J.M.: Some fundamental geometrical properties of plane sets of fractional dimen-sions. Proceedings of the London Mathematical Society (3), 257–302 (1954)65. Martin-L¨of, P.: The definition of random sequences. Information and Control , 602–619(1966)66. Mattila, P.: Hausdorff dimension and capacities of intersections of sets in n -space. ActaMathematica , 77–105 (1984)67. Mattila, P.: On the Hausdorff dimension and capacities of intersections. Mathematika ,213–217 (1985)68. Mayordomo, E.: A Kolmogorov complexity characterization of constructive Hausdorff di-mension. Inform. Process. Lett. (1), 1–3 (2002)69. 
Mayordomo, E.: Effective fractal dimension in algorithmic informa-tion theory. In: New Computational Paradigms: Changing Concep-tions of What is Computable, pp. 259–285. Springer-Verlag (2008), URL http://webdiis.unizar.es/elvira/publicaciones/efdait.pdf
70. Mayordomo, E.: Effective Hausdorff dimension in general metric spaces. Theory of Com-puting Systems (2018). To appear71. McNicholl, T.H.: Computing links and accessing arcs. MLQ Math. Log. Q. (1-2), 101–107(2013)lgorithmic Fractal Dimensions in Geometric Measure Theory 3172. Molter, U., Rela, E.: Furstenberg sets for a fractal set of directions. Proceedings of theAmerican Mathematical Society , 2753–2765 (2012)73. Moran, P.: Additive functions of intervals and Hausdorff dimension. Proceedings of theCambridge Philosophical Society , 5–23 (1946)74. Moschovakis, Y.N.: Descriptive Set Theory. North-Holland, Amsterdam (1980)75. Nies, A.: Computability and Randomness. Oxford University Press (2012)76. Okikiolu, K.: Characterization of subsets of rectifiable curves in R n . Journal of the LondonMathematical Society (2), 336–348 (1992)77. Reimann, J.: Computability and fractal dimension. Ph.D. thesis, University of Heidelberg(2004)78. Rettinger, R., Zheng, X.: Points on computable curves of computable lengths. In: Mathe-matical foundations of computer science 2009, Lecture Notes in Comput. Sci. , vol. 5734, pp.736–743. Springer, Berlin (2009)79. Ryabko, B.Y.: Coding of combinatorial sources and Hausdorff dimension. Soviets Mathe-matics Doklady , 219–222 (1984)80. Ryabko, B.Y.: Noiseless coding of combinatorial sources. Problems of Information Trans-mission , 170–179 (1986)81. Ryabko, B.Y.: Algorithmic approach to the prediction problem. Problems of InformationTransmission , 186–193 (1993)82. Ryabko, B.Y.: The complexity and effectiveness of prediction problems. Journal of Com-plexity , 281–295 (1994)83. Sagan, H.: Space-Filling Curves. Universitext. Springer (1994)84. Schmidt, W.M.: On normal numbers. Pacific J. Math (2), 661–672 (1960)85. Schnorr, C.P., Stimm, H.: Endliche automaten und zufallsfolgen. Acta Informatica (4),345–359 (1972)86. Shannon, C.E.: A mathematical theory of communication. Bell System Technical Journal , 379–423, 623–656 (1948)87. 
Shen, A., Uspensky, V.A., Vereshchagin, N.: Kolmogorov Complexity and Algorithmic Ran-domness. American Mathematical Society (2017)88. Shen, A., Vereshchagin, N.: Logical operations and Kolmogorov complexity. TheoreticalComputer Science (1–2), 125–129 (2002)89. Staiger, L.: Kolmogorov complexity and Hausdorff dimension. Information and Computa-tion , 159–94 (1993)90. Staiger, L.: A tight upper bound on Kolmogorov complexity and uniformly optimal predic-tion. Theory of Computing Systems , 215–29 (1998)91. Staiger, L.: How much can you win when your adversary is handicapped? In: Numbers,Information and Complexity, pp. 403–412. Kluwer (2000)92. Staiger, L.: The Kolmogorov complexity of infinite words. Theoretical Computer Science , 187–199 (2007)93. Stein, E.M., Shakarchi, R.: Real Analysis: Measure Theory, Integra- tion, and Hilbert Spaces.Princeton Lectures in Analysis. Princeton University Press (2005)94. Sullivan, D.: Entropy, Hausdorff measures old and new, and limit sets of geometrically finiteKleinian groups. Acta Mathematica , 259–277 (1984)95. Tao, T.: From rotating needles to stability of waves: emerging connections between combi-natorics, analysis, and PDE. Notices Amer. Math. Soc. , 294–303 (2000)96. Tricot, C.: Two definitions of fractional dimension. Mathematical Proceedings of the Cam-bridge Philosophical Society , 57–74 (1982)97. Turetsky, D.: Connectedness properties of dimension level sets. Theoretical Computer Sci-ence (29), 3598–3603 (2011)98. Turing, A.M.: On computable numbers, with an application to the “Entscheidungsproblem”.A correction. Proceedings of the London Mathematical Society (2), 544–546 (1938)99. Wegmann, H.: Uber den dimensionsbegriff in wahrsheinlichkeitsraumen von P. Billingsley Iand II. Z. Wahrscheinlichkeitstheorie verw. Geb. , 216–221 and 222–231 (1968)100. Weihrauch, K.: Computable Analysis. Springer, Berlin (2000)2 Jack H. Lutz and Elvira Mayordomo101. Wolff, T.: Recent work connected with the Kakeya problem. 
In: Prospects in Mathematics,pp. 129–162. AMS (1999)102. Zheng, X., Rettinger, R.: Point-separable classes of simple computable planar curves. Log.Methods Comput. Sci.8