Algorithmically Optimal Outer Measures
Jack H. Lutz (Iowa State University)    Neil Lutz (Iowa State University)
Abstract
We investigate the relationship between algorithmic fractal dimensions and the classical local fractal dimensions of outer measures in Euclidean spaces. We introduce global and local optimality conditions for lower semicomputable outer measures. We prove that globally optimal outer measures exist. Our main theorem states that the classical local fractal dimensions of any locally optimal outer measure coincide exactly with the algorithmic fractal dimensions. Our proof uses an especially convenient locally optimal outer measure κ defined in terms of Kolmogorov complexity. We discuss implications for point-to-set principles.

1 Introduction

Algorithmic fractal dimensions, which quantify the density of algorithmic information in individual points [17, 1, 21], have recently been used to prove new theorems [25, 23, 26, 24, 20] about their classical forerunners, the Hausdorff and packing dimensions of sets. Since algorithmic fractal dimensions are products of the theory of computing, and since the aforementioned new theorems are entirely classical (not involving logic or the theory of computing), these developments call for a more thorough investigation of the relationships between algorithmic and classical fractal dimensions. One significant facet of this investigation, initiated by Orponen [31], is to look for purely classical proofs of these new classical theorems.

In this paper, taking a different approach, we establish direct connections between algorithmic and classical fractal dimensions. Aside from the presence versus absence of algorithms, the most striking difference between algorithmic fractal dimensions and classical fractal dimensions is that the algorithmic dimensions are usefully defined for individual points in Euclidean space, while the classical Hausdorff and packing dimensions vanish on individual points. To bridge this gap, we examine the classical local dimensions (also called pointwise dimensions) of outer measures at individual points in Euclidean spaces [10].
These local fractal dimensions have been studied at least since the 1930s and are essential tools in multifractal analysis [11, 9]. Outer measures and the algorithmic and local dimensions are defined precisely in Section 2 below.

Outer measures, introduced by Carathéodory [4] in the "prehistory" of Hausdorff dimension [12] (defining what later became known as the 1-dimensional Hausdorff measure), are now best known for their role in Carathéodory's program [5] to generalize Lebesgue measure to a wide variety of settings [35]. However, it is the role of outer measures in local fractal dimensions that is of interest here.

The second author observed [22] that a particular, very nonclassical outer measure κ, defined in terms of Kolmogorov complexity, has the property that the classical local fractal dimensions of κ coincide exactly with the algorithmic fractal dimensions at every point in R^n. This property of κ is analogous to Levin's coding theorem [14, 15], which pertains to a particular, very nonclassical subprobability measure m on strings. Levin's theorem says that if we substitute m for p in the classical Shannon self-information [33] log 1/p(x), then the resulting quantity log 1/m(x) is essentially the prefix Kolmogorov complexity (i.e., the algorithmic information content) of the string x.

Levin defined m as an optimal lower semicomputable subprobability measure, so the above analogy leads us to investigate here the algorithmic optimality properties of κ and other outer measures on Euclidean spaces.

We first investigate outer measures that are globally optimal, a property that is closely analogous to the optimality property of Levin's m. In Section 3 we prove that globally optimal outer measures on R^n exist.

As it turns out, the outer measure κ is not globally optimal. In Section 4 we prove this fact, and we introduce and investigate the more general and more subtly defined class of locally optimal outer measures on R^n.
Our main theorem establishes that every locally optimal outer measure µ on a Euclidean space R^n has the property that the classical local fractal dimensions of µ coincide exactly with the algorithmic dimensions at every point in R^n.

In Section 5 we discuss implications of our results, especially for the point-to-set principles that have enabled the new classical results mentioned in the first paragraph of this introduction.

2 Algorithmic and Local Fractal Dimensions

This section reviews the algorithmic fractal dimensions and the classical local fractal dimensions. Following standard practice [30, 8, 16], we fix a universal prefix Turing machine U and define the (prefix) Kolmogorov complexity of a string w ∈ {0,1}* to be

    K(w) = min { |π| : π ∈ {0,1}* and U(π) = w },

i.e., the minimum number of bits required to cause U to output w. By standard binary encodings, we extend this from {0,1}* to other countable domains. In particular, the Kolmogorov complexity K(q) of a rational point q ∈ Q^n is well defined.

The Kolmogorov complexity of a point x ∈ R^n at a precision r ∈ N is

    K_r(x) = min { K(q) : q ∈ Q^n and |q − x| < 2^{−r} },

where |q − x| is the Euclidean distance from q to x.

We now define the algorithmic fractal dimensions of points in R^n.

Definition ([17, 28, 1]). Let x ∈ R^n.

1. The algorithmic dimension of x is

       dim(x) = lim inf_{r→∞} K_r(x) / r.                         (2.1)

2. The strong algorithmic dimension of x is

       Dim(x) = lim sup_{r→∞} K_r(x) / r.                         (2.2)

See [32, 29, 19] for surveys of these notions.

The classical local fractal dimensions are local properties of outer measures. An outer measure on a set X [35] is a function µ : P(X) → [0, ∞] (where P(X) is the power set of X) with the following three properties.

(i) (vanishes on empty set) µ(∅) = 0.

(ii) (monotonicity) For all E, F ⊆ X, E ⊆ F ⟹ µ(E) ≤ µ(F).

(iii) (countable subadditivity) For all E_0, E_1, ... ⊆ X,

    µ( ∪_{n=0}^{∞} E_n ) ≤ Σ_{n=0}^{∞} µ(E_n).
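The three outer measure axioms are easy to check mechanically on small finite examples. The following Python sketch is not from the paper; the finite universe and the counting-style set function mu(E) = |E ∩ Q| are our own illustration of the axioms.

```python
from itertools import chain, combinations

# Toy illustration (ours, not from the paper): the counting-style set function
# mu(E) = |E ∩ Q| on a finite universe satisfies all three outer measure axioms.
Q = {0, 1, 2, 3}
universe = set(range(6))

def mu(E):
    return len(set(E) & Q)

subsets = [set(s) for s in chain.from_iterable(
    combinations(sorted(universe), k) for k in range(len(universe) + 1))]

assert mu(set()) == 0                                  # (i) vanishes on empty set
assert all(mu(E) <= mu(F)                              # (ii) monotonicity
           for E in subsets for F in subsets if E <= F)
assert all(mu(E | F) <= mu(E) + mu(F)                  # (iii) subadditivity
           for E in subsets for F in subsets)
print("outer measure axioms verified on a 6-element universe")
```

For a finite universe, countable subadditivity reduces to the pairwise check above, since only finitely many distinct sets exist.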
An outer measure µ is finite if µ(R^n) < ∞.

Definition ([10]). If µ is a finite outer measure on R^n, then the lower and upper local (or pointwise) dimensions of µ at a point x ∈ R^n are

    dim_loc µ(x) = lim inf_{r→∞} ( log 1/µ(B(x, 2^{−r})) ) / r        (2.3)

and

    Dim_loc µ(x) = lim sup_{r→∞} ( log 1/µ(B(x, 2^{−r})) ) / r,       (2.4)

respectively. (The logarithms here are base-2, and B(x, ε) is the open ball of radius ε about x in R^n.)

As stated in the introduction, our main objective is to identify a class of outer measures that cause the classical local fractal dimensions (2.3) and (2.4) to coincide with the algorithmic fractal dimensions (2.1) and (2.2).

3 Globally Optimal Outer Measures

The optimality notions that we discuss in this paper concern outer measures with three special properties that we now define.
Definition. An outer measure µ on R^n is finitely supported on Q^n if, for every ε > 0, there is a finite set A ⊆ Q^n such that µ(R^n \ A) < ε.

Note that an outer measure µ on R^n that is finitely supported on Q^n is supported on Q^n in the usual sense that µ(R^n \ Q^n) = 0. The following example shows that the converse does not hold.

Example 3.1. The function µ : P(R^n) → [0, ∞] defined by

    µ(E) = 1 − 2^{−|E ∩ Q^n|}   if |E ∩ Q^n| < ∞,
    µ(E) = 1                    if |E ∩ Q^n| = ∞,

is an outer measure on R^n that is supported, but not finitely supported, on Q^n.

Definition.
An outer measure µ on R^n is strongly finite if µ is supported on Q^n and

    Σ_{q ∈ Q^n} µ({q}) < ∞.

It is clear that every strongly finite outer measure is finite. The outer measure of Example 3.1 shows that the converse does not hold.
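The outer measure of Example 3.1 depends on E only through the cardinality a = |E ∩ Q^n|, so its properties reduce to elementary facts about the function f(a) = 1 − 2^{−a}. The Python sketch below is our own numerical check (and our reading of the cases expression in Example 3.1), not part of the paper.

```python
# Example 3.1, as we read its cases expression: mu(E) = 1 - 2**(-a), where
# a = |E ∩ Q^n|, when that intersection is finite, and mu(E) = 1 otherwise.
# Only the cardinality a matters, so the axioms reduce to properties of f.

def f(a):
    return 1.0 - 2.0 ** (-a)

assert f(0) == 0.0                               # mu vanishes on the empty set
assert all(f(a) <= f(a + 1) for a in range(60))  # monotone in the cardinality

# Subadditivity: |(E ∪ F) ∩ Q^n| <= a + b, so it suffices that
# f(a + b) <= f(a) + f(b), equivalently (1 - 2**-a) * (1 - 2**-b) >= 0.
assert all(f(a + b) <= f(a) + f(b) for a in range(40) for b in range(40))

# Not finitely supported: removing any finite A leaves infinitely many
# rationals, so mu(R^n \ A) = 1 for every finite A; it never drops below 1.
assert f(float("inf")) == 1.0
```

The same check also shows why this µ fails to be strongly finite: each singleton {q} with q rational has µ({q}) = f(1) = 1/2, so the sum over Q^n diverges.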
Definition. An outer measure µ on R^n is lower semicomputable if it is finitely supported on Q^n and there is a computable function

    ˆµ : P_<ω(Q^n) × N → Q ∩ [0, ∞)

(where P_<ω(Q^n) is the finite power set of Q^n, i.e., the set of all finite subsets of Q^n) with the following two properties.

(i) For all A ∈ P_<ω(Q^n) and s, t ∈ N,  s ≤ t ⟹ ˆµ(A, s) ≤ ˆµ(A, t) ≤ µ(A).

(ii) For all A ∈ P_<ω(Q^n),  lim_{t→∞} ˆµ(A, t) = µ(A).

Lemma 3.2. Let µ be a lower semicomputable outer measure on R^n. If ˆµ is a function testifying to the lower semicomputability of µ, then, for all E ⊆ R^n,

    lim_{A↗E∩Q^n, t→∞} ˆµ(A, t) = µ(E),

meaning that, for all ε > 0, there exist A_0 ∈ P_<ω(E ∩ Q^n) and t_0 ∈ N such that, for all B ∈ P_<ω(E ∩ Q^n) and t ∈ N,

    [B ⊇ A_0 and t ≥ t_0] ⟹ |µ(E) − ˆµ(B, t)| < ε.

Definition.
An outer measure µ on R^n is globally optimal if the following conditions hold.

(i) µ is strongly finite and lower semicomputable.

(ii) For every strongly finite, lower semicomputable outer measure ν on R^n, there is a constant β ∈ (0, ∞) such that, for all E ⊆ R^n,

    µ(E) ≥ β · ν(E).

In proving that globally optimal outer measures exist, we will use a computable enumeration of all strongly finite, lower semicomputable outer measures that take values in [0, 1].

Lemma 3.3.
Let Θ be the set of all strongly finite, lower semicomputable outer measures µ : P(R^n) → [0, 1]. There is a computable function ˆθ : N × P_<ω(Q^n) × N → Q ∩ [0, 1] such that, if we write ˆθ_k(A, t) = ˆθ(k, A, t) for all k, t ∈ N and A ∈ P_<ω(Q^n), and if we define θ_k : P(R^n) → [0, 1] by

    θ_k(E) = lim_{A↗E∩Q^n, t→∞} ˆθ_k(A, t)

for all E ⊆ R^n, then Θ = {θ_k | k ∈ N}.

Proof. Let M_0, M_1, M_2, ... be an enumeration of all prefix Turing machines that take inputs in P_<ω(Q^n) × N, give outputs in Q ∩ [0, 1], and satisfy M_k(∅, t) = 0 for all k, t ∈ N. For each k ∈ N, we define the following functions.

τ_k : P_<ω(Q^n) × N → [0, ∞] is given by

    τ_k(A, t) = min_{B⊆A} max { s ≤ t | for all s' ≤ s, M_k halts on input (B, s') within t steps }.

η_k : P_<ω(Q^n) × N × N → Q ∩ [0, 1] is given by

    η_k(A, t, T) = max_{B⊆A, s≤t} { M_k(B, s) | M_k halts on input (B, s) within T steps }.

ˆθ_k : P_<ω(Q^n) × N → Q ∩ [0, 1] is given by

    ˆθ_k(A, t) = min { Σ_{i=0}^{ℓ−1} η_k(A_i, τ_k(A, t), t) | A_0, ..., A_{ℓ−1} ⊆ A and A ⊆ ∪_{i=0}^{ℓ−1} A_i }.

Now fix k ∈ N. It is immediate from our construction that θ_k takes values in [0, 1] and is finitely supported on Q^n, and that ˆθ_k is computable. To prove that θ_k ∈ Θ, then, it remains to show that ˆθ_k satisfies conditions (i) and (ii) from the definition of lower semicomputable outer measures, and that θ_k is countably subadditive.

For condition (i), let A ∈ P_<ω(Q^n) and s, t ∈ N with s ≤ t. Then τ_k(A, s) ≤ τ_k(A, t), and for all T ∈ N, η_k(A, s, T) ≤ η_k(A, t, T), so ˆθ_k(A, s) ≤ ˆθ_k(A, t).

For condition (ii), let A, B ∈ P_<ω(Q^n) with A ⊆ B, let t ∈ N, and let T ≥ t be such that τ_k(B, T) ≥ τ_k(A, t). Then

    ˆθ_k(B, T) = min { Σ_{i=0}^{ℓ−1} η_k(A_i, τ_k(B, T), T) | A_0, ..., A_{ℓ−1} ⊆ B and B ⊆ ∪_{i=0}^{ℓ−1} A_i }
              ≥ min { Σ_{i=0}^{ℓ−1} η_k(A_i, τ_k(B, T), T) | A_0, ..., A_{ℓ−1} ⊆ A and A ⊆ ∪_{i=0}^{ℓ−1} A_i }
              ≥ min { Σ_{i=0}^{ℓ−1} η_k(A_i, τ_k(A, t), t) | A_0, ..., A_{ℓ−1} ⊆ A and A ⊆ ∪_{i=0}^{ℓ−1} A_i }
              = ˆθ_k(A, t),

so lim_{t→∞} ˆθ_k(A, t) ≤ lim_{t→∞} ˆθ_k(B, t).

To prove that θ_k is countably subadditive, we first show that, for all t ∈ N, ˆθ_k(·, t) is finitely subadditive on P_<ω(Q^n). Fix t ∈ N, let A ∈ P_<ω(Q^n), and let A_0, ..., A_{ℓ−1} ⊆ A be such that A ⊆ ∪_{i=0}^{ℓ−1} A_i and

    ˆθ_k(A, t) = Σ_{i=0}^{ℓ−1} η_k(A_i, τ_k(A, t), t).

Suppose that, for some 0 ≤ i < ℓ, we have η_k(A_i, τ_k(A, t), t) > ˆθ_k(A_i, t). Then there are sets A_{i,0}, ..., A_{i,m−1} ⊆ A_i with A_i ⊆ ∪_{j=0}^{m−1} A_{i,j} such that

    η_k(A_i, τ_k(A, t), t) > Σ_{j=0}^{m−1} η_k(A_{i,j}, τ_k(A_i, t), t) ≥ Σ_{j=0}^{m−1} η_k(A_{i,j}, τ_k(A, t), t),

which contradicts the minimality of our choice of A_0, ..., A_{ℓ−1}. Hence, for all 0 ≤ i < ℓ,

    η_k(A_i, τ_k(A, t), t) ≤ ˆθ_k(A_i, t),

so

    ˆθ_k(A, t) = Σ_{i=0}^{ℓ−1} η_k(A_i, τ_k(A, t), t) ≤ Σ_{i=0}^{ℓ−1} ˆθ_k(A_i, t).

Now let
E, E_0, E_1, E_2, ... ⊆ R^n be such that E ⊆ ∪_{i=0}^{∞} E_i. Let ε > 0, and let A ∈ P_<ω(E ∩ Q^n) and t ∈ N be such that

    ˆθ_k(A, t) > θ_k(E) − ε.

For each i ∈ N, let A_i = E_i ∩ A. Since A is a finite set, there is some ℓ ∈ N such that A ⊆ ∪_{i=0}^{ℓ−1} A_i. By Lemma 3.2 and the finite subadditivity of ˆθ_k(·, t),

    θ_k(E) − ε < ˆθ_k(A, t) ≤ Σ_{i=0}^{ℓ−1} ˆθ_k(A_i, t) ≤ Σ_{i=0}^{ℓ−1} θ_k(E_i).

Letting ε → 0, we have

    θ_k(E) ≤ Σ_{i=0}^{∞} θ_k(E_i),

and we conclude that θ_k ∈ Θ, so {θ_k | k ∈ N} ⊆ Θ.

For the converse, let µ ∈ Θ, and let ˆµ be a witness to the lower semicomputability of µ. Then ˆµ is computable and ˆµ(∅, t) = 0 for all t ∈ N, so there is some k ∈ N such that, for all (A, t) ∈ P_<ω(Q^n) × N, we have M_k(A, t) = ˆµ(A, t).

We now show that, for all A ∈ P_<ω(Q^n),

    lim_{t→∞} ˆθ_k(A, t) = lim_{t→∞} ˆµ(A, t).

Let A ∈ P_<ω(Q^n) and t ∈ N. Then

    ˆθ_k(A, t) ≤ η_k(A, τ_k(A, t), t)
             = max_{B⊆A, s≤τ_k(A,t)} { M_k(B, s) | M_k halts on input (B, s) within t steps }
             = max_{B⊆A, s≤τ_k(A,t)} M_k(B, s)
             ≤ max_{B⊆A, s≤t} M_k(B, s)
             = M_k(A, t)
             = ˆµ(A, t).

Now let ε >
0, let t_0 ∈ N be such that, for all B ⊆ A,

    ˆµ(B, t_0) ≥ µ(B) − ε / 2^{|A|},

and let T ∈ N be such that τ_k(A, T) ≥ t_0. Then for each B ⊆ A,

    η_k(B, τ_k(A, T), T) = max_{C⊆B, s≤τ_k(A,T)} { M_k(C, s) | M_k halts on input (C, s) within T steps }
                        = max_{C⊆B, s≤τ_k(A,T)} M_k(C, s)
                        = M_k(B, τ_k(A, T))
                        ≥ M_k(B, t_0)
                        = ˆµ(B, t_0).

It follows that

    ˆθ_k(A, T) = min { Σ_{i=0}^{ℓ−1} η_k(A_i, τ_k(A, T), T) | A_0, ..., A_{ℓ−1} ⊆ A and A ⊆ ∪_{i=0}^{ℓ−1} A_i }
              ≥ min { Σ_{i=0}^{ℓ−1} ˆµ(A_i, t_0) | A_0, ..., A_{ℓ−1} ⊆ A and A ⊆ ∪_{i=0}^{ℓ−1} A_i }
              ≥ min { Σ_{i=0}^{ℓ−1} ( µ(A_i) − ε / 2^{|A|} ) | A_0, ..., A_{ℓ−1} ⊆ A and A ⊆ ∪_{i=0}^{ℓ−1} A_i }
              ≥ µ(A) − ε,

by the countable subadditivity of µ.

We have shown that, for every A ∈ P_<ω(Q^n), every ε > 0, and every sufficiently large t ∈ N, ˆθ_k(A, t) ≤ ˆµ(A, t), and there exists some T ∈ N such that ˆθ_k(A, T) ≥ µ(A) − ε. This implies that, for all A ∈ P_<ω(Q^n),

    lim_{t→∞} ˆθ_k(A, t) = lim_{t→∞} ˆµ(A, t),

so, for all E ⊆ R^n,

    θ_k(E) = lim_{A↗E∩Q^n, t→∞} ˆθ_k(A, t) = lim_{A↗E∩Q^n, t→∞} ˆµ(A, t) = µ(E).

We conclude that µ ∈ {θ_k | k ∈ N}, so Θ ⊆ {θ_k | k ∈ N}.

Theorem 3.4.
Globally optimal outer measures exist.

Proof. Define the strongly finite outer measure θ : P(R^n) → [0, 1] by

    θ(E) = Σ_{k=0}^{∞} θ_k(E) / 2^{k+1},

where θ_k is defined as in Lemma 3.3. This outer measure is finitely supported on Q^n, and the function ˆθ : P_<ω(Q^n) × N → [0, 1] given by

    ˆθ(A, t) = Σ_{k=0}^{t} ˆθ_k(A, t) / 2^{k+1}

is a witness to the lower semicomputability of θ.

Let µ : P(R^n) → [0, ∞) be any strongly finite, lower semicomputable outer measure, and let h = max_{E⊆R^n} ⌈µ(E)⌉. Then the function ˜µ : P(R^n) → [0, 1] given by ˜µ(E) = µ(E)/h belongs to Θ. By Lemma 3.3, there is some k ∈ N such that, for all E ⊆ R^n, θ_k(E) = ˜µ(E). We have, for all E ⊆ R^n,

    µ(E) = h · ˜µ(E) = h · θ_k(E) ≤ h · 2^{k+1} · θ(E),

so θ is globally optimal.

4 Locally Optimal Outer Measures

This paper's investigation of algorithmic optimality is primarily driven by a specific outer measure κ. To define κ, we first define the Kolmogorov complexity of a set E ⊆ R^n to be

    K(E) = min { K(q) | q ∈ E ∩ Q^n }.

That is, K(E) is the minimum number of bits required to cause the universal prefix Turing machine U to print some rational point in E. (Shen and Vereshchagin [34] introduced a similar notion for a different purpose.)

Definition ([22]). Define the function κ : P(R^n) → [0, 1] by

    κ(E) = 2^{−K(E)}

for all E ⊆ R^n.

Observation 4.1 ([22]). κ is an outer measure on R^n.

Our primary interest in κ is the following connection between classical local fractal dimensions and algorithmic fractal dimensions.

Observation 4.2 ([22]). For all x ∈ R^n, dim_loc κ(x) = dim(x) and Dim_loc κ(x) = Dim(x).

Proof. By (2.1)–(2.4), it suffices to note that, for all x ∈ R^n,

    log 1/κ(B(x, 2^{−r})) = K_r(x).

We next investigate the algorithmic properties of the outer measure κ.

Observation 4.3. κ is strongly finite and lower semicomputable.

Proof. It suffices to show three things.

1. κ is finitely supported on Q^n. For this, let ε > 0. Let

       A = { q ∈ Q^n | K(q) ≤ log(1/ε) }.

   Then A is a finite subset of Q^n, and

       K(R^n \ A) = min { K(q) | q ∈ Q^n \ A } > log(1/ε),

   so κ(R^n \ A) < ε.

2. Σ_{q∈Q^n} κ({q}) < ∞. For this, just note that

       Σ_{q∈Q^n} κ({q}) = Σ_{q∈Q^n} 2^{−K(q)} ≤ 1,

   by the Kraft inequality for prefix Kolmogorov complexity.

3. κ is lower semicomputable. This follows immediately from the well known upper semicomputability of the Kolmogorov complexity function.

Lemma 4.4. κ is not globally optimal.

Proof. Define the function ν : P(R^n) → [0, ∞] by

    ν(E) = Σ_{q∈E∩Q^n} 2^{−K(q)}.

We now show that ν is a strongly finite, lower semicomputable outer measure on R^n. It is clear that ν is an outer measure on R^n. It thus suffices to prove that ν has the properties 1, 2, and 3 proven for κ in the proof of Observation 4.3. For properties 2 and 3, the proofs for ν are identical to those for κ. For property 1, that ν is finitely supported on Q^n, let ε > 0. By the Kraft inequality for prefix Kolmogorov complexity,

    Σ_{q∈Q^n} 2^{−K(q)} ≤ 1,

so there is a finite set A ⊆ Q^n such that

    ν(R^n \ A) = Σ_{q∈Q^n\A} 2^{−K(q)} < ε.

Hence, to prove the lemma, it suffices to show that, for every β ∈ (0, ∞), there is a set E ⊆ R^n such that

    κ(E) < β · ν(E).                                              (4.1)

Let β ∈ (0, ∞). Let c be a constant such that, for all m ∈ N,

    K(m, 0, ..., 0) < log(m) + 2 log log(m) + c.

Let α ∈ (0, ∞) be some parameter, let γ = 2 + 2 log(α + 2) + c, and define the set

    E = { q ∈ Q^n | K(q) > α }.

Then κ(E) < 2^{−α}, and

    ν(E) = Σ_{q∈E∩Q^n} 2^{−K(q)}
        ≥ Σ_{q: K(q)∈(α,α+γ)} 2^{−K(q)}
        ≥ 2^{−α−γ} · |{ q ∈ Q^n | K(q) ∈ (α, α+γ) }|.

There are fewer than 2^{α+1} rational points q with K(q) ≤ α. For all m ∈ N such that m ≤ 2^{α+2},

    K(m, 0, ..., 0) < α + 2 + 2 log(α + 2) + c = α + γ,

so there are at least 2^{α+1} rational points q with K(q) ∈ (α, α+γ), and we have ν(E) ≥ 2^{1−γ}. Thus,

    κ(E) < 2^{γ−α} · ν(E).

Choosing α such that γ − α = 2 + 2 log(α + 2) + c − α < log β yields (4.1).

Notwithstanding Lemma 4.4, κ does have an optimality property, which we next define. For each m = (m_1, ..., m_n) ∈ Z^n, let

    Q_m = [m_1, m_1 + 1) × ··· × [m_n, m_n + 1)

be the unit cube at m. For each such m and each r ∈ N, let

    Q_m^{(r)} = 2^{−r} Q_m = { 2^{−r} x | x ∈ Q_m }

be the r-dyadic cube with address m. Note that each Q_m^{(r)} is "half-closed, half-open" in such a way that, for each r ∈ N, the family

    Q^{(r)} = { Q_m^{(r)} | m ∈ Z^n }

is a partition of R^n.

Definition.
Let µ and ν be outer measures on R^n, and let A = (A^{(r)} | r ∈ N) be a sequence of families A^{(r)} ⊆ P(R^n) of subsets of R^n. We say that µ dominates ν on A if there is a function β : N → (0, ∞) such that β(r) = 2^{−o(r)} as r → ∞ and, for every r ∈ N and every set E ∈ A^{(r)},

    µ(E) ≥ β(r) · ν(E).

We say that µ dominates ν on dyadic cubes if µ dominates ν on Q = (Q^{(r)} | r ∈ N). We say that µ dominates ν on balls if µ dominates ν on B = (B^{(r)} | r ∈ N), where B^{(r)} is the set of all open balls of radius 2^{−r} in R^n.

Definition.
An outer measure µ on R^n is locally optimal if the following two conditions hold.

(i) µ is strongly finite and lower semicomputable.

(ii) For every strongly finite, lower semicomputable outer measure ν on R^n, µ dominates ν on dyadic cubes.

Theorem 4.5.
The outer measure κ is locally optimal.

Proof. We rely on machinery created for a different purpose by Case and the first author [6]. Just as we have "lifted" Kolmogorov complexity from {0,1}* to Q^n via routine encoding, we lift Levin's optimal lower semicomputable subprobability measure m [14, 15] from {0,1}* to Q^n so that, for all q ∈ Q^n,

    m(q) = Σ_{π: U(π)=q} 2^{−|π|}.

We also set

    m(E) = Σ_{q∈E∩Q^n} m(q)

for all E ⊆ R^n. The LDS coding theorem of [6] is a mild generalization of Levin's coding theorem [14, 15] that tells us that there is a constant c ∈ N such that, for all r ∈ N and Q ∈ Q^{(r)},

    K(Q) ≤ log 1/m(Q) + K(r) + c.                                 (4.2)

To prove the present theorem, it suffices by Observation 4.3 to prove that κ satisfies condition (ii) of the definition of local optimality. For this, let ν be a strongly finite, lower semicomputable outer measure on R^n. Define p_ν : Q^n → [0, ∞] by p_ν(q) = ν({q}) for all q ∈ Q^n. Then p_ν is lower semicomputable and Σ_{q∈Q^n} p_ν(q) < ∞, so the optimality property of m tells us that there is a constant α ∈ (0, ∞) such that, for all q ∈ Q^n,

    m(q) ≥ α · p_ν(q).                                            (4.3)

Define β : N → (0, ∞) by β(r) = α · 2^{−(K(r)+c)} for all r ∈ N. Then

    lim_{r→∞} ( log 1/β(r) ) / r = lim_{r→∞} ( log(1/α) + K(r) + c ) / r = 0,

so β(r) = 2^{−o(r)} as r → ∞. Also, for all r ∈ N and Q ∈ Q^{(r)}, (4.2), (4.3), and the countable subadditivity of ν tell us that

    κ(Q) = 2^{−K(Q)}
         ≥ 2^{−(K(r)+c)} m(Q)
         = 2^{−(K(r)+c)} Σ_{q∈Q∩Q^n} m(q)
         ≥ β(r) Σ_{q∈Q∩Q^n} p_ν(q)
         = β(r) Σ_{q∈Q∩Q^n} ν({q})
         ≥ β(r) ν(Q).

This shows that κ dominates ν on dyadic cubes, confirming that κ is locally optimal.

Corollary 4.6. A strongly finite, lower semicomputable outer measure on R^n is locally optimal if and only if it dominates κ on dyadic cubes.

Lemma 4.7.
There is a constant c ∈ N such that, for every r ∈ N, every r-dyadic cube Q ∈ Q^{(r)}, and every open ball B ⊆ R^n of radius 2^{−r} that intersects Q,

    |K(B) − K(Q)| ≤ K(r) + c.

Proof. Lemma 3.5 of [6] gives us a constant c_1 ∈ N such that, for all r, Q, and B as in the present lemma,

    K(B) ≤ K(Q) + K(r) + c_1.

Hence it suffices to show that there is a constant c_2 such that, for all r, Q, and B as in the present lemma,

    K(Q) ≤ K(B) + K(r) + c_2.

Let M be a prefix Turing machine that, on input π_1 π_2 π_3, where U(π_1) = r ∈ N, U(π_2) = k ∈ N, and U(π_3) = (q_1, ..., q_n) ∈ Q^n, outputs the lexicographically k-th point in the product set

    Π_{i=1}^{n} { 2^{−r}(⌊2^r q_i⌋ − 2), 2^{−r}(⌊2^r q_i⌋ − 1), 2^{−r}⌊2^r q_i⌋, 2^{−r}(⌊2^r q_i⌋ + 1), 2^{−r}(⌊2^r q_i⌋ + 2) }.

Let r, Q, and B be as in the present lemma. Let q ∈ B ∩ Q^n be such that K(q) = K(B). Then there is some point p = (p_1, ..., p_n) ∈ Q ∩ B ∩ Q^n such that |p − q| < 2^{−r+1}. Hence Q is the r-dyadic cube with address (⌊2^r p_1⌋, ..., ⌊2^r p_n⌋), and for each 1 ≤ i ≤ n,

    |⌊2^r q_i⌋ − ⌊2^r p_i⌋| ≤ 2.

That is, the address of Q belongs to the product set

    Π_{i=1}^{n} { ⌊2^r q_i⌋ − 2, ⌊2^r q_i⌋ − 1, ⌊2^r q_i⌋, ⌊2^r q_i⌋ + 1, ⌊2^r q_i⌋ + 2 }.

Let k ≤ 5^n be the lexicographical index of Q's address within this set, and let π_1, π_2, and π_3 be witnesses to K(r), K(k), and K(q), respectively. Then M(π_1 π_2 π_3) ∈ Q, so letting c_M be an optimality constant for the machine M, we have

    K(Q) ≤ |π_1| + |π_2| + |π_3| + c_M = K(r) + K(k) + K(q) + c_M = K(B) + K(r) + K(k) + c_M.

Since k ≤ 5^n, there is some constant c_3 such that K(k) ≤ 2 log(5^n) + c_3, so the constant c_2 = c_M + 2 log(5^n) + c_3 affirms the lemma.

Lemma 4.8.
A strongly finite, lower semicomputable outer measure µ dominates κ on balls if and only if it dominates κ on dyadic cubes.

Proof. Suppose that µ dominates κ on balls. Then for every x ∈ R^n,

    µ(B(x, 2^{−r})) ≥ 2^{−o(r)} κ(B(x, 2^{−r})).

Let r ∈ Z, let Q be an r-dyadic cube with center q, and let B = B(q, 2^{−r−1}), so that B ⊆ Q. Then, applying Lemma 4.7,

    µ(Q) ≥ µ(B) ≥ 2^{−o(r)} κ(B) = 2^{−K(B)−o(r)} = 2^{−K(Q)−o(r)} = 2^{−o(r)} κ(Q),

so µ dominates κ on dyadic cubes.

Now suppose that µ dominates κ on dyadic cubes. Then for every r-dyadic cube Q,

    µ(Q) ≥ 2^{−o(r)} κ(Q).

Let r ∈ Z, x ∈ R^n, and B = B(x, 2^{−r}). Let Q be the (r + ⌈log √n⌉)-dyadic cube containing x, so that Q ⊆ B. Applying Lemma 4.7,

    µ(B) ≥ µ(Q) ≥ 2^{−o(r)} κ(Q) = 2^{−K(Q)−o(r)} = 2^{−K(B)−o(r)} = 2^{−o(r)} κ(B),

so µ dominates κ on balls.

Corollary 4.9. For every strongly finite, lower semicomputable outer measure µ on R^n, the following three conditions are equivalent.

(1) µ is locally optimal.

(2) µ dominates κ on balls.

(3) For every strongly finite, lower semicomputable outer measure ν on R^n, µ dominates ν on balls.

We now have everything we need to prove our main theorem, which is the following generalization of Observation 4.2.
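The quantitative heart of the main theorem is that a multiplicative gap of 2^{−o(r)} between two measures perturbs log(1/µ(B(x, 2^{−r})))/r by only o(1), and so cannot change a local dimension. The Python sketch below is our own numerical sanity check with toy stand-in sequences; K_r and the O(log r) slack are illustrative assumptions, not computations of actual Kolmogorov complexity.

```python
import math

# Toy check: if log(1/mu(B(x,2**-r))) and K_r(x) differ by O(log r) = o(r),
# then dividing by r kills the gap, so mu and kappa give the same local
# dimensions. Here we mimic a point with dim(x) = 1/2.

def K_r(r):                  # stand-in for K_r(x)
    return r / 2.0

def log_inv_mu(r):           # same growth, plus an O(log r) additive slack
    return K_r(r) + 5.0 * math.log2(r + 2)

gaps = [abs(log_inv_mu(r) / r - 0.5) for r in (10**3, 10**5, 10**7, 10**9)]
assert all(a > b for a, b in zip(gaps, gaps[1:]))   # per-r gap shrinks...
assert gaps[-1] < 1e-6                              # ...toward 0, so dim = 1/2
```

This mirrors the final step of the proof of Theorem 4.10, where |log 1/µ(B(x, 2^{−r})) − K_r(x)| = o(r) forces (2.1)–(2.4) to agree.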
Theorem 4.10. If µ is any locally optimal outer measure on R^n, then for all x ∈ R^n,

    dim_loc µ(x) = dim(x)                                         (4.4)

and

    Dim_loc µ(x) = Dim(x).                                        (4.5)

Proof. Let µ be any locally optimal outer measure on R^n. By Corollary 4.9, κ dominates µ on balls, and µ dominates κ on balls. That is, there exist two functions β_1, β_2 : N → (0, ∞) such that β_1(r) = 2^{−o(r)} and β_2(r) = 2^{−o(r)} as r → ∞, and, for every r ∈ N and x ∈ R^n,

    κ(B(x, 2^{−r})) ≥ β_1(r) µ(B(x, 2^{−r}))

and

    µ(B(x, 2^{−r})) ≥ β_2(r) κ(B(x, 2^{−r})).

Letting β(r) = min{β_1(r), β_2(r)}, we have β(r) = 2^{−o(r)} as r → ∞ and, for every r ∈ N and x ∈ R^n,

    | log 1/µ(B(x, 2^{−r})) − log 1/κ(B(x, 2^{−r})) | ≤ log 1/β(r).

It follows that, for all x ∈ R^n,

    | log 1/µ(B(x, 2^{−r})) − K_r(x) | = o(r)

as r → ∞, whence (4.4) and (4.5) follow from (2.1)–(2.4).

5 Global Dimensions and Point-to-Set Principles

Local dimensions of measures give rise to global dimensions of measures, which we now briefly comment on. In classical fractal geometry, the global dimensions of Borel measures play a substantial role in studying the interplay between local and global properties of fractal sets and measures. The material in this section is from [22].

Definition ([9]). For any locally finite Borel measure µ on R^n, the lower and upper Hausdorff and packing dimensions of µ are

    dim_H(µ) = sup { α | µ({ x | dim_loc µ(x) < α }) = 0 },
    Dim_H(µ) = inf { α | µ({ x | dim_loc µ(x) > α }) = 0 },
    dim_P(µ) = sup { α | µ({ x | Dim_loc µ(x) < α }) = 0 },
    Dim_P(µ) = inf { α | µ({ x | Dim_loc µ(x) > α }) = 0 },

respectively.

Extending these definitions to outer measures, we may consider global dimensions of the outer measure κ. Since κ is supported on Q^n and dim(x) = 0 for all x ∈ Q^n,

    dim_H(κ) = Dim_H(κ) = dim_P(κ) = Dim_P(κ) = 0.               (5.1)

The point-to-set principle [18] expresses classical Hausdorff and packing dimensions in terms of relativized algorithmic dimensions, that is, algorithmic dimensions in which the underlying universal Turing machine U is an oracle machine with access to some oracle A ⊆ N. We write dim^A(x) and Dim^A(x) to denote the algorithmic dimension and strong algorithmic dimension of a point x ∈ R^n relative to A.

Theorem 5.1 ([18]). For every E ⊆ R^n,

    dim_H(E) = min_{A⊆N} sup_{x∈E} dim^A(x)

and

    dim_P(E) = min_{A⊆N} sup_{x∈E} Dim^A(x).

In light of Theorem 4.10, this principle may be considered a member of the family of results, such as Billingsley's lemma [2] and Frostman's lemma [11], that relate the local decay of measures to global properties of measure and dimension. Useful references on such results include [3, 13, 27]. Among classical results, this principle is most directly comparable to the weak duality principle of Cutler [7] (see also [9]), which expresses Hausdorff and packing dimensions in terms of lower and upper pointwise dimensions of measures. For nonempty E ⊆ R^n, let Ē be the closure of E, and let ∆(E) be the collection of Borel probability measures on R^n such that Ē is measurable and has measure 1.

Theorem 5.2 ([7]). For every nonempty E ⊆ R^n,

    dim_H(E) = inf_{µ∈∆(E)} sup_{x∈Ē} dim_loc µ(x)

and

    dim_P(E) = inf_{µ∈∆(E)} sup_{x∈Ē} Dim_loc µ(x).

By letting A = { κ^A | A ⊆ N } and invoking Observation 4.2, Theorem 5.1 can be restated even more similarly as

    dim_H(E) = inf_{µ∈A} sup_{x∈E} dim_loc µ(x)

and

    dim_P(E) = inf_{µ∈A} sup_{x∈E} Dim_loc µ(x).

Notice, however, that the collections over which the infima are taken in these two results, A and ∆(E), are disjoint and qualitatively very different. In particular, A does not depend on E. Whereas the global dimensions of the measures in ∆(E) are closely tied to the dimensions of E [9], (5.1) shows that the outer measures in A all have trivial global dimensions.
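The intuition behind (5.1) can be checked numerically: a measure concentrated on countably many atoms has local dimension 0 at every atom, because µ(B(x, 2^{−r})) is bounded below by the atom's own weight, independently of r. The toy measure in this Python sketch is our own illustration, not from the paper.

```python
import math

# Toy measure (ours): mu = sum over k of 2**-(k+1) * (point mass at 1/k).
# At any atom x, mu(B(x, 2**-r)) >= the atom's weight for every r, so the
# local-dimension estimate log(1/mu(B(x,2**-r)))/r is O(1/r) -> 0, mirroring
# why kappa, supported on the rationals, has trivial global dimensions.

atoms = [(1.0 / k, 2.0 ** -(k + 1)) for k in range(1, 200)]

def mu_ball(x, radius):
    return sum(w for (q, w) in atoms if abs(q - x) < radius)

x = 1.0 / 7                       # an atom of weight 2**-8
for r in (10, 20, 40):
    assert mu_ball(x, 2.0 ** -r) >= 2.0 ** -8

est = math.log2(1.0 / mu_ball(x, 2.0 ** -40)) / 40
assert est <= 8.0 / 40            # estimate at most 8/r, tending to 0
```

The same computation at finer precisions only shrinks the estimate further, which is the one-dimensional analogue of dim_loc κ(x) = 0 at rational points.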
Acknowledgments
This research was supported in part by National Science Foundation Grants 1445755, 1545028, and 1900716. Some of this work was conducted while the second author was at Rutgers University, DIMACS, and the University of Pennsylvania.
References

[1] Krishna B. Athreya, John M. Hitchcock, Jack H. Lutz, and Elvira Mayordomo. Effective strong dimension in algorithmic information and computational complexity. SIAM Journal on Computing, 37(3):671–705, 2007.

[2] Patrick Billingsley. Hausdorff dimension in probability theory II. Illinois Journal of Mathematics, 5(2):291–298, 1961.

[3] Christopher J. Bishop and Yuval Peres. Fractals in Probability and Analysis. Cambridge University Press, 2017.

[4] Constantin Carathéodory. Über das lineare Maß von Punktmengen—eine Verallgemeinerung des Längenbegriffs. Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse, pages 404–426, 1914. Translated as On the Linear Measure of Point Sets—a Generalization of the Concept of Length. In Classics on Fractals, Gerald A. Edgar (Ed.). Addison-Wesley, 1993.

[5] Constantin Carathéodory. Vorlesungen über reelle Funktionen. Teubner, Leipzig, 1918.

[6] Adam Case and Jack H. Lutz. Mutual dimension. ACM Transactions on Computation Theory, 7(3):12, 2015.

[7] Colleen D. Cutler. Strong and weak duality principles for fractal dimension in Euclidean space. Mathematical Proceedings of the Cambridge Philosophical Society, 118:393–410, 1995.

[8] Rod Downey and Denis Hirschfeldt. Algorithmic Randomness and Complexity. Springer-Verlag, 2010.

[9] Kenneth J. Falconer. Techniques in Fractal Geometry. Wiley, 1997.

[10] Kenneth J. Falconer. Fractal Geometry: Mathematical Foundations and Applications. Wiley, third edition, 2014.

[11] Otto Frostman. Potentiel d'équilibre et capacité des ensembles avec quelques applications à la théorie des fonctions. Meddelanden från Lunds Universitets Matematiska Seminarium, 3:1–118, 1935.

[12] Felix Hausdorff. Dimension und äusseres Mass. Mathematische Annalen, 79:157–179, 1919.

[13] Michael Hochman. Lectures on dynamics, fractal geometry, and metric number theory. Journal of Modern Dynamics, 8(3–4):437–497, 2014.

[14] Leonid A. Levin. On the notion of a random sequence. Soviet Mathematics Doklady, 14(5):1413–1416, 1973.

[15] Leonid A. Levin. Laws of information conservation (nongrowth) and aspects of the foundation of probability theory. Problemy Peredachi Informatsii, 10(3):30–35, 1974.

[16] Ming Li and Paul M. B. Vitányi. An Introduction to Kolmogorov Complexity and its Applications. Springer-Verlag, Berlin, fourth edition, 2019.

[17] Jack H. Lutz. The dimensions of individual strings and sequences. Information and Computation, 187(1):49–79, 2003.

[18] Jack H. Lutz and Neil Lutz. Algorithmic information, plane Kakeya sets, and conditional dimension. ACM Transactions on Computation Theory, 10(2), 2018.

[19] Jack H. Lutz and Neil Lutz. Who asked us? How the theory of computing answers questions about analysis. In Dingzhu Du and Jie Wang, editors, Complexity and Approximation: In Memory of Ker-I Ko, pages 48–56. Springer, 2020.

[20] Jack H. Lutz, Neil Lutz, and Elvira Mayordomo. The dimensions of hyperspaces. arXiv:2004.07798, 2020.

[21] Jack H. Lutz and Elvira Mayordomo. Dimensions of points in self-similar fractals. SIAM Journal on Computing, 38(3):1080–1112, 2008.

[22] Neil Lutz. Algorithmic Information, Fractal Geometry, and Distributed Dynamics. PhD thesis, Rutgers University, 2017.

[23] Neil Lutz. Fractal intersections and products via algorithmic dimension. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017), 2017.

[24] Neil Lutz. Fractal intersections and products via algorithmic dimension. arXiv:1612.01659v4 [cs.CC], 2019.

[25] Neil Lutz and D. M. Stull. Bounding the dimension of points on a line. In Proceedings of the 14th Annual Conference on Theory and Applications of Models of Computation (TAMC 2017), Bern, Switzerland, 2017.

[26] Neil Lutz and D. M. Stull. Projection theorems using effective dimension. In 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018), pages 71:1–71:15, 2018.

[27] Pertti Mattila. Geometry of Sets and Measures in Euclidean Spaces: Fractals and Rectifiability. Cambridge University Press, 1995.

[28] Elvira Mayordomo. A Kolmogorov complexity characterization of constructive Hausdorff dimension. Information Processing Letters, 84(1):1–3, 2002.

[29] Elvira Mayordomo. Effective fractal dimension in algorithmic information theory. In S. Barry Cooper, Benedikt Löwe, and Andrea Sorbi, editors, New Computational Paradigms: Changing Conceptions of What is Computable, pages 259–285. Springer-Verlag, 2008.

[30] André Nies. Computability and Randomness. Oxford University Press, 2009.

[31] Tuomas Orponen. Combinatorial proofs of two theorems of Lutz and Stull. arXiv:2002.01743 [math.CA], 2020.

[32] Jan Reimann. Computability and Fractal Dimension. PhD thesis, Heidelberg University, 2004.

[33] Claude E. Shannon. A mathematical theory of communication. Bell System Technical Journal, 27(3–4):379–423, 623–656, 1948.

[34] Alexander Shen and Nikolai K. Vereshchagin. Logical operations and Kolmogorov complexity. Theoretical Computer Science, 271(1–2):125–129, 2002.

[35] Terence Tao. An Introduction to Measure Theory. American Mathematical Society, 2011.