Dimension and measure for typical random fractals
Jonathan M. Fraser
Mathematical Institute, University of St Andrews, North Haugh, St Andrews, Fife, KY16 9SS, Scotland
e-mail: [email protected]
October 30, 2018
Abstract
A random iterated function system (RIFS) is a finite set of (deterministic) iterated function systems (IFSs) acting on the same metric space and, for a given RIFS, we define a continuum of random attractors corresponding to each sequence of deterministic IFSs. Much work has been done on computing the ‘almost sure’ dimensions of these random attractors. We compute the typical dimensions (in the sense of Baire) and observe that our results are in stark contrast to those obtained using the probabilistic approach. Furthermore, we examine the typical Hausdorff and packing measures of the random attractors and give examples to illustrate some of the strange phenomena that can occur. The only restriction we impose on the maps is that they are bi-Lipschitz and we obtain our dimension results without assuming any separation conditions.
Mathematics Subject Classification
Key words and phrases: Hausdorff dimension, packing dimension, box dimension, Baire category, random iterated function system.
In this paper we consider the dimension and measure of typical attractors of random iterated function systems (RIFSs). We define a RIFS to be a finite set of iterated function systems (IFSs) acting on the same metric space and, for a given RIFS, we define a continuum of random attractors corresponding to each sequence of deterministic IFSs. In fact, these attractors are 1-variable random fractals, as discussed in [B, BHS, BHS2]. Much work has been done on computing the ‘almost sure’ dimensions of these random attractors, where ‘almost sure’ refers to a probability measure on the sample space induced from a probability vector associated with the finite list of IFSs. One expects the dimension to be some sort of ‘weighted average’ of the dimensions corresponding to the attractors of the deterministic IFSs. Here, we consider a topological approach, based on Baire category, to computing the generic dimensions and obtain results in stark contrast to those obtained using the probabilistic approach. We are able to obtain very general results, only requiring that our maps are bi-Lipschitz and assuming no separation conditions. We compute the typical Hausdorff, packing and box dimensions of the random attractors (in the sense of Baire) and also study the typical Hausdorff and packing measures with respect to different gauge functions. Finally, we give a number of illustrative examples and open questions.

We find that the dimensions of typical attractors behave rather well. In particular, the typical Hausdorff and lower box dimensions are always as small as possible and the typical packing and upper box dimensions are always as large as possible. In comparison, the typical Hausdorff and packing measures behave rather badly. We provide examples where the typical Hausdorff measure in the critical dimension is as small as possible and examples where it is as large as possible (with similar examples concerning packing measure).
We find that in the simpler setting of random self-similar sets the behaviour of the typical Hausdorff and packing measures is more predictable.

1 The random model

Let $(K, d)$ be a compact metric space. A (deterministic) iterated function system (IFS) is a finite set of contraction mappings on $K$. Given such an IFS, $\{S_1, \dots, S_m\}$, it is well-known that there exists a unique non-empty compact set $F$ satisfying
\[
F = \bigcup_{i=1}^m S_i(F)
\]
which is called the attractor of the IFS. We define a random iterated function system (RIFS) to be a set $\mathbb{I} = \{I_1, \dots, I_N\}$, where each $I_i$ is a deterministic IFS, $I_i = \{S_{i,j}\}_{j \in \mathcal{I}_i}$, for a finite index set, $\mathcal{I}_i$, and each map, $S_{i,j}$, is a contracting bi-Lipschitz self-map on $K$. We define a continuum of attractors of $\mathbb{I}$ in the following way. Let $D = \{1, \dots, N\}$, $\Omega = D^{\mathbb{N}}$ and let $\omega = (\omega_1, \omega_2, \dots) \in \Omega$. Define the attractor of $\mathbb{I}$ corresponding to $\omega$ by
\[
F_\omega = \bigcap_{k} \; \bigcup_{i_1 \in \mathcal{I}_{\omega_1}, \dots, i_k \in \mathcal{I}_{\omega_k}} S_{\omega_1, i_1} \circ \cdots \circ S_{\omega_k, i_k}(K).
\]
So, by ‘randomly choosing’ $\omega \in \Omega$, we ‘randomly choose’ an attractor $F_\omega$. Attractors of RIFSs can enjoy a much richer and more complicated structure than attractors of IFSs. Some pictures have been included in Section 4.5 to help illustrate this. We now wish to make statements about the generic nature of $F_\omega$. In particular, what is the generic dimension of $F_\omega$? In the following section we briefly recall the notions of dimension we will be interested in.

Let $F$ be a subset of $K$. For $s \geqslant 0$ and $\delta > 0$, define the $\delta$-approximate $s$-dimensional Hausdorff measure of $F$ by
\[
\mathcal{H}^s_\delta(F) = \inf \Big\{ \sum_{i \in I} |U_i|^s : \{U_i\}_{i \in I} \text{ is a countable } \delta\text{-cover of } F \text{ by open sets} \Big\}
\]
and the $s$-dimensional Hausdorff (outer) measure of $F$ by $\mathcal{H}^s(F) = \lim_{\delta \to 0} \mathcal{H}^s_\delta(F)$. The Hausdorff dimension of $F$ is
\[
\dim_\mathrm{H} F = \inf \big\{ s \geqslant 0 : \mathcal{H}^s(F) = 0 \big\} = \sup \big\{ s \geqslant 0 : \mathcal{H}^s(F) = \infty \big\}.
\]
If $F$ is compact, then we may define the Hausdorff measure of $F$ in terms of finite covers. Packing measure, defined in terms of packings, is a natural dual to Hausdorff measure, which was defined in terms of covers.
For $s \geqslant 0$ and $\delta > 0$, define the $\delta$-approximate $s$-dimensional packing pre-measure of $F$ by
\[
\mathcal{P}_0^{s,\delta}(F) = \sup \Big\{ \sum_{i \in I} |U_i|^s : \{U_i\}_{i \in I} \text{ is a countable centered } \delta\text{-packing of } F \text{ by closed balls} \Big\}
\]
and the $s$-dimensional packing pre-measure of $F$ by $\mathcal{P}_0^s(F) = \lim_{\delta \to 0} \mathcal{P}_0^{s,\delta}(F)$. To ensure countable stability, the packing (outer) measure of $F$ is defined by
\[
\mathcal{P}^s(F) = \inf \Big\{ \sum_i \mathcal{P}_0^s(F_i) : F \subseteq \bigcup_i F_i \Big\}
\]
and the packing dimension of $F$ is
\[
\dim_\mathrm{P} F = \inf \big\{ s \geqslant 0 : \mathcal{P}^s(F) = 0 \big\} = \sup \big\{ s \geqslant 0 : \mathcal{P}^s(F) = \infty \big\}.
\]
A less sophisticated, but very useful, notion of dimension is box (or box-counting) dimension. The lower and upper box dimensions of $F$ are defined by
\[
\underline{\dim}_\mathrm{B} F = \liminf_{\delta \to 0} \frac{\log N_\delta(F)}{-\log \delta} \qquad \text{and} \qquad \overline{\dim}_\mathrm{B} F = \limsup_{\delta \to 0} \frac{\log N_\delta(F)}{-\log \delta},
\]
respectively, where $N_\delta(F)$ is the smallest number of open sets required for a $\delta$-cover of $F$. If $\underline{\dim}_\mathrm{B} F = \overline{\dim}_\mathrm{B} F$, then we call the common value the box dimension of $F$ and denote it by $\dim_\mathrm{B} F$. In general we have the following relationships between the dimensions discussed above:
\[
\dim_\mathrm{H} F \leqslant \dim_\mathrm{P} F \leqslant \overline{\dim}_\mathrm{B} F \qquad \text{and} \qquad \dim_\mathrm{H} F \leqslant \underline{\dim}_\mathrm{B} F \leqslant \overline{\dim}_\mathrm{B} F
\]
and, furthermore, if $F$ is compact and every ball centered in $F$ intersects $F$ in a set with upper box dimension the same as $F$, then $\dim_\mathrm{H} F \leqslant \underline{\dim}_\mathrm{B} F \leqslant \dim_\mathrm{P} F = \overline{\dim}_\mathrm{B} F$. This will apply in our situation.

It is possible to consider a ‘finer’ definition of Hausdorff and packing dimension. We define a gauge function to be a function, $G : (0, \infty) \to (0, \infty)$, which is continuous, monotonically increasing and satisfies $\lim_{t \to 0} G(t) = 0$.
We then define the Hausdorff measure, packing pre-measure and packing measure with respect to the gauge $G$ as
\[
\mathcal{H}^G(F) = \lim_{\delta \to 0} \inf \Big\{ \sum_{i \in I} G(|U_i|) : \{U_i\}_{i \in I} \text{ is a countable } \delta\text{-cover of } F \text{ by open sets} \Big\},
\]
\[
\mathcal{P}_0^G(F) = \lim_{\delta \to 0} \sup \Big\{ \sum_{i \in I} G(|U_i|) : \{U_i\}_{i \in I} \text{ is a countable centered } \delta\text{-packing of } F \text{ by closed balls} \Big\}
\]
and
\[
\mathcal{P}^G(F) = \inf \Big\{ \sum_i \mathcal{P}_0^G(F_i) : F \subseteq \bigcup_i F_i \Big\}
\]
respectively. Note that if $G(t) = t^s$ then we obtain the standard Hausdorff and packing measures. The advantage of this approach is that, in the case where the measure of a set is zero or infinite in its dimension, one may be able to find an appropriate gauge for which the measure is positive and finite. For example, with probability 1, Brownian trails in $\mathbb{R}^2$ have Hausdorff dimension 2, but 2-dimensional Hausdorff measure equal to zero. However, with probability 1, they have positive and finite $\mathcal{H}^G$-measure with respect to the gauge $G(t) = t^2 \log(1/t) \log\log\log(1/t)$, see [F2], Chapter 16, and the references therein.

For a given gauge function, $G$, and a constant, $c > 0$, we define
\[
D^-(G, c) = \inf_{t > 0} \frac{G(ct)}{G(t)} \qquad \text{and} \qquad D^+(G, c) = \sup_{t > 0} \frac{G(ct)}{G(t)}.
\]
Notice that, if $c \leqslant 1$, then $D^+(G, c) \leqslant 1$. It is easy to see that if $0 < D^-(G, c) \leqslant D^+(G, c) < \infty$ for some $c > 0$, then $0 < D^-(G, c) \leqslant D^+(G, c) < \infty$ for all $c > 0$, in which case we call $G$ doubling. The standard gauge is clearly doubling, with $D^-(G, c) = D^+(G, c) = c^s$. For a more detailed discussion of this finer approach to dimension see [F2], Section 2.5, or [R], Chapter 2.

Remark 1.1.
We have defined Hausdorff measure and box dimension by means of covers by open sets. We do this for technical reasons and note that these definitions are equivalent to the standard definitions using covers by arbitrary sets, see [M], Theorem 4.4.
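To make the random model and the box-counting definition concrete, the following minimal sketch builds the level-$k$ approximations of $F_\omega$ for a toy RIFS of two similarity IFSs on $K = [0,1]$ and box-counts the result. The specific maps and the sequence $\omega$ are illustrative choices of ours, not an example from the paper.

```python
import math

# Illustrative RIFS on K = [0,1] (a toy example, not from the paper):
# I_1 = the middle-third Cantor maps, I_2 = a "quarter Cantor" pair of maps.
IFS = {
    1: [lambda x: x / 3, lambda x: x / 3 + 2 / 3],
    2: [lambda x: x / 4, lambda x: x / 4 + 3 / 4],
}

def level_k_intervals(omega, k):
    """The level-k set: the union of S_{omega_1,i_1} o ... o S_{omega_k,i_k}([0,1]).
    Intersecting these sets over k gives F_omega."""
    ivals = [(0.0, 1.0)]
    for j in reversed(range(k)):          # innermost maps are applied first
        ivals = [(S(a), S(b)) for (a, b) in ivals for S in IFS[omega[j]]]
    return ivals

def box_dim_estimate(ivals, delta):
    """log N_delta / (-log delta), counting grid boxes that meet the intervals."""
    boxes = {m for a, b in ivals for m in range(int(a / delta), int(b / delta) + 1)}
    return math.log(len(boxes)) / -math.log(delta)

omega = [1, 2] * 5                        # alternate the two IFSs
ivals = level_k_intervals(omega, 10)
delta = max(b - a for a, b in ivals)      # the common level-10 interval length
est = box_dim_estimate(ivals, delta)
print(len(ivals), est)                    # 1024 pieces; a rough dimension estimate
```

At this coarse resolution the estimate is only indicative; the point is that the approximating sets, and hence crude dimension estimates, are computable directly from the sequence $\omega$.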
In this section we introduce some separation properties which will be required for some of our results. Note that our main theorem (Theorem 2.1) does not require any separation properties.
Definition 1.2.
We say that a deterministic IFS, $\{S_i\}_{i=1}^m$, satisfies the open set condition (OSC) if there exists a non-empty open set, $O$, such that
\[
\bigcup_{i=1}^m S_i(O) \subseteq O
\]
with the union disjoint.
We generalise the OSC to the RIFS situation in the following way.
Definition 1.3.
We say that $\mathbb{I}$ satisfies the uniform open set condition (UOSC) if each deterministic IFS satisfies the OSC and the open set can be chosen uniformly, i.e., there exists a non-empty open set $O \subseteq K$ such that, for each $i \in D$, we have
\[
\bigcup_{j \in \mathcal{I}_i} S_{i,j}(O) \subseteq O
\]
with the union disjoint. The UOSC also appears in, for example, [BHS].
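For affine maps on the line the UOSC can be checked mechanically: each image of a candidate open interval is again an interval, so containment and disjointness reduce to endpoint comparisons. A minimal sketch for an illustrative RIFS (our own toy example) with the uniform open set $O = (0,1)$:

```python
# Sketch: verifying the UOSC for an illustrative self-similar RIFS on K = [0,1],
# with the uniform open set O = (0,1). Maps x -> r*x + a are recorded as (r, a).
IFS = {
    1: [(1/3, 0.0), (1/3, 2/3)],     # middle-third Cantor maps
    2: [(1/4, 0.0), (1/4, 3/4)],     # "quarter Cantor" maps
}

def image_of_O(r, a):
    """Image of the open interval O = (0, 1) under x -> r*x + a."""
    return (a, r + a)

for i, maps in IFS.items():
    images = sorted(image_of_O(r, a) for r, a in maps)
    assert all(0.0 <= lo and hi <= 1.0 for lo, hi in images)   # union inside O
    assert all(images[t][1] <= images[t + 1][0]                # pairwise disjoint
               for t in range(len(images) - 1))
print("UOSC holds with O = (0, 1)")
```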
Definition 1.4.
Let $\mu$ be a Borel measure supported on $K$. We say that $\mathbb{I}$ satisfies the $\mu$-measure separated condition ($\mu$-MSC) if, for all $\omega \in \Omega$, $l \in D$ and $i, j \in \mathcal{I}_l$ with $i \neq j$, we have
\[
\mu\big( S_{l,i}(F_\omega) \cap S_{l,j}(F_\omega) \big) = 0.
\]
The $\mu$-MSC means that $\mu$ will be additive on the subsets of $F_\omega$ corresponding to images of finite (distinct) sequences of maps, $S_{\omega_1, i_1}, \dots, S_{\omega_k, i_k}$. In this paper we will use the $\mu$-MSC with $\mu$ equal to either the Hausdorff or packing measure.

In Section 1.1 we mentioned that our maps are assumed to be bi-Lipschitz contractions. In this section we will fix some related notation which we will need to state some of our results. For a map $\phi : K \to K$ define
\[
\mathrm{Lip}^-(\phi) = \inf_{\substack{x, y \in K \\ x \neq y}} \frac{d\big(\phi(x), \phi(y)\big)}{d(x, y)} \qquad \text{and} \qquad \mathrm{Lip}^+(\phi) = \sup_{\substack{x, y \in K \\ x \neq y}} \frac{d\big(\phi(x), \phi(y)\big)}{d(x, y)}.
\]
If $\mathrm{Lip}^+(\phi) < \infty$, then we say $\phi$ is Lipschitz and if, in addition, $\mathrm{Lip}^-(\phi) > 0$, then we say $\phi$ is bi-Lipschitz. If $\mathrm{Lip}^+(\phi) < 1$, then we say $\phi$ is a contraction. Finally, if $\mathrm{Lip}^-(\phi) = \mathrm{Lip}^+(\phi) < 1$, then we write $\mathrm{Lip}(\phi)$ to denote the common value and say that $\phi$ is a similarity. Given a deterministic IFS, $\{S_1, \dots, S_k\}$, consisting of similarities, the similarity dimension is defined to be the unique solution $s$ of Hutchinson's formula
\[
\sum_{i=1}^k \mathrm{Lip}(S_i)^s = 1.
\]
It is well-known that if such an IFS satisfies the OSC, then the similarity dimension equals the Hausdorff, packing and box dimension of the attractor, see [F2], Section 9.3.
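Since $s \mapsto \sum_i \mathrm{Lip}(S_i)^s$ is strictly decreasing, the similarity dimension can be computed numerically by bisection. A minimal sketch (the contraction ratios below are illustrative):

```python
def similarity_dimension(ratios, tol=1e-12):
    """Solve Hutchinson's formula sum_i r_i^s = 1 by bisection; the left-hand
    side is strictly decreasing in s for contraction ratios 0 < r_i < 1."""
    f = lambda s: sum(r ** s for r in ratios) - 1.0
    lo, hi = 0.0, 1.0
    while f(hi) > 0:                 # grow the bracket until f changes sign
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

print(similarity_dimension([1/3, 1/3]))        # log 2 / log 3 = 0.6309...
print(similarity_dimension([1/2, 1/4, 1/4]))   # exactly 1
```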
The most common approach to studying random fractals is to associate a probability measure with the space of possible attractors and then make almost sure statements. For some examples based on conformal systems, see [F3], [LW], [O1], [BHS], [BHS2], [B]; and for non-conformal (self-affine) systems, see [GuLi], [GuLi2], [GL], [O3], [FO]. For the random model we described in Section 1.1 this probabilistic approach would go as follows. Associate a probability vector, $p = (p_1, \dots, p_N)$, with $\mathbb{I}$. Then, to obtain our random attractor, we choose each entry in $\omega$ randomly and independently with respect to $p$. This induces a probability measure, $\mathbb{P}$, on $\Omega$ given by
\[
\mathbb{P} = \prod_{\mathbb{N}} \sum_{i=1}^N p_i \, \delta_i,
\]
where $\delta_i$ is the Dirac measure concentrated at $i \in D = \{1, \dots, N\}$. We then say a property of the random attractors is generic if it occurs for $\mathbb{P}$-almost all $\omega \in \Omega$. This approach has attracted much attention in the literature with the ergodic theorem often playing a key role in the analysis, utilising the fact that $\mathbb{P}$ is ergodic with respect to the left shift on $\Omega$. We give a couple of examples.

Theorem 1.5 ([BHS]). Let $\mathbb{I} = \{I_1, \dots, I_N\}$ be an RIFS consisting of similarity maps on $\mathbb{R}^n$ with associated probability vector $p = (p_1, \dots, p_N)$. Assume that $\mathbb{I}$ satisfies the UOSC and let $s$ be the solution of
\[
\prod_{i=1}^N \bigg( \sum_{j \in \mathcal{I}_i} \mathrm{Lip}(S_{i,j})^s \bigg)^{p_i} = 1. \tag{1.1}
\]
Then, for $\mathbb{P}$-almost all $\omega \in \Omega$, $\dim_\mathrm{H} F_\omega = \dim_\mathrm{B} F_\omega = \dim_\mathrm{P} F_\omega = s$.

Equation (1.1) should be viewed as a randomised version of Hutchinson's formula. Here the almost sure dimension is ‘some sort of’ weighted average of the dimensions of the attractors of the $I_i$. For a proof of Theorem 1.5, see [BHS], or alternatively, [B], Chapter 5.7, and the references therein.

Self-affine sets are an important class of fractals and often provide examples of strange behaviour not observed in the self-similar setting.
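Returning to Theorem 1.5: taking logarithms, equation (1.1) reads $\sum_i p_i \log\big( \sum_{j \in \mathcal{I}_i} \mathrm{Lip}(S_{i,j})^s \big) = 0$, with a strictly decreasing left-hand side, so it too can be solved by bisection. A minimal sketch with illustrative ratios and weights (not an example from the paper):

```python
import math

def almost_sure_dimension(ifs_ratios, p, tol=1e-12):
    """Solve equation (1.1): in logarithmic form,
    sum_i p_i * log( sum_j r_{i,j}^s ) = 0, decreasing in s."""
    f = lambda s: sum(pi * math.log(sum(r ** s for r in ratios))
                      for pi, ratios in zip(p, ifs_ratios))
    lo, hi = 0.0, 1.0
    while f(hi) > 0:
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# Illustrative RIFS: I_1 has ratios (1/3, 1/3) (dimension log 2 / log 3 = 0.6309...)
# and I_2 has ratios (1/4, 1/4) (dimension 1/2); equal weights p = (1/2, 1/2).
s = almost_sure_dimension([[1/3, 1/3], [1/4, 1/4]], [0.5, 0.5])
print(s)   # 2 log 2 / log 12 = 0.5579..., strictly between 1/2 and 0.6309...
```

Note the answer lies between the two deterministic dimensions but is not their arithmetic mean: the almost sure dimension is only ‘some sort of’ weighted average.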
We will now discuss a well-studied class of self-affine sets and random self-affine sets which we will use in Section 4 to demonstrate some important phenomena. Take the unit square and divide it up into an $m \times n$ grid for some $m \leqslant n$. Now choose a subset of the rectangles formed by the grid and form an IFS of affine maps which take the unit square onto each chosen subrectangle, preserving orientation. The attractor of this system is called a self-affine Sierpiński carpet. A formula for the Hausdorff dimension was obtained independently by Bedford [Be] and McMullen [Mc]. Now consider a random Sierpiński carpet where we take $N$ deterministic IFSs, $I_i$, built by dividing the unit square into an $m_i \times n_i$ grid with $m_i \leqslant n_i$, and an associated probability vector $p = (p_1, \dots, p_N)$. The following dimension formula was given in [GuLi2] and can be derived from results in [FO].

Theorem 1.6 ([FO], [GuLi2]). For $j = 1, \dots, m_i$, let $C_{i,j}$ denote the number of rectangles chosen in the $j$th column in the $i$th IFS. Let $\nu_1 = m_1^{p_1} \cdots m_N^{p_N}$ and $\nu_2 = n_1^{p_1} \cdots n_N^{p_N}$. Then, for $\mathbb{P}$-almost all $\omega \in \Omega$,
\[
\dim_\mathrm{H} F_\omega = \sum_{i=1}^N p_i \Bigg( \frac{1}{\log \nu_1} \log \bigg( \sum_{j=1}^{m_i} C_{i,j}^{\log \nu_1 / \log \nu_2} \bigg) \Bigg).
\]
We note that in [FO] a higher dimensional analogue of Theorem 1.6 was obtained where one begins the construction with the unit cube in $\mathbb{R}^d$ rather than the unit square. Notice that if $m_i = m$ and $n_i = n$ for all $i$, then the above dimension formula simplifies to
\[
\dim_\mathrm{H} F_\omega = \sum_{i=1}^N p_i \Bigg( \frac{1}{\log m} \log \bigg( \sum_{j=1}^{m} C_{i,j}^{\log m / \log n} \bigg) \Bigg) = \sum_{i=1}^N p_i \, s_i,
\]
where $s_i$ is the Hausdorff dimension of the attractor of $I_i$ given by Bedford and McMullen. In this case, the almost sure Hausdorff and box dimensions were computed in [GuLi]. If the $m_i$ and $n_i$ are not chosen uniformly, then we have a nonlinear dependence on the probability vector $p$. An example using Theorem 1.6 will be given in Section 4.2.
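The dimension formula of Theorem 1.6 is straightforward to evaluate. The sketch below implements the formula as reconstructed above, with illustrative column counts of our own; in the uniform-grid case it reduces to $\sum_i p_i s_i$, which gives a sanity check against the Bedford-McMullen formula.

```python
import math

def random_carpet_dim(grids, C, p):
    """Evaluate the dimension formula of Theorem 1.6 (as stated above).
    grids[i] = (m_i, n_i); C[i][j] = number of rectangles chosen in column j
    of the i-th IFS; p = the probability vector."""
    nu1 = math.prod(m ** pi for (m, n), pi in zip(grids, p))
    nu2 = math.prod(n ** pi for (m, n), pi in zip(grids, p))
    e = math.log(nu1) / math.log(nu2)
    return sum(pi * math.log(sum(c ** e for c in Ci)) / math.log(nu1)
               for pi, Ci in zip(p, C))

# Uniform-grid sanity check (m_i = 2, n_i = 3 for both IFSs): the formula should
# reduce to sum_i p_i s_i with s_i the Bedford-McMullen dimension of carpet i.
grids = [(2, 3), (2, 3)]
C = [[2, 1], [1, 1]]         # illustrative column counts
p = [0.5, 0.5]
d = random_carpet_dim(grids, C, p)
e = math.log(2) / math.log(3)
bm = [math.log(2 ** e + 1) / math.log(2), 1.0]   # Bedford-McMullen dimensions
print(d, 0.5 * bm[0] + 0.5 * bm[1])              # the two agree
```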
In this paper we will investigate the generic dimension and measure of $F_\omega$ from a topological point of view using Baire category. In this section we will recall the basic definitions and theorems. Let $(X, d)$ be a complete metric space. A set $N \subseteq X$ is nowhere dense if for all $x \in N$ and for all $r > 0$ there exist $y \in X \setminus N$ and $t > 0$ such that
\[
B(y, t) \subseteq B(x, r) \setminus N.
\]
A set $M$ is said to be of the first category, or meagre, if it can be written as a countable union of nowhere dense sets. We think of a meagre set as being small and the complement of a meagre set as being big. A set $T \subseteq X$ is residual or co-meagre if $X \setminus T$ is meagre. A property is called typical if the set of points which have the property is residual. In Section 3 we will use the following theorem to test for typicality without mentioning it explicitly.

Theorem 1.7. In a complete metric space, a set $T$ is residual if and only if $T$ contains a countable intersection of open dense sets or, equivalently, $T$ contains a dense $G_\delta$ subset of $X$.

Proof. See [Ox].

In order to consider typical properties of members of $\Omega$, we need to topologize $\Omega$ in a suitable way. We do this by equipping it with the metric $d_\Omega$ where, for $u = (u_1, u_2, \dots) \neq v = (v_1, v_2, \dots) \in \Omega$,
\[
d_\Omega(u, v) = 2^{-k} \qquad \text{where} \qquad k = \min\{ n \in \mathbb{N} : u_n \neq v_n \}.
\]
The space $(\Omega, d_\Omega)$ is complete. For a more detailed account of Baire category the reader is referred to [Ox]. It is worth noting that one could also formulate the topological approach using the set $\{F_\omega : \omega \in \Omega\}$ instead of $\Omega$. In fact, this leads to an equivalent analysis but since we do not use this approach directly we defer discussion of it until Section 5.

In this section we state our results. In Section 2.1 we state results which apply in very general circumstances, namely, the random iterated function systems introduced in Section 1.1.
Theorem 2.1 is the main result of the paper and gives the typical Hausdorff, packing and upper and lower box dimensions of $F_\omega$ and, furthermore, gives sufficient conditions for the typical Hausdorff and packing measures with respect to any (doubling) gauge function to be zero or infinite. In Section 2.2 we specialise to the self-similar setting. Our main result is the following.
Theorem 2.1.
Let $G : (0, \infty) \to (0, \infty)$ be a gauge function.

(1) If $\inf_{u \in \Omega} \mathcal{H}^G(F_u) = 0$, then for a typical $\omega \in \Omega$, we have $\mathcal{H}^G(F_\omega) = 0$;

(2) If $G$ is doubling and $\sup_{u \in \Omega} \mathcal{P}^G(F_u) = \infty$, then for a typical $\omega \in \Omega$, we have $\mathcal{P}^G(F_\omega) = \infty$;

(3) The typical Hausdorff dimension is infimal, i.e., for a typical $\omega \in \Omega$, we have
\[
\dim_\mathrm{H} F_\omega = \inf_{u \in \Omega} \dim_\mathrm{H} F_u;
\]

(4) The packing dimension and upper box dimension are supremal and, in fact, for a typical $\omega \in \Omega$, we have
\[
\overline{\dim}_\mathrm{B} F_\omega = \dim_\mathrm{P} F_\omega = \sup_{u \in \Omega} \overline{\dim}_\mathrm{B} F_u = \sup_{u \in \Omega} \dim_\mathrm{P} F_u;
\]

(5) The lower box dimension is infimal, i.e., for a typical $\omega \in \Omega$, we have
\[
\underline{\dim}_\mathrm{B} F_\omega = \inf_{u \in \Omega} \underline{\dim}_\mathrm{B} F_u.
\]

We will prove Theorem 2.1 part (1) in Section 3.2; part (2) in Section 3.3; and part (5) in Section 3.4. Choosing $G$ such that $G(t) = t^s$, part (3) follows from part (1), and part (4) follows from part (2) combined with the observation that the packing and upper box dimensions coincide for all random attractors, see Lemma 3.2.

It is slightly unsatisfactory that in Theorem 2.1 part (1) we do not get a precise value for the typical Hausdorff measure if the infimal Hausdorff measure is positive and finite; and similarly, in part (2) we do not get a precise value for the typical packing measure if the supremal packing measure is positive and finite. In keeping with the rest of the results and what is ‘usually’ expected when dealing with Baire category, one might expect that either: the typical Hausdorff measure will be the infimal value and the typical packing measure will be the supremal value; or, even though $F_\omega$ will typically be ‘small’ in terms of Hausdorff dimension and ‘large’ in terms of packing dimension, due to the influence of deterministic IFSs with non-extremal attractors, they will be ‘large’ in terms of Hausdorff measure and ‘small’ in terms of packing measure. Surprisingly, both of these phenomena are possible.
In the following two theorems we identify a large class of RIFS where the second type of behaviour occurs. Theorem 2.2 refers to Hausdorff measure and Theorem 2.3 refers to packing measure.

Theorem 2.2.
Write $h = \inf_{u \in \Omega} \dim_\mathrm{H} F_u$ and assume that $\mathbb{I}$ satisfies the $\mathcal{H}^h$-MSC and that there exists $v = (v_1, v_2, \dots) \in \Omega$ such that
\[
\lim_{l \to \infty} \sum_{j_1 \in \mathcal{I}_{v_1}, \dots, j_l \in \mathcal{I}_{v_l}} \mathrm{Lip}^-\big( S_{v_1, j_1} \circ \cdots \circ S_{v_l, j_l} \big)^h = \infty. \tag{2.1}
\]
Then,

(1) If $\inf_{u \in \Omega} \mathcal{H}^h(F_u) = 0$, then for a typical $\omega \in \Omega$, we have $\mathcal{H}^h(F_\omega) = 0$;

(2) If $\inf_{u \in \Omega} \mathcal{H}^h(F_u) > 0$, then for a typical $\omega \in \Omega$, we have $\mathcal{H}^h(F_\omega) = \infty$.

Note that part (1) follows from Theorem 2.1. We will prove Theorem 2.2 (2) in Section 3.5. Although condition (2.1) seems a little contrived, what it really means is that, for some $v \in \Omega$, we can give a simple lower bound for the Hausdorff dimension of $F_v$ which is strictly bigger than the infimal Hausdorff dimension, $h$.

Theorem 2.3.
Write $p = \sup_{u \in \Omega} \dim_\mathrm{P} F_u$ and assume that there exists $v = (v_1, v_2, \dots) \in \Omega$ such that
\[
\lim_{k \to \infty} \sum_{j_1 \in \mathcal{I}_{v_1}, \dots, j_k \in \mathcal{I}_{v_k}} \mathrm{Lip}^+\big( S_{v_1, j_1} \circ \cdots \circ S_{v_k, j_k} \big)^p = 0. \tag{2.2}
\]
Then,

(1) If $\sup_{u \in \Omega} \mathcal{P}^p(F_u) = \infty$, then for a typical $\omega \in \Omega$, we have $\mathcal{P}^p(F_\omega) = \infty$;

(2) If $\sup_{u \in \Omega} \mathcal{P}^p(F_u) < \infty$, then for a typical $\omega \in \Omega$, we have $\mathcal{P}^p(F_\omega) = 0$.

Note that in Theorem 2.3 we do not require any separation conditions. Part (1) follows from Theorem 2.1. We will prove Theorem 2.3 (2) in Section 3.6. Similar to above, condition (2.2) seems a little contrived at first sight but what it really means is that, for some $v \in \Omega$, we can give a simple upper bound for the packing dimension of $F_v$ which is strictly smaller than the supremal packing dimension, $p$.

With the previous two theorems in mind, one might be tempted to think that something much more general is true. Namely, that for $s \geqslant 0$, we have

(1) If $\inf_{u \in \Omega} \mathcal{H}^s(F_u) > 0$, then for a typical $\omega \in \Omega$, we have $\mathcal{H}^s(F_\omega) = \sup_{u \in \Omega} \mathcal{H}^s(F_u)$;

(2) If $\sup_{u \in \Omega} \mathcal{P}^s(F_u) < \infty$, then for a typical $\omega \in \Omega$, we have $\mathcal{P}^s(F_\omega) = \inf_{u \in \Omega} \mathcal{P}^s(F_u)$.

However, this is false. We will demonstrate this by constructing two simple examples in Section 4.1. This ‘bad behaviour’ of the typical packing and Hausdorff measures disappears to a certain extent if the mappings in the RIFS are similarities. This idea will be developed in the following section.

In this section we extend the results of the previous section in the self-similar setting. It turns out that for random self-similar sets we can obtain more precise information and, furthermore, many of the strange phenomena which we observe in the general setting no longer occur. The first example of this is that, given the UOSC, the dimensions of $F_\omega$ are bounded by the dimensions of the attractors of the deterministic IFSs. This allows us to get our hands on the extremal quantities, see Theorem 2.4. Unfortunately, this rather nice property does not always hold in the general situation. In Section 4.2 we will give an example of a RIFS satisfying the UOSC for which the infimal (and thus typical) Hausdorff dimension is strictly less than the minimum Hausdorff dimension of the attractors of the deterministic IFSs. Secondly, given the UOSC and certain measure separation, we can compute the exact value of the typical Hausdorff and packing measures, see Theorem 2.5, which we are unable to do in the general situation.

Throughout this section let $\mathbb{I}$ be a RIFS consisting of finitely many deterministic IFSs of similarity mappings of $\mathbb{R}^n$. For each $i \in D$, let $s_i$ be the solution of
\[
\sum_{j \in \mathcal{I}_i} \mathrm{Lip}(S_{i,j})^{s_i} = 1
\]
and write $s_{\min} = \min_{i \in D} s_i$ and $s_{\max} = \max_{i \in D} s_i$.

Theorem 2.4.
Assume the UOSC is satisfied. Then

(1) $0 < \sup_{\omega \in \Omega} \mathcal{P}^{s_{\max}}(F_\omega) < \infty$;

(2) $\sup_{\omega \in \Omega} \dim_\mathrm{P} F_\omega = \sup_{\omega \in \Omega} \overline{\dim}_\mathrm{B} F_\omega = s_{\max}$;

(3) $0 < \inf_{\omega \in \Omega} \mathcal{H}^{s_{\min}}(F_\omega) < \infty$;

(4) $\inf_{\omega \in \Omega} \dim_\mathrm{H} F_\omega = \inf_{\omega \in \Omega} \underline{\dim}_\mathrm{B} F_\omega = s_{\min}$.

We will prove Theorem 2.4 parts (1) and (3) in Section 3.7. Part (2) follows from part (1) and part (4) follows from part (3). Given certain measure separation we can also compute the exact packing and Hausdorff measure for typical $F_\omega$. Write $\mathcal{H}_{\min} = \inf_{\omega \in \Omega} \mathcal{H}^{s_{\min}}(F_\omega)$ and $\mathcal{P}_{\max} = \sup_{\omega \in \Omega} \mathcal{P}^{s_{\max}}(F_\omega)$.

Theorem 2.5.
Assume that $\mathbb{I}$ satisfies the UOSC and the $\mathcal{P}^{s_{\min}}$-MSC. Then

(1) If $s_{\min} = s_{\max} = s$, then for a typical $\omega \in \Omega$, $\dim_\mathrm{H} F_\omega = \dim_\mathrm{P} F_\omega = s$ and
\[
0 < \mathcal{H}^s(F_\omega) = \mathcal{H}_{\min} \leqslant \mathcal{P}_{\max} = \mathcal{P}^s(F_\omega) < \infty;
\]

(2) If $s_{\min} < s_{\max}$, then for a typical $\omega \in \Omega$, $\dim_\mathrm{H} F_\omega = s_{\min} < s_{\max} = \dim_\mathrm{P} F_\omega$, $\mathcal{H}^{s_{\min}}(F_\omega) = \infty$ and $\mathcal{P}^{s_{\max}}(F_\omega) = 0$.

We will prove Theorem 2.5 (1) in Section 3.8. Note that part (2) follows immediately from Theorems 2.2 and 2.3. In Section 4.3 we construct a simple example where we can apply Theorem 2.5. It is worth noting here that it is possible to give easily checkable sufficient conditions for the $\mathcal{P}^{s_{\min}}$-MSC to hold. In particular, say that $\mathbb{I}$ satisfies the uniform strong open set condition (USOSC) if the UOSC is satisfied and the open set $O$ can be chosen such that, for every $\omega \in \Omega$, we have $O \cap F_\omega \neq \emptyset$. Then we can use an argument similar to that used by Lalley in [L], Section 6, to show that the $\mathcal{P}^{s_{\min}}$-MSC is satisfied. Unfortunately, the USOSC is not equivalent to the UOSC as in the deterministic case, see [Sc].

We can also obtain a partial result concerning packing measure without assuming any separation conditions.

Theorem 2.6.
Suppose each deterministic IFS, $I_i \in \mathbb{I}$, has an attractor with dimension $d_i$ and similarity dimension $s_i \geqslant d_i$. Assume that $s_{\min} < \max_i d_i$. Write $p = \sup_{u \in \Omega} \dim_\mathrm{P} F_u$. Then, for a typical $\omega \in \Omega$, $\dim_\mathrm{P} F_\omega = p$, but $\mathcal{P}^p(F_\omega) = 0$.

Proof. This follows immediately from Theorem 2.3.
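To see Theorems 2.4 and 2.5 in a concrete instance: for a self-similar RIFS the extremal exponents $s_{\min}$ and $s_{\max}$ come straight from Hutchinson's formula applied to each deterministic IFS. A minimal sketch (the ratios are illustrative, and we assume the UOSC holds for the chosen maps):

```python
def sim_dim(ratios, tol=1e-12):
    """Bisection solver for Hutchinson's formula sum_j r_j^s = 1."""
    f = lambda s: sum(r ** s for r in ratios) - 1.0
    lo, hi = 0.0, 1.0
    while f(hi) > 0:
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# Illustrative self-similar RIFS (assumed to satisfy the UOSC on [0,1]):
# I_1 with ratios (1/3, 1/3) and I_2 with ratios (1/4, 1/4).
dims = [sim_dim(r) for r in [[1/3, 1/3], [1/4, 1/4]]]
s_min, s_max = min(dims), max(dims)
# Theorems 2.4 and 2.5(2): typically dim_H F_omega = s_min < s_max = dim_P F_omega.
print(s_min, s_max)   # 0.5 and log 2 / log 3 = 0.6309...
```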
3 Proofs
Throughout this section let $G : (0, \infty) \to (0, \infty)$ be a gauge function. In this section we will gather together some simple preliminary results and observations which will be used in the subsequent sections without being mentioned explicitly. The proofs are elementary (or classical) and are omitted.
Lemma 3.1 (scaling properties). Let $\phi : K \to K$ be a bi-Lipschitz map and $F \subseteq K$. Then
\[
D^-\big(G, \mathrm{Lip}^-(\phi)\big) \, \mathcal{H}^G(F) \leqslant \mathcal{H}^G(\phi(F)) \leqslant D^+\big(G, \mathrm{Lip}^+(\phi)\big) \, \mathcal{H}^G(F),
\]
\[
D^-\big(G, \mathrm{Lip}^-(\phi)\big) \, \mathcal{P}_0^G(F) \leqslant \mathcal{P}_0^G(\phi(F)) \leqslant D^+\big(G, \mathrm{Lip}^+(\phi)\big) \, \mathcal{P}_0^G(F)
\]
and
\[
D^-\big(G, \mathrm{Lip}^-(\phi)\big) \, \mathcal{P}^G(F) \leqslant \mathcal{P}^G(\phi(F)) \leqslant D^+\big(G, \mathrm{Lip}^+(\phi)\big) \, \mathcal{P}^G(F).
\]
In particular, using the standard gauge,
\[
\mathrm{Lip}^-(\phi)^s \, \mathcal{H}^s(F) \leqslant \mathcal{H}^s(\phi(F)) \leqslant \mathrm{Lip}^+(\phi)^s \, \mathcal{H}^s(F),
\]
\[
\mathrm{Lip}^-(\phi)^s \, \mathcal{P}_0^s(F) \leqslant \mathcal{P}_0^s(\phi(F)) \leqslant \mathrm{Lip}^+(\phi)^s \, \mathcal{P}_0^s(F)
\]
and
\[
\mathrm{Lip}^-(\phi)^s \, \mathcal{P}^s(F) \leqslant \mathcal{P}^s(\phi(F)) \leqslant \mathrm{Lip}^+(\phi)^s \, \mathcal{P}^s(F).
\]
Lemma 3.1 says that if the gauge is doubling, then mapping a set under a bi-Lipschitz map only changes the measure by a constant. Clearly if $\phi$ is bi-Lipschitz, then $\dim \phi(F) = \dim F$, where $\dim$ can be any of the four dimensions used here. We can also deduce that, for all $\omega \in \Omega$, the upper box dimension and packing dimension coincide.
Lemma 3.2 (packing and upper box dimension). For all $\omega \in \Omega$, $\dim_\mathrm{P} F_\omega = \overline{\dim}_\mathrm{B} F_\omega$.

To prove this simply note that all balls centered in $F_\omega$ contain a bi-Lipschitz image of $F_{(\omega_k, \omega_{k+1}, \dots)}$ for some sufficiently large $k$ and, furthermore, $F_\omega$ can be written as a finite union of bi-Lipschitz images of $F_{(\omega_k, \omega_{k+1}, \dots)}$, and since upper box dimension is finitely stable, $\overline{\dim}_\mathrm{B} F_{(\omega_k, \omega_{k+1}, \dots)} = \overline{\dim}_\mathrm{B} F_\omega$ and the result follows. See the discussion on sufficient conditions for the equality of packing and upper box dimension given in Section 1.2.

We recall the definition of the Hausdorff metric. Let $\mathcal{K}(K)$ denote the set of all compact subsets of $(K, d)$. This forms a complete metric space when equipped with the Hausdorff metric, $d_\mathcal{H}$, which is defined by
\[
d_\mathcal{H}(E, F) = \inf \{ \varepsilon > 0 : E \subseteq F_\varepsilon \text{ and } F \subseteq E_\varepsilon \}
\]
for $E, F \in \mathcal{K}(K)$ and where $E_\varepsilon$ denotes the $\varepsilon$-neighbourhood of $E$. The following lemma will allow us to approximate $F_\omega$ in $K$ by approximating $\omega$ in $\Omega$, which will be of vital importance in the subsequent proofs.

Lemma 3.3 (continuity properties). The map $\Psi : (\Omega, d_\Omega) \to (\mathcal{K}(K), d_\mathcal{H})$ defined by $\Psi(\omega) = F_\omega$ is continuous.

Finally, we will state a version of the mass distribution principle which we use to estimate the Hausdorff and packing measures of random self-similar sets in Section 3.7.
Proposition 3.4 (mass distribution principle). Let $\mu$ be a Borel probability measure supported on a Borel set $F \subset \mathbb{R}^n$ and let $\lambda \in (0, \infty)$. Then

(1) If $\limsup_{r \to 0} \mu\big(B(x, r)\big) r^{-s} \leqslant \lambda$ for all $x \in F$, then $\mathcal{H}^s(F) \geqslant \lambda^{-1}$;

(2) If $\liminf_{r \to 0} \mu\big(B(x, r)\big) r^{-s} \geqslant \lambda$ for all $x \in F$, then $\mathcal{P}^s(F) \leqslant 2^s \lambda^{-1}$.

For a proof of this, see [F1, M].

3.2 Proof of Theorem 2.1 (1)
Suppose $\inf_{u \in \Omega} \mathcal{H}^G(F_u) = 0$. We will show that the set
\[
H = \{ \omega \in \Omega : \mathcal{H}^G(F_\omega) = 0 \}
\]
is residual. Writing $H_{m,n} = \{ \omega \in \Omega : \mathcal{H}^G_{1/m}(F_\omega) < 1/n \}$, we have
\[
H = \bigcap_{m, n \in \mathbb{N}} H_{m,n},
\]
so it suffices to prove that each $H_{m,n}$ is open and dense in $(\Omega, d_\Omega)$. Fix $m, n \in \mathbb{N}$.

(i) $H_{m,n}$ is open. Let $\omega \in H_{m,n}$. It follows that there exists a finite $(1/m)$-cover of $F_\omega$ by open sets, $\{U_i\}$, satisfying
\[
\sum_i G(|U_i|) < 1/n.
\]
Let $\mathcal{U} = \partial\big( \cup_i U_i \big)$ be the boundary of the union of the covering sets, $\{U_i\}$, and let
\[
\eta = \min_{x \in \mathcal{U}, \, y \in F_\omega} d(x, y)
\]
which is strictly positive by the compactness of $F_\omega$. Now choose $r > 0$ such that, if $u \in B(\omega, r)$, then $d_\mathcal{H}(F_\omega, F_u) < \eta/2$. Let $u \in B(\omega, r)$ and observe that $\{U_i\}$ is a $(1/m)$-cover for $F_u$, giving that $\mathcal{H}^G_{1/m}(F_u) \leqslant \sum_i G(|U_i|) < 1/n$. It follows that $B(\omega, r) \subseteq H_{m,n}$ and that $H_{m,n}$ is open.

(ii) $H_{m,n}$ is dense. Let $\omega = (\omega_1, \omega_2, \dots) \in \Omega$ and $\varepsilon > 0$. Choose $k \in \mathbb{N}$ such that $2^{-k} < \varepsilon$ and choose $u = (u_1, u_2, \dots) \in \Omega$ such that
\[
\mathcal{H}^G(F_u) < \frac{1}{n \, |\mathcal{I}_{\omega_1}| \cdots |\mathcal{I}_{\omega_k}|}.
\]
Let $v = (\omega_1, \dots, \omega_k, u_1, u_2, \dots)$. It follows that $d_\Omega(\omega, v) < \varepsilon$ and, since
\[
F_v = \bigcup_{j_1 \in \mathcal{I}_{\omega_1}, \dots, j_k \in \mathcal{I}_{\omega_k}} S_{\omega_1, j_1} \circ \cdots \circ S_{\omega_k, j_k}(F_u),
\]
it follows that
\[
\mathcal{H}^G_{1/m}(F_v) \leqslant \mathcal{H}^G(F_v) = \mathcal{H}^G\bigg( \bigcup_{j_1 \in \mathcal{I}_{\omega_1}, \dots, j_k \in \mathcal{I}_{\omega_k}} S_{\omega_1, j_1} \circ \cdots \circ S_{\omega_k, j_k}(F_u) \bigg) \leqslant \sum_{j_1 \in \mathcal{I}_{\omega_1}, \dots, j_k \in \mathcal{I}_{\omega_k}} \mathcal{H}^G(F_u) \leqslant |\mathcal{I}_{\omega_1}| \cdots |\mathcal{I}_{\omega_k}| \, \mathcal{H}^G(F_u) < 1/n
\]
and so $v \in H_{m,n}$, proving that $H_{m,n}$ is dense.

3.3 Proof of Theorem 2.1 (2)

Assume that $G$ is a doubling gauge and that $\sup_{u \in \Omega} \mathcal{P}^G(F_u) = \infty$. We will show that the set
\[
P = \{ \omega \in \Omega : \mathcal{P}^G(F_\omega) = \infty \}
\]
is residual. The extra step in the definition of packing measure causes it to be more awkward to work with than Hausdorff measure. To circumvent these difficulties we need the following two technical lemmas.

Lemma 3.5. Suppose $F \subset K$ is such that, for all open $V$ which intersect $F$, $\mathcal{P}_0^G(F \cap V) = \infty$. Then $\mathcal{P}^G(F) = \infty$.

Proof. Let $\{F_i\}_i$ be a countable sequence of closed sets such that $F \subset \cup_i F_i$. The Baire category theorem implies that for some $i$ and some open set $V$, $F \cap V \subseteq F_i$ and hence $\mathcal{P}_0^G(F_i) = \infty$. This means that, for every countable cover of $F$ by closed sets, at least one of the closed sets must have infinite packing pre-measure, proving the result.

We will use Lemma 3.5 to prove the following lemma, which will allow us to work with packing pre-measure instead of packing measure.

Lemma 3.6.
We have P = { ω ∈ Ω : P G ( F ω ) = ∞} .Proof. It is clear that P ⊆ { ω ∈ Ω : P G ( F ω ) = ∞} . We will now prove the opposite inclusion. Let ω ∈ Ωbe such that P G ( F ω ) = ∞ and let V be an open set which intersects F ω . Choose k large enough toensure that for some i ∈ I ω , . . . , i k ∈ I ω we have S ω ,i ◦ · · · ◦ S ω k ,i k (cid:0) F ( ω k +1 ,ω k +2 ,... ) (cid:1) ⊆ F ∩ V. Write φ = S ω ,i ◦ · · · ◦ S ω k ,i k and u = ( ω k +1 , ω k +2 , . . . ). Since packing pre-measure is finitely additive,we have ∞ = P G ( F ω ) = P G (cid:32) (cid:91) i ∈I ω ,...,i k ∈I ωk S ω ,i ◦ · · · ◦ S ω k ,i k ( F u ) (cid:33) (cid:54) (cid:88) i ∈I ω ,...,i k ∈I ωk P G ( F u ) (cid:54) |I ω | · · · |I ω k | P G ( F u )and therefore P G ( F ∩ V ) (cid:62) P G ( φ ( F u )) (cid:62) D − (cid:0) G, Lip − ( φ ) (cid:1) P G ( F u )= ∞ . Finally, by Lemma 3.5, we have that P G ( F ω ) = ∞ and hence ω ∈ P .Writing P m,n = { ω ∈ Ω : P G , /m ( F ω ) > n } , it follows from Lemma 3.6 that P = { ω ∈ Ω : P G ( F ω ) = ∞} = (cid:92) m,n ∈ N P m,n , so it suffices to prove that each P m,n is open and dense in (Ω , d Ω ). Fix m, n ∈ N .(i) P m,n is open.Let ω ∈ P m,n . It follows that there exists a finite centered (1 /m )-packing of F ω by closed balls, { U i } , satisfying (cid:88) i G ( | U i | ) > n. Let η = min i (cid:54) = j min x ∈ U i ,y ∈ U j d ( x, y )which is strictly positive since the sets U i are closed. Now choose r > u ∈ B ( ω, r ), then d H ( F ω , F u ) < η/ u ∈ B ( ω, r ). It follows that we can11nd a centered (1 /m )-packing, { ˜ U i } , of F u , where ˜ U i is centered in F u and has the same diameter as U i . It follows that P G , /m ( F u ) (cid:62) (cid:80) i G ( | U i | ) > n and therefore B ( ω, r ) ⊆ P m,n , proving that P m,n is open.(ii) P m,n is dense.Let ω = ( ω , ω , . . . ) ∈ Ω and ε >
0. Choose k ∈ ℕ such that 2^(−k) < ε and choose u = (u_1, u_2, ...) ∈ Ω such that

P_0^G(F_u) ≥ n / max_{j_1 ∈ I_{ω_1}, ..., j_k ∈ I_{ω_k}} D(G, Lip−(S_{ω_1,j_1} ∘ ⋯ ∘ S_{ω_k,j_k})).

Let v = (ω_1, ..., ω_k, u_1, u_2, ...). It follows that d_Ω(ω, v) < ε and, since

F_v = ⋃_{j_1 ∈ I_{ω_1}, ..., j_k ∈ I_{ω_k}} S_{ω_1,j_1} ∘ ⋯ ∘ S_{ω_k,j_k}(F_u),

it follows that

P^G_{0,1/m}(F_v) ≥ P_0^G(F_v) ≥ max_{j_1 ∈ I_{ω_1}, ..., j_k ∈ I_{ω_k}} P_0^G( S_{ω_1,j_1} ∘ ⋯ ∘ S_{ω_k,j_k}(F_u) ) ≥ max_{j_1 ∈ I_{ω_1}, ..., j_k ∈ I_{ω_k}} D(G, Lip−(S_{ω_1,j_1} ∘ ⋯ ∘ S_{ω_k,j_k})) P_0^G(F_u) ≥ n,

and so v ∈ P_{m,n}, proving that P_{m,n} is dense.

It is well known that lower box dimension is not finitely stable (see [F2], Chapter 3), i.e., it is not true in general that the lower box dimension of E ∪ F is at most max{dim_B E, dim_B F} (where dim_B denotes lower box dimension). To get around this problem in the following proof, we begin with a simple technical lemma.

Lemma 3.7.
Let F ⊆ K be such that the lower box dimension dim_B F = s and let {φ_i}_{i ∈ S} be a finite collection of Lipschitz contractions. Then

dim_B ⋃_{i ∈ S} φ_i(F) ≤ s.

Proof.
For all δ > 0,

N_δ( ⋃_{i ∈ S} φ_i(F) ) ≤ ∑_{i ∈ S} N_δ(φ_i(F)) ≤ ∑_{i ∈ S} N_{δ/Lip+(φ_i)}(F) ≤ |S| N_δ(F).

Taking logarithms, dividing by −log δ and taking the limit inferior completes the proof.

We now turn to the proof of Theorem 2.1 (5). Let b = inf_{u ∈ Ω} dim_B F_u. We will show that the set B = {ω ∈ Ω : dim_B F_ω ≤ b} is residual, from which Theorem 2.1 (5) follows. Writing

B_n = ⋃_{δ ∈ (0, 1/n)} {ω ∈ Ω : N_δ(F_ω) ≤ δ^(−(b + 1/n))},

we have

B = ⋂_{n ∈ ℕ} ⋃_{δ ∈ (0, 1/n)} {ω ∈ Ω : log N_δ(F_ω) / (−log δ) ≤ b + 1/n} = ⋂_{n ∈ ℕ} B_n,
so it suffices to prove that each B_n is open and dense in (Ω, d_Ω). Fix n ∈ ℕ.

(i) B_n is open. Let ω ∈ B_n. It follows that for some δ < 1/n there exists a δ-cover of F_ω by fewer than δ^(−(b + 1/n)) open sets, {U_i}. Let U = ∂(⋃_i U_i) be the boundary of the union of the covering sets and let

η = min_{x ∈ U, y ∈ F_ω} d(x, y),

which is strictly positive by the compactness of F_ω, since F_ω is contained in the open set ⋃_i U_i. Now choose r > 0 such that, if u ∈ B(ω, r), then d_H(F_ω, F_u) < η/
2. Let u ∈ B(ω, r) and observe that {U_i} is then also a δ-cover of F_u, giving N_δ(F_u) ≤ δ^(−(b + 1/n)). It follows that B(ω, r) ⊆ B_n and therefore B_n is open.

(ii) B_n is dense. Let ω = (ω_1, ω_2, ...) ∈ Ω and ε >
0. Let u = (u_1, u_2, ...) ∈ Ω be such that dim_B F_u < b + 1/n. Now choose k ∈ ℕ such that 2^(−k) < ε and let v = (ω_1, ..., ω_k, u_1, u_2, ...). It follows that d_Ω(v, ω) < ε and, furthermore,

F_v = ⋃_{j_1 ∈ I_{ω_1}, ..., j_k ∈ I_{ω_k}} S_{ω_1,j_1} ∘ ⋯ ∘ S_{ω_k,j_k}(F_u),

and since, for all j_1 ∈ I_{ω_1}, ..., j_k ∈ I_{ω_k}, the map S_{ω_1,j_1} ∘ ⋯ ∘ S_{ω_k,j_k} is a Lipschitz contraction, it follows from Lemma 3.7 that dim_B F_v ≤ dim_B F_u < b + 1/n, and so v ∈ B_n, proving that B_n is dense.

We now turn to the proof of Theorem 2.2. Write h = inf_{u ∈ Ω} dim_H F_u and assume that inf_{u ∈ Ω} H^h(F_u) = H > 0, that v = (v_1, v_2, ...) ∈ Ω satisfies condition (2.1) and that the RIFS satisfies the H^h-MSC. We will show that the set

M = {ω ∈ Ω : H^h(F_ω) < ∞}

is meagre, from which the result follows. Writing M_n = {ω ∈ Ω : H^h(F_ω) < n}, we have M = ⋃_{n ∈ ℕ} M_n, so it suffices to show that each M_n is nowhere dense. Fix n ∈ ℕ, ω ∈ M_n and r >
0. Now choose k ∈ ℕ such that 2^(−k) < r. It follows that the open ball

B_l = B( (ω_1, ..., ω_k, v_1, v_2, ...), 2^(−l) )

is contained in B(ω, r) for all l > k. Let u ∈ B_l, and note that u = (ω_1, ..., ω_k, v_1, ..., v_{l−k}, u_1, u_2, ...) for some (u_1, u_2, ...) ∈ Ω. Noting that the RIFS satisfies the H^h-MSC and that Lip− is supermultiplicative, we have

H^h(F_u) = H^h( ⋃_{i_1 ∈ I_{ω_1}, ..., i_k ∈ I_{ω_k}} ⋃_{j_1 ∈ I_{v_1}, ..., j_{l−k} ∈ I_{v_{l−k}}} S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_k,i_k} ∘ S_{v_1,j_1} ∘ ⋯ ∘ S_{v_{l−k},j_{l−k}}(F_{(u_1, u_2, ...)}) )

= ∑_{i_1, ..., i_k} ∑_{j_1, ..., j_{l−k}} H^h( S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_k,i_k} ∘ S_{v_1,j_1} ∘ ⋯ ∘ S_{v_{l−k},j_{l−k}}(F_{(u_1, u_2, ...)}) )

≥ ∑_{i_1, ..., i_k} ∑_{j_1, ..., j_{l−k}} Lip−( S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_k,i_k} ∘ S_{v_1,j_1} ∘ ⋯ ∘ S_{v_{l−k},j_{l−k}} )^h H^h(F_{(u_1, u_2, ...)})

≥ H ( ∑_{i_1 ∈ I_{ω_1}, ..., i_k ∈ I_{ω_k}} Lip−(S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_k,i_k})^h ) ( ∑_{j_1 ∈ I_{v_1}, ..., j_{l−k} ∈ I_{v_{l−k}}} Lip−(S_{v_1,j_1} ∘ ⋯ ∘ S_{v_{l−k},j_{l−k}})^h )

→ ∞ as l → ∞,

by condition (2.1). It follows that we may choose l large enough to ensure that B_l ⊆ B(ω, r) \ M_n, and so M_n is nowhere dense.

We now turn to the proof of Theorem 2.3. Write p = sup_{u ∈ Ω} dim_P F_u and assume that sup_{u ∈ Ω} P^p(F_u) = P < ∞ and that v = (v_1, v_2, ...) ∈ Ω satisfies condition (2.2). We will show that the set

N = {ω ∈ Ω : P^p(F_ω) > 0}

is meagre, from which the result follows. Writing N_n = {ω ∈ Ω : P^p(F_ω) > 1/n}, we have N = ⋃_{n ∈ ℕ} N_n, so it suffices to show that each N_n is nowhere dense. Fix n ∈ ℕ, ω ∈ N_n and r >
0. Now choose k ∈ ℕ such that 2^(−k) < r. It follows that the open ball B_l = B( (ω_1, ..., ω_k, v_1, v_2, ...), 2^(−l) ) is contained in B(ω, r) for all l > k. Let u ∈ B_l, and note that u = (ω_1, ..., ω_k, v_1, ..., v_{l−k}, u_1, u_2, ...) for some (u_1, u_2, ...) ∈ Ω. Noting that Lip+ is submultiplicative, we have

P^p(F_u) = P^p( ⋃_{i_1 ∈ I_{ω_1}, ..., i_k ∈ I_{ω_k}} ⋃_{j_1 ∈ I_{v_1}, ..., j_{l−k} ∈ I_{v_{l−k}}} S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_k,i_k} ∘ S_{v_1,j_1} ∘ ⋯ ∘ S_{v_{l−k},j_{l−k}}(F_{(u_1, u_2, ...)}) )

≤ ∑_{i_1, ..., i_k} ∑_{j_1, ..., j_{l−k}} P^p( S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_k,i_k} ∘ S_{v_1,j_1} ∘ ⋯ ∘ S_{v_{l−k},j_{l−k}}(F_{(u_1, u_2, ...)}) )

≤ ∑_{i_1, ..., i_k} ∑_{j_1, ..., j_{l−k}} Lip+( S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_k,i_k} ∘ S_{v_1,j_1} ∘ ⋯ ∘ S_{v_{l−k},j_{l−k}} )^p P^p(F_{(u_1, u_2, ...)})

≤ P ( ∑_{i_1 ∈ I_{ω_1}, ..., i_k ∈ I_{ω_k}} Lip+(S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_k,i_k})^p ) ( ∑_{j_1 ∈ I_{v_1}, ..., j_{l−k} ∈ I_{v_{l−k}}} Lip+(S_{v_1,j_1} ∘ ⋯ ∘ S_{v_{l−k},j_{l−k}})^p )

→ 0 as l → ∞,

by condition (2.2). It follows that we may choose l large enough to ensure that P^p(F_u) ≤ 1/n for all u ∈ B_l, i.e., B_l ⊆ B(ω, r) \ N_n, and so N_n is nowhere dense.

The proof of Theorem 2.4 is a standard application of the mass distribution principle, Proposition 3.4; similar arguments can be found in, for example, [F2], Chapter 9. For each i ∈ D, let s_i be as in Section 2.2 and write c = min_{i ∈ D, j ∈ I_i} Lip(S_{i,j}). We will now define a mass distribution on F_ω which will be used in the subsequent proofs. First define a measure, μ^sym_ω, on the symbolic space ∏_{l=1}^∞ I_{ω_l} by

μ^sym_ω( {(j_1, j_2, ...) : j_1 = i_1, ..., j_k = i_k} ) = Lip(S_{ω_1,i_1})^{s_{ω_1}} ⋯ Lip(S_{ω_k,i_k})^{s_{ω_k}}

for each (i_1, ..., i_k) ∈ ∏_{l=1}^k I_{ω_l}.
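Because each s_i satisfies Hutchinson's formula ∑_{j ∈ I_i} Lip(S_{i,j})^{s_i} = 1, the cylinder weights above form a probability vector at every level. The following sketch (Python; the two ratio lists and the finite word ω are illustrative choices, not taken from the paper) computes the level-k weights for a toy RIFS of similarities and checks that they sum to 1.

```python
import math
from itertools import product

# Toy RIFS of similarities: IFS 1 has contraction ratios (1/3, 1/3), so
# s_1 = log 2 / log 3; IFS 2 has ratios (1/2, 1/4, 1/4), so s_2 = 1
# (since 1/2 + 1/4 + 1/4 = 1).  Both choices are illustrative.
ratios = {1: [1/3, 1/3], 2: [1/2, 1/4, 1/4]}
s = {1: math.log(2) / math.log(3), 2: 1.0}

def cylinder_weights(omega):
    """Weights mu_sym of the level-k cylinders [i_1,...,i_k], k = len(omega)."""
    weights = []
    for idx in product(*(range(len(ratios[level])) for level in omega)):
        weight = 1.0
        for level, i in zip(omega, idx):
            weight *= ratios[level][i] ** s[level]
        weights.append(weight)
    return weights

omega = (1, 2, 2, 1)  # the first four terms of a sample omega in Omega
total = sum(cylinder_weights(omega))
print(total)  # ~1.0: the level-4 cylinder weights form a probability vector
```

Since the per-level sums each equal 1, the total is their product, which is why the transferred measure μ_ω below is a Borel probability measure.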
Now transfer μ^sym_ω to a Borel probability measure μ_ω, supported on F_ω, by

μ_ω(E) = μ^sym_ω( {(i_1, i_2, ...) ∈ ∏_{l=1}^∞ I_{ω_l} : ⋂_k S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_k,i_k}(K) ∈ E} )

for E ⊆ K, where we identify the singleton ⋂_k S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_k,i_k}(K) with the point it contains.

Proof of (1)
Since each deterministic IFS satisfies the OSC, it is clear that sup_{ω ∈ Ω} P^{s_max}(F_ω) ≥ sup_{ω ∈ Ω} H^{s_max}(F_ω) > 0. We will now show that sup_{ω ∈ Ω} P^{s_max}(F_ω) < ∞. Fix ω = (ω_1, ω_2, ...) ∈ Ω, let x ∈ F_ω and r >
0. Now let l ∈ ℕ and i_1 ∈ I_{ω_1}, ..., i_l ∈ I_{ω_l} be such that

x ∈ S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_l,i_l}(F_{(ω_{l+1}, ω_{l+2}, ...)})

and

Lip(S_{ω_1,i_1}) ⋯ Lip(S_{ω_l,i_l}) |K| < r ≤ Lip(S_{ω_1,i_1}) ⋯ Lip(S_{ω_{l−1},i_{l−1}}) |K|.

It follows that

μ_ω(B(x, r)) r^(−s_max) ≥ μ_ω( S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_l,i_l}(F_{(ω_{l+1}, ω_{l+2}, ...)}) ) r^(−s_max) ≥ Lip(S_{ω_1,i_1})^{s_{ω_1}} ⋯ Lip(S_{ω_l,i_l})^{s_{ω_l}} r^(−s_max) ≥ ( Lip(S_{ω_1,i_1}) ⋯ Lip(S_{ω_l,i_l}) / r )^{s_max} ≥ ( (r c / |K|) / r )^{s_max} = (c / |K|)^{s_max},

and by Proposition 3.4 (2) it follows that P^{s_max}(F_ω) ≤ (|K| / c)^{s_max} < ∞ and, in particular,

0 < sup_{ω ∈ Ω} P^{s_max}(F_ω) < ∞.

Proof of (3)
We will need the following lemma which appears as Lemma 9.2 in [F2].
Lemma 3.8.
Let {V_i} be a collection of disjoint open subsets of ℝ^n such that each V_i contains a ball of radius a_1 r and is contained in a ball of radius a_2 r. Then any ball, B, of radius r intersects at most (1 + 2a_2)^n a_1^(−n) of the closures of the V_i.

Let O be the open set used in the UOSC and let a_1, a_2 > 0 be such that O contains a ball of radius a_1 and is contained in a ball of radius a_2. Let I*_ω = ⋃_{k ∈ ℕ} ∏_{l=1}^k I_{ω_l} and, for r >
0, let I^r_ω be an r-stopping defined by

I^r_ω = { (i_1, i_2, ..., i_l) ∈ I*_ω : Lip(S_{ω_1,i_1}) ⋯ Lip(S_{ω_l,i_l}) ≤ r < Lip(S_{ω_1,i_1}) ⋯ Lip(S_{ω_{l−1},i_{l−1}}) }.

Note that:

(1) { S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_l,i_l}(O) : (i_1, i_2, ..., i_l) ∈ I^r_ω } is a collection of disjoint open subsets of ℝ^n;

(2) each S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_l,i_l}(O) contains a ball of radius c a_1 r and is contained in a ball of radius a_2 r;

(3) for each (i_1, i_2, ..., i_l) ∈ I^r_ω, we have S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_l,i_l}(F_{(ω_{l+1}, ω_{l+2}, ...)}) contained in the closure of S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_l,i_l}(O).

Since each deterministic IFS satisfies the OSC, it is clear that inf_{ω ∈ Ω} H^{s_min}(F_ω) < ∞. We will now show that inf_{ω ∈ Ω} H^{s_min}(F_ω) >
0. Fix ω = (ω_1, ω_2, ...) ∈ Ω, let x ∈ F_ω and r >
0. It follows from (1)–(3) and Lemma 3.8 that

μ_ω(B(x, r)) r^(−s_min) = r^(−s_min) μ_ω(B(x, r) ∩ F_ω)

= r^(−s_min) μ^sym_ω( {(i_1, i_2, ...) ∈ ∏_{l=1}^∞ I_{ω_l} : ⋂_k S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_k,i_k}(K) ∈ B(x, r) ∩ F_ω} )

≤ r^(−s_min) μ^sym_ω( ⋃ {(j_1, j_2, ...) : j_1 = i_1, ..., j_l = i_l} ), the union being over those (i_1, ..., i_l) ∈ I^r_ω for which B(x, r) meets the closure of S_{ω_1,i_1} ∘ ⋯ ∘ S_{ω_l,i_l}(O),

≤ r^(−s_min) ∑ Lip(S_{ω_1,i_1})^{s_{ω_1}} ⋯ Lip(S_{ω_l,i_l})^{s_{ω_l}}, the sum being over the same set of (i_1, ..., i_l),

≤ r^(−s_min) (1 + 2a_2)^n (c a_1)^(−n) max_{(i_1,...,i_l) ∈ I^r_ω} ( Lip(S_{ω_1,i_1}) ⋯ Lip(S_{ω_l,i_l}) )^{s_min}

≤ (1 + 2a_2)^n (c a_1)^(−n) < ∞,

where we have used that each cylinder weight is at most ( Lip(S_{ω_1,i_1}) ⋯ Lip(S_{ω_l,i_l}) )^{s_min} ≤ r^{s_min} and that, by Lemma 3.8, at most (1 + 2a_2)^n (c a_1)^(−n) cylinders contribute. By Proposition 3.4 (1) it follows that H^{s_min}(F_ω) ≥ (1 + 2a_2)^(−n) (c a_1)^n > 0. Hence

0 < inf_{ω ∈ Ω} H^{s_min}(F_ω) < ∞,

which completes the proof.

We now turn to the proof of Theorem 2.5. Write H_min = inf_{ω ∈ Ω} H^{s_min}(F_ω) and P_max = sup_{ω ∈ Ω} P^{s_max}(F_ω), and let s = s_min = s_max.

Hausdorff measure
We will show that the set H = {ω ∈ Ω : H^s(F_ω) = H_min} is residual. Writing H_{m,n} = {ω ∈ Ω : H^s_{1/m}(F_ω) < H_min + 1/n}, and noting that H^s(F_ω) ≥ H_min for all ω, we have

H = ⋂_{m,n ∈ ℕ} H_{m,n},

so it suffices to prove that each H_{m,n} is open and dense in (Ω, d_Ω). Fix m, n ∈ ℕ. It can be shown that H_{m,n} is open using a similar approach to that used in the proof of Theorem 2.1 (1). We will now prove that H_{m,n} is dense. Let ω = (ω_1, ω_2, ...) ∈ Ω and ε >
0. Choose k ∈ ℕ such that 2^(−k) < ε and choose u = (u_1, u_2, ...) ∈ Ω such that H^s(F_u) < H_min + 1/n. Let v = (ω_1, ..., ω_k, u_1, u_2, ...). It follows that d_Ω(ω, v) < ε and, furthermore,

H^s_{1/m}(F_v) ≤ H^s(F_v) = H^s( ⋃_{j_1 ∈ I_{ω_1}, ..., j_k ∈ I_{ω_k}} S_{ω_1,j_1} ∘ ⋯ ∘ S_{ω_k,j_k}(F_u) ) ≤ ∑_{j_1 ∈ I_{ω_1}, ..., j_k ∈ I_{ω_k}} Lip(S_{ω_1,j_1})^s ⋯ Lip(S_{ω_k,j_k})^s H^s(F_u) < (H_min + 1/n) ∑_{j_1 ∈ I_{ω_1}, ..., j_k ∈ I_{ω_k}} Lip(S_{ω_1,j_1})^s ⋯ Lip(S_{ω_k,j_k})^s = H_min + 1/n,

where the final equality is due to the fact that s is a solution to Hutchinson's formula for each deterministic IFS. It follows that v ∈ H_{m,n}, proving that H_{m,n} is dense.

Packing measure
We will show that the set P = {ω ∈ Ω : P^s(F_ω) = P_max} is residual. It was proved in [FHW] that if a compact set has finite packing pre-measure, then its packing measure and packing pre-measure coincide. Writing P_{m,n} = {ω ∈ Ω : P^s_{0,1/m}(F_ω) > P_max − 1/n}, it follows that

P ⊇ {ω ∈ Ω : P_0^s(F_ω) = P_max} = ⋂_{m,n ∈ ℕ} P_{m,n},

so it suffices to prove that each P_{m,n} is open and dense. Fix m, n ∈ ℕ. It can be shown using a similar approach to that used in the proof of Theorem 2.1 (2) that P_{m,n} is open. We will now show that it is also dense. Let ω = (ω_1, ω_2, ...) ∈ Ω and ε >
0. Choose k ∈ ℕ such that 2^(−k) < ε and choose u = (u_1, u_2, ...) ∈ Ω such that P_0^s(F_u) > P_max − 1/n. Let v = (ω_1, ..., ω_k, u_1, u_2, ...). It follows that d_Ω(ω, v) < ε and, furthermore,

P^s_{0,1/m}(F_v) ≥ P_0^s(F_v) = P_0^s( ⋃_{j_1 ∈ I_{ω_1}, ..., j_k ∈ I_{ω_k}} S_{ω_1,j_1} ∘ ⋯ ∘ S_{ω_k,j_k}(F_u) ) = ∑_{j_1 ∈ I_{ω_1}, ..., j_k ∈ I_{ω_k}} Lip(S_{ω_1,j_1})^s ⋯ Lip(S_{ω_k,j_k})^s P_0^s(F_u) > (P_max − 1/n) ∑_{j_1 ∈ I_{ω_1}, ..., j_k ∈ I_{ω_k}} Lip(S_{ω_1,j_1})^s ⋯ Lip(S_{ω_k,j_k})^s = P_max − 1/n,

where the final equality is due to the fact that s is a solution to Hutchinson's formula for each deterministic IFS. It follows that v ∈ P_{m,n}, proving that P_{m,n} is dense.

In this section we provide a number of examples designed to illustrate some of the key points made in Section 2. The examples in Sections 4.1 and 4.2 will be random Sierpiński carpets, as discussed in Section 1.5.
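Several of the examples below hinge on the value of s solving Hutchinson's formula ∑_{j ∈ I_i} Lip(S_{i,j})^s = 1 (for instance, s = log 2 / log 3 for two maps of ratio 1/3, and s = 1 for three such maps). Since the left-hand side is strictly decreasing in s, the equation can be solved numerically by bisection; the sketch below (Python; the ratio lists are illustrative) does exactly this.

```python
def hutchinson_dimension(ratios, lo=0.0, hi=10.0, tol=1e-12):
    """Solve sum(r**s for r in ratios) = 1 for s by bisection.

    For two or more contraction ratios in (0, 1) the left-hand side is
    strictly decreasing in s, exceeds 1 at s = 0 and tends to 0 as s grows,
    so the root is unique."""
    f = lambda t: sum(r ** t for r in ratios) - 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid   # sum still too large: the root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(hutchinson_dimension([1/3, 1/3]))       # ~0.6309 = log 2 / log 3
print(hutchinson_dimension([1/3, 1/3, 1/3]))  # ~1.0
```

The same routine recovers s_i for any deterministic IFS of similarities appearing in a RIFS, which is all that the 'self-similar' results above require.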
In this section we give two simple examples which show that the Hausdorff measure can typically be positive and finite even if the supremal Hausdorff measure is infinite, and that the packing measure can typically be positive and finite even if the infimal packing measure is zero. The existence of these examples is slightly surprising in view of Theorems 2.2 and 2.3 and the behaviour observed in the self-similar setting, see Theorem 2.5.
Hausdorff measure
Let I = {I_1, I_2} be a RIFS where I_1 and I_2 are IFSs of orientation-preserving affine self-maps on [0,1]^2 corresponding to the figure below.

Figure 1: The defining pattern for a random Sierpiński carpet with N = 2, m_1 = m_2 = 2 and n_1 = n_2 = 4.

It is clear that inf_{ω ∈ Ω} dim_H F_ω = 1 and inf_{ω ∈ Ω} H^1(F_ω) = 1 < ∞ = sup_{ω ∈ Ω} H^1(F_ω). It follows from Theorem 2.1 that the typical Hausdorff dimension is 1. We will now show that the typical Hausdorff measure is also infimal and, in particular, positive and finite. We will show that the set H = {ω ∈ Ω : H^1(F_ω) = 1} is a dense G_δ set and thus residual. It can be shown that H is G_δ using a very similar approach to that used in the proof of Theorem 2.1 (1). It remains to show that H is dense. Let ω = (ω_1, ω_2, ...) ∈ Ω and ε >
0. Choose k ∈ ℕ such that 2^(−k) < ε and let v = (ω_1, ..., ω_k, 2, 2, ...). It follows that d_Ω(ω, v) < ε and, furthermore, since F_{(2,2,...)} = {0} × [0,1], we have

F_v = ⋃_{j_1 ∈ I_{ω_1}, ..., j_k ∈ I_{ω_k}} S_{ω_1,j_1} ∘ ⋯ ∘ S_{ω_k,j_k}( {0} × [0,1] )

and, since the vertical component of every map in I is a similarity with contraction ratio 1/4,

H^1(F_v) ≤ ∑_{j_1 ∈ I_{ω_1}, ..., j_k ∈ I_{ω_k}} H^1( S_{ω_1,j_1} ∘ ⋯ ∘ S_{ω_k,j_k}( {0} × [0,1] ) ) = 4^k 4^(−k) H^1( {0} × [0,1] ) = 1,

which, combined with inf_{ω ∈ Ω} H^1(F_ω) = 1, gives H^1(F_v) = 1, and so v ∈ H, proving that H is dense.

Packing measure
Let I = {I_1, I_2} be a RIFS where I_1 and I_2 are IFSs of orientation-preserving affine self-maps on [0,1]^2 corresponding to the figure below.

Figure 2: The defining pattern for a random Sierpiński carpet with N = 2, m_1 = m_2 = 2 and n_1 = n_2 = 4.

We claim that inf_{ω ∈ Ω} P^1(F_ω) = 0 < 1 ≤ sup_{ω ∈ Ω} P^1(F_ω) ≤ 4 and that sup_{ω ∈ Ω} dim_P F_ω = 1. The only inequality which is not obvious is sup_{ω ∈ Ω} P^1(F_ω) ≤ 4. Fix ω ∈ Ω and define a mass distribution, μ_ω, on F_ω by assigning each level-k rectangle mass 2^(−k), in a similar way to the construction of the measures in Section 3.7. It is easy to see that for all x ∈ F_ω we have

liminf_{r → 0} μ_ω(B(x, r)) r^(−1) ≥ 1/4,

whence P^1(F_ω) ≤
4. Theorem 2.1 gives that the typical packing dimension is 1. We will now show that the typical packing measure is greater than or equal to 1 and, in particular, positive and finite. We will show that the set P = {ω ∈ Ω : P^1(F_ω) ≥ 1} is a dense G_δ set and thus residual. It follows from the result in [FHW] and Lemma 3.6 that P = {ω ∈ Ω : P_0^1(F_ω) ≥ 1}, and it can thus be shown that P is G_δ using a very similar approach to that used in the proof of Theorem 2.1 (2). It remains to show that P is dense. Let ω = (ω_1, ω_2, ...) ∈ Ω and ε >
0. Choose k ∈ ℕ such that 2^(−k) < ε and let v = (ω_1, ..., ω_k, 2, 2, ...). It follows that d_Ω(ω, v) < ε and, furthermore, since F_{(2,2,...)} = [0,1] × {0}, we have

F_v = ⋃_{j_1 ∈ I_{ω_1}, ..., j_k ∈ I_{ω_k}} S_{ω_1,j_1} ∘ ⋯ ∘ S_{ω_k,j_k}( [0,1] × {0} )

and, since the horizontal component of every map in I is a similarity with contraction ratio 1/2,

P^1(F_v) = P_0^1(F_v) = ∑_{j_1 ∈ I_{ω_1}, ..., j_k ∈ I_{ω_k}} P^1( S_{ω_1,j_1} ∘ ⋯ ∘ S_{ω_k,j_k}( [0,1] × {0} ) ) = 2^k 2^(−k) P^1( [0,1] × {0} ) = 1,

and so v ∈ P, proving that P is dense.

Remark 4.1.
We believe that a more delicate application of the mass distribution principle will yield that, in fact, sup_{ω ∈ Ω} P^1(F_ω) = 1, but since the important thing for our purposes is that the typical value is positive and finite, we omit further calculation.

In this section we give a simple example which shows that in the non-conformal setting the dimension of the random attractor need not be bounded below by the minimum dimension of the deterministic attractors. This is in stark contrast to Theorem 2.4, concerning random self-similar sets. Furthermore, inf_{u ∈ Ω} dim_H F_u is not attained by any finite combination of the deterministic IFSs. Let I = {I_1, I_2} be a RIFS where I_1 and I_2 are IFSs of orientation-preserving affine self-maps on [0,1]^2 corresponding to the figure below.

Figure 3: The defining pattern for a random Sierpiński carpet with N = 2, m_1 = 2, n_1 = 3, m_2 = 3 and n_2 = 4.

The results of [Be, Mc] give that for both deterministic attractors the Hausdorff, box and packing dimensions are all equal to 1 + log 2 / log 3 ≈ 1.
63. For p ∈ [0,1], we can associate the probability vector (p, 1 − p) with this system. By the result of [FO], given here as Theorem 1.6, the almost sure Hausdorff dimension of F_ω is given by

dim_H F_ω = log 2 / log(2^p 3^(1−p)) + (2 − p) log 2 / log(3^p 4^(1−p)).

In fact, since each deterministic IFS has uniform vertical fibres, it follows from results in [GuLi2] that the above formula also gives the almost sure box and packing dimensions of F_ω. Plotting this as a function of p, we obtain the graph below.

Figure 4: A graph of the almost sure Hausdorff dimension as a function of p. The grey line shows the dimension of the deterministic attractors.

Notice the nonlinear dependence on p and the fact that for p ∈ (0,
1) the almost sure dimension is lower than the minimum dimension of the two deterministic attractors. In particular, the dimension of F_ω is not bounded below by the minimum Hausdorff dimension of the deterministic attractors, despite the fact that the UOSC is satisfied. As such, it is not at all clear what the infimal (and thus typical) Hausdorff dimension is. This is in stark contrast to the self-similar setting, see Theorem 2.4 (4). It is natural to ask if the infimal dimension is attained by an attractor of a deterministic IFS given by a finite combination of the original deterministic IFSs, I_1 and I_2. We will argue now that it is not. Finite combinations of I_1, I_2 give deterministic IFSs with attractors equal to F_ω for some 'rational' ω ∈ Ω, i.e., some ω which consists of a finite word over D repeated infinitely often. Fix such a finite combination, let N_1 be the number of times we have used I_1 and let N_2 be the number of times we have used I_2. It is clear, and in fact it follows from the results in [GuLi2], that the Hausdorff dimension of the attractor is equal to the almost sure Hausdorff dimension of the attractor corresponding to p = N_1 / (N_1 + N_2) ∈ ℚ. However, elementary optimisation reveals that the minimum almost sure Hausdorff dimension (seen as the minimum of the graph above) is attained by p = 2 − √2 ∉ ℚ.

In this section we give a straightforward example which has the interesting property that, although the Hausdorff and packing measures of the attractors of the deterministic IFSs in the appropriate dimensions are positive and finite, the typical Hausdorff and packing measures are infinity and zero, respectively. Let S_1, S_2, S_3 : [0,1] → [0,
1] be defined by S_1(x) = x/3, S_2(x) = x/3 + 2/3 and S_3(x) = x/3 + 1/3. Let I be the RIFS consisting of the two deterministic IFSs {S_1, S_2} and {S_1, S_2, S_3}. The attractors for these systems are the middle third Cantor set, C_{1/3}, and the unit interval, [0,1], respectively, and for every ω ∈ Ω we have C_{1/3} ⊆ F_ω ⊆ [0,1]. The associated values of s are log 2 / log 3 and 1, and

inf_{u ∈ Ω} H^(log 2/log 3)(F_u) = H^(log 2/log 3)(C_{1/3}) = 1 and sup_{u ∈ Ω} P^1(F_u) = P^1([0,1]) = 1.

It follows from Theorem 2.5 that, for a typical ω ∈ Ω, the set F_ω has Hausdorff and lower box dimension equal to log 2 / log 3 and packing and upper box dimension equal to 1, but (log 2/log 3)-dimensional Hausdorff measure equal to ∞ and 1-dimensional packing measure equal to 0. It is clear that the H^(log 2/log 3)-MSC is satisfied.

Although the previous examples illustrate some of the key phenomena we wish to discuss, they have all been based on RIFSs consisting of affine maps. Of course, Theorems 2.1, 2.2 and 2.3 apply in far more general circumstances than this. In this section we construct a more complicated example using nonlinear maps, to which we can apply Theorems 2.2 and 2.3 to deduce that neither the typical Hausdorff nor packing measures are positive and finite in the appropriate dimensions. Let f_1, f_2 : [0,1] → ℝ be defined by f_1(x) = −5x(x −
1) and f_2(x) = 9(x − 1/6)(x − 5/6).

Figure 5: The graphs of f_1 (left) and f_2 (right) restricted to the unit square.

Observe that f_1 maps each of the intervals X_{1,1} = [0, 1/2 − √5/10] and X_{1,2} = [1/2 + √5/10, 1] bijectively onto [0,
1] and, furthermore, f_1′ is continuous and

2 ≤ |f_1′(x)| ≤ 5 for x ∈ X_{1,1} ∪ X_{1,2}.   (4.1)

Similarly, f_2 maps each of the intervals X_{2,1} = [1/2 − √2/3, 1/6] and X_{2,2} = [5/6, 1/2 + √2/3] bijectively onto [0,1] and, furthermore, f_2′ is continuous and

6 ≤ |f_2′(x)| ≤ 6√2 for x ∈ X_{2,1} ∪ X_{2,2}.   (4.2)

We have constructed two expanding dynamical systems, (X_{1,1} ∪ X_{1,2}, f_1) and (X_{2,1} ∪ X_{2,2}, f_2), with repellers given by

F_1 = ⋂_{k ≥ 1} f_1^(−k)([0,1]) and F_2 = ⋂_{k ≥ 1} f_2^(−k)([0,1]).

Such sets are often called cookie cutters and their Hausdorff dimension can be computed via the thermodynamical formalism. For a more detailed account of cookie cutters and the thermodynamical formalism, the reader is referred to [F1], Chapters 4–5. We can view F_1 and F_2 as attractors of deterministic IFSs consisting of the inverse branches of f_1 and f_2. In particular, the inverse branches of f_1 are given by

S_{1,1}(x) = (1/2)(1 − √(1 − 4x/5)) and S_{1,2}(x) = (1/2)(1 + √(1 − 4x/5)),

and the inverse branches of f_2 are given by

S_{2,1}(x) = 1/2 − (1/3)√(1 + x) and S_{2,2}(x) = 1/2 + (1/3)√(1 + x).

Let I be the RIFS consisting of I_1 = {S_{1,1}, S_{1,2}} and I_2 = {S_{2,1}, S_{2,2}}. Here F_1 corresponds to the choice (1, 1, 1, ...) ∈ Ω and F_2 corresponds to the choice (2, 2, 2, ...) ∈ Ω. For an arbitrary ω = (ω_1, ω_2, ...) ∈ Ω, we obtain a random cookie cutter

F_ω = ⋂_{k ≥ 1} f_{ω_1}^(−1) ∘ ⋯ ∘ f_{ω_k}^(−1)([0,1]).

Write h = inf_{u ∈ Ω} dim_H F_u and p = sup_{u ∈ Ω} dim_P F_u. It follows from (4.1)–(4.2), the fact that f_1′ and f_2′ are continuous, and the mean value theorem that, for i = 1, 2,

1/5 ≤ Lip−(S_{1,i}) ≤ Lip+(S_{1,i}) ≤ 1/2 and 1/(6√2) ≤ Lip−(S_{2,i}) ≤ Lip+(S_{2,i}) ≤ 1/6,

whence

h ≤ dim_H F_2 ≤ log 2 / log 6 < log 2 / log 5 ≤ dim_P F_1 ≤ p,

see [F2], Propositions 9.6–9.7.
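The Lipschitz bounds just quoted can be checked numerically from the inverse branches. The sketch below (Python) uses the explicit branch formulas S_{1,1}(x) = (1 − √(1 − 4x/5))/2 and S_{2,1}(x) = 1/2 − √(1 + x)/3 — reconstructed formulas, stated here as an assumption rather than the paper's verbatim definitions — and estimates the infimum and supremum of |S′| over [0,1] by secant slopes on a fine grid.

```python
import math

# Inverse branches of the expanding maps f1(x) = -5x(x-1) and
# f2(x) = 9(x - 1/6)(x - 5/6).  These explicit formulas are an assumption
# made for illustration, chosen to be consistent with the bounds in the text.
S11 = lambda x: 0.5 * (1 - math.sqrt(1 - 4 * x / 5))
S21 = lambda x: 0.5 - math.sqrt(1 + x) / 3

def slope_range(S, n=10_000):
    """Estimate inf and sup of |S'| over [0,1] via secant slopes on a grid."""
    slopes = [abs(S((k + 1) / n) - S(k / n)) * n for k in range(n)]
    return min(slopes), max(slopes)

lo1, hi1 = slope_range(S11)
lo2, hi2 = slope_range(S21)
# |S11'| should lie in [1/5, 1/2] and |S21'| in [1/(6*sqrt(2)), 1/6]
print(lo1, hi1)
print(lo2, hi2)
```

The other branch of each pair is a reflection of the one tested, so it satisfies the same bounds; the estimates land inside [1/5, 1/2] and [1/(6√2), 1/6] respectively, consistent with the displayed inequalities for Lip− and Lip+.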
Furthermore,

∑_{i_1 ∈ I_1, ..., i_k ∈ I_1} Lip−(S_{1,i_1} ∘ ⋯ ∘ S_{1,i_k})^h ≥ (2 · 5^(−h))^k → ∞ as k → ∞

and

∑_{j_1 ∈ I_2, ..., j_k ∈ I_2} Lip+(S_{2,j_1} ∘ ⋯ ∘ S_{2,j_k})^p ≤ (2 · 6^(−p))^k → 0 as k → ∞,

so conditions (2.1) and (2.2) are satisfied. It follows from Theorems 2.1, 2.2 and 2.3 that, for a typical ω ∈ Ω, we have dim_H F_ω = h < p = dim_P F_ω but

H^h(F_ω) = 0 if inf_{u ∈ Ω} H^h(F_u) = 0, and H^h(F_ω) = ∞ if inf_{u ∈ Ω} H^h(F_u) > 0;

P^p(F_ω) = 0 if sup_{u ∈ Ω} P^p(F_u) < ∞, and P^p(F_ω) = ∞ if sup_{u ∈ Ω} P^p(F_u) = ∞.

In particular, for a typical ω ∈ Ω, the random cookie cutter F_ω is 'dimensionless' in the sense that neither the s-dimensional Hausdorff measure nor the s-dimensional packing measure is positive and finite for any s ≥ 0.

In this section we give some pictorial examples of attractors of RIFSs to illustrate some of the rich and complicated structures we can expect to see. Although our results apply to both examples, we do not perform any calculations. For the first example, let S_1, S_2, S_3 : ℝ^2 → ℝ^2 and T_1, T_2 : ℝ^2 → ℝ^2 be orientation-preserving affine contractions of the unit square and let I be the RIFS consisting of I_1 = {S_1, S_2, S_3} and I_2 = {T_1, T_2}.

Figure 6: The attractors of I_1 (top left) and I_2 (top right), along with two random attractors of I corresponding to two different choices of ω (bottom left and bottom right).

For the second example, let U_1, U_2, U_3 : ℝ^2 → ℝ^2 be affine contractions and let V_1, V_2, V_3 : ℝ^2 → ℝ^2 be nonlinear contractions whose second coordinates involve the term x(1 − x), and let I be the RIFS consisting of I_1 = {U_1, U_2, U_3} and I_2 = {V_1, V_2, V_3}. The attractor of I_1 is a self-affine set.

Figure 7: The attractors of I_1 (top left) and I_2 (top right), along with two random attractors of I corresponding to two different choices of ω (bottom left and bottom right).

In this section we collect together and discuss some of the questions raised by the results in this paper.

(1)
Is the typical measure always extremal?
We have shown that the typical dimensions behave rather well, in that the typical Hausdorff and lower box dimensions are always infimal and the typical packing and upper box dimensions are always supremal. The typical Hausdorff and packing measures behave rather worse, and our examples show that they can each be either infimal or supremal. However, we have not proved that they are always extremal.

(2)
Computing the extremal dimensions. Theorem 2.1 tells us that the typical dimensions are extremal in very general circumstances. However, it gives no indication of how one might compute the extremal dimensions. This may be a very difficult problem, and the example in Section 4.2 sheds some light on that difficulty. Given a RIFS, can we say anything non-trivial about the extremal dimensions in general? Theorem 2.4 tells us how to compute the extremal dimensions in the self-similar setting, assuming the UOSC.

(3)
The bi-Lipschitz requirement.
Throughout this paper we assume that all of our maps are bi-Lipschitz. It is easily seen, however, that not all of our proofs require this. In fact, Theorem 2.1 parts (1), (3) and (5) go through assuming only that the maps are contractions. Also, a slightly weaker version of Theorem 2.3 can be proved, which states that if there exists v ∈ Ω satisfying condition (2.2) and sup_{u ∈ Ω} P^p(F_u) < ∞, then for a typical ω ∈ Ω we have P^p(F_ω) = 0.

(4) Strengthening of Theorem 2.4.
In view of the non-conformal example given in Section 4, it seems that the validity of the bounds given in Theorem 2.4 depends on two things: conformality and separation properties. It seems likely that one could prove an analogous result using conformal mappings instead of similarities and replacing each s_i with the solution of Bowen's formula corresponding to the IFS I_i. A more interesting question may be whether or not the UOSC is required in the self-similar case.

(5) Doubling gauges.
At first sight it is somewhat curious that in Theorem 2.1 we require the gauge to be doubling for the result concerning packing measure, but can use arbitrary gauges for Hausdorff measure. In fact, it is not uncommon for doubling gauges to play an important role when studying packing measure; see, for example, [JP, WW].

(6)
Dimension outside range. The example in Section 4.2 shows that the dimensions can be strictly less than the minimum of the dimensions of the attractors of the deterministic IFSs. We have not, however, proved that the dimensions can be bigger than the maximum of the dimensions of the attractors of the deterministic IFSs.

(7)
Separation properties in the self-similar case.
In Theorems 2.4 and 2.5 we assumed various separation properties. In fact, some parts of these theorems go through assuming slightly weaker conditions. For example, in Theorem 2.5 (1) we require only the H^{s_min}-MSC to prove that the typical Hausdorff measure is infimal and positive and finite. We choose to state these theorems using the stronger separation properties in order to simplify the exposition and not shroud the key ideas.

(8) More randomness.
It is possible to introduce more randomness into our construction. In particular, one might relax the requirement that at the kth level of the construction we use the same IFS within each kth level iterate of K. In this case our sequence space, Ω, would be replaced by a space of infinite rooted trees. We believe that, although this is a significantly more general construction, the topological properties of Ω would not change significantly and most of our arguments should generalise without too much difficulty. One might also consider the intermediate levels of randomness given by V-variable fractals, introduced in [BHS2] and discussed in detail in [B].

(9) Typical versus almost sure. An interesting consequence of Theorem 2.1 is that our topological approach gives drastically different results to the probabilistic (or measure-theoretic) approach. For example, compare Theorem 1.5 with our result, Theorem 2.5. A similar comparison has cropped up in a wide variety of situations with, roughly speaking, the topological approach favouring divergence and the probabilistic approach favouring convergence. Indeed, our results on dimension are of this nature. A similar phenomenon has arisen in, for example: dimensions of measures [H, O2]; dimensions of graphs of continuous functions [FH]; and frequency properties of expansions of real numbers [S]. These references are given as a sample of some of the situations where a contrast between topological and probabilistic approaches has been observed and are by no means a complete list. For example, generic dimensions of measures and graphs of continuous functions have been studied extensively and, for a more complete survey, the reader is referred to [O2] and [FH] and the references therein.

(10)
Choice of topological space.
Baire category theory can be used in much more general spaces than just complete metric spaces. In fact, all one needs is a Baire topological space, i.e., a topological space in which the intersection of any countable collection of open dense sets is dense. In Section 1.6 we introduced a topology on Ω to allow us to examine the size of subsets of Ω using Baire category. Of course, we could have formulated our analysis in terms of the set Λ = {F_ω : ω ∈ Ω} equipped with the topology induced by the Hausdorff metric. We note here that these two approaches are essentially equivalent. Define an equivalence relation, R, on Ω by ω R u ⟺ F_ω = F_u and let q : Ω → Ω/R be the quotient map, where Ω/R is equipped with the quotient topology. Let Ψ : Ω → K(K) be defined by Ψ(ω) = F_ω and let Ψ̂ : Ω/R → K(K) be defined by Ψ̂([ω]) = F_ω, and observe that Ψ is continuous by Lemma 3.3 and that Ψ̂ is clearly well-defined. We then have Ψ = Ψ̂ ∘ q, and furthermore Ψ̂ is a homeomorphism onto Λ. It is easy to see that Ω/R, and hence Λ, are Baire and that images of residual subsets of Ω under q are residual in Ω/R. It follows that all of our results could be phrased as 'for a typical set F_ω ∈ Λ ...' instead of 'for a typical ω ∈ Ω ...'.
Acknowledgements
The author was supported by an EPSRC Doctoral Training Grant and thanks Kenneth Falconer for some helpful comments on a previous draft of the manuscript.
References

[B] M. F. Barnsley. Superfractals, Cambridge University Press, Cambridge, 2006.

[BHS] M. F. Barnsley, J. E. Hutchinson and O. Stenflo. A fractal valued random iteration algorithm and fractal hierarchy, Fractals, (2005), 111–146.

[BHS2] M. F. Barnsley, J. E. Hutchinson and O. Stenflo. V-variable fractals: fractals with partial self-similarity, Adv. Math., (2008), 2051–2088.

[Be] T. Bedford. Crinkly curves, Markov partitions and box dimensions in self-similar sets, Ph.D. dissertation, University of Warwick, (1984).

[F1] K. J. Falconer. Techniques in Fractal Geometry, John Wiley, 1997.

[F2] K. J. Falconer. Fractal Geometry: Mathematical Foundations and Applications, John Wiley, 2nd Ed., 2003.

[F3] K. J. Falconer. Random fractals, Math. Proc. Cambridge Philos. Soc., (1986), 559–582.

[FHW] D.-J. Feng, S. Hua and Z.-Y. Wen. Some relationships between packing premeasure and packing measure, Bull. London Math. Soc., (1999), 665–670.

[FH] J. M. Fraser and J. T. Hyde. The Hausdorff dimension of graphs of prevalent continuous functions, to appear, Real Anal. Exchange.

[FO] J. M. Fraser and L. Olsen. Multifractal spectra of random self-affine multifractal Sierpiński sponges in R^d, to appear, Indiana Univ. Math. J.

[GL] D. Gatzouras and S. P. Lalley. Statistically self-affine sets: Hausdorff and box dimensions, J. Theoret. Probab., (1994), 437–468.

[GuLi] Y. Gui and W. Li. A random version of McMullen–Bedford general Sierpinski carpets and its application, Nonlinearity, (2008), 1745–1758.

[GuLi2] Y. Gui and W. Li. Multiscale self-affine Sierpinski carpets, Nonlinearity, (2010), 495–512.

[H] H. Haase. A survey on the dimension of measures, in: Topology, Measures, and Fractals, Warnemünde, 1991, in: Math. Res., Akademie-Verlag, Berlin, (1992), 66–75.

[JP] H. Joyce and D. Preiss. On the existence of subsets of finite positive packing measure, Mathematika, (1995), 15–24.

[L] S. P. Lalley. The packing and covering functions of some self-similar fractals, Indiana Univ. Math. J., (1988), 699–710.

[LW] Y.-Y. Liu and J. Wu. Dimensions for random self-conformal sets, Math. Nachr., (2003), 71–81.

[M] P. Mattila. Geometry of Sets and Measures in Euclidean Spaces, Cambridge University Press, 1995.

[Mc] C. McMullen. The Hausdorff dimension of general Sierpiński carpets, Nagoya Math. J., (1984), 1–9.

[O1] L. Olsen. Random Geometrically Graph Directed Self-Similar Multifractals, Longman, Harlow, 1994.

[O2] L. Olsen. Fractal and multifractal dimensions of prevalent measures, Indiana Univ. Math. J., (2010), 661–690.

[O3] L. Olsen. Random self-affine multifractal Sierpiński sponges in R^d, Monatshefte für Mathematik, (2011), 245–266.

[Ox] J. C. Oxtoby. Measure and Category, Springer, 2nd Ed., 1996.

[R] C. A. Rogers. Hausdorff Measures, Cambridge University Press, 1998.

[S] T. Šalát. A remark on normal numbers, Rev. Roumaine Math. Pures Appl., (1966), 53–56.

[Sc] A. Schief. Separation properties for self-similar sets, Proc. Amer. Math. Soc., (1994), 111–115.

[WW] S.-Y. Wen and Z.-Y. Wen. Some properties of packing measure with doubling gauge, Studia Math., 165.