On measures driven by Markov chains
YANICK HEURTEAUX AND ANDRZEJ STOS
Abstract.
We study measures on [0,1] which are driven by a finite Markov chain and which generalize the famous Bernoulli products. We propose a hands-on approach to determine the structure function τ and to prove that the multifractal formalism is satisfied. Formulas for the dimension of the measures and for the Hausdorff dimension of their supports are also provided.

Introduction
Multifractal measures on R^d are measures m for which the level sets

E_α = { x ∈ R^d ; lim_{r→0} log m(B(x,r)) / log r = α }

are non-trivial for more than two values of the real number α. In practice, it is impossible to completely describe the sets E_α, but we can try to calculate their Hausdorff dimensions. To this end, Frisch and Parisi ([7]) were the first to use the Legendre transform of a structure function τ. A mathematically rigorous approach was given by Brown, Michon and Peyrière in [2] and by Olsen in [9]. There are many situations in which the Legendre transform formula, now called the multifractal formalism, is satisfied. For a comprehensive account, see e.g. [3], [5], [6], [11] or [8].

A fundamental model for multifractal measures is given by so-called Bernoulli products, see for example Chapter 10 of [4]. Roughly speaking, if I_{ε_1⋯ε_n} are the ℓ-adic intervals of the n-th generation and (X_n) is an i.i.d. sequence of random variables, then the Bernoulli product m can be defined by

m(I_{ε_1⋯ε_n}) = P[X_1 = ε_1, ⋯, X_n = ε_n].

The purpose of this paper is to provide an explicit analysis of a natural generalization of this model. Instead of an i.i.d. sequence, we consider an irreducible homogeneous Markov chain. Consequently, the measure m satisfies the recurrence relation

m(I_{ε_1⋯ε_{n+1}}) = p_{ε_n ε_{n+1}} m(I_{ε_1⋯ε_n}),

where P = (p_ij) is the transition matrix of (X_n) (see the next section for full details). In Section 3, we identify a formula for the structure function τ of such a measure and we compute its dimension. Section 4 contains a construction of auxiliary (Gibbs) measures. While it involves a nontrivial rescaling, we insist on the fact that our results don't require sophisticated tools, but only some fundamental results of multifractal analysis and the use of the Perron-Frobenius theorem.
We are then able to prove that the multifractal formalism is satisfied and we give a formula for the Hausdorff dimension of the (closed) support of the measures. Finally, in Section 5, we discuss ergodic properties and prove that for a given support K, the measure m with maximal dimension is essentially unique.

Key words and phrases: Multifractal formalism, Cantor sets, Hausdorff dimension, Markov chains.

2. Preliminaries
Set S = {0, 1, …, ℓ−1}. Let W_n be the set of words of length n over the alphabet S. The concatenation of two words ε = ε_1⋯ε_n ∈ W_n and δ = δ_1⋯δ_k ∈ W_k will be denoted by εδ = ε_1⋯ε_n δ_1⋯δ_k. Let F_0 = {[0,1)} and, for n ≥ 1, let F_n be the set of ℓ-adic intervals of order n, that is, the family of intervals of the form

I_ε = I_{ε_1⋯ε_n} = [ Σ_{i=1}^n ε_i ℓ^{−i} , Σ_{i=1}^n ε_i ℓ^{−i} + ℓ^{−n} ),

where ε = ε_1⋯ε_n ∈ W_n. If I = I_ε ∈ F_n and J = I_δ ∈ F_k, we will write IJ = I_{εδ}. Note that IJ ⊂ I. Finally, we will denote by I_n(x) the unique interval I ∈ F_n such that x ∈ I, and by |I| the length of the interval I.

Consider a discrete Markov chain X = (X_k)_{k≥1} on S with an initial distribution p_i = P(X_1 = i), i ∈ S, and the transition probabilities P = (p_ij)_{i,j=0}^{ℓ−1}, where p_ij = P(X_{n+1} = j | X_n = i). In order to exclude degenerate cases, we will suppose that p_ij ≠ 1 for any (i,j) ∈ S × S. Note that, for the sake of coherence with the definition of F_n, the entries of matrices and vectors will be indexed by numbers starting from 0.

If necessary, we will assume that X is irreducible, that is, for all i, j ∈ S there exists k ≥ 1 such that P(X_{k+1} = j | X_1 = i) > 0.

For P = (p_ij)_{i,j∈S} we denote by P^n the usual matrix power. This should not be confused with P_q, q ∈ R, which stands for the matrix whose entries are given by (p_ij^q)_{i,j∈S}. In this context, by convention we will set 0^q = 0 for any q ∈ R.

The Hausdorff dimension (box dimension, respectively) of a set E will be denoted by dim_H E (dim_B E). Recall the definitions of the lower and upper dimension of a probability measure µ on R^d:

dim_* µ = inf{ dim_H(E) : µ(E) > 0 } = sup{ s > 0 : µ ≪ H^s },
dim^* µ = inf{ dim_H(E) : µ(E) = 1 } = inf{ s > 0 : µ ⊥ H^s },

where H^s denotes the Hausdorff measure in dimension s. If these two quantities agree, the common value is called the dimension of µ and denoted by dim µ. In this case the measure is said to be unidimensional.
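The coding of points by ℓ-adic digits can be made concrete in a few lines; the following is a minimal sketch (the function names are ours, not the paper's), using exact rational arithmetic so that the interval I_n(x) of the definition above can be checked directly.

```python
from fractions import Fraction

def digits(x, n, l):
    """First n digits eps_1 ... eps_n of x in base l, so that x lies in I_{eps_1...eps_n}."""
    eps, y = [], Fraction(x)
    for _ in range(n):
        y *= l
        d = int(y)          # integer part of l*y = next digit
        eps.append(d)
        y -= d
    return eps

def interval(eps, l):
    """Endpoints of I_{eps_1...eps_n} = [ sum eps_i l^{-i} , same + l^{-n} )."""
    n = len(eps)
    left = sum(Fraction(e, l ** (i + 1)) for i, e in enumerate(eps))
    return left, left + Fraction(1, l ** n)

# I_n(x) contains x and has length l^{-n}.
l, n = 3, 5
x = Fraction(17, 81)
a, b = interval(digits(x, n, l), l)
```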
For more details, we refer the reader to e.g. [8].

By c or C we will denote a generic positive constant whose exact value is not important and may change from line to line. For functions or expressions f and g depending on a variable x, say, we will write f ≍ g if there exists a constant c which does not depend on x and such that c^{−1} g(x) ≤ f(x) ≤ c g(x) for any admissible x.

3. The measures and the structure function τ

Let (X_n)_{n≥1} be a Markov chain on S with initial distribution (p_i)_{i∈S} and transition matrix P = (p_ij). Define the measure m as follows:

m(I_{ε_1⋯ε_n}) = P(X_1 = ε_1, …, X_n = ε_n).
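For a quick illustration, the masses of the cylinder intervals can be computed directly from the recurrence recalled in the introduction; a minimal sketch, with a small hypothetical chain on three symbols (the matrix and vector below are ours, chosen only for illustration):

```python
import numpy as np
from itertools import product

l = 3
p = np.array([0.5, 0.25, 0.25])          # initial distribution
P = np.array([[0.2, 0.8, 0.0],           # transition matrix (rows sum to 1)
              [0.3, 0.3, 0.4],
              [0.5, 0.0, 0.5]])

def mass(word):
    """m(I_{e1...en}) = p_{e1} p_{e1 e2} ... p_{e_{n-1} e_n}."""
    m = p[word[0]]
    for i, j in zip(word, word[1:]):
        m *= P[i, j]
    return m

# Additivity: the l children of a word carry exactly the mass of the word,
# and a whole generation has total mass 1.
total = sum(mass(w) for w in product(range(l), repeat=4))
children = sum(mass((0, 1) + (j,)) for j in range(l))
```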
Then m(I_i) = p_i, i ∈ S, and, by the Markov property,

m(I_{ε_1⋯ε_n}) = P(X_n = ε_n | X_{n−1} = ε_{n−1}, ⋯, X_1 = ε_1) P(X_{n−1} = ε_{n−1}, ⋯, X_1 = ε_1)
             = p_{ε_{n−1}ε_n} m(I_{ε_1⋯ε_{n−1}}).

Iterating, we get

(3.1)    m(I_{ε_1⋯ε_n}) = p_{ε_1} p_{ε_1ε_2} ⋯ p_{ε_{n−1}ε_n}.

In other words, a finite trajectory of X selects an interval and assigns it a mass equal to the probability of the trajectory. The measure is well defined since for any given ℓ-adic interval I there is exactly one path to it and its subintervals can be reached only through I. Because of the additivity property m(I_{ε_1⋯ε_n}) = Σ_{k=0}^{ℓ−1} m(I_{ε_1⋯ε_n k}) and the property lim_{n→+∞} m(I_{ε_1⋯ε_n}) = 0, it is well known that formula (3.1) defines a Borel probability measure whose support is contained in [0,1] (see for example [12]). Moreover, the measure m is such that m({x}) = 0 for any point x.

The construction proposed here can be viewed as a generalization of a classical Bernoulli measure. Our goal is to give a hands-on approach to the multifractal analysis of such measures.

Example 3.1. (1) Natural measure on the triadic Cantor set. Let ℓ = 3,

P = (1/2) ( 1 0 1 ; 1 0 1 ; 1 0 1 ),

and set the initial distribution (p_0, p_1, p_2) = (1/2, 0, 1/2). Then C = supp m is the ternary Cantor set and m is the normalized Hausdorff measure of dimension log 2/log 3 on C.

(2) Bernoulli measures. Let p = (p_0, ⋯, p_{ℓ−1}) be a probability vector and suppose that the (X_i) are i.i.d. random variables with P(X_1 = j) = p_j. By independence, p_ij = p_j, so that P is the ℓ × ℓ matrix each of whose rows is (p_0, p_1, ⋯, p_{ℓ−1}). Hence, by (3.1) we get m(I_{ε_1⋯ε_n}) = p_{ε_1} ⋯ p_{ε_n}, and the measure m is the classical Bernoulli product.

(3) Let ℓ = 2, p ∈ (0,1) with p ≠ 1/2, and

P = ( p  1−p ; 1−p  p ).

The associated measure m was introduced by Tukia in [13]. It is a doubling measure with dimension dim(m) = −(p log_2 p + (1−p) log_2(1−p)) <
1. The associated repartition function f(x) = m([0,x]) is a singular quasisymmetric function.

(4) Random walk on Z_ℓ. Let p_ij = 1/2 if |i−j| = 1 or (i,j) = (0, ℓ−1) or (i,j) = (ℓ−1, 0), and p_ij = 0 otherwise. If ℓ = 3,

P = (1/2) ( 0 1 1 ; 1 0 1 ; 1 1 0 ).

As we will see later, the associated measure is monofractal with dimension log 2/log ℓ.

Define as usual the structure function τ(q) by

(3.2)    τ(q) = limsup_{n→∞} (1/(n log ℓ)) log ( Σ_{I∈F_n} m(I)^q ),

with the usual convention 0^q = 0 for any q ∈ R.

Theorem 3.2.
Let m be a probability measure driven by a Markov chain with transition matrix P. Suppose that the matrix P is irreducible. For q ∈ R, let λ_q be the spectral radius of P_q. Then τ(q) = log_ℓ(λ_q), and the limit does exist in (3.2).

Proof. Let W_{n,k} be the subset of W_n consisting of words that end with k ∈ S. Set

s_{n,k} = Σ_{ε∈W_{n,k}} m(I_ε)^q

and let S_n be the (line) vector (s_{n,0}, …, s_{n,ℓ−1}). In particular, W_{1,k} = {k} and S_1 = (p_0^q, …, p_{ℓ−1}^q). We claim that

(3.3)    S_n P_q = S_{n+1},  n ≥ 1.

Indeed, the j-th coordinate of S_n P_q is given by

Σ_{k∈S} s_{n,k} p_{kj}^q = Σ_{k∈S} Σ_{ε∈W_{n,k}} p_{kj}^q m(I_ε)^q.

Using the Markov property, we have p_{kj} m(I_ε) = m(I_{εj}) when ε ∈ W_{n,k}. It follows that

Σ_{k∈S} s_{n,k} p_{kj}^q = Σ_{k∈S} Σ_{ε∈W_{n,k}} m(I_{εj})^q = Σ_{ε∈W_{n+1,j}} m(I_ε)^q = s_{n+1,j},

which is the j-th coordinate of S_{n+1}, so that (3.3) follows. Iterating, we obtain

(3.4)    S_n = S_1 (P_q)^{n−1}.

Now, observe that

Σ_{ε∈W_n} m(I_ε)^q = Σ_{k∈S} s_{n,k} = ‖S_n‖ = ‖S_1 P_q^{n−1}‖,

where ‖·‖ is the ℓ^1 norm in R^ℓ. It follows that

τ(q) = limsup_{n→∞} (1/(n log ℓ)) log ( Σ_{ε∈W_n} m(I_ε)^q ) = limsup_{n→∞} (1/(n log ℓ)) log ‖S_1 P_q^{n−1}‖ = limsup_{n→∞} (1/(n log ℓ)) log ‖S_1 P_q^n‖.

Let us now introduce the following notation. If a = (a_0, ⋯, a_{ℓ−1}) and b = (b_0, ⋯, b_{ℓ−1}) are two vectors in R^ℓ, we will write a ≺ b when a_i ≤ b_i for any value of i. Observe in particular that S_1 ≻ 0. Let λ_q be the spectral radius of the matrix P_q, which is also the spectral radius of the transposed matrix P_q^t. The matrix P_q being nonnegative and irreducible, the Perron-Frobenius theorem ensures the existence of an eigenvector ν_q with strictly positive entries satisfying ν_q P_q = λ_q ν_q. Therefore, there exists a constant C > 0 such that S_1 ≺ C ν_q. It follows that

‖S_1 P_q^n‖ ≤ C ‖ν_q P_q^n‖ = C ‖ν_q‖ λ_q^n.

On the other hand, using the irreducibility of the matrix P_q, we can find an integer k such that the matrix I + P_q + ⋯ + P_q^k has strictly positive entries. It follows that the line vector S_1 + S_1 P_q + ⋯ + S_1 P_q^k has strictly positive entries, and we can find a constant C > 0 such that

ν_q ≺ C ( S_1 + S_1 P_q + ⋯ + S_1 P_q^k ).

So

‖ν_q P_q^n‖ ≤ C ‖S_1 P_q^n + S_1 P_q^{n+1} + ⋯ + S_1 P_q^{n+k}‖ = C ‖(S_1 P_q^n)(I + P_q + ⋯ + P_q^k)‖ ≤ C′ ‖S_1 P_q^n‖.

Finally, ‖S_1 P_q^n‖ ≍ λ_q^n. Taking the logarithm, we can conclude that τ(q) = log_ℓ(λ_q) and that the limit exists. □

Corollary 3.3.
The function τ is analytic on R.

Proof. This can be seen as a consequence of the Kato-Rellich theorem (see for example [10]). But in this finite-dimensional context, there is an elementary proof. Let F(q,x) = det(P_q − xI) be the characteristic polynomial of P_q and let q_0 ∈ R. Observing that F(q_0, λ_{q_0}) = 0 and ∂F/∂x (q_0, λ_{q_0}) ≠ 0 (the eigenvalue λ_{q_0} is simple), the map q ↦ λ_q is given around q_0 by the implicit function theorem. Moreover, F being analytic in q and x, it is well known that the implicit function is analytic. □

The existence of τ′(1) ensures that the measure m is unidimensional (see e.g. [8], Theorem 3.1).

Corollary 3.4.
The measure m is unidimensional with dimension dim(m) = −τ′(1).

Let us now describe some examples. Let h_ℓ be the usual entropy function

h_ℓ(p) = −Σ_{i=0}^{ℓ−1} p_i log_ℓ p_i,  p = (p_0, …, p_{ℓ−1}) with Σ_{i=0}^{ℓ−1} p_i = 1.

In particular, set h(x) = h_2(x, 1−x) = −(x log_2(x) + (1−x) log_2(1−x)).

Example 3.5.
If m is the Bernoulli measure from Example 3.1 (2), then by Theorem 3.2 we get the well-known formula for τ:

τ(q) = log_ℓ ( p_0^q + ⋯ + p_{ℓ−1}^q ).

Furthermore, the dimension of the measure m is dim(m) = −τ′(1) = h_ℓ(p).

Example 3.6.
Actually, if ℓ = 2, we can obtain an explicit formula for any given Markov chain. Suppose that a, b ∈ (0,1) and

P = ( 1−a  a ; b  1−b ).

Then

(3.5)    dim(m) = (b/(a+b)) h(a) + (a/(a+b)) h(b).

Indeed, by Theorem 3.2 we get

τ(q) = −1 + log_2 ( (1−a)^q + (1−b)^q + √( ((1−a)^q − (1−b)^q)^2 + 4 a^q b^q ) ).

Note that if q = 1, then √( ((1−a)^q − (1−b)^q)^2 + 4 a^q b^q ) simplifies to a + b. Thus, we obtain

τ′(1) = (1/(2 log 2)) ( (1−a) log(1−a) + (1−b) log(1−b) + ( (b−a)((1−a) log(1−a) − (1−b) log(1−b)) + 2ab log(ab) ) / (a+b) ).

Rearranging, we get (3.5). Note that the coefficients in (3.5) come from the stationary distribution of X:

π = ( b/(a+b), a/(a+b) ).

This observation will be generalized in Theorem 3.8 below.
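Formula (3.5) can be cross-checked numerically against Theorem 3.2: τ(q) is log_2 of the spectral radius of P_q, and −τ′(1) can be approximated by a central finite difference. A minimal sketch (the values of a and b are arbitrary choices of ours):

```python
import numpy as np

a, b = 0.3, 0.6                      # arbitrary parameters in (0, 1)
P = np.array([[1 - a, a],
              [b, 1 - b]])

def h(x):                            # binary entropy, base 2
    return -(x * np.log2(x) + (1 - x) * np.log2(1 - x))

dim_formula = b / (a + b) * h(a) + a / (a + b) * h(b)     # formula (3.5)

def tau(q):
    # tau(q) = log_2(spectral radius of P_q); here all entries are positive,
    # so the entrywise power P ** q is exactly P_q.
    return np.log2(max(abs(np.linalg.eigvals(P ** q))))

eps = 1e-6
dim_numeric = -(tau(1 + eps) - tau(1 - eps)) / (2 * eps)  # -tau'(1)
```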
Example 3.7.
Let (a_0, ⋯, a_{ℓ−1}) be a probability vector (with possibly some entries equal to 0) and let P be an ℓ × ℓ irreducible stochastic matrix with entries a_0, …, a_{ℓ−1} in every row, but in an arbitrary order. Then τ(q) = log_ℓ ( a_0^q + ⋯ + a_{ℓ−1}^q ) and dim(m) = h_ℓ(a_0, ⋯, a_{ℓ−1}). In particular, if κ is the number of a_i's that are not equal to 0 and if each nonzero a_i is equal to 1/κ, we get dim(m) = log κ / log ℓ, which is the maximal possible value and is also the dimension of the support of the measure m. Such a remark will be generalized below.

Proof.
Set A_q = a_0^q + ⋯ + a_{ℓ−1}^q. We have

Σ_{ε∈W_{n+1}} m(I_ε)^q = Σ_{k∈S} Σ_{ε∈W_{n,k}} Σ_{j∈S} p_{kj}^q m(I_ε)^q = A_q Σ_{k∈S} Σ_{ε∈W_{n,k}} m(I_ε)^q = A_q Σ_{ε∈W_n} m(I_ε)^q.

Iterating, we get

Σ_{ε∈W_n} m(I_ε)^q = A_q^{n−1} Σ_{ε∈W_1} m(I_ε)^q.

Consequently, τ(q) = log_ℓ ( a_0^q + ⋯ + a_{ℓ−1}^q ). It follows that τ′(1) = −h_ℓ(a_0, …, a_{ℓ−1}). If there are only κ nonzero values in the probability vector (a_0, ⋯, a_{κ−1}), say e.g. a_0, ⋯, a_{κ−1}, the formula turns into dim m = −Σ_{j=0}^{κ−1} a_j log_ℓ a_j, which is maximal when a_j = 1/κ for any j. □

Let us finish this part with a general formula for the dimension of the measure m. That is the purpose of the following theorem.
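The general formula proved below, dim(m) = ⟨π | H⟩, is easy to test numerically against dim(m) = −τ′(1); a minimal sketch, with a hypothetical irreducible 3×3 chain of our choosing:

```python
import numpy as np

l = 3
P = np.array([[0.2, 0.8, 0.0],
              [0.3, 0.3, 0.4],
              [0.5, 0.0, 0.5]])

# Stationary distribution: left eigenvector of P for the eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(abs(w - 1))])
pi = pi / pi.sum()

def h_l(row):                         # row entropy in base l, with 0 log 0 = 0
    nz = row[row > 0]
    return -np.sum(nz * np.log(nz)) / np.log(l)

dim_entropy = pi @ np.array([h_l(r) for r in P])   # <pi | H>

def tau(q):                           # Theorem 3.2: tau(q) = log_l(lambda_q)
    M = np.zeros_like(P)
    M[P > 0] = P[P > 0] ** q          # convention 0^q = 0
    return np.log(max(abs(np.linalg.eigvals(M)))) / np.log(l)

eps = 1e-6
dim_tau = -(tau(1 + eps) - tau(1 - eps)) / (2 * eps)   # -tau'(1)
```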
Theorem 3.8.
Denote by L_k the k-th line of the matrix P. Let H = (h_ℓ(L_0), …, h_ℓ(L_{ℓ−1})) be the vector of entropies of the lines of P. Then

dim(m) = ⟨π | H⟩,

where π is the stationary distribution of the Markov chain X and ⟨·|·⟩ is the canonical scalar product.

Proof. Let T_n = Σ_{I∈F_n} m(I) log_ℓ m(I), and let H_n = −(1/n) T_n be the entropy related to the partition F_n. We know that τ′(1) exists. It follows that −τ′(1) = lim_{n→+∞} H_n (see for example [8], Theorem 3.1). Further,

T_n = Σ_{k∈S} Σ_{ε∈W_{n−1,k}} Σ_{j∈S} m(I_ε) p_{kj} log_ℓ (m(I_ε) p_{kj})
    = −Σ_{k∈S} h_ℓ(L_k) Σ_{ε∈W_{n−1,k}} m(I_ε) + Σ_{k∈S} Σ_{ε∈W_{n−1,k}} m(I_ε) log_ℓ (m(I_ε)).

Denote, as before, s_{n−1,k} = Σ_{ε∈W_{n−1,k}} m(I_ε). It follows that

T_n = −Σ_{k∈S} h_ℓ(L_k) s_{n−1,k} + T_{n−1}.

Iterating, we get

−τ′(1) = lim_{n→+∞} Σ_{k∈S} h_ℓ(L_k) (1/n) ( s_{n−1,k} + s_{n−2,k} + ⋯ + s_{1,k} ).

Observe now that s_{n−1,k} is the k-th component of S_{n−1} = S_1 P^{n−2} (see (3.4) with q = 1). It is well known that the Cesàro means converge to the stationary distribution π (even in the periodic case). The theorem follows. □

Example 3.9.
Let P be a stochastic matrix each of whose rows is uniform on its nonzero entries (with values 1/2 and 1/3), but with a number of nonzero entries that varies from row to row. An analytic formula for τ(q) is complicated. Nevertheless, a numerical evaluation of τ(0) (which is also the box dimension of the support of m) is possible, and Theorem 3.8 allows us to estimate the dimension of the measure m. We find that dim m (approximately 0.58 for the matrix considered) is strictly smaller than dim_B(supp m). This example shows that "uniform" transition densities need not imply the maximality of dim m. Indeed, we will see in Corollary 4.2 that there always exists a choice of the transition matrix for which the dimension of the measure coincides with the dimension of its support.

4. Multifractal analysis and dimension of the support
By the definition of τ, we have

τ(0) = limsup_{n→∞} log N_n / (n log ℓ),

where N_n is the number of intervals from F_n having positive measure. In our context the limit exists, so τ(0) = dim_B(supp m). Observe that the support of the measure m doesn't depend on the specific values of the p_ij but only on the configuration of the nonzero entries in the matrix P and in the initial distribution p = (p_0, ⋯, p_{ℓ−1}). More precisely, the support of the measure m is the compact set

K = ∩_{n≥1} ∪_{p_{ε_1} p_{ε_1ε_2} ⋯ p_{ε_{n−1}ε_n} > 0} I_{ε_1⋯ε_n}.

Indeed, the construction of the support can be viewed as a Cantor-like removal process. Given an interval I_{ε_1⋯ε_n} of the n-th generation with ε_n = i, its j-th subinterval will be removed if and only if p_ij = 0 (cf. Example 3.1 (1)).

According to Theorem 3.2, dim_B(supp m) = log_ℓ λ_0, where λ_0 is the spectral radius of the matrix P_0, so that the box dimension of supp m does not depend on the initial distribution p and only depends on the configuration of the nonzero entries of the matrix P.

This motivates the following questions. Given a configuration of nonzero entries of P and of the initial distribution p, which values of the p_ij maximize dim m? Is the maximal value of dim m equal to the box dimension of the support? Is this maximal measure unique?

In some cases one has an immediate answer. In particular, Example 3.7 says that if each row of the matrix P has the same number κ of nonzero entries, the maximum of dim m is obtained when each nonzero entry of the matrix P is equal to 1/κ, and is then equal to log κ / log ℓ.

The general answer will be a consequence of the following result, which says that the measure m satisfies the multifractal formalism.

Theorem 4.1.
Let X be an irreducible Markov chain with transition matrix P and let m be the associated measure. Then m satisfies the multifractal formalism. More precisely, define

E_α = { x ∈ [0,1] ; lim_{n→∞} log m(I_n(x)) / log |I_n(x)| = α }.

Then, for any −τ′(+∞) < α < −τ′(−∞),

dim(E_α) = τ*(α),

where τ*(α) = inf_q (αq + τ(q)) is the Legendre transform of the function τ.

Proof. We will prove the existence of a Gibbs measure at a given state q, that is, an auxiliary measure m_q such that for any ℓ-adic interval I one has

(4.1)    m_q(I) ≍ |I|^{τ(q)} m(I)^q.

Note that in (4.1) the constant may depend on q. Since the function τ is differentiable, it is well known that the existence of such a measure at each state q implies the validity of the multifractal formalism for m (see for example [2] or [8]).

Once again, such a Gibbs measure will be obtained with an elementary construction. Note that P is irreducible if and only if P_q is. By the Perron-Frobenius theorem, the spectral radius λ_q of P_q is a simple eigenvalue and there exists a unique probability vector π_q with strictly positive entries satisfying P_q π_q = λ_q π_q.

Define D_q as the ℓ × ℓ matrix having the coordinates of π_q on the diagonal and zeros elsewhere. Set Q_q = (1/λ_q) D_q^{−1} P_q D_q and let 𝟙 be the column vector 𝟙 = (1, …, 1)^t ∈ R^ℓ. Then we have

Q_q 𝟙 = (1/λ_q) D_q^{−1} P_q D_q 𝟙 = (1/λ_q) D_q^{−1} P_q π_q = D_q^{−1} π_q = 𝟙.

In other words, Q_q is a stochastic matrix and thus it can be associated to a Markov chain X^{(q)}. We may and do assume that this chain has the initial distribution q_i = α p_i^q, i ∈ S, where (p_0, …, p_{ℓ−1}) is the initial distribution of X and α = (p_0^q + ⋯ + p_{ℓ−1}^q)^{−1} is the normalization constant. Let m_q be the measure induced by X^{(q)}. Remark that if Q_q = (q_ij) and D_q = (d_ij), then we have

q_ij = (1/λ_q) d_ii^{−1} p_ij^q d_jj,  i, j ∈ S.

Clearly, q_ij > 0 ⟺ p_ij > 0 and q_i > 0 ⟺ p_i > 0. Therefore the measures m_q and m have the same support.

Let I = I_{ε_1⋯ε_n} ∈ F_n. We have

m_q(I) = q_{ε_1} q_{ε_1ε_2} ⋯ q_{ε_{n−1}ε_n}
       = α p_{ε_1}^q ( (1/λ_q) d_{ε_1ε_1}^{−1} p_{ε_1ε_2}^q d_{ε_2ε_2} ) ⋯ ( (1/λ_q) d_{ε_{n−1}ε_{n−1}}^{−1} p_{ε_{n−1}ε_n}^q d_{ε_nε_n} )
       = α λ_q^{−(n−1)} d_{ε_1ε_1}^{−1} m(I_{ε_1⋯ε_n})^q d_{ε_nε_n}.

Further, since the entries of the eigenvector π_q are strictly positive, there is a constant c (possibly depending on q) such that c^{−1} ≤ d_ii/d_jj ≤ c for any i, j ∈ S. This yields

m_q(I) ≍ λ_q^{−(n−1)} m(I)^q.

Now, observe that by Theorem 3.2, ℓ^{τ(q)} = λ_q, so that |I|^{τ(q)} = (ℓ^{−n})^{τ(q)} = λ_q^{−n}. It follows that for any I ∈ F_n we have

m_q(I) ≍ |I|^{τ(q)} m(I)^q.

Note that in the above estimate the implicit constants may depend on q but not on n and I. Therefore m_q is the needed Gibbs measure and the theorem follows. □

Corollary 4.2.
Let K be the support of the measure m. The measure m_0 associated to the matrix Q_0 satisfies supp m_0 = K, is monofractal and strongly equivalent to the Hausdorff measure H^{τ(0)} on K. In particular,

dim(m_0) = dim_H(K) = dim_B(K),

and m_0 is a measure driven by a Markov chain, with support K and with maximal dimension.

Proof. Let m_0 = m_{q=0} be as in the previous proof. Then we have

m_0(I) ≍ |I|^{τ(0)}

for any ℓ-adic interval I such that m(I) > 0. In particular, for any x ∈ K, m_0(I_n(x)) ≍ |I_n(x)|^{τ(0)}. By Billingsley's theorem (see e.g. [4], Propositions 2.2 and 2.3), we conclude that m_0 is equivalent to the τ(0)-dimensional Hausdorff measure H^{τ(0)} on K. In particular, H^{τ(0)}(K) is positive and finite. It follows that

dim_H(K) = τ(0) = dim_B(K).

On the other hand, the structure function τ_0 of the measure m_0 is τ_0(q) = τ(0)(1−q). It follows that the measure m_0 is monofractal and that

dim m_0 = −τ_0′(1) = τ(0) = dim_B(K). □

Example 4.3.
Suppose that X is a random walk on Z_ℓ (cf. Example 3.1 (4)). Then

τ(q) = (1−q) log_ℓ 2 and −τ′(1) = log_ℓ 2 = τ(0) = dim_B(K).

Example 4.4.
Let

P = ( 1/2 1/2 0 ; 1/3 1/3 1/3 ; 0 1/2 1/2 ).

It can be easily seen that P is irreducible (actually, P^2 has only positive entries). We obtain

τ(q) = −log_3 2 + log_3 ( 2^{−q} + 3^{−q} + √( 4^{−q} + 6·6^{−q} + 9^{−q} ) ).

Hence

dim_H(supp m) = log_3(1 + √2) ≈ 0.802 and dim m = (1/7)(3 + 4 log_3 2) ≈ 0.789.

As observed in Example 3.9, the transitions are uniform row by row, but the measure m is not monofractal. The Markov chain inducing the maximal measure m_0 is associated to the following transition matrix:

Q_0 = (√2 − 1) × ( 1 √2 0 ; 1/√2 1 1/√2 ; 0 √2 1 ).

5. Invariance, ergodicity and application to the uniqueness of m_0

The goal of this section is to discuss the uniqueness of the measure m_0 with maximal dimension given in Corollary 4.2. This is the object of Theorem 5.5. We need to start with some preliminary results. Let us introduce the shift σ on [0,
1) defined by σ(x) = ℓx − E(ℓx), where E(y) is the integer part of y. Observe that σ(I_{ε_1ε_2⋯ε_n}) = I_{ε_2⋯ε_n} and that {σ(x)} = ∩_n I_{ε_2⋯ε_n} if {x} = ∩_n I_{ε_1⋯ε_n}. That is why σ is called the shift.

Proposition 5.1.
Let P be an irreducible ℓ × ℓ transition matrix, ν the (unique) probability vector such that νP = ν, and X the Markov chain with transition P and initial law ν. Set m_P to be the probability measure driven by the Markov chain X. Then m_P is σ-invariant and ergodic.

Proof. We have

m_P(σ^{−1}(I_{ε_1⋯ε_n})) = Σ_{j=0}^{ℓ−1} m_P(I_{jε_1⋯ε_n}) = Σ_{j=0}^{ℓ−1} ν_j p_{jε_1} p_{ε_1ε_2} ⋯ p_{ε_{n−1}ε_n} = ν_{ε_1} p_{ε_1ε_2} ⋯ p_{ε_{n−1}ε_n} = m_P(I_{ε_1⋯ε_n}).

So, by the monotone class theorem, the measure m_P is σ-invariant.

Let k be an integer such that P + P^2 + ⋯ + P^k has strictly positive entries. We claim that there exists a constant C > 0 such that, for any I, J ∈ ∪_n F_n, we have

(5.1)    (1/C) m_P(I) m_P(J) ≤ Σ_{j=0}^{k−1} Σ_{K∈F_j} m_P(IKJ) ≤ C m_P(I) m_P(J).

Indeed, if I = I_{ε_1⋯ε_n}, J = I_{δ_1⋯δ_m}, and if (π_ij)_{i,j} denote the coefficients of the matrix P + P^2 + ⋯ + P^k, it is easy to check that

Σ_{j=0}^{k−1} Σ_{K∈F_j} m_P(IKJ) = ν_{ε_1} p_{ε_1ε_2} ⋯ p_{ε_{n−1}ε_n} × π_{ε_nδ_1} p_{δ_1δ_2} ⋯ p_{δ_{m−1}δ_m},

and the claim follows. Inequality (5.1) can be rewritten as

(5.2)    Σ_{j=0}^{k−1} m_P( I ∩ σ^{−(n+j)}(J) ) ≍ m_P(I) × m_P(J),

where n is the generation of I. If we observe that any open set is a countable union of disjoint intervals in ∪_n F_n, inequality (5.2) remains true when J is an open set. Finally, by regularity of the measure m_P, it is also true for any Borel set J. In particular, if E is a σ-invariant Borel set, we get

∀ I ∈ ∪_n F_n,  k m_P(I ∩ E) ≍ m_P(I) × m_P(E).

Again, it remains true when I is an arbitrary Borel set. In particular,

m_P(([0,1] \ E) ∩ E) ≍ m_P([0,1] \ E) × m_P(E),

which proves that m_P(E) = 0 or m_P([0,1] \ E) = 0. □

Remark 5.2. Inequality (5.2) is a particular case of the so-called weak quasi-Bernoulli property, which was introduced by B. Testud in [11].
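The σ-invariance established above can be observed directly on cylinders: summing the masses of the ℓ preimage intervals I_{jε_1⋯ε_n} gives back m_P(I_{ε_1⋯ε_n}) when the chain starts from its stationary law. A minimal sketch (the matrix is again a hypothetical example of ours):

```python
import numpy as np
from itertools import product

l = 3
P = np.array([[0.2, 0.8, 0.0],
              [0.3, 0.3, 0.4],
              [0.5, 0.0, 0.5]])

# Stationary initial law nu, so that nu P = nu.
w, V = np.linalg.eig(P.T)
nu = np.real(V[:, np.argmin(abs(w - 1))])
nu = nu / nu.sum()

def mass(word):
    """m_P(I_{e1...en}) with initial law nu."""
    m = nu[word[0]]
    for i, j in zip(word, word[1:]):
        m *= P[i, j]
    return m

# m_P(sigma^{-1} I_w) = sum_j m_P(I_{jw}) equals m_P(I_w) for every word w.
max_gap = max(abs(sum(mass((j,) + w) for j in range(l)) - mass(w))
              for w in product(range(l), repeat=3))
```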
Corollary 5.3.
Let P and P̃ be two different irreducible ℓ × ℓ transition matrices. Then m_P is singular with respect to m_P̃.

Proof. According to the ergodic theorem, it suffices to show that m_P ≠ m_P̃. Let ν and ν̃ be the invariant distributions of the stochastic matrices P and P̃. If ν_i ≠ ν̃_i for some i, then m_P(I_i) ≠ m_P̃(I_i). If ν = ν̃ and p_ij ≠ p̃_ij, we can write

m_P(I_ij) = ν_i p_ij ≠ ν̃_i p̃_ij = m_P̃(I_ij). □

Corollary 5.4.
Let P, P̃ be two irreducible ℓ × ℓ transition matrices and p, p̃ two probability vectors. Let m and m̃ be the associated measures. Suppose that supp(m) = supp(m̃). Then there are only two possible cases:

(1) P = P̃, and the measures m and m̃ are strongly equivalent (i.e. m ≍ m̃);
(2) P ≠ P̃, and the measures m and m̃ are mutually singular.

Proof. Let A ⊂ S be the set of indices of the nonzero entries of p (which is also the set of indices of the nonzero entries of p̃). Let F = ∪_{ε∈A} I_ε. We claim that m is strongly equivalent to the measure m_P restricted to F. This is an easy consequence of the fact that the invariant probability vector ν (satisfying νP = ν) has strictly positive entries. In the same way, m̃ is strongly equivalent to the measure m_P̃ restricted to F. Corollary 5.4 is then a consequence of Corollary 5.3. □

Now we are able to prove the following theorem on the measure m_0 given by Corollary 4.2.
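The phenomenon behind the theorem below can be illustrated numerically: starting from a 0-1 pattern matrix P^0, the conjugated matrix Q = (1/λ^0)(D^0)^{−1} P^0 D^0 of Corollary 4.2 attains dim m = dim_H K, while any other stochastic matrix with the same zero pattern stays strictly below. A sketch (the pattern and the perturbed matrix are our choices):

```python
import numpy as np

l = 3
pattern = np.array([[1., 1., 0.],    # P^0: the allowed transitions
                    [1., 1., 1.],
                    [0., 1., 1.]])

w, V = np.linalg.eig(pattern)
k = np.argmax(abs(w))
lam = np.real(w[k])                  # Perron eigenvalue of P^0
pi0 = np.real(V[:, k])
pi0 = pi0 / pi0.sum()                # Perron probability vector
D = np.diag(pi0)
Q = np.linalg.inv(D) @ pattern @ D / lam   # the optimal transition matrix
delta = np.log(lam) / np.log(l)            # dim_H K = log_l(lambda^0)

def dim_measure(P):
    """dim m = <pi | H> (Theorem 3.8)."""
    wl, Vl = np.linalg.eig(P.T)
    pi = np.real(Vl[:, np.argmin(abs(wl - 1))])
    pi = pi / pi.sum()
    H = [-sum(x * np.log(x) for x in row if x > 0) / np.log(l) for row in P]
    return pi @ np.array(H)

# Another stochastic matrix with the same zero pattern.
P2 = np.array([[0.7, 0.3, 0.0],
               [0.2, 0.4, 0.4],
               [0.0, 0.6, 0.4]])
```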
Theorem 5.5.
Let P^0 = (p^0_ij) be an irreducible ℓ × ℓ matrix such that p^0_ij ∈ {0,1} for all i, j, and let p^0 = (p^0_0, ⋯, p^0_{ℓ−1}) be a line vector such that p^0_i ∈ {0,1} for all i. Suppose that p^0 ≠ (0, ⋯, 0) and define the compact set K by

K = ∩_{n≥1} ∪_{p^0_{ε_1} p^0_{ε_1ε_2} ⋯ p^0_{ε_{n−1}ε_n} = 1} I_{ε_1⋯ε_n}.

Let δ = dim_H(K) and let m be a measure with support K, driven by a Markov chain X with irreducible transition matrix P. Then dim m = δ if and only if

P = (1/λ^0) (D^0)^{−1} P^0 D^0,

where λ^0 is the spectral radius of P^0 and D^0 is the diagonal matrix whose diagonal entries are the coordinates of the (unique) probability vector π^0 satisfying P^0 π^0 = λ^0 π^0. Moreover, the case P = (1/λ^0)(D^0)^{−1} P^0 D^0 is the only case where the measure m is monofractal.

Proof. Assume for simplicity that p^0_i = 1 for any i. The general case is a standard modification. Remember that the support of the measure m only depends on the positions of the nonzero entries of the matrix P. It follows that the nonzero entries of the matrices P and P^0 are located at the same places. Let Q = (1/λ^0)(D^0)^{−1} P^0 D^0.

Suppose that P = Q. According to Corollary 4.2 and Corollary 5.4, the measure m is strongly equivalent to m_0, which satisfies m_0(I_n(x)) ≍ |I_n(x)|^δ for any x ∈ K. In particular, m is such that dim m = δ and is monofractal.

Suppose now that P ≠ Q. Corollary 5.4 says that m is singular with respect to m_0, and we have to prove that dim m < δ. Let

τ(q) = lim_{n→+∞} (1/(n log ℓ)) log ( Σ_{I∈F_n} m(I)^q ).

Recall that τ is analytic and such that τ(0) = δ and τ(1) = 0. Using the convexity of τ, it is clear that

τ′(1) = −δ ⟺ ∀ q ∈ [0,1], τ(q) = δ(1−q) ⟺ ∀ q ∈ R, τ(q) = δ(1−q).

In order to prove that dim m < δ, it is then sufficient to establish that τ(2) > −δ. Denote by I_0, ⋯, I_{ℓ−1} the intervals of the first generation F_1. If j ∈ {0, ⋯, ℓ−1} and n ≥ 1, let F_n(j) be the set of intervals of F_n that are included in I_j. The measures m and m_0 being mutually singular, we know that for m-almost every x ∈ K,

lim_{n→+∞} m_0(I_n(x)) / m(I_n(x)) = 0,

which can be rewritten as

lim_{n→+∞} ℓ^{−nδ} / m(I_n(x)) = 0.

Using Egoroff's theorem in each I_j, we can find a set A ⊂ K such that m(A ∩ I_j) ≥ (1/2) m(I_j) for any j ∈ {0, ⋯, ℓ−1} and satisfying

∀ ε > 0, ∃ n_0 ≥ 1, ∀ n ≥ n_0, ∀ x ∈ A,  m(I_n(x)) ≥ (1/ε) ℓ^{−nδ}.

It follows that

Σ_{J∈F_n(j)} m(J)^2 ≥ Σ_{J∈F_n(j); J∩A≠∅} (1/ε) ℓ^{−nδ} m(J) ≥ (1/(2ε)) ℓ^{−nδ} m(I_j).

Now, let I ∈ F_k and suppose that I = I_{ε_1⋯ε_k} with ε_k = i. Observe that if J ∈ F_n(j), then m(IJ) = (p_ij / m(I_j)) m(I) m(J). If we choose j_i ∈ {0, ⋯, ℓ−1} such that p_{ij_i} ≠ 0, we get

Σ_{J∈F_n} m(IJ)^2 ≥ Σ_{J∈F_n(j_i)} m(IJ)^2 = (p_{ij_i} / m(I_{j_i}))^2 m(I)^2 Σ_{J∈F_n(j_i)} m(J)^2 ≥ ( p_{ij_i}^2 / (2ε m(I_{j_i})) ) ℓ^{−nδ} m(I)^2.

Choose ε = inf_i ( p_{ij_i}^2 / (4 m(I_{j_i})) ) and the corresponding n_0. If η is such that ℓ^{n_0 η} = 2, we can rewrite the last inequality (with n = n_0) as

Σ_{J∈F_{n_0}} m(IJ)^2 ≥ 2 ℓ^{−n_0 δ} m(I)^2 = ℓ^{−n_0(δ−η)} m(I)^2.

If we sum this inequality over every interval I of the same generation and iterate the process, we get, for any p ≥ 1,

Σ_{I∈F_{pn_0}} m(I)^2 ≥ ℓ^{−(p−1)n_0(δ−η)} Σ_{I∈F_{n_0}} m(I)^2 = C ℓ^{−pn_0(δ−η)},

which gives

τ(2) ≥ −(δ−η) > −δ.

Moreover, it is clear that the set τ′(R) is not reduced to a single point. It follows that the measure m is multifractal. □

References
1. F. Ben Nasr, I. Bhouri, Y. Heurteaux, The validity of the multifractal formalism: results and examples, Adv. Math. (2002), 264-284.
2. G. Brown, G. Michon and J. Peyrière, On the multifractal analysis of measures, J. Stat. Phys. (1992), 775-790.
3. R. Cawley and R. D. Mauldin, Multifractal decompositions of Moran fractals, Adv. Math. (1992), 196-236.
4. K. Falconer, Techniques in Fractal Geometry, John Wiley & Sons, 1997.
5. D. J. Feng and K. S. Lau, The pressure function for products of nonnegative matrices, Math. Res. Lett. (2002), 363-378.
6. D. J. Feng and E. Olivier, Multifractal analysis of weak Gibbs measures and phase transition-application to some Bernoulli convolution, Ergodic Theory Dynam. Systems (2003), 1751-1784.
7. U. Frisch and G. Parisi, On the singularity structure of fully developed turbulence, Turbulence and Predictability in Geophysical Fluid Dynamics, Proceedings of the International Summer School in Physics Enrico Fermi, North Holland (1985), 84-88.
8. Y. Heurteaux, Dimension of measures: the probabilistic approach, Publ. Mat. (2007), 243-290.
9. L. Olsen, A multifractal formalism, Adv. Math. (1995), 82-196.
10. M. Reed and B. Simon, Methods of Modern Mathematical Physics, Academic Press, 1980.
11. B. Testud, Mesures quasi-Bernoulli au sens faible : résultats et exemples, Ann. Inst. H. Poincaré Probab. Statist. (2006), 1-35.
12. C. Tricot, Géométries et mesures fractales : une introduction, Ellipses, 2008.
13. P. Tukia, Hausdorff dimension and quasisymmetric mappings, Math. Scand. (1989), 152-160.

Laboratoire de Mathématiques, Clermont Université, Université Blaise Pascal and CNRS UMR 6620, BP 80026, 63171 Aubière, France
E-mail address: [email protected]

Laboratoire de Mathématiques, Clermont Université, Université Blaise Pascal and CNRS UMR 6620, BP 80026, 63171 Aubière, France

E-mail address: