Exponential rate of convergence for some Markov operators
Hanna Wojewódka
Institute of Mathematics, University of Gdańsk, Wita Stwosza 57, 80-952 Gdańsk, Poland*

The exponential rate of convergence for some Markov operators is established. The operators correspond to continuous iterated function systems, which are a very useful tool in some cell cycle models.
I. INTRODUCTION
We are concerned with Markov operators corresponding to continuous iterated function systems. The main purpose of the paper is to prove a spectral gap assuring an exponential rate of convergence. The operators under consideration were used in Lasota & Mackey [9], where the authors studied a cell cycle model. See also Tyson & Hannsgen [16] or Murray & Hunt [11] for more details on the subject. Lasota and Mackey proved only stability, while we manage to evaluate the rate of convergence, which brings information important from the biological point of view. Our paper builds on the coupling methods introduced in Hairer [4]. In the same spirit, an exponential rate of convergence was proved in Ślęczka [15] for classical iterated function systems (see also Hairer & Mattingly [5] or Kapica & Ślęczka [7]). It is worth mentioning here that our result will allow us to show the Central Limit Theorem (CLT) and the Law of the Iterated Logarithm (LIL). To do this, we will adapt general results recently proved in Bołt, Majewski & Szarek [2] and in Komorowski & Walczuk [8]. The proofs of the CLT and LIL will be provided in a future paper.

The organization of the paper goes as follows. Section II introduces basic notation and definitions that are needed throughout the paper. Most of them are adapted from Billingsley [1], Meyn & Tweedie [12], Lasota & Yorke [10] and Szarek [14]. The biological background is presented briefly in Section III. Sections IV and V provide the mathematical derivation of the model and the main theorem (Theorem 2), which establishes the exponential rate of convergence in the model. Sections VI-VIII are devoted to the construction of a coupling measure for iterated function systems. Thanks to the results presented in Section IX we are finally able to present the proof of the main theorem in Section X.

* Electronic address: [email protected]
II. NOTATION AND BASIC DEFINITIONS
Let (X, ̺) be a Polish space. We denote by B_X the family of all Borel subsets of X. Let C(X) be the space of all bounded continuous functions f : X → R with the supremum norm. We denote by M(X) the family of all Borel measures on X, and by M_fin(X) and M_1(X) its subfamilies of measures such that µ(X) < ∞ and µ(X) = 1, respectively. Elements of M_fin(X) which satisfy µ(X) ≤ 1 are called sub-probability measures. To simplify notation, we write

⟨f, µ⟩ = ∫_X f(x) µ(dx) for f ∈ C(X), µ ∈ M(X).

An operator P : M_fin(X) → M_fin(X) is called a Markov operator if
1. P(λ_1 µ_1 + λ_2 µ_2) = λ_1 Pµ_1 + λ_2 Pµ_2 for λ_1, λ_2 ≥ 0 and µ_1, µ_2 ∈ M_fin(X);
2. Pµ(X) = µ(X) for µ ∈ M_fin(X).
If, additionally, there exists a linear operator U : C(X) → C(X) such that

⟨Uf, µ⟩ = ⟨f, Pµ⟩ for f ∈ C(X), µ ∈ M_fin(X),

the operator P is called a Feller operator. Every Markov operator P may be extended to the space of signed measures on X, denoted by M_sig(X) = {µ_1 − µ_2 : µ_1, µ_2 ∈ M_fin(X)}. For µ ∈ M_sig(X) we denote by ‖µ‖ the total variation norm of µ, i.e.

‖µ‖ = µ⁺(X) + µ⁻(X),

where µ⁺ and µ⁻ come from the Hahn-Jordan decomposition of µ (see Halmos [6]). For fixed x̄ ∈ X we also consider the space M_1^1(X) of all probability measures with finite first moment, i.e. M_1^1(X) = {µ ∈ M_1(X) : ∫_X ̺(x, x̄) µ(dx) < ∞}. The family is independent of the choice of x̄ ∈ X. We call µ_* ∈ M_fin(X) an invariant measure of P if Pµ_* = µ_*. For µ ∈ M_fin(X) we define the support of µ by

supp µ = {x ∈ X : µ(B(x, r)) > 0 for all r > 0},

where B(x, r) is the open ball in X with center at x ∈ X and radius r > 0.

In M_sig(X) we introduce the Fortet-Mourier norm

‖µ‖_L = sup_{f ∈ L} |⟨f, µ⟩|,

where

L = {f ∈ C(X) : |f(x) − f(y)| ≤ ̺(x, y) and |f(x)| ≤ 1 for x, y ∈ X}.   (1)

The space M_1(X) with the metric ‖µ_1 − µ_2‖_L is complete (see Fortet & Mourier [3] or Rachev [13]).

III. A BRIEF DESCRIPTION OF THE CELL DIVISION CYCLE MODEL
Let (Ω, F, Prob) be a probability space. Suppose that each cell in the considered population consists of d different substances, whose masses are described by the vector y(t) = (y_1(t), ..., y_d(t)), where t ∈ [0, T] denotes the age of a cell. We assume that the evolution of the vector y(t) is given by the formula y(t) = Π(x, t), where Π(x, 0) = x. Here Π : X × [0, T) → X is a given function. A simple example fulfilling these criteria is obtained by assuming that y(t) satisfies a system of ordinary differential equations

dy/dt = g(t, y)   (2)

with the initial condition y(0) = x, the solution of (2) being y(t) = Π(x, t).

If x_n denotes the initial amount x = y(0) of substances in the n-th generation and t_n denotes the mitotic time in the n-th generation, the distribution of t_n is given by

Prob(t_n ∈ I | x_n = x) = ∫_I p(x, s) ds for every interval I ⊂ [0, T], n ∈ N.   (3)

The vector y(t_n) = Π(x_n, t_n) with y(0) = Π(x, 0) = x describes the amount of intracellular substance just before cell division in the n-th generation. We assume that each daughter cell contains exactly half of the components of its stem cell. Hence

x_{n+1} = (1/2) Π(x_n, t_n) for n = 0, 1, 2, ....   (4)

The behaviour of (3) and (4) may also be described by the sequence (µ_n)_{n≥0} of distributions

µ_n(A) = Prob(x_n ∈ A) for n = 0, 1, 2, ... and A ∈ B_X.

See Lasota & Mackey [9] for more details.

IV. ASSUMPTIONS
We assume that (X, ̺) is a Polish space. Fix T < ∞. We consider a family {t_n : n = 0, 1, ...} of independent random variables taking values in [0, T], defined on the probability space (Ω, F, Prob). Note that Prob(t_n < T | x_n = x) = 1. Let S : X × [0, T) → X be a continuous function and

x_{n+1} = S(x_n, t_n), n = 0, 1, 2, ....

We assume that p : X × [0, T) → [0, ∞) is a lower semi-continuous, non-negative function such that, for every x ∈ X, p(x, 0) = 0 and p(x, t) > 0 for t > 0. In addition, p is normalized, i.e. ∫_0^T p(x, u) du = 1 for x ∈ X. Let us further assume that for each A ∈ B_X

Prob(x_{n+1} ∈ A) := µ_{n+1}(A) and Pµ_n = µ_{n+1},

where

Pµ(A) = ∫_X ( ∫_0^T 1_A(S(x, t)) p(x, t) dt ) µ(dx).   (5)

The following assumptions will be needed throughout the paper:
(I) ̺(S(x, t), S(y, t)) ≤ λ(t) ̺(x, y) for x, y ∈ X, where λ : [0, T) → [0, ∞) is a Borel measurable function;
(II) a := sup_{x ∈ X} ∫_0^T λ(t) p(x, t) dt < 1;
(III) sup_{t ∈ [0,T)} ̺(S(x̄, t), x̄) < ∞ for some x̄ ∈ X;
(IV) there exists σ ≥ 0 such that p : X × [0, T) → [σ, ∞) is continuous, and there exists c̄ > 0 such that ∫_0^T |p(x, t) − p(y, t)| dt ≤ c̄ ̺(x, y) for x, y ∈ X;
(V) the function p is bounded and δ := inf{p(x, t) : x ∈ X, t ∈ (0, T)} > 0, M := sup{p(x, t) : x ∈ X, t ∈ (0, T)} < ∞.

V. MAIN THEOREM
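Before stating the theorem, the exponential forgetting of the initial condition can be previewed numerically. The sketch below is a toy instance of the system x_{n+1} = S(x_n, t_n); all concrete choices (S(x, t) = ½e^t(x + 1), mitotic-time density p(x, t) = 8t on [0, 0.5], metric ̺(x, y) = |x − y|) are illustrative assumptions, not taken from the paper. Here a ≈ 0.70 < 1, so assumption (II) holds.

```python
import numpy as np

# Toy instance of the cell-cycle iterated function system (assumed choices,
# not from the paper): X = [0, inf), rho(x, y) = |x - y|, T = 0.5,
# S(x, t) = 0.5 * exp(t) * (x + 1), mitotic-time density p(x, t) = 8t
# on [0, T] (independent of x).
T = 0.5

def sample_t(rng, n):
    # Inverse-CDF sampling for p(t) = 8t on [0, 0.5]: F(t) = 4 t^2.
    return 0.5 * np.sqrt(rng.random(n))

def step(rng, x):
    # One generation: x_{n+1} = S(x_n, t_n).
    return 0.5 * np.exp(sample_t(rng, x.size)) * (x + 1.0)

# Assumption (II): a = int_0^T (e^t / 2) * 8t dt = 4((T - 1)e^T + 1) < 1.
a = 4.0 * ((T - 1.0) * np.exp(T) + 1.0)

def ensemble_mean(rng, x0, n_gen=30, n_cells=4000):
    x = np.full(n_cells, x0)
    for _ in range(n_gen):
        x = step(rng, x)
    return x.mean()

rng = np.random.default_rng(0)
m_low, m_high = ensemble_mean(rng, 0.1), ensemble_mean(rng, 10.0)
```

Two ensembles started from very different initial masses end up with practically the same empirical mean after thirty generations, in agreement with the exponential convergence asserted by Theorem 2 below (here a plays the role of the rate q).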
Let P be the Markov operator in the cell division model defined above. Lasota and Mackey proved the asymptotic stability of P, i.e. the existence of an invariant measure µ_* ∈ M_1(X) and the weak convergence of (P^n µ) to µ_* for µ ∈ M_1(X). Their theorem reads as follows.

Theorem 1. Let S : X × [0, T] → X and p : X × [0, T] → [0, ∞) satisfy the following conditions:
1. ̺(S(x, t), S(y, t)) ≤ λ(x, t) ̺(x, y) for x, y ∈ X, t ∈ [0, T], with λ and S related to p by the conditions ∫_0^T λ(x, t) p(x, t) dt ≤ r_1 and ∫_0^T |S(0, t)| p(x, t) dt ≤ r_2 for x ∈ X;
2. ∫_0^T |p(x, t) − p(y, t)| dt ≤ r_3 ̺(x, y) for x, y ∈ X;
3. for every x ∈ X there exists a minimal division time τ_x ∈ [0, T] such that p(x, t) = 0 for 0 ≤ t ≤ τ_x and p(x, t) > 0 for τ_x < t ≤ T.
Assume moreover that r_1 < 1 and r_2, r_3 < ∞. Then the system (3) and (4) is asymptotically stable.

Obviously, conditions 1 and 2 of Theorem 1 are implied by assumptions (I)-(IV) of the model under consideration. Note that condition 3 is also fulfilled with τ_x = 0, as p(x, 0) = 0 and p(x, t) > 0 for every t > 0 and x ∈ X. That is why we may assume the existence of an invariant measure in the model. Our aim is to show that the rate of convergence is exponential.

Theorem 2. Let µ ∈ M_1^1(X). Under assumptions (I)-(V) there exist C = C(µ) > 0 and q ∈ (0, 1) such that

‖P^n µ − µ_*‖_L ≤ C q^n for n ∈ N.

VI. MEASURES ON THE PATHSPACE AND COUPLING
We consider a family of measures {Q_x : x ∈ X} on X. We assume the measurability of the mapping x ↦ Q_x(A) for each A ∈ B_X. Fix n, m ∈ N. Now suppose that {Q_x : x ∈ X} is a family of measures on X^n and {R_x : x ∈ X} is a family of measures on X^m. We can define a family of measures {(RQ)_x : x ∈ X} on X^n × X^m by

(RQ)_x(A × B) = ∫_A R_{z_n}(B) Q_x(dz),   (6)

where z = (z_1, ..., z_n), A ∈ B_{X^n} and B ∈ B_{X^m}.

We consider a family of sub-probability measures {P_x : x ∈ X} on X. We assume that the mapping x ↦ P_x(A) is measurable for each A ∈ B_X. Furthermore, if each P_x is a probability measure, {P_x : x ∈ X} is a transition probability function; thus P_x(A) is the probability of transition from x to A. We want to define a family of measures on X^∞. Fix x ∈ X. One-dimensional distributions {P_x^n : n ∈ N} are defined by induction on n:

P_x^0(A) = δ_x(A), ..., P_x^{n+1}(A) = ∫_X P_z(A) P_x^n(dz),   (7)

where A ∈ B_X. Following (6), we easily obtain two- and higher-dimensional distributions. Finally, we get the family {P_x^∞ : x ∈ X} of sub-probability measures on X^∞. This construction was motivated by Hairer [4]. The existence of the measures P_x^∞ is established by the Kolmogorov theorem. More precisely, there exists a probability space on which we can define a stochastic process ξ with distribution φ_ξ such that

φ_ξ(A) = Prob(ξ^{-1}(A)) := P_x^∞(A) for A ∈ B_{X^∞}.

Therefore P_x^∞ is the distribution of the Markov chain ξ on X^∞ with transition probability function {P_x : x ∈ X} and φ_{ξ_0} = δ_x for x ∈ X. If the initial distribution is given by some µ ∈ M_fin(X), not necessarily by δ_x, we define

P_µ^∞(A) = ∫_X P_x^∞(A) µ(dx) for A ∈ B_{X^∞}.

Definition 3. Let a transition probability function {P_x : x ∈ X} be given. A family of probability measures {C_{x,y} : x, y ∈ X} on X × X such that
• C_{x,y}(A × X) = P_x(A) for A ∈ B_X,
• C_{x,y}(X × B) = P_y(B) for B ∈ B_X,
where x, y ∈ X, is called a coupling.

VII. ITERATED FUNCTION SYSTEMS
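To fix ideas, here is a minimal sketch of the transition probability function P_x of an iterated function system of this type. The concrete S and p are the same illustrative toy assumptions as before (S(x, t) = ½e^t(x + 1), p(x, t) = 8t on [0, 0.5], not from the paper); a Monte Carlo estimate of P_x([0, c]) is compared with the exact value.

```python
import numpy as np

# Toy ingredients (assumptions, not the paper's model): T = 0.5,
# S(x, t) = 0.5 * exp(t) * (x + 1), density p(x, t) = 8t on [0, T].
T = 0.5

def S(x, t):
    return 0.5 * np.exp(t) * (x + 1.0)

def sample_t(rng, n):
    return 0.5 * np.sqrt(rng.random(n))  # inverse CDF of p(t) = 8t

def P_x_estimate(rng, x, c, n=20000):
    # Monte Carlo estimate of P_x([0, c]) = int_0^T 1_{S(x,t) <= c} p(t) dt.
    t = sample_t(rng, n)
    return np.mean(S(x, t) <= c)

def P_x_exact(x, c):
    # S(x, .) is increasing in t, so S(x, t) <= c  iff  t <= ln(2c / (x + 1));
    # under p(t) = 8t the CDF is F(t) = 4 t^2 on [0, 0.5].
    if 2.0 * c <= x + 1.0:
        return 0.0
    t_star = min(np.log(2.0 * c / (x + 1.0)), T)
    return 4.0 * t_star ** 2

rng = np.random.default_rng(1)
est = P_x_estimate(rng, x=1.0, c=1.5)
exact = P_x_exact(1.0, 1.5)
```

The agreement of `est` and `exact` (up to Monte Carlo error) illustrates that P_x is precisely the push-forward of the mitotic-time density under t ↦ S(x, t).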
We consider a continuous mapping S : X × [0, T) → X and a lower semi-continuous, non-negative, normalized function p : X × [0, T) → [0, ∞). For each A ∈ B_X we build a transition operator. Since Pµ is given by (5) and (Pµ)(A) = ∫_X P_x(A) µ(dx), we define P_x to be

P_x(A) = ∫_0^T 1_A(S(x, t)) p(x, t) dt = ∫_0^T δ_{S(x,t)}(A) p(x, t) dt.

Once again, we apply (6) and (7) to construct measures on products. As previously, P_µ^∞ exists for µ ∈ M_fin(X). Obviously, P^n µ is the n-th marginal of P_µ^∞.

Fix x̄ ∈ X. We define V : X → [0, ∞) by V(x) = ̺(x, x̄). Let us evaluate the integral ⟨V, Pµ⟩ = ∫_X ̺(x, x̄) Pµ(dx) = ∫_X U̺(x, x̄) µ(dx), where U is the dual operator to P. Since P is a Feller operator given by (5), we can define U : C(X) → C(X) by

Uf(x) = ∫_0^T f(S(x, t)) p(x, t) dt.

Hence, from assumptions (I) and (II), we obtain

⟨V, Pµ⟩ = ∫_X ( ∫_0^T ̺(S(x, t), x̄) p(x, t) dt ) µ(dx)
≤ ∫_X ( ∫_0^T (̺(S(x, t), S(x̄, t)) + ̺(S(x̄, t), x̄)) p(x, t) dt ) µ(dx)
≤ ∫_X ( ∫_0^T λ(t) ̺(x, x̄) p(x, t) dt + ∫_0^T ̺(S(x̄, t), x̄) p(x, t) dt ) µ(dx)
≤ a ∫_X ̺(x, x̄) µ(dx) + ∫_X c̃ µ(dx)
= a ⟨V, µ⟩ + c,

where c = ∫_X c̃ µ(dx) and c̃ = sup_{t ∈ [0,T)} ̺(S(x̄, t), x̄), which is finite by assumption (III).

Fix probability measures µ, ν ∈ M_1(X) and Borel sets A, B ∈ B_X. We consider b ∈ M_1(X × X) such that b(A × X) = µ(A), b(X × B) = ν(B), and b̄ ∈ M_1(X × X) such that b̄(A × X) = Pµ(A), b̄(X × B) = Pν(B). Furthermore, we define V̄ : X × X → [0, ∞) by

V̄(x, y) = V(x) + V(y) for x, y ∈ X.

Note that

⟨V̄, b̄⟩ ≤ a ⟨V̄, b⟩ + 2c.   (8)

For measures b ∈ M_fin(X × X) finite on X × X and with finite first moment we define the linear functional

φ(b) = ∫_{X×X} ̺(x, y) b(dx, dy).
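The Lyapunov-type estimate ⟨V, Pµ⟩ ≤ a⟨V, µ⟩ + c derived above can be checked numerically. In the sketch below, S, p, the base point x̄ = 0 and the test measure µ are illustrative assumptions (the same toy system as before), not the paper's model.

```python
import numpy as np

# Numerical check of <V, P mu> <= a <V, mu> + c for the toy system (assumed,
# not from the paper): S(x, t) = 0.5*exp(t)*(x+1), p(t) = 8t on [0, 0.5],
# xbar = 0, so V(x) = rho(x, xbar) = x on X = [0, inf).
T = 0.5
a = 4.0 * ((T - 1.0) * np.exp(T) + 1.0)   # assumption (II), a ~ 0.70
c_tilde = 0.5 * np.exp(T)                 # sup_t rho(S(0, t), 0), assumption (III)

rng = np.random.default_rng(2)
n = 50000
x = rng.exponential(scale=2.0, size=n)    # sample from a test measure mu
t = 0.5 * np.sqrt(rng.random(n))          # mitotic times t ~ p(t) = 8t
V_mu = x.mean()                           # Monte Carlo <V, mu>
V_Pmu = (0.5 * np.exp(t) * (x + 1.0)).mean()   # Monte Carlo <V, P mu>
```

Up to Monte Carlo error, `V_Pmu` stays below `a * V_mu + c_tilde`, i.e. one application of P contracts the first moment toward a bounded region, which is exactly what (8) propagates to pairs of measures.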
Following the above definitions, we easily obtain

φ(b) ≤ ⟨V̄, b⟩.   (9)

VIII. COUPLING FOR ITERATED FUNCTION SYSTEMS

On X × X we define the transition sub-probability function

Q_{x,y}(A × B) = ∫_0^T min{p(x, t), p(y, t)} δ_{(S(x,t), S(y,t))}(A × B) dt for A, B ∈ B_X.   (10)

It is easy to check that

Q_{x,y}(A × X) ≤ ∫_0^T p(x, t) δ_{S(x,t)}(A) dt = ∫_0^T 1_A(S(x, t)) p(x, t) dt = P_x(A),

and analogously Q_{x,y}(X × B) ≤ P_y(B). Let Q_b denote the measure

Q_b(A × B) = ∫_{X×X} Q_{x,y}(A × B) b(dx, dy) for A, B ∈ B_X.   (11)

Note that for every A, B ∈ B_X we obtain

Q_b^{n+1}(A × B) = ∫_{X×X} Q_{x,y}^{n+1}(A × B) b(dx, dy)
= ∫_{X×X} ∫_{X×X} Q_{z_1,z_2}(A × B) Q_{x,y}^n(dz_1, dz_2) b(dx, dy)
= ∫_{X×X} Q_{z_1,z_2}(A × B) Q_b^n(dz_1, dz_2) = Q_{Q_b^n}(A × B).

Again, we are able to construct measures on products, as well as the measure Q_b^∞ on (X × X)^∞. Now we check that

φ(Q_b) ≤ a φ(b).   (12)

Indeed,

φ(Q_b) = ∫_{X×X} ∫_{X×X} ̺(x, y) Q_{u,v}(dx, dy) b(du, dv)
= ∫_{X×X} ∫_0^T ( ∫_{X×X} ̺(x, y) min{p(u, t), p(v, t)} δ_{(S(u,t), S(v,t))}(dx, dy) ) dt b(du, dv)
≤ ∫_{X×X} ∫_0^T ̺(S(u, t), S(v, t)) p(u, t) dt b(du, dv)
≤ ∫_{X×X} ∫_0^T λ(t) ̺(u, v) p(u, t) dt b(du, dv)
≤ a ∫_{X×X} ̺(u, v) b(du, dv) = a φ(b).

We can find a measure R_{x,y} such that the sum of Q_{x,y} and R_{x,y} gives a new coupling measure C_{x,y}, i.e. C_{x,y}(A × X) = P_x(A) and C_{x,y}(X × B) = P_y(B) for A, B ∈ B_X.

Lemma 4. There exists a family {R_{x,y} : x, y ∈ X} of measures on X × X such that we can define C_{x,y} = Q_{x,y} + R_{x,y} for x, y ∈ X and, moreover,
(i) the mapping (x, y) ↦ R_{x,y}(A × B) is measurable for every A, B ∈ B_X;
(ii) the measures R_{x,y} are non-negative for x, y ∈ X;
(iii) the measures C_{x,y} are probability measures for every x, y ∈ X, and so {C_{x,y} : x, y ∈ X} is a transition probability function on X × X;
(iv) for every A, B ∈ B_X and x, y ∈ X we get C_{x,y}(A × X) = P_x(A) and C_{x,y}(X × B) = P_y(B).

Proof. Fix A, B ∈ B_X. Let

R_{x,y}(A × B) = (1 − Q_{x,y}(X × X))^{-1} (P_x(A) − Q_{x,y}(A × X)) (P_y(B) − Q_{x,y}(X × B)) if Q_{x,y}(X × X) < 1,

and R_{x,y}(A × B) = 0 if Q_{x,y}(X × X) = 1. Obviously, the formula may be extended to a measure. The mapping has all the desired properties (i)-(iv).

Lemma 4 shows that we can construct the coupling {C_{x,y} : x, y ∈ X} for {P_x : x ∈ X} such that Q_{x,y} ≤ C_{x,y}, where the measures R_{x,y} are non-negative. By (6) and (7) we obtain the family of probability measures {C_{x,y}^∞ : x, y ∈ X} on (X × X)^∞ with marginals P_x^∞ and P_y^∞. This construction appears in Hairer [4].

Fix (x_0, y_0) ∈ X × X. The transition probability function {C_{x,y} : x, y ∈ X} defines a Markov chain Φ on X × X with starting point (x_0, y_0), while the transition probability function {Ĉ_{x,y,θ} : x, y ∈ X, θ ∈ {0, 1}} defines a Markov chain Φ̂ on the augmented space X × X × {0, 1} with initial distribution Ĉ_{x_0,y_0} = δ_{(x_0, y_0, 1)}. If Φ̂_n = (x, y, i), where x, y ∈ X and i ∈ {0, 1}, then

Prob(Φ̂_{n+1} ∈ A × B × {1} | Φ̂_n = (x, y, i), i ∈ {0, 1}) = Q_{x,y}(A × B),
Prob(Φ̂_{n+1} ∈ A × B × {0} | Φ̂_n = (x, y, i), i ∈ {0, 1}) = R_{x,y}(A × B),

where A, B ∈ B_X. Once again, we refer to (6) and (7) to obtain the measure Ĉ_{x_0,y_0}^∞ on (X × X × {0, 1})^∞ which is associated with the Markov chain Φ̂.

From now on, we assume that the processes Φ and Φ̂, taking values in X × X and X × X × {0, 1} respectively, are defined on (Ω, F, P). The expected value with respect to the measure C_{x_0,y_0}^∞ or Ĉ_{x_0,y_0}^∞ is denoted by E_{x_0,y_0}.

IX. AUXILIARY THEOREMS
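The estimates developed below can be previewed on the toy system used earlier (assumed: S(x, t) = ½e^t(x + 1), p(x, t) = 8t on [0, 0.5], not from the paper). Since this p does not depend on x, the minimum in (10) equals p itself, so Q_{x,y} has full mass, R_{x,y} = 0, and the coupled chain always moves with a single common mitotic time; the distance between the two coordinates then contracts geometrically, in line with (12).

```python
import numpy as np

# Coupled chain driven by Q_{x,y} from (10) for the toy system (assumed:
# S(x, t) = 0.5*exp(t)*(x+1), p(x, t) = 8t on [0, 0.5], independent of x).
# Since p does not depend on x, min{p(x,t), p(y,t)} = p(t): the coupling
# always draws one common t and moves both coordinates with it.
T = 0.5
rng = np.random.default_rng(3)

def coupled_step(rng, x, y):
    t = 0.5 * np.sqrt(rng.random())   # common mitotic time, t ~ p(t) = 8t
    lam = 0.5 * np.exp(t)             # Lipschitz factor lambda(t) of S(., t)
    return lam * (x + 1.0), lam * (y + 1.0)

x, y = 0.0, 9.0
d0 = abs(x - y)
for _ in range(30):
    x, y = coupled_step(rng, x, y)
d30 = abs(x - y)
# Each step multiplies the distance by lambda(t) <= 0.5 * exp(T), so
# d30 <= d0 * (0.5 * exp(T))**30, mirroring phi(Q b) <= a phi(b) in (12).
```

In the general case the minimum in (10) leaves some mass uncovered; that remainder is exactly what R_{x,y} of Lemma 4 redistributes, and the coupling time of Definition 8 records when the chain starts moving by Q for good.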
Fix ε ∈ (0, 1 − a). Set

K_ε = {(x, y) ∈ X × X : V̄(x, y) < 2c ε^{-1}},

where c is defined in Section VII. Let d : (X × X)^∞ → N denote the time of the first visit in K_ε, i.e.

d((x_n, y_n)_{n∈N}) = inf{n ≥ 0 : (x_n, y_n) ∈ K_ε}.

Theorem 5. For every γ ∈ (0, 1) there exist positive constants C_1, C_2 such that

E_{x_0,y_0}((a + ε)^{-γd}) ≤ C_1 V̄(x_0, y_0) + C_2.

Proof. Fix (x_0, y_0) ∈ X × X. Let Φ = (X_n, Y_n)_{n∈N} be the Markov chain with starting point (x_0, y_0) and transition probability function {C_{x,y} : x, y ∈ X}. Let F_n ⊂ F, n ∈ N, be the natural filtration associated with Φ. We define

A_n = {ω ∈ Ω : Φ_i = (X_i(ω), Y_i(ω)) ∉ K_ε for i = 1, ..., n}, n ∈ N.

Obviously A_{n+1} ⊂ A_n and A_n ∈ F_n for n ∈ N. The following inequalities are P-a.s. satisfied:

1_{A_n} E_{x_0,y_0}(V̄(X_{n+1}, Y_{n+1}) | F_n) ≤ 1_{A_n} (a V̄(X_n, Y_n) + 2c) ≤ 1_{A_n} (a + ε) V̄(X_n, Y_n).

The first inequality is a consequence of (8); the second follows directly from the definitions of A_n and K_ε. Accordingly, we obtain

∫_{A_n} V̄(X_n, Y_n) dP ≤ ∫_{A_{n−1}} V̄(X_n, Y_n) dP = ∫_{A_{n−1}} E(V̄(X_n, Y_n) | F_{n−1}) dP
≤ ∫_{A_{n−1}} (a V̄(X_{n−1}, Y_{n−1}) + 2c) dP ≤ (a + ε) ∫_{A_{n−1}} V̄(X_{n−1}, Y_{n−1}) dP.

Applying this estimate finitely many times, we obtain

∫_{A_n} V̄(X_n, Y_n) dP ≤ (a + ε)^{n−1} ∫_{A_1} V̄(X_1, Y_1) dP ≤ (a + ε)^{n−1} (a V̄(X_0, Y_0) + 2c).

Note that

P(A_n) ≤ ∫_{A_n} ε (2c)^{-1} V̄(X_n, Y_n) dP ≤ ε (2c(a + ε))^{-1} (a + ε)^n (a V̄(X_0, Y_0) + 2c).

Set ĉ := ε (2c(a + ε))^{-1} (a V̄(X_0, Y_0) + 2c). Then P(A_n) ≤ (a + ε)^n ĉ. Fix γ ∈ (0, 1). Since d takes natural values n ∈ N, we obtain

Σ_{n=1}^∞ (a + ε)^{-γn} P(A_n) ≤ Σ_{n=1}^∞ (a + ε)^{-γn} (a + ε)^n ĉ = Σ_{n=1}^∞ (a + ε)^{(1−γ)n} ĉ,

which implies the convergence of the series. The proof is completed by the definition of ĉ with properly chosen C_1, C_2.

For every r > 0 we define the set C_r = {(x, y) ∈ X × X : ̺(x, y) < r}.

Lemma 6.
Fix ã ∈ (a, 1). Let C_r be the set defined above and suppose that supp b ⊂ C_r. There exists γ̄ > 0, depending only on a, ã and the constants δ and M defined in Section IV, such that Q_b(C_{ãr}) ≥ γ̄ ‖b‖.

Proof. Directly from (11) and (10) we obtain

Q_b(C_{ãr}) = ∫_{X×X} ∫_0^T min{p(x, t), p(y, t)} δ_{(S(x,t), S(y,t))}(C_{ãr}) dt b(dx, dy)
= ∫_{X×X} ( ∫_0^T min{p(x, t), p(y, t)} 1_{C_{ãr}}(S(x, t), S(y, t)) dt ) b(dx, dy).

Note that 1_{C_{ãr}}(S(x, t), S(y, t)) = 1 if and only if t ∈ T_0, where

T_0 := {t ∈ (0, T) : ̺(S(x, t), S(y, t)) < ãr}.

Set T_0′ := (0, T) \ T_0. Hence

Q_b(C_{ãr}) = ∫_{X×X} ( ∫_{T_0} min{p(x, t), p(y, t)} dt ) b(dx, dy).

Note that

∫_{T_0′} min{p(x, t), p(y, t)} ̺(S(x, t), S(y, t)) dt ≤ ∫_{T_0′} p(x, t) λ(t) ̺(x, y) dt ≤ a ̺(x, y),

so for (x, y) ∈ C_r

∫_{T_0′} p(x, t) ̺(S(x, t), S(y, t)) dt ≤ ar.

However, ̺(S(x, t), S(y, t)) ≥ ãr for t ∈ T_0′, whence

ãr ∫_{T_0′} p(x, t) dt ≤ ∫_{T_0′} p(x, t) ̺(S(x, t), S(y, t)) dt ≤ ar.

Therefore ∫_{T_0′} p(x, t) dt ≤ a/ã < 1 and, since p(x, ·) is a density, ∫_{T_0} p(x, t) dt ≥ 1 − a/ã. As p ≤ M by assumption (V), the set T_0 satisfies |T_0| ≥ M^{-1}(1 − a/ã), and since min{p(x, t), p(y, t)} ≥ δ on (0, T), we conclude that

Q_b(C_{ãr}) ≥ δ M^{-1}(1 − a/ã) ‖b‖,

which proves the claim with γ̄ := δ (1 − a/ã) M^{-1} > 0.

Theorem 7. For every ε ∈ (0, 1 − a) there exists n_0 ∈ N such that

‖Q_{x,y}^∞‖ ≥ (1/2) γ̄^{n_0} for (x, y) ∈ K_ε,

where γ̄ > 0 is given in Lemma 6.

Proof. Note that for every (x, y) ∈ X × X

∫_0^T (min{p(x, t), p(y, t)} + |p(x, t) − p(y, t)| − p(x, t)) dt ≥ 0,

and therefore

‖Q_{x,y}‖ + ∫_0^T |p(x, t) − p(y, t)| dt ≥ 1.

From assumption (IV) there is c̄ > 0 such that

‖Q_{x,y}‖ ≥ 1 − ∫_0^T |p(x, t) − p(y, t)| dt ≥ 1 − c̄ ̺(x, y).

For every b ∈ M_fin(X × X) we get

‖Q_b‖ = ∫_{X×X} ‖Q_{x,y}‖ b(dx, dy) ≥ ∫_{X×X} b(dx, dy) − c̄ ∫_{X×X} ̺(x, y) b(dx, dy) = ‖b‖ − c̄ φ(b).

Property (12) implies that

‖Q_b^{n+1}‖ ≥ ‖b‖ − c̄ (Σ_{k=0}^n a^k) φ(b) ≥ ‖b‖ − (1 − a)^{-1} c̄ φ(b), n ∈ N.

If supp b ⊂ C_r, then φ(b) ≤ ∫_{C_r} ̺(x, y) b(dx, dy) ≤ r ‖b‖. Set r_0 = (2c̄)^{-1}(1 − a). For supp b ⊂ C_{r_0} we obtain ‖Q_b^∞‖ ≥ (1/2) ‖b‖.

Fix ε ∈ (0, 1 − a). It is clear that K_ε ⊂ C_{2cε^{-1}}. If we define n_0 := min{n ≥ 0 : ã^n · 2cε^{-1} < r_0}, then C_{ã^{n_0}·2cε^{-1}} ⊂ C_{r_0}. Remembering that Q_{x,y}^{n+m} = Q^m_{Q_{x,y}^n} and using the Markov property, we obtain, according to Lemma 6,

‖Q_{x,y}^∞‖ ≥ ‖Q^∞_{Q_{x,y}^{n_0}|_{C_{r_0}}}‖ ≥ (1/2) ‖Q_{x,y}^{n_0}|_{C_{r_0}}‖ ≥ (1/2) Q_{x,y}^{n_0}(C_{ã^{n_0}·2cε^{-1}}) ≥ (1/2) γ̄^{n_0}

for (x, y) ∈ K_ε, where the last inequality follows from an n_0-fold application of Lemma 6 (each application shrinks the radius by the factor ã and retains at least the fraction γ̄ of the mass). This finishes the proof.

Definition 8. The coupling time τ : (X × X × {0, 1})^∞ → N is defined as follows:

τ((x_n, y_n, θ_n)_{n∈N}) = inf{n ≥ 0 : θ_k = 1 for all k ≥ n}.

Theorem 9. There exist q̃ ∈ (0, 1) and C > 0 such that

E_{x,y}(q̃^{-τ}) ≤ C (1 + V̄(x, y)) for (x, y) ∈ X × X.

Proof. Fix ε ∈ (0, 1 − a) and (x, y) ∈ X × X. To simplify notation, we write β = (a + ε)^{-γ} for some fixed γ ∈ (0, 1). Let d be the random moment of the first visit in K_ε. Set d_1 = d and d_{n+1} = d_n + d ∘ T_{d_n} for n ≥ 1, where the T_n are the shift operators on (X × X × {0, 1})^∞, i.e. T_n((x_k, y_k, θ_k)_{k∈N}) = (x_{k+n}, y_{k+n}, θ_{k+n})_{k∈N}. Theorem 5 implies that every d_n is C_{x,y}^∞-a.s. finite.
The strong Markov property shows that

E_{x,y}(β^{d ∘ T_{d_n}} | F_{d_n}) = E_{(X_{d_n}, Y_{d_n})}(β^d) for n ∈ N,

where F_{d_n} denotes the σ-algebra generated by the chain up to the random time d_n and Φ = (X_n, Y_n)_{n∈N} is the Markov chain with transition probability function {C_{x,y} : x, y ∈ X}. By Theorem 5 and the definition of K_ε we obtain

E_{x,y}(β^{d_{n+1}}) = E_{x,y}(β^{d_n} E_{(X_{d_n}, Y_{d_n})}(β^d)) ≤ E_{x,y}(β^{d_n}) (C_1 · 2cε^{-1} + C_2).

Set η = C_1 · 2cε^{-1} + C_2. Consequently,

E_{x,y}(β^{d_{n+1}}) ≤ η^n E_{x,y}(β^{d_1}) ≤ η^n (C_1 V̄(x, y) + C_2).   (13)

We define

τ̂((x_n, y_n, θ_n)_{n∈N}) = inf{n ≥ 0 : (x_n, y_n) ∈ K_ε and θ_k = 1 for all k ≥ n}

and σ = inf{n ≥ 0 : τ̂ = d_n}. By Theorem 7 there is n_0 ∈ N such that

Ĉ_{x,y}^∞(σ > n) ≤ (1 − (1/2) γ̄^{n_0})^n for n ∈ N.   (14)

Let p > 1. By the Hölder inequality, (13) and (14) we obtain

E_{x,y}(β^{τ̂/p}) ≤ Σ_{k=1}^∞ E_{x,y}(β^{d_k/p} 1_{σ=k}) ≤ Σ_{k=1}^∞ (E_{x,y}(β^{d_k}))^{1/p} (Ĉ_{x,y}^∞(σ = k))^{1−1/p}
≤ (C_1 V̄(x, y) + C_2)^{1/p} η^{-1/p} Σ_{k=1}^∞ η^{k/p} (1 − (1/2) γ̄^{n_0})^{(k−1)(1−1/p)}
= (C_1 V̄(x, y) + C_2)^{1/p} η^{-1/p} (1 − (1/2) γ̄^{n_0})^{-(1−1/p)} Σ_{k=1}^∞ (η^{1/p} (1 − (1/2) γ̄^{n_0})^{1−1/p})^k.

For p sufficiently large the series converges, and with q̃ = β^{-1/p} we get

E_{x,y}(q̃^{-τ̂}) = E_{x,y}(β^{τ̂/p}) ≤ C (1 + V̄(x, y)) for some C > 0.

Since τ ≤ τ̂, this finishes the proof.

Theorem 10. There exist q ∈ (0, 1) and C > 0 such that

‖P_x^n − P_y^n‖_L ≤ q^n C (1 + V̄(x, y)) for x, y ∈ X and n ∈ N.

Proof. For n ∈ N we define the sets

A_n = {t ∈ (X × X × {0, 1})^∞ : τ(t) ≤ n},
B_n = {t ∈ (X × X × {0, 1})^∞ : τ(t) > n}.

Note that A_n ∩ B_n = ∅ and A_n ∪ B_n = (X × X × {0, 1})^∞, so for n ∈ N we have

Ĉ_{x,y}^∞ = Ĉ_{x,y}^∞|_{A_n} + Ĉ_{x,y}^∞|_{B_n}.
Hence, writing m = ⌊n/2⌋ and using this decomposition with m in place of n,

‖P_x^n − P_y^n‖_L = sup_{f∈L} |∫_X f(z) (P_x^n − P_y^n)(dz)| = sup_{f∈L} |∫_{X×X} (f(z_1) − f(z_2)) (Π_X^* Π_n^* Ĉ_{x,y}^∞)(dz_1, dz_2)|,

where Π_n : (X × X × {0, 1})^∞ → X × X × {0, 1} is the projection onto the n-th component and Π_X : X × X × {0, 1} → X × X is the projection onto X × X. Now, recalling the definition of the set L (see (1)), we obtain

‖P_x^n − P_y^n‖_L = sup_{f∈L} | ∫_{X×X} (f(z_1) − f(z_2)) (Π_X^* Π_n^* Ĉ_{x,y}^∞|_{A_m})(dz_1, dz_2) + ∫_{X×X} (f(z_1) − f(z_2)) (Π_X^* Π_n^* Ĉ_{x,y}^∞|_{B_m})(dz_1, dz_2) |
≤ sup_{f∈L} |∫_{X×X} (f(z_1) − f(z_2)) (Π_X^* Π_n^* Ĉ_{x,y}^∞|_{A_m})(dz_1, dz_2)| + 2 Ĉ_{x,y}^∞(B_m)
≤ ∫_{X×X} ̺(z_1, z_2) (Π_X^* Π_n^* Ĉ_{x,y}^∞|_{A_m})(dz_1, dz_2) + 2 Ĉ_{x,y}^∞(B_m).

On A_m the chain moves according to Q from time m onwards (θ_k = 1 for k ≥ m), so an iterative application of (12) yields

∫_{X×X} ̺(z_1, z_2) (Π_X^* Π_n^* Ĉ_{x,y}^∞|_{A_m})(dz_1, dz_2) = φ(Π_X^* Π_n^* (Ĉ_{x,y}^∞|_{A_m})) ≤ a^{n−m} φ(Π_X^* Π_m^* (Ĉ_{x,y}^∞|_{A_m})).

It then follows from (8) and (9) that

φ(Π_X^* Π_m^* (Ĉ_{x,y}^∞|_{A_m})) ≤ a^m V̄(x, y) + 2c(1 − a)^{-1}.

We obtain the coupling inequality

‖P_x^n − P_y^n‖_L ≤ a^{n−m} (a^m V̄(x, y) + 2c(1 − a)^{-1}) + 2 Ĉ_{x,y}^∞(B_m).

It follows from Theorem 9 and the Chebyshev inequality that

Ĉ_{x,y}^∞(B_m) = Ĉ_{x,y}^∞({τ > m}) = Ĉ_{x,y}^∞({q̃^{-τ} ≥ q̃^{-m}}) ≤ q̃^m E_{x,y}(q̃^{-τ}) ≤ q̃^m C′ (1 + V̄(x, y))

for some q̃ ∈ (0, 1) and C′ > 0. Finally,

‖P_x^n − P_y^n‖_L ≤ a^{n−m} C_3 (1 + V̄(x, y)) + 2 q̃^m C′ (1 + V̄(x, y)),

where C_3 = max{1, 2c(1 − a)^{-1}}. Since n − m ≥ n/2 and m ≥ (n − 1)/2, setting q := max{a^{1/2}, q̃^{1/2}} and C := C_3 + 2C′ q̃^{-1/2} gives our claim.

X. PROOF OF THE MAIN THEOREM

Theorem 10 is essential to the following proof.

Proof of Theorem 2. Theorem 10 implies that

‖P_x^n − P_y^n‖_L ≤ q^n C (1 + V̄(x, y)) for x, y ∈ X and n ∈ N,

where q and C are the appropriate constants.
Obviously,

‖P^n µ − µ_*‖_L = ‖P^n µ − P^n µ_*‖_L = sup_{f∈L} |∫_X f(z) P^n µ(dz) − ∫_X f(z) P^n µ_*(dz)|.

Moreover,

∫_X f(z) P^n µ(dz) − ∫_X f(z) P^n µ_*(dz) = ∫_X ∫_X f(z) P_x^n(dz) µ(dx) − ∫_X ∫_X f(z) P_y^n(dz) µ_*(dy)
= ∫_X ∫_X ( ∫_X f(z) P_x^n(dz) − ∫_X f(z) P_y^n(dz) ) µ_*(dy) µ(dx)
≤ ∫_X ∫_X ‖P_x^n − P_y^n‖_L µ_*(dy) µ(dx) ≤ q^n C̄,

where C̄ := ∫_X ∫_X C (1 + V̄(x, y)) µ_*(dy) µ(dx). Since C̄ depends only on µ, the proof is complete.

[1] P. Billingsley, Convergence of Probability Measures, John Wiley & Sons, Inc., New York (1968).
[2] W. Bołt, A. A. Majewski & T. Szarek, An invariance principle for the law of the iterated logarithm for some Markov chains, Studia Math. 212 (2012), 41–53.
[3] R. Fortet & B. Mourier, Convergence de la répartition empirique vers la répartition théorique, Ann. Sci. École Norm. Sup. 70 (1953), 267–285.
[4] M. Hairer, Exponential mixing properties of stochastic PDEs through asymptotic coupling, Probab. Theory Related Fields 124 (2002), 345–380.
[5] M. Hairer & J. C. Mattingly, Spectral gaps in Wasserstein distances and the 2D stochastic Navier–Stokes equations, Ann. Probab. 36, no. 6 (2008), 2050–2091.
[6] P. R. Halmos, Measure Theory, Graduate Texts in Mathematics 18, Springer-Verlag (1974), 117–136.
[7] R. Kapica & M. Ślęczka, Random iteration with place dependent probabilities, arXiv:1107.0707 [math.PR] (2012).
[8] T. Komorowski & A. Walczuk, Central limit theorem for Markov processes with spectral gap in the Wasserstein metric, Stoch. Proc. Appl. 122 (2012), 2155–2184.
[9] A. Lasota & M. C. Mackey, Cell division and the stability of cellular populations, J. Math. Biol. 38 (1999), 241–261.
[10] A. Lasota & J. A. Yorke, Lower bound technique for Markov operators and iterated function systems, Random Comput. Dynam. 2(1) (1994), 41–77.
[11] A. Murray & T. Hunt, The Cell Cycle, Oxford University Press (1993).
[12] S. P. Meyn & R. L. Tweedie, Markov Chains and Stochastic Stability, Springer, London (1993).
[13] S. T. Rachev, Probability Metrics and the Stability of Stochastic Models, John Wiley, New York (1991).
[14] T. Szarek, Invariant measures for nonexpansive Markov operators on Polish spaces, Dissertationes Math. 415 (2003), 1–62.
[15] M. Ślęczka, The rate of convergence for iterated function systems, Studia Math. 205, no. 3 (2011), 201–214.
[16] J. J. Tyson & K. B. Hannsgen, Cell growth and division: a deterministic/probabilistic model of the cell cycle, J. Math. Biology 26 (1988), 465–475.
[17] C. Villani,