Long time behavior of diffusions with Markov switching
Jean-Baptiste Bardet, Hélène Guérin, Florent Malrieu
Preprint – October 15, 2018
Abstract
Let $Y$ be an Ornstein-Uhlenbeck diffusion governed by an ergodic finite state Markov process $X$: $dY_t = -\lambda(X_t)Y_t\,dt + \sigma(X_t)\,dB_t$, $Y_0$ given. Under an ergodicity condition, we get quantitative estimates for the long time behavior of $Y$. We also establish a trichotomy for the tail of the stationary distribution of $Y$: it can be heavy (only some moments are finite), exponential-like (only some exponential moments are finite) or Gaussian-like (its Laplace transform is bounded below and above by Gaussian ones). The critical moments are characterized by the parameters of the model.

AMS Classification 2000:
Key words:
Ornstein-Uhlenbeck diffusion, Markov switching, jump process, random difference equation, light tail, heavy tail, Laplace transform, convergence to equilibrium.
The aim of this paper is to draw a complete picture of the ergodicity of Ornstein-Uhlenbeck diffusions with Markov switching (characterization of the tails of the invariant measure and quantitative convergence to equilibrium). In particular we make more precise the results of [7, 4]. The so-called diffusion with Markov switching $Y = (Y_t)_{t\geq 0}$ is defined as follows.

The switching process $X = (X_t)_{t\geq 0}$ is a Markov process on the finite state space $E = \{1,\dots,d\}$ (with $d \geq 2$) with infinitesimal generator $A = (A(x,\tilde x))_{x,\tilde x\in E}$. Let us denote by $a(x)$ the jump rate at state $x\in E$ and by $P = (P(x,\tilde x))_{x,\tilde x\in E}$ the transition matrix of the embedded chain. One has, for $x\neq\tilde x$ in $E$,
\[
a(x) = -A(x,x) \quad\text{and}\quad P(x,\tilde x) = -\frac{A(x,\tilde x)}{A(x,x)}.
\]
We assume that $P$ is irreducible recurrent. The process $X$ is ergodic with a unique invariant probability measure denoted by $\mu$. See [10] for details. Let $\mathcal F^X_t = \sigma(X_u,\ u\leq t)$. Moreover, let $\mathbb E_x$ denote the expectation with respect to the law $\mathbb P_x$ of $X$ knowing that $X_0 = x$.

Let $B = (B_t)_{t\geq 0}$ be a standard Brownian motion on $\mathbb R$ and $Y_0$ a real-valued random variable such that $B$, $Y_0$ and $X$ are independent. Conditionally on $X$, the process $Y = (Y_t)_{t\geq 0}$ is the real-valued diffusion process defined by:
\[
Y_t = Y_0 - \int_0^t \lambda(X_u) Y_u\,du + \int_0^t \sigma(X_u)\,dB_u, \tag{1}
\]
where $\lambda$ and $\sigma$ are two functions from $E$ to $\mathbb R$ and $(0,\infty)$ respectively. Of course, if $\lambda$ and $\sigma$ are constant, $Y$ is just an Ornstein-Uhlenbeck process with attractive ($\lambda>0$), null ($\lambda=0$) or repulsive ($\lambda<0$) drift coefficient. Moreover, one has the explicit expression
\[
Y_t = Y_0 \exp\Bigl(-\int_0^t \lambda(X_u)\,du\Bigr) + \int_0^t \exp\Bigl(-\int_u^t \lambda(X_v)\,dv\Bigr)\sigma(X_u)\,dB_u. \tag{2}
\]
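For readers who want to experiment with the model, here is a minimal simulation sketch; it is not taken from the paper, and the generator, rates and horizon below are illustrative choices (they satisfy the ergodicity condition (3) recalled below). Between two jumps of $X$ the coefficients are constant, so $Y$ can be updated exactly with the usual Ornstein-Uhlenbeck transition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): a 2-state switching process.
A = np.array([[-1.0, 1.0],
              [2.0, -2.0]])          # generator of X on E = {0, 1}
lam = np.array([1.5, -0.5])          # lambda(x): attractive in state 0, repulsive in state 1
sig = np.array([1.0, 2.0])           # sigma(x)

def simulate(t_max, y0=0.0, x0=0):
    """Simulate (X_t, Y_t) up to time t_max; Y is updated exactly between jumps."""
    t, x, y = 0.0, x0, y0
    path = [(t, x, y)]
    while t < t_max:
        rate = -A[x, x]
        dt = min(rng.exponential(1.0 / rate), t_max - t)
        l, s = lam[x], sig[x]
        # Exact OU transition with constant coefficients on [t, t+dt]:
        # mean y*exp(-l*dt), variance s^2*(1-exp(-2*l*dt))/(2*l)  (s^2*dt when l == 0).
        var = s**2 * dt if l == 0 else s**2 * (1 - np.exp(-2 * l * dt)) / (2 * l)
        y = y * np.exp(-l * dt) + np.sqrt(var) * rng.normal()
        t += dt
        if t < t_max:                 # jump of X according to the embedded chain
            probs = A[x].copy(); probs[x] = 0.0; probs /= rate
            x = rng.choice(len(probs), p=probs)
        path.append((t, x, y))
    return path

print(simulate(10.0)[-1])
```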
Remark 1.1. In other words, the full process $(X,Y)$ is the Markov process on $E\times\mathbb R$ associated to the infinitesimal generator $\mathcal A$ defined by:
\[
\mathcal A f(x,y) = \sum_{\tilde x\in E} A(x,\tilde x)\bigl(f(\tilde x,y)-f(x,y)\bigr) + \frac{\sigma(x)^2}{2}\,\partial^2_y f(x,y) - \lambda(x)\,y\,\partial_y f(x,y).
\]

Previous works investigated the ergodicity of $Y$ and some integrability properties for the invariant measure. For example, in [2], the multidimensional case is addressed together with the case of diffusion coefficients depending on $Y$. Stability results and sufficient conditions for the existence of moments are established under Lyapunov-type conditions. In [7], it is proved that the Markov switching diffusion $Y$ is ergodic if and only if
\[
\sum_{x\in E} \lambda(x)\mu(x) > 0, \tag{3}
\]
that is, if the process is attractive "in average". Let us denote by $\nu$ the invariant probability measure of $Y$. It is also shown in [7] that $\nu$ admits a moment of order $p$ if, for any $x\in E$, $p\lambda(x)+a(x)$ is positive and the spectral radius of the matrix
\[
M_p = \Bigl(\frac{a(x)}{a(x)+p\lambda(x)}\,P(x,\tilde x)\Bigr)_{x,\tilde x\in E} \tag{4}
\]
is smaller than 1. In the sequel $\rho(M)$ stands for the spectral radius of a matrix $M$.

In [4], the result is more precise: a dichotomy is exhibited between heavy and light tails for $\nu$. Let us define
\[
\underline\lambda = \min_{x\in E}\lambda(x) \quad\text{and}\quad \overline\lambda = \max_{x\in E}\lambda(x). \tag{5}
\]

Theorem 1.2 (de Saporta-Yao [4]). Under Assumption (3), the following dichotomy holds:
1. if $\underline\lambda < 0$, then there exists $C>0$ such that $t^\kappa\,\nu((t,+\infty)) \to C$ as $t\to+\infty$, where $\kappa$ is the unique $p\in(0,\min\{-a(x)/\lambda(x),\ \lambda(x)<0\})$ such that the spectral radius of $M_p$ is equal to 1;
2. if $\underline\lambda > 0$, then $\nu$ has moments of all order.

Remark 1.3. Note that the constant $\kappa$ does not depend on the parameters $(\sigma(x))_{x\in E}$, and that Point 1 of the previous theorem implies that, for $\underline\lambda<0$, the $p$th moment of $\nu$ is finite if and only if $p<\kappa$.

The main idea of the proofs in [7] and [4] is to study the discrete time Markov chain $(X_{\delta n},Y_{\delta n})_{n\geq 0}$ for any $\delta>0$ with renewal theory and then to let $\delta$ go to 0. The main goal of the present paper is to show that there are three (and not only two) different behaviors for the tails of $\nu$.
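To make the criterion of Theorem 1.2 concrete, the following sketch (not from the paper) evaluates $\rho(M_p)$ for the same illustrative two-state parameters as in the previous snippet and locates the critical exponent $\kappa$ as the root of $\rho(M_p)=1$ inside $(0,\min\{-a(x)/\lambda(x)\})$.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative two-state example (not from the paper): same rates as above.
a = np.array([1.0, 2.0])             # jump rates a(x)
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])           # embedded chain
lam = np.array([1.5, -0.5])          # lambda(x), with min(lam) < 0: heavy-tail case

def rho_M(p):
    """Spectral radius of M_p = (a(x)/(a(x)+p*lambda(x)) P(x,y)); needs a(x)+p*lambda(x) > 0."""
    M = (a / (a + p * lam))[:, None] * P
    return np.abs(np.linalg.eigvals(M)).max()

# kappa lies in (0, min{-a(x)/lambda(x) : lambda(x) < 0}) and solves rho(M_p) = 1.
p_max = min(-a[x] / lam[x] for x in range(len(a)) if lam[x] < 0)

# Scan a grid to bracket the crossing of rho(M_p) = 1, then refine with brentq.
grid = np.linspace(1e-3, p_max * (1 - 1e-3), 500)
vals = np.array([rho_M(p) for p in grid])
i = np.argmax(vals > 1.0)            # first grid point with rho > 1
kappa = brentq(lambda p: rho_M(p) - 1.0, grid[i - 1], grid[i])
print("critical moment kappa ~", kappa)   # about 3.33 for these parameters
```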
Let us gather below several useful notations.

Notations 1.4. Let us define for the diffusion coefficients
\[
\underline\sigma = \min_{x\in E}\sigma(x) \quad\text{and}\quad \overline\sigma = \max_{x\in E}\sigma(x). \tag{6}
\]
We denote by $A_p$ the matrix $A - p\Lambda$, where $\Lambda$ is the diagonal matrix with diagonal $(\lambda(1),\dots,\lambda(d))$, and associate to $A_p$ the quantity
\[
\eta_p := -\max_{\gamma\in\mathrm{Spec}(A_p)} \mathrm{Re}\,\gamma. \tag{7}
\]
When $\underline\lambda \geq 0$, the set $E$ is the union of
\[
M = \{x\in E,\ \lambda(x)>0\} \quad\text{and}\quad N = \{x\in E,\ \lambda(x)=0\}. \tag{8}
\]
Let us then define
\[
\beta(x) = \frac{\sigma(x)^2}{2a(x)} \quad\text{and}\quad \overline\beta = \max_{x\in N}\beta(x), \tag{9}
\]
and, for any $v$ such that $v^2 < \overline\beta^{-1}$, the matrix
\[
P^{(N)}_v = \Bigl(\frac{1}{1-\beta(x)v^2}\,P(x,x')\Bigr)_{x,x'\in N}. \tag{10}
\]
We are now able to state our main result.
Theorem 1.5. Let us define
\[
\kappa = \sup\{p>0,\ \eta_p>0\} \in (0,+\infty].
\]
Then $p\mapsto\eta_p$ is continuous, positive on the set $(0,\kappa)$ and negative on $(\kappa,+\infty)$. Under Assumption (3), the following trichotomy holds:
1. if $\underline\lambda < 0$ then $0 < \kappa \leq \min\{-a(x)/\lambda(x),\ \lambda(x)<0\}$, and the $p$th moment of $\nu$ is finite if and only if $p<\kappa$;
2. if $\underline\lambda = 0$, then $\kappa$ is infinite and the domain of the Laplace transform of $\nu$ is $(-v_c,v_c)$ where
\[
v_c = \sup\bigl\{v>0,\ \rho\bigl(P^{(N)}_v\bigr)<1\bigr\}; \tag{11}
\]
3. if $\underline\lambda > 0$, then $\kappa$ is infinite and $\nu$ has a Gaussian-like Laplace transform: for any $v\in\mathbb R$,
\[
\exp\Bigl(\frac{\underline\sigma^2 v^2}{4\overline\lambda}\Bigr) \leq \int e^{vy}\,\nu(dy) \leq \exp\Bigl(\frac{\overline\sigma^2 v^2}{4\underline\lambda}\Bigr).
\]
Moreover, its tail looks like the one of the Gaussian law with variance $\overline\alpha/2$ where $\overline\alpha = \max_{x\in E}\sigma(x)^2/\lambda(x)$, since $y\mapsto e^{\delta y^2}$ is $\nu$-integrable if and only if $\delta<1/\overline\alpha$.
Remark 1.6. In the sequel we will respectively refer to Points 1, 2 and 3 as the polynomial, exponential-like and Gaussian-like cases.
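A possible numerical companion to Theorem 1.5, not part of the paper: compute $\eta_p$ as minus the largest real part of the spectrum of $A_p = A - p\Lambda$, read off the case of the trichotomy from $\underline\lambda$, and, in the polynomial case, locate $\kappa$ as the zero of $p\mapsto\eta_p$. With the made-up two-state parameters of the earlier sketches this returns the same $\kappa\approx 3.33$ as the $\rho(M_p)=1$ criterion, in line with the equivalence recalled in Remark 4.3 below.

```python
import numpy as np
from scipy.optimize import brentq

A = np.array([[-1.0, 1.0],
              [2.0, -2.0]])          # illustrative generator (not from the paper)
lam = np.array([1.5, -0.5])

def eta(p):
    """eta_p = - max Re(spec(A - p*Lambda))."""
    return -np.linalg.eigvals(A - p * np.diag(lam)).real.max()

lam_min = lam.min()
if lam_min < 0:
    # Polynomial case: kappa = sup{p > 0 : eta_p > 0} is the unique zero of eta.
    p_lo, p_hi = 1e-6, 1.0
    while eta(p_hi) > 0:
        p_lo, p_hi = p_hi, 2.0 * p_hi
    kappa = brentq(eta, p_lo, p_hi)
    print("polynomial tails, kappa ~", kappa)
elif lam_min == 0:
    print("exponential-like tails (kappa infinite, domain given by rho(P^(N)_v) < 1)")
else:
    print("Gaussian-like tails (kappa infinite)")
```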
The first point of this theorem is a reformulation of the first point of Theorem 1.2 by de Saporta and Yao. We can in particular check that our characterization of $\kappa$ in Theorem 1.5 is equivalent to the one given by de Saporta and Yao in Point 1 of Theorem 1.2 (see Remark 4.3). We provide a direct and simple proof of this result based on Itô's formula and some basic results on finite Markov chains. The proof of Point 2 relies on precise estimates on the Laplace transform of $Y_t$ that can be derived from a discrete time model already studied in [6, 8, 1].

It is straightforward from (2) that, for any initial measure $\pi$ on $E\times\mathbb R$, the Laplace transform $L_t$ of $Y_t$ is
\[
L_t(v) := \mathbb E_\pi\bigl(e^{vY_t}\bigr)
= \mathbb E_\pi\Bigl[\exp\Bigl(vY_0\,e^{-\int_0^t\lambda(X_s)\,ds} + \frac{v^2}{2}\int_0^t \sigma(X_s)^2\,e^{-2\int_s^t\lambda(X_r)\,dr}\,ds\Bigr)\Bigr]. \tag{12}
\]
The estimate of the Laplace transform in the Gaussian-like case (Point 3) is hence easily deduced from this explicit expression. Assuming that $Y_0=0$, we get from (12) that
\[
L_t(v) \leq \mathbb E\Bigl[\exp\Bigl(\frac{v^2}{2}\int_0^t \overline\sigma^2 e^{-2\underline\lambda(t-s)}\,ds\Bigr)\Bigr]
\leq \exp\Bigl(\bigl(1-e^{-2\underline\lambda t}\bigr)\frac{\overline\sigma^2 v^2}{4\underline\lambda}\Bigr),
\]
which gives the upper bound as $t$ goes to infinity. The lower bound follows from a symmetric argument.

The proofs of Point 2 and of the second part of Point 3 are more delicate (and interesting). For the exponential case, we first get the critical exponential moment for the process $Y$ observed at the hitting times of the subset $M$ defined in (8). Then we show that the full process has the same critical exponent.

At the end of the paper we focus on the convergence of the law of $Y_t$ to the invariant measure $\nu$. We get an explicit exponential bound for the Wasserstein distance of order $p$ for any $p<\kappa$. Classically, for $p\geq 1$, let $\mathcal P_p$ be the set of the probability measures on $\mathbb R$ with a finite $p$th moment. Define the Wasserstein distance $W_p$ on $\mathcal P_p$ as follows: for any $\rho$ and $\tilde\rho$ in $\mathcal P_p$,
\[
W_p(\rho,\tilde\rho) = \Bigl(\inf_\pi\Bigl\{\int |y-\tilde y|^p\,\pi(dy,d\tilde y)\Bigr\}\Bigr)^{1/p},
\]
where the infimum is taken among all the probability measures $\pi$ on $\mathbb R^2$ with marginals $\rho$ and $\tilde\rho$. It is well-known that $(\mathcal P_p,W_p)$ is a complete metric space (see [11]).

The strategy is to couple two processes $(X,Y)$ and $(\tilde X,\tilde Y)$ in such a way that the Wasserstein distance between $\mathcal L(Y_t)$ and $\mathcal L(\tilde Y_t)$ goes to zero as $t$ goes to infinity. This requires to couple the initial conditions and the dynamics (of both the Markov chains and the diffusion part). When $X_0$ and $\tilde X_0$ have the same law, the coupling is trivial: we choose $X=\tilde X$ and the same driving Brownian motion.

Theorem 1.7. Let $p<\kappa$. Assume that $X_0$ and $\tilde X_0$ have the same law. Let $Y$ and $\tilde Y$ be solutions of (1) associated to $(X_t)$ and $(\tilde X_t)$ and assume that $Y_0$ and $\tilde Y_0$ have finite moment of order $p$. Then there exists $C(p)$ such that
\[
W_p\bigl(\mathcal L(Y_t),\mathcal L(\tilde Y_t)\bigr)^p \leq C(p)\,e^{-\eta_p t}\,W_p\bigl(\mathcal L(Y_0),\mathcal L(\tilde Y_0)\bigr)^p,
\]
where $\eta_p$ is given by (7).

If $X_0$ and $\tilde X_0$ do not have the same law, one first has to make the Markov chains $X$ and $\tilde X$ stick together and then to use Theorem 1.7. This provides a rather intricate bound which is given for convenience in Section 5.

The paper is organised as follows. In Section 2 we complete the proof for the Gaussian-like case of Theorem 1.5. The exponential-like case is studied in Section 3. Since the critical exponential moment is not explicit in the general case, we also give the explicit computation of the Laplace transform of $\nu$ when $E$ is reduced to $\{1,2\}$.
In Section 4 we establish a uniform bound for the $p$th moment of $(Y_t)_{t\geq 0}$ for any $p<\kappa$ and the first point of Theorem 1.5 as a corollary. We finally provide the proof of Theorem 1.7 and its extension to general initial conditions in Section 5.
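The coupling behind Theorem 1.7 is easy to visualise by simulation. In the sketch below (illustrative parameters, not from the paper) the two copies of $Y$ share the same switching trajectory and the same Brownian motion, so that $|Y_t-\tilde Y_t|^p = |y-\tilde y|^p\exp(-p\int_0^t\lambda(X_u)\,du)$; only $X$ needs to be simulated, and the empirical mean is compared, up to the constant $C(p)$, with the rate $e^{-\eta_p t}$ predicted by (7).

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[-1.0, 1.0], [2.0, -2.0]])    # illustrative generator (not from the paper)
lam = np.array([1.5, -0.5])
p, t_max, n_runs = 1.0, 6.0, 20000

def int_lambda(t_max, x0=0):
    """Simulate X and return the integral of lambda(X_u) over [0, t_max]."""
    t, x, acc = 0.0, x0, 0.0
    while t < t_max:
        dt = min(rng.exponential(1.0 / -A[x, x]), t_max - t)
        acc += lam[x] * dt
        t += dt
        if t < t_max:
            x = 1 - x                        # two states: deterministic switch
    return acc

# With the coupling of Theorem 1.7, |Y_t - Ytilde_t|^p = |y - ytilde|^p * exp(-p * int lambda).
mc = np.mean([np.exp(-p * int_lambda(t_max)) for _ in range(n_runs)])
eta_p = -np.linalg.eigvals(A - p * np.diag(lam)).real.max()
print("Monte Carlo E[exp(-p int lambda)]      :", mc)
print("predicted decay (up to C(p)) exp(-eta_p t):", np.exp(-eta_p * t_max))
```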
This section is dedicated to the proof of the second part of Point 3 of Theorem 1.5.

Proof of Point 3 of Theorem 1.5.
Let us denote by $\alpha(x) = \sigma(x)^2/\lambda(x)$ for $x\in E$ and $\overline\alpha = \max_{x\in E}\alpha(x) < +\infty$. For any $\delta\in(0,1/\overline\alpha)$, Itô's formula ensures that
\[
de^{\delta Y_t^2} = \bigl(-2\lambda(X_t)\delta Y_t^2 + (2\delta^2 Y_t^2+\delta)\sigma(X_t)^2\bigr)e^{\delta Y_t^2}\,dt + dM_t,
\]
where $(M_t)_{t\geq 0}$ is a martingale. For any $x\in E$ and $y\in\mathbb R$,
\[
2\bigl(-\lambda(x)+\delta\sigma(x)^2\bigr)y^2 + \sigma(x)^2 = -2\lambda(x)(1-\delta\alpha(x))y^2 + \alpha(x)\lambda(x)
\leq -2\underline\lambda(1-\delta\overline\alpha)y^2 + \overline\alpha\,\overline\lambda,
\]
since $\delta\overline\alpha<1$. Moreover, for any $a>0$, there exists $b>0$ such that, for any $y\in\mathbb R$,
\[
-2\underline\lambda\delta(1-\overline\alpha\delta)y^2 + \overline\lambda\,\overline\alpha\,\delta \leq -a + b\,e^{-\delta y^2},
\]
thus
\[
\frac{d}{dt}\,\mathbb E\bigl(e^{\delta Y_t^2}\bigr) \leq -a\,\mathbb E\bigl(e^{\delta Y_t^2}\bigr) + b.
\]
As a consequence, $\sup_{t\geq 0}\mathbb E\bigl(e^{\delta Y_t^2}\bigr)$ is finite as soon as $\mathbb E\bigl(e^{\delta Y_0^2}\bigr)$ is finite and $\delta\overline\alpha<1$.

Assume now, without loss of generality, that $\alpha(1)=\overline\alpha$. Choose $(X_0,Y_0)$ with law $\nu$ (the invariant measure of $(X,Y)$). For any $t>0$, we have
\[
\mathbb E\bigl(e^{\delta Y_0^2}\bigr) = \mathbb E\bigl(e^{\delta Y_t^2}\bigr)
\geq \mathbb E\Bigl[\mathbf 1_{\{X_0=1\}}\,\mathbb E_{1,Y_0}\bigl(\mathbf 1_{\{T>t\}} e^{\delta Y_t^2}\bigr)\Bigr],
\]
where $T$ is the first jump time of $X$. On the set $\{T>t\}$, $Y_t \stackrel{\mathcal L}{=} Y_0 e^{-\lambda(1)t} + N_t$ where $N_t$ is a centered Gaussian random variable with variance $\alpha(1)(1-e^{-2\lambda(1)t})/2$, independent of $Y_0$ and $T$. Thus, reminding that $T\sim\mathcal E(a(1))$, we get
\[
\mathbb E_{1,Y_0}\bigl(\mathbf 1_{\{T>t\}} e^{\delta Y_t^2}\bigr) = e^{-a(1)t}\,\mathbb E\bigl(e^{\delta(Y_0 e^{-\lambda(1)t}+N_t)^2}\bigr).
\]
Since $a\mapsto\mathbb E\bigl(e^{\delta(a+N_t)^2}\bigr)$ is even and convex, it reaches its minimum at $a=0$ and
\[
\mathbb E\bigl(e^{\delta(Y_0 e^{-\lambda(1)t}+N_t)^2}\bigr) \geq \mathbb E\bigl(e^{\delta N_t^2}\bigr)
= \frac{1}{\sqrt{1-\delta\alpha(1)(1-e^{-2\lambda(1)t})}}
\]
if $\delta\alpha(1)(1-e^{-2\lambda(1)t})<1$, and $+\infty$ otherwise. As a consequence, if $\delta>1/\alpha(1)$, $\mathbb E\bigl(e^{\delta Y_t^2}\bigr)$ is bounded below by a function of $t$ which is infinite for $t$ large enough. Thus, $\mathbb E\bigl(e^{\delta Y_0^2}\bigr)$ is infinite too.

This section is dedicated to the proof of Point 2 in Theorem 1.5. We assume in the sequel that $\underline\lambda = 0$. If $(X_t)_{t\geq 0}$ is a two-state Markov process then one can use (12) to compute explicitly the Laplace transform of the invariant measure $\nu$. This is a warm-up for the general case, and gives a more explicit formula for the critical exponential moment, whereas it will come from an abstract spectral criterion in the general case.

In this subsection we assume that $E=\{1,2\}$ and that $\underline\lambda = 0$. Let us start with a straightforward computation which suggests that the Laplace transform of the invariant measure of $Y$ is infinite outside a bounded interval.

Remark 3.1. If $T$ is an exponential random variable with parameter $a$ and $B$ is a standard Brownian motion on $\mathbb R$ (with $T$ and $B$ independent), then
\[
\mathbb E\bigl(e^{v\sigma B_T}\bigr) = \int_0^\infty \mathbb E\bigl(e^{v\sigma B_t}\bigr)\,a e^{-at}\,dt
= \int_0^\infty e^{\sigma^2 v^2 t/2}\,a e^{-at}\,dt = \frac{2a}{2a-\sigma^2 v^2}
\]
when $\sigma^2v^2<2a$ (and $+\infty$ otherwise). In other words, the law of $\sigma B_T$ is a (symmetric) Laplace law. When $X$ spends an exponential time in $x\in E$ with $\lambda(x)=0$, $Y$ behaves like $\sigma(x)B$.

Theorem 3.2 (The two-states degenerate case). Assume that $E=\{1,2\}$, $\lambda(1)=\lambda>0$ and $\lambda(2)=0$. Then, for any $v$ such that $v^2<1/\beta(2)$ (see (9) for the definition of $\beta$),
\[
L(v) = \int_{-\infty}^{+\infty} e^{vx}\,\nu(dx)
= \frac{1-\mu(1)\beta(2)v^2}{1-\beta(2)v^2}\,\bigl(1-\beta(2)v^2\bigr)^{-a(1)/(2\lambda)}\exp\Bigl(\frac{\sigma(1)^2v^2}{4\lambda}\Bigr). \tag{13}
\]
If $v^2>1/\beta(2)$, $L(v)$ is infinite.
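Before the proof, a quick numerical sanity check, which is not part of the original argument: simulate $Y$ under stationary switching up to a large time, estimate $\mathbb E(e^{vY_t})$ by Monte Carlo, and compare it with the closed form (13) as reconstructed above. The two-state rates below are made up, and $v$ is chosen well inside the admissible interval so that the estimator has finite variance.

```python
import numpy as np

rng = np.random.default_rng(2)
# Made-up two-state rates with lambda(2) = 0 (states 1, 2 are indexed 0, 1 in the code).
a = np.array([1.0, 2.0]); lam_ = np.array([1.0, 0.0]); sig = np.array([1.0, 1.0])
mu1 = a[1] / (a[0] + a[1])               # mu(1)
beta2 = sig[1]**2 / (2 * a[1])           # beta(2)
v, t_max, n_runs = 0.8, 15.0, 40000

def sample_Y(t_max):
    """One sample of Y_t with X_0 ~ mu, Y_0 = 0 and exact OU updates between jumps."""
    x = 0 if rng.random() < mu1 else 1
    t, y = 0.0, 0.0
    while t < t_max:
        dt = min(rng.exponential(1.0 / a[x]), t_max - t)
        l, s = lam_[x], sig[x]
        var = s**2 * dt if l == 0 else s**2 * (1 - np.exp(-2 * l * dt)) / (2 * l)
        y = y * np.exp(-l * dt) + np.sqrt(var) * rng.normal()
        t += dt
        x = 1 - x
    return y

mc = np.mean([np.exp(v * sample_Y(t_max)) for _ in range(n_runs)])
closed = (np.exp(sig[0]**2 * v**2 / (4 * lam_[0]))
          * (1 - mu1 * beta2 * v**2) / (1 - beta2 * v**2)
          * (1 - beta2 * v**2) ** (-a[0] / (2 * lam_[0])))
print("Monte Carlo:", mc, " closed form (13):", closed)
```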
Proof. Since $E=\{1,2\}$, $X$ is reversible with respect to $\mu$, which is given by $\mu(1)=a(2)/(a(1)+a(2))$. Let us denote by $L_t$ the Laplace transform of $Y_t$ when $Y_0=0$ and $X$ is stationary, i.e. $\mathcal L(X_0)=\mu$. From Equation (12), one has for any $v\in\mathbb R$,
\[
L_t(v) = \mathbb E_\mu\Bigl[\exp\Bigl(\frac{v^2}{2}\int_0^t\sigma(X_s)^2 e^{-2\int_s^t\lambda(X_r)\,dr}\,ds\Bigr)\Bigr]
= \mathbb E_\mu\Bigl[\exp\Bigl(\frac{v^2}{2}\int_0^t\sigma(X_s)^2 e^{-2\int_0^s\lambda(X_r)\,dr}\,ds\Bigr)\Bigr]
\]
since $\mu$ is reversible. By monotone convergence, we get that, for any $v\in\mathbb R$,
\[
L(v) = \mathbb E_\mu\Bigl[\exp\Bigl(\frac{v^2}{2}\int_0^\infty\sigma(X_s)^2 e^{-2\int_0^s\lambda(X_r)\,dr}\,ds\Bigr)\Bigr] \in [1,+\infty],
\]
where $L$ is the Laplace transform of $\nu$. Let us introduce two auxiliary functions: for $x=1,2$,
\[
L_x(v) = \mathbb E_x\Bigl[\exp\Bigl(\frac{v^2}{2}\int_0^\infty\sigma(X_s)^2 e^{-2\int_0^s\lambda(X_r)\,dr}\,ds\Bigr)\Bigr].
\]
It is clear that $L(v) = \mu(1)L_1(v)+\mu(2)L_2(v)$. Moreover, if for any $t\geq 0$, $\mathcal F_t = \sigma(X_s,\ s\leq t)$ and $T$ is the first jump time of $X$, then
\[
L_x(v) = \mathbb E_x\Bigl[\mathbb E_x\Bigl\{\exp\Bigl(\frac{v^2}{2}\int_0^\infty\sigma(X_s)^2 e^{-2\int_0^s\lambda(X_r)\,dr}\,ds\Bigr)\Bigm|\mathcal F_T\Bigr\}\Bigr]
= \mathbb E_x\Bigl[\exp\Bigl(\frac{v^2}{2}\int_0^T\sigma(X_s)^2 e^{-2\int_0^s\lambda(X_r)\,dr}\,ds\Bigr)\,E_{x,T}\Bigr],
\]
where
\[
E_{x,T} = \mathbb E_x\Bigl\{\exp\Bigl(\frac{v^2}{2}\int_T^\infty\sigma(X_s)^2 e^{-2\int_0^s\lambda(X_r)\,dr}\,ds\Bigr)\Bigm|\mathcal F_T\Bigr\}.
\]
For any $s\in[0,T)$, $X_s=x$ and then
\[
\int_0^T\sigma(X_s)^2 e^{-2\int_0^s\lambda(X_r)\,dr}\,ds = \sigma(x)^2\,\frac{1-e^{-2\lambda(x)T}}{2\lambda(x)},
\]
with the convention $(1-e^{-2\cdot 0\cdot T})/(2\cdot 0)=T$. Similarly,
\[
\int_T^\infty\sigma(X_s)^2 e^{-2\int_0^s\lambda(X_r)\,dr}\,ds = e^{-2\lambda(x)T}\int_T^\infty\sigma(X_s)^2 e^{-2\int_T^s\lambda(X_r)\,dr}\,ds.
\]
The Markov property implies
\[
\mathbb E_x\Bigl[\exp\Bigl(\frac{v^2}{2}\int_T^\infty\sigma(X_s)^2 e^{-2\int_0^s\lambda(X_r)\,dr}\,ds\Bigr)\Bigm|\mathcal F_T\Bigr] = L_{X_T}\bigl(v e^{-\lambda(x)T}\bigr).
\]
Thus,
\[
L_x(v) = \mathbb E\Bigl[\exp\Bigl(\frac{v^2\sigma(x)^2(1-e^{-2\lambda(x)T})}{4\lambda(x)}\Bigr)\,L_{3-x}\bigl(v e^{-\lambda(x)T}\bigr)\Bigm| X_0=x\Bigr].
\]
In particular,
\[
L_1(v) = \mathbb E\Bigl[\exp\Bigl(\frac{v^2\sigma(1)^2(1-e^{-2\lambda T})}{4\lambda}\Bigr)\,L_2\bigl(v e^{-\lambda T}\bigr)\Bigr],
\quad\text{and}\quad
L_2(v) = \mathbb E\bigl[e^{v^2\sigma(2)^2 T/2}\,L_1(v)\bigr] = \frac{2a(2)}{2a(2)-\sigma(2)^2v^2}\,L_1(v)
\]
if $\sigma(2)^2v^2<2a(2)$, and $+\infty$ otherwise. Using $\beta(2)=\sigma(2)^2/(2a(2))$, one easily gets that $L_2$ satisfies the following equation: for any $v^2<1/\beta(2)$,
\[
L_2(v) = \frac{1}{1-\beta(2)v^2}\int_0^\infty \exp\Bigl(\frac{\sigma(1)^2v^2(1-e^{-2\lambda t})}{4\lambda}\Bigr)L_2\bigl(v e^{-\lambda t}\bigr)\,a(1)e^{-a(1)t}\,dt
= \frac{1}{1-\beta(2)v^2}\int_0^1 \exp\Bigl(\frac{\sigma(1)^2v^2(1-u^2)}{4\lambda}\Bigr)L_2(vu)\,\frac{a(1)}{\lambda}\,u^{a(1)/\lambda-1}\,du.
\]
With $x=uv$,
\[
L_2(v) = \frac{1}{1-\beta(2)v^2}\Bigl(\frac{1}{v}\Bigr)^{a(1)/\lambda} e^{\sigma(1)^2v^2/(4\lambda)}\int_0^v e^{-\sigma(1)^2x^2/(4\lambda)}\,\frac{a(1)}{\lambda}\,x^{a(1)/\lambda-1}L_2(x)\,dx.
\]
Differentiating this relation provides
\[
L_2'(v) = \Bigl(\frac{2\beta(2)v}{1-\beta(2)v^2} - \frac{a(1)}{\lambda v} + \frac{\sigma(1)^2v}{2\lambda} + \frac{1}{1-\beta(2)v^2}\,\frac{a(1)}{\lambda v}\Bigr)L_2(v).
\]
Then $L_2$ is solution of
\[
L_2'(v) = \Bigl(\frac{\sigma(1)^2v}{2\lambda} + \Bigl(1+\frac{a(1)}{2\lambda}\Bigr)\frac{2\beta(2)v}{1-\beta(2)v^2}\Bigr)L_2(v),
\]
which leads to
\[
L_2(v) = e^{\sigma(1)^2v^2/(4\lambda)}\bigl(1-\beta(2)v^2\bigr)^{-1-a(1)/(2\lambda)},
\]
since $L_2(0)=1$. Since $L$ is a function of $L_2$ we get
\[
L(v) = e^{\sigma(1)^2v^2/(4\lambda)}\,\frac{1-\mu(1)\beta(2)v^2}{1-\beta(2)v^2}\,\bigl(1-\beta(2)v^2\bigr)^{-a(1)/(2\lambda)}.
\]

3.2 The exponential-like case

In this subsection we provide the proof of Point 2 ($\underline\lambda=0$) of Theorem 1.5. We first recall that, in this case, we split the state space $E$ of the switching process $X$ in two subsets $M$ and $N$ defined in (8). We denote also by $F$ the points of $M$ that can be reached in one step from $N$:
\[
F = \Bigl\{x\in M,\ \sum_{\tilde x\in N} P(\tilde x,x) > 0\Bigr\}.
\]
Assume for simplicity that $X_0\in M$ and define by induction the sequence of times $(T_n)_{n\geq 0}$ by $T_0=0$ and, for $n\geq 0$,
\[
T_{2n+1} = \inf\{t\geq T_{2n},\ X_t\in N\} \quad\text{and}\quad T_{2n+2} = \inf\{t\geq T_{2n+1},\ X_t\in M\}.
\]
When $X$ is in $M$, $Y$ looks like an Ornstein-Uhlenbeck process (with variable but attractive drift) while it looks like a Brownian motion (with variable but bounded below and above variance) when $X$ is in $N$. Thus, heuristically, the process $Y$ might be larger after a sojourn of $X$ in $N$ than in $M$.

Let us notice that, for $x\in N$,
\[
Y_{T_1} = Y_0 + I_x \quad\text{where}\quad I_x = \int_0^{T_1}\sigma(X^x_s)\,dB_s,
\]
$X^x$ being the process $X$ starting at $x$ and $T_1$ the first hitting time of $M$. Our strategy is to determine the domain of the Laplace transform of $I_x$ and then to establish that it is also the one of the process $Y$ at the entrance times of $X$ into the set $M$, i.e. at the times $(T_{2n})_{n\geq 0}$. We will then extend the result to the full process $(X,Y)$.
Proposition 3.3. Under the previous assumptions, for any $v^2<\overline\beta^{-1}$, the two following conditions are equivalent:
1. for any $x\in N$, $\mathbb E\bigl(e^{vI_x}\bigr)<+\infty$;
2. $\rho\bigl(P^{(N)}_v\bigr)<1$, where $P^{(N)}_v$ is defined in Equation (10).

Proof. Let $x=x_0,x_1,\dots,x_{n-1}$ be in $N$. We denote by $(Z_n)_{n\geq 0}$ the embedded chain of $X$. On the set $H=\{Z_1=x_1,\dots,Z_{n-1}=x_{n-1},\ Z_n\in M\}$,
\[
I_x = \int_0^{T_1}\sigma(X^x_s)\,dB_s = \sum_{j=0}^{n-1}\sigma(x_j)\sqrt{\tau_{x_j}}\,G_j,
\]
where the random variables $(G_j)_j$, $(\tau_{x_j})_j$ are independent, $\mathcal L(G_j)=\mathcal N(0,1)$ and $\mathcal L(\tau_{x_j})=\mathcal E(a(x_j))$. As a consequence,
\[
\mathbb E\bigl(e^{vI_x}\mid H\bigr) = \prod_{j=0}^{n-1}\mathbb E\Bigl[\exp\Bigl(\frac{v^2\sigma(x_j)^2\tau_{x_j}}{2}\Bigr)\Bigr] = \prod_{j=0}^{n-1}\frac{1}{1-\beta(x_j)v^2}.
\]
Then
\[
\mathbb E\bigl(e^{vI_x}\bigr) = \sum_{n\geq 1}\ \sum_{x_1,\dots,x_{n-1}\in N}\mathbb E\bigl(e^{vI_x}\mid Z_1=x_1,\dots,Z_{n-1}=x_{n-1},Z_n\in M\bigr)\,\mathbb P_x\bigl(Z_1=x_1,\dots,Z_{n-1}=x_{n-1},Z_n\in M\bigr)
\]
\[
= \sum_{n\geq 1}\ \sum_{x_1,\dots,x_{n-1}\in N}\frac{P(x_0,x_1)}{1-\beta(x_0)v^2}\cdots\frac{P(x_{n-2},x_{n-1})}{1-\beta(x_{n-2})v^2}\,\frac{P(x_{n-1},M)}{1-\beta(x_{n-1})v^2}
= \sum_{n\geq 1}\delta_x\bigl(P^{(N)}_v\bigr)^{n-1}\varphi,
\]
for $\varphi(x) = \frac{1}{1-\beta(x)v^2}\,P(x,M)$. Notice that $\varphi$ is well-defined since $v^2<1/\overline\beta$. Moreover it is positive because $X$ is irreducible recurrent, so, for any $x\in N$ there exists a path that leads to $M$.

If $\rho\bigl(P^{(N)}_v\bigr)<1$, then
\[
\limsup_{n\to+\infty}\bigl|\delta_x\bigl(P^{(N)}_v\bigr)^{n-1}\varphi\bigr|^{1/n} \leq \limsup_{n\to+\infty}\bigl\|\bigl(P^{(N)}_v\bigr)^n\bigr\|^{1/n} < 1,
\]
hence the series is convergent.

If $\rho_v := \rho\bigl(P^{(N)}_v\bigr) \geq 1$, by the Perron-Frobenius theorem, there exists a probability measure $\nu$ with some positive coefficients such that $\nu P^{(N)}_v = \rho_v\nu$, which implies that
\[
\mathbb E_\nu\bigl(e^{vI_\cdot}\bigr) = \nu(\varphi)\sum_{n\geq 1}\rho_v^{\,n-1} = +\infty,
\]
since $\varphi$ is positive.
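In the same spirit as the earlier sketches, the critical exponential moment $v_c$ of (11) can be computed numerically. The example below is made up (one attractive state, two states with zero drift): it builds $P^{(N)}_v$ on $N$ and locates the value of $v$ at which its spectral radius reaches 1; if the radius stays below 1 on the whole interval where $P^{(N)}_v$ is defined, the supremum in (11) is simply $1/\sqrt{\overline\beta}$.

```python
import numpy as np
from scipy.optimize import brentq

# Made-up three-state example: state 0 has positive drift, states 1 and 2 have lambda = 0.
lam = np.array([1.0, 0.0, 0.0])
a = np.array([1.0, 1.0, 1.0])
sig = np.array([1.0, 1.0, 2.0])
P = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

N = np.where(lam == 0)[0]
beta = sig[N]**2 / (2 * a[N])
v_max = 1.0 / np.sqrt(beta.max())        # P^(N)_v is only defined for v < v_max

def rho_PN(v):
    PNv = (1.0 / (1.0 - beta * v**2))[:, None] * P[np.ix_(N, N)]
    return np.abs(np.linalg.eigvals(PNv)).max()

if rho_PN(v_max * (1 - 1e-9)) < 1.0:
    v_c = v_max                          # spectral radius never reaches 1 before v_max
else:
    v_c = brentq(lambda v: rho_PN(v) - 1.0, 1e-9, v_max * (1 - 1e-9))
print("critical exponential moment v_c ~", v_c)   # about 0.59 for these parameters
```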
Remark 3.4. When $X$ is irreducible in restriction to $N$ (i.e. the matrices $P^{(N)}_v$ are irreducible for any $v$), then $\mathbb E\bigl(e^{vI_x}\bigr)=+\infty$ for all $x\in N$ as soon as $\rho\bigl(P^{(N)}_v\bigr)\geq 1$. If this is not the case, the previous proposition just ensures that, when $\rho\bigl(P^{(N)}_v\bigr)\geq 1$, then $\mathbb E\bigl(e^{vI_x}\bigr)=+\infty$ for some $x\in N$. Moreover, for any $x,x'\in N$ such that $P(x,x')$ is positive, $\mathbb E\bigl(e^{vI_{x'}}\bigr)=+\infty$ implies $\mathbb E\bigl(e^{vI_x}\bigr)=+\infty$.

We now introduce the sub-process made of the positions of $(X,Y)$ at the successive hitting times of $M$.

Proposition 3.5. For any $n\geq 0$, let us define $U_n=X_{T_{2n}}$ and $V_n=Y_{T_{2n}}$. The process $(U,V)$ is a Markov chain on $F\times\mathbb R$. More precisely,
\[
V_{n+1} = M_n(U_n)\,V_n + Q_n(U_n),
\]
where the sequence of random vectors $\bigl((M_n(x),Q_n(x))_{x\in F}\bigr)_{n\geq 0}$ is i.i.d., and independent of $(U_n)_{n\geq 0}$, with law given by
\[
M_n(x) \stackrel{\mathcal L}{=} \exp\Bigl(-\int_0^{T_1}\lambda(X^x_r)\,dr\Bigr), \qquad
Q_n(x) \stackrel{\mathcal L}{=} \int_0^{T_1}\sigma(X^x_s)\exp\Bigl(-\int_s^{T_1}\lambda(X^x_r)\,dr\Bigr)dB_s + \int_{T_1}^{T_2}\sigma(X^x_s)\,dB_s.
\]
For any $v<v_c$ where $v_c=\sup\bigl\{v,\ \rho\bigl(P^{(N)}_v\bigr)<1\bigr\}$, we have
\[
\sup_{n\geq 0}\mathbb E\bigl(e^{v|V_n|}\bigr) < +\infty.
\]
Moreover, if $v>v_c$, this supremum is infinite.
Proof. The fact that $(U,V)$ is a recurrent Markov chain is a straightforward application of the Markov property for $X$.

Let us introduce $\overline M_n=\max_{x\in F}M_n(x)$ and $\overline Q_n=\max_{x\in F}|Q_n(x)|$. The random variables $((\overline M_n,\overline Q_n))_{n\geq 0}$ are i.i.d. Define the sequence $(\overline V_n)_{n\geq 0}$ by $\overline V_0=|V_0|$ and
\[
\overline V_{n+1} = \overline M_n\,\overline V_n + \overline Q_n \quad\text{for } n\geq 0.
\]
The domain of the Laplace transforms of $(\overline V_n)_{n\geq 0}$ is known thanks to the exhaustive study [1]. Since $\mathbb P(\overline Q_0=0)<1$, $\mathbb P(0<\overline M_0<1)=1$ and, for any $c\in\mathbb R$, $\mathbb P(\overline Q_0+\overline M_0 c=c)<1$, the sequence $\bigl(\mathbb E\exp(v\overline V_n)\bigr)_{n\geq 0}$ is uniformly bounded as soon as the Laplace transform $L_{\overline Q}$ of $\overline Q_0$ is finite at $v$. At last, for any $v>0$,
\[
\max_{x\in F}\mathbb E\bigl(e^{v|Q_0(x)|}\bigr) \leq \mathbb E\bigl(e^{v\overline Q_0}\bigr) = \mathbb E\Bigl(\sup_{x\in F}e^{v|Q_0(x)|}\Bigr) \leq \sum_{x\in F}\mathbb E\bigl(e^{v|Q_0(x)|}\bigr).
\]
Thus $L_{\overline Q}(v)$ is finite if and only if $\mathbb E\bigl(e^{v|Q_0(x)|}\bigr)$ is finite for any $x\in F$. Since $|V_n|\leq\overline V_n$ for all $n\geq 0$, then $\sup_{n\geq 0}\mathbb E\bigl(e^{v|V_n|}\bigr)<+\infty$ as soon as $L_{\overline Q}(v)$ is finite.

On the other hand, choose $v$ such that there exists $x\in F$ such that $\mathbb E\bigl(e^{v|Q_0(x)|}\bigr)$ is infinite. Then, for any $n\geq 0$,
\[
\mathbb E\bigl(e^{v|V_{n+1}|}\bigr) \geq \mathbb E\bigl(e^{v|V_{n+1}|}\mathbf 1_{\{U_n=x\}}\bigr)
\geq \mathbb E\bigl(e^{-v|V_n|}e^{v|Q_n(x)|}\mathbf 1_{\{U_n=x\}}\bigr)
\geq \mathbb E\bigl(\mathbf 1_{\{U_n=x\}}e^{-v|V_n|}\bigr)\,\mathbb E\bigl(e^{v|Q_n(x)|}\bigr).
\]
The recurrence of $U$ ensures that $\bigl\{n\geq 0,\ \mathbb E\bigl(e^{v|V_n|}\bigr)=+\infty\bigr\}$ is infinite.

The last point is to show that $L_{\overline Q}(v)$ is finite if and only if $v<v_c$ where $v_c$ is defined by (11). For any $x\in F$, the random variable $Q_n(x)$ is symmetric and its Laplace transform is finite as soon as, for any $\tilde x\in N$, the Laplace transform of
\[
I_{\tilde x} = \int_0^{T_1}\sigma(X^{\tilde x}_s)\,dB_s
\]
is finite, which is true for $|v|<v_c$. Indeed, we have for any $v$,
\[
\mathbb E\bigl(e^{vQ_n(x)}\mid\mathcal F_{T_1}\bigr) = \exp\Bigl(v\int_0^{T_1}\sigma(X^x_s)\exp\Bigl(-\int_s^{T_1}\lambda(X^x_r)\,dr\Bigr)dB_s\Bigr)\,\mathbb E\bigl(e^{vI_{\tilde x}}\bigr)\Bigm|_{\tilde x=X_{T_1}}. \tag{14}
\]
Proposition 3.3 ensures that, if $|v|<v_c$ then
\[
\mathbb E\bigl(e^{vQ_n(x)}\bigr) \leq C(v)\,\mathbb E\Bigl(\exp\Bigl(\frac{v^2}{2}\int_0^{T_1}\sigma(X^x_s)^2\exp\Bigl(-2\int_s^{T_1}\lambda(X^x_r)\,dr\Bigr)ds\Bigr)\Bigr).
\]
Denoting $\sigma_M=\max_{x\in M}\sigma(x)$ and $\lambda_M=\min_{x\in M}\lambda(x)$, one has
\[
\mathbb E\bigl(e^{vQ_n(x)}\bigr) \leq C(v)\exp\Bigl(\frac{\sigma_M^2}{4\lambda_M}v^2\Bigr).
\]
By the way, $L_{\overline Q}$ is finite on $(-\infty,v_c)$.

We assume now that $v>v_c$. From Proposition 3.3, we know that, in this case, the set $G=\{x\in N,\ \mathbb E(e^{vI_x})=+\infty\}$ is non empty. Using the irreducibility of $X$ and Remark 3.4, one notices that there exists $x\in F$ such that $\mathbb P(X^x_{T_1}\in G)>0$. From this remark and (14), one has $\mathbb E\bigl(e^{vQ_n(x)}\bigr)=+\infty$, which concludes the proof.
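The reduction used in the proof is a random difference equation (a perpetuity) in the sense of [1]. The toy simulation below is not the actual pair $(\overline M_n,\overline Q_n)$ of the proof: it uses a made-up multiplicative weight bounded away from 1 and an exponential additive term, for which the critical exponential moment of the perpetuity coincides with that of the additive term, and checks that $\mathbb E(e^{v\overline V_n})$ stabilizes for $v$ below that threshold.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy perpetuity V_{n+1} = M_n V_n + Q_n with made-up i.i.d. weights:
# M_n uniform on (0, 0.5) and Q_n exponential with rate q, so E[exp(v V_n)] stays
# bounded for v < q (here v = 1 < q = 2) and blows up for v >= q.
q, v, n_steps, n_paths = 2.0, 1.0, 200, 100000
V = np.zeros(n_paths)
for _ in range(n_steps):
    V = rng.uniform(0.0, 0.5, n_paths) * V + rng.exponential(1.0 / q, n_paths)
print("empirical E[exp(v V_n)] after", n_steps, "steps:", np.exp(v * V).mean())
```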
Let us now extend this result to the whole process $Y$.

Theorem 3.6. For any $v<v_c$ where $v_c=\sup\bigl\{v,\ \rho\bigl(P^{(N)}_v\bigr)<1\bigr\}$, we have
\[
\sup_{t\geq 0}\mathbb E\bigl(e^{v|Y_t|}\bigr) < +\infty.
\]
Moreover, if $v>v_c$, then this supremum is infinite.
Proof. Choose $t>0$. We have
\[
\mathbb E\bigl(e^{v|Y_t|}\bigr) = \sum_{n=0}^\infty \mathbb E\bigl(e^{v|Y_t|}\mathbf 1_{\{T_{2n}\leq t<T_{2n+2}\}}\bigr).
\]
By the Markov property applied to $X$, each term of the sum can be controlled, conditionally on $\mathcal F^X_{T_{2n}}$, by the exponential moments of $V_n=Y_{T_{2n}}$ obtained in Proposition 3.5; this yields the uniform bound for $v<v_c$, and the fact that the supremum is infinite for $v>v_c$ follows from the second part of Proposition 3.5.

Proposition 4.1. For any $p>0$, there exist $0<C_1(p)\leq C_2(p)<+\infty$ such that, for any initial probability measure $\pi$ on $E$ and any $t\geq 0$,
\[
C_1(p)\,e^{-\eta_p t} \leq \mathbb E_\pi\Bigl(\exp\Bigl(-\int_0^t p\lambda(X_u)\,du\Bigr)\Bigr) \leq C_2(p)\,e^{-\eta_p t}. \tag{15}
\]

Proof. Let us define, for any $p>0$ and $t\geq 0$, the matrix $A^{(p,t)}$ by
\[
A^{(p,t)}(x,\tilde x) = \mathbb E_x\Bigl(\exp\Bigl(-\int_0^t p\lambda(X_u)\,du\Bigr)\mathbf 1_{\{X_t=\tilde x\}}\Bigr).
\]
On the one hand, one remarks that
\[
\mathbb E_\pi\Bigl(\exp\Bigl(-\int_0^t p\lambda(X_u)\,du\Bigr)\Bigr) = \pi A^{(p,t)}\mathbf 1, \tag{16}
\]
where the coordinates of $\mathbf 1$ are all equal to 1 and $\pi$ is a probability measure on $E$ seen as a row vector. On the other hand, a simple application of the Feynman-Kac formula shows that $A^{(p,t)}=e^{tA_p}$. This fact relates the spectra of $A_p$ and $A^{(p,t)}$. In particular, $\rho(A^{(p,t)})=e^{-\eta_p t}$ and, since all coefficients of $A^{(p,t)}$ are positive, we can apply the Perron-Frobenius Theorem to ensure that $-\eta_p$ is a simple eigenvalue of $A_p$, all other eigenvalues having a strictly smaller real part. Let $\xi_p<-\eta_p$ be an upper bound for the real parts of these other eigenvalues.

We then define $\pi_p$ (resp. $\varphi_p$) the left (resp. right) eigenvector associated to $-\eta_p$, with positive coefficients, normalized such that $\pi_p(\mathbf 1)=1$ (resp. $\pi_p(\varphi_p)=1$). Applying [5, Thm VII.1.8], we get that, for any $t\geq 0$,
\[
e^{tA_p} = e^{-\eta_p t}\varphi_p\pi_p + R_p(t),
\]
with $\|R_p(t)\|_\infty\leq P_p(t)e^{\xi_p t}$, $P_p(t)$ being a polynomial of degree less than $d$. This gives
\[
\pi e^{tA_p}\mathbf 1 = e^{-t\eta_p}\bigl(\pi(\varphi_p) + e^{t\eta_p}\pi R_p(t)\mathbf 1\bigr),
\]
hence
\[
e^{-t\eta_p}\bigl(\pi(\varphi_p) - P_p(t)e^{t(\eta_p+\xi_p)}\bigr) \leq \pi e^{tA_p}\mathbf 1 \leq e^{-t\eta_p}\bigl(\pi(\varphi_p) + P_p(t)e^{t(\eta_p+\xi_p)}\bigr).
\]
This estimate gives (15) thanks to (16) and to the fact that $P_p(t)e^{t(\eta_p+\xi_p)}$ tends to 0 as $t$ tends to infinity.

Let us now study the function $p\mapsto\eta_p$.

Proposition 4.2.
1. The function $p\mapsto\eta_p$ is smooth and concave on $\mathbb R_+$. Its derivative at $p=0$ is equal to $\sum_{x\in E}\lambda(x)\mu(x)>0$, and $\eta_p/p$ tends to $\underline\lambda$ as $p$ goes to infinity.
2. We have the following dichotomy:
- if $\underline\lambda\geq 0$, then for all $p>0$, $\eta_p>0$;
- if $\underline\lambda<0$, there is $\kappa\in(0,\min\{-a(x)/\lambda(x),\ \lambda(x)<0\})$ such that $\eta_p>0$ for $p<\kappa$ and $\eta_p<0$ for $p>\kappa$.

Proof. The smoothness of the functions $p\mapsto\eta_p$, $\pi_p$ and $\varphi_p$ are classical results of perturbation theory (see for example [9, chapter 2]). Since $\pi_pA_p=-\eta_p\pi_p$, $\pi_p\mathbf 1=1$ and $A\mathbf 1=0$, one has
\[
\eta_p = -\pi_pA_p\mathbf 1 = p\,\pi_p\Lambda\mathbf 1 = p\sum_{x\in E}\pi_p(x)\lambda(x). \tag{17}
\]
Differentiating this relation gives $\eta'_p=\pi_p\Lambda\mathbf 1+p\,\pi'_p\Lambda\mathbf 1$. In particular, $\eta'_0=\mu\Lambda\mathbf 1=\sum_{x\in E}\mu(x)\lambda(x)$, since $\pi_0=\mu$.

We turn to the proof of the concavity of $\eta_p$. We only have to remark that, for any $t>0$ and $x\in E$,
\[
p\mapsto M^{(x)}_t(p) = \frac{1}{t}\log\mathbb E_x\Bigl(\exp\Bigl(-p\int_0^t\lambda(X_u)\,du\Bigr)\Bigr)
\]
is a convex function, as a log-Laplace transform (for example using Hölder's inequality). But (15) implies that $M^{(x)}_t$ converges pointwise to $-\eta_p$ as $t$ goes to infinity, so $-\eta_p$ is convex as a limit of convex functions and $\eta_p$ is concave.

Obviously, one has, for any $t>0$ and $p\geq 0$, $M^{(x)}_t(p)\leq -p\underline\lambda$, and $\eta_p$ is greater than $p\underline\lambda$. On the other hand, denoting by $T$ the first jump time of $(X_t)_{t\geq 0}$, one has
\[
M^{(x)}_t(p) \geq \frac{1}{t}\log\mathbb E_x\Bigl(\exp\Bigl(-p\int_0^t\lambda(X_u)\,du\Bigr)\mathbf 1_{\{T>t\}}\Bigr)
\geq -p\lambda(x) + \frac{1}{t}\log\mathbb P_x(T>t) = -p\lambda(x)-a(x).
\]
When $t$ goes to infinity, one gets, for any $p\geq 0$,
\[
\eta_p \leq \min_{x\in E}\bigl(a(x)+p\lambda(x)\bigr). \tag{18}
\]
In particular, $\eta_p/p$ goes to $\underline\lambda$ as $p$ goes to infinity.

The fact that, when $\underline\lambda\geq 0$, $\eta_p$ is always positive is clear from (17). When $\underline\lambda<0$, for $p$ small enough, $\eta_p>0$ since its derivative at $p=0$ is positive. But in this case, we can check that $\eta_p<0$ for $p$ large enough. Equation (18) implies that $\eta_p<0$ as soon as $p>\min_{x\in E,\lambda(x)<0}\bigl(-a(x)/\lambda(x)\bigr)$. This provides the upper bound for $\kappa$. With the concavity of $\eta_p$, these considerations are sufficient to ensure that $\eta_p$ has a unique zero $\kappa$, being positive before and negative after.

Remark 4.3. The relation $\eta_\kappa=0$ implies that $(A-\kappa\Lambda)\varphi_\kappa=0$, which can be rewritten as $M_\kappa\varphi_\kappa=\varphi_\kappa$ ($M_\kappa$ being the matrix defined in (4)). This ensures that $\rho(M_\kappa)=1$ since $M_\kappa$ is non-negative irreducible and $\varphi_\kappa$ is positive. By the way, our characterization of $\kappa$ in Theorem 1.5 is equivalent to the one given by de Saporta and Yao in Point 1 of Theorem 1.2.

It is known from [7, 4] that the invariant measure $\nu$ of $Y$ has a finite $p$th moment if and only if $p<\kappa$. Their proof is based on a time discretization of the process $(X,Y)$ together with generic results on the ergodicity of discrete time Markov processes and renewal theory (see [3]). The previous propositions provide a direct and simple characterization of the critical moment of $\nu$.

Proposition 4.4. For any $p>0$ such that $\eta_p>0$ (i.e. $p<\kappa$), and any initial measure such that the second marginal has a finite $p$th moment, one has
\[
\sup_{t\geq 0}\mathbb E(|Y_t|^p) < +\infty \quad\text{and}\quad \int |y|^p\,\nu(dy) < +\infty.
\]
On the other hand, for any $p$ such that $\eta_p\leq 0$ (i.e. $p\geq\kappa$) and any initial condition,
\[
\lim_{t\to\infty}\mathbb E(|Y_t|^p) = +\infty \quad\text{and}\quad \int |y|^p\,\nu(dy) = +\infty.
\]

Proof. Let us assume that $p\geq 2$. If it is not the case, one has to replace the function $y\mapsto|y|^p$ by a $C^2$ function with the same behaviour at infinity. Choose $T>0$. Itô's formula ensures that
\[
d|Y_t|^p = \Bigl(-p\lambda(X_t)|Y_t|^p + \frac{p(p-1)}{2}\sigma(X_t)^2|Y_t|^{p-2}\Bigr)dt + p\,\sigma(X_t)\,Y_t|Y_t|^{p-2}\,dB_t. \tag{19}
\]
Let us denote by $\alpha_p$ the function defined on $[0,T]$ by $\alpha_p(t)=\mathbb E\bigl(|Y_t|^p\mid\mathcal F^X_T\bigr)$. Taking the expectation of (19) conditionally on $X$ leads to
\[
\alpha'_p(t) = -p\lambda(X_t)\alpha_p(t) + \frac{p(p-1)}{2}\sigma(X_t)^2\alpha_{p-2}(t),
\]
since $B$ and $X$ are independent. For any $\varepsilon>0$, there exists $c$ such that
\[
\alpha'_p(t) \leq \bigl(-p\lambda(X_t)+\varepsilon\bigr)\alpha_p(t) + c.
\]
This implies that
\[
\alpha_p(t) \leq \alpha_p(0)\,e^{\int_0^t(-p\lambda(X_r)+\varepsilon)dr} + c\int_0^t e^{\int_u^t(-p\lambda(X_r)+\varepsilon)dr}\,du.
\]
One has to take the expectation and use (15) to get, for any $p$ with $\eta_p>0$,
\[
\mathbb E(|Y_t|^p) \leq C_2(p)\,\mathbb E(|Y_0|^p)\,e^{(-\eta_p+\varepsilon)t} + c\,C_2(p)\int_0^t e^{(-\eta_p+\varepsilon)(t-u)}\,du.
\]
If $\varepsilon<\eta_p$ then $\sup_{t\geq 0}\mathbb E(|Y_t|^p)$ is finite.

If $p=\kappa$, one has
\[
\alpha'_\kappa(t) = -\kappa\lambda(X_t)\alpha_\kappa(t) + \frac{\kappa(\kappa-1)}{2}\sigma(X_t)^2\alpha_{\kappa-2}(t).
\]
Then
\[
\alpha_\kappa(t) = \int_0^t e^{-\kappa\int_s^t\lambda(X_u)du}\,\frac{\kappa(\kappa-1)}{2}\sigma(X_s)^2\alpha_{\kappa-2}(s)\,ds + \mathbb E(|Y_0|^\kappa)\,e^{-\kappa\int_0^t\lambda(X_u)du}
\geq \frac{\kappa(\kappa-1)}{2}\,\underline\sigma^2\int_0^t e^{-\kappa\int_s^t\lambda(X_u)du}\,\alpha_{\kappa-2}(s)\,ds.
\]
As a consequence, using Proposition 4.1 and the relation $\eta_\kappa=0$ (see Proposition 4.2),
\[
\mathbb E(|Y_t|^\kappa) \geq \frac{\kappa(\kappa-1)}{2}\,\underline\sigma^2\int_0^t \mathbb E\Bigl(\alpha_{\kappa-2}(s)\,\mathbb E\bigl(e^{-\kappa\int_s^t\lambda(X_u)du}\mid\mathcal F^X_s\bigr)\Bigr)ds
\geq \frac{\kappa(\kappa-1)}{2}\,\underline\sigma^2\,C_1(\kappa)\int_0^t \mathbb E\bigl(|Y_s|^{\kappa-2}\bigr)ds.
\]
From the first part of the proof,
\[
\lim_{s\to\infty}\mathbb E\bigl(|Y_s|^{\kappa-2}\bigr) = \int |y|^{\kappa-2}\,\nu(dy) > 0.
\]
By the way, $\lim_{t\to\infty}\mathbb E(|Y_t|^\kappa)=+\infty$, and the $\kappa$th moment of $\nu$ is infinite. This is also true for the $p$th moment for any $p>\kappa$.
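The identity $A^{(p,t)}=e^{tA_p}$ at the heart of Proposition 4.1 is easy to test numerically. The sketch below is not from the paper: with the same illustrative two-state generator as before, it compares a Monte Carlo estimate of $\mathbb E_x\bigl(\exp(-p\int_0^t\lambda(X_u)\,du)\mathbf 1_{\{X_t=\tilde x\}}\bigr)$ with the matrix exponential of $t(A-p\Lambda)$.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
A = np.array([[-1.0, 1.0], [2.0, -2.0]])     # illustrative generator (not from the paper)
lam = np.array([1.5, -0.5])
p, t_max, n_runs = 1.0, 2.0, 100000

def sample(x0):
    """Return (exp(-p * int_0^t lambda(X_u) du), X_t) for one trajectory started at x0."""
    t, x, acc = 0.0, x0, 0.0
    while t < t_max:
        dt = min(rng.exponential(1.0 / -A[x, x]), t_max - t)
        acc += lam[x] * dt
        t += dt
        if t < t_max:
            x = 1 - x
    return np.exp(-p * acc), x

mc = np.zeros((2, 2))
for x0 in (0, 1):
    for _ in range(n_runs):
        w, xt = sample(x0)
        mc[x0, xt] += w / n_runs

print("Monte Carlo estimate of A^(p,t):\n", mc)
print("expm(t * (A - p*Lambda)):\n", expm(t_max * (A - p * np.diag(lam))))
```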
5 Convergence to equilibrium for the switched diffusion

Under the assumption that $\nu$ has a finite $p$th moment, one can establish an exponential convergence of $(X,Y)$ to its invariant measure in terms of mixed total variation (for $X$) and $W_p$ Wasserstein distance (for $Y$). Let us start with the easiest case, assuming that $\mathcal L(X_0)=\mathcal L(\tilde X_0)$.

Proof of Theorem 1.7. Let $y$ and $\tilde y$ be two real numbers. We couple two trajectories of $(X,Y)$ starting at $(x,y)$ and $(x,\tilde y)$ by choosing the same first components and the same Brownian motion to drive $Y$ and $\tilde Y$. In other words, we compare $(X_t,Y_t)^{x,y}$ and $(\tilde X_t,\tilde Y_t)^{x,\tilde y}$ where
\[
X_t = \tilde X_t, \qquad
Y_t = y - \int_0^t\lambda(X_u)Y_u\,du + \int_0^t\sigma(X_u)\,dB_u, \qquad
\tilde Y_t = \tilde y - \int_0^t\lambda(X_u)\tilde Y_u\,du + \int_0^t\sigma(X_u)\,dB_u.
\]
Then $d(Y_t-\tilde Y_t) = -\lambda(X_t)(Y_t-\tilde Y_t)\,dt$ and
\[
\bigl|Y_t-\tilde Y_t\bigr|^p = |y-\tilde y|^p - \int_0^t p\lambda(X_u)\bigl|Y_u-\tilde Y_u\bigr|^p\,du.
\]
As a conclusion, (15) ensures that
\[
\mathbb E_{(x,y),(x,\tilde y)}\bigl(|Y_t-\tilde Y_t|^p\bigr) = \mathbb E_x\Bigl(\exp\Bigl(-\int_0^t p\lambda(X_u)\,du\Bigr)\Bigr)|y-\tilde y|^p \leq C_2(p)\,e^{-\eta_p t}\,|y-\tilde y|^p.
\]
Then, for any coupling $\Pi$ of $\mathcal L(Y_0)$ and $\mathcal L(\tilde Y_0)$,
\[
W_p\bigl(\mathcal L(Y_t),\mathcal L(\tilde Y_t)\bigr)^p \leq C_2(p)\,e^{-\eta_p t}\int |y-\tilde y|^p\,\Pi(d(y,\tilde y)).
\]
Taking the infimum over $\Pi$ provides the result.

Let us turn to the general case.

Theorem 5.1. Consider two processes $(X,Y)$ and $(\tilde X,\tilde Y)$ with respective initial laws $\pi$ and $\tilde\pi$, two probability measures on $E\times\mathbb R$ such that the second marginal has a finite $\theta$th moment with $\theta<\kappa$ (with $\kappa=+\infty$ if $\underline\lambda\geq 0$). For any $p<\theta$, we have
\[
W_p\bigl(\mathcal L(Y_t),\mathcal L(\tilde Y_t)\bigr)^p \leq C(p)\,(1-p_c)^{1-p/\theta}\,M(\theta)^{p/\theta}\exp\Bigl(-\frac{\gamma\eta_p(1-p/\theta)}{\gamma(1-p/\theta)+\eta_p}\,t\Bigr) + p_c\,W_p^p\,e^{-\eta_p t},
\]
where
\[
p_c = \sum_{x\in E}\mu(x)\wedge\tilde\mu(x) = 1 - d_{TV}\bigl(\mathcal L(X_0),\mathcal L(\tilde X_0)\bigr),
\qquad
M(\theta)^{p/\theta} = 2^p\Bigl(\sup_{t\geq 0}\mathbb E\bigl(|Y_t|^\theta\bigr) + \sup_{t\geq 0}\mathbb E\bigl(|\tilde Y_t|^\theta\bigr)\Bigr)^{p/\theta},
\]
\[
W_p = \max_{x\in E} W_p\bigl(\mathcal L(Y_0\mid X_0=x),\mathcal L(\tilde Y_0\mid\tilde X_0=x)\bigr),
\]
and $\gamma$ is such that $d_{TV}\bigl(\mathcal L(X_t),\mathcal L(\tilde X_t)\bigr) \leq e^{-\gamma t}\,d_{TV}\bigl(\mathcal L(X_0),\mathcal L(\tilde X_0)\bigr)$.

Remark 5.2. This estimate can be improved and simplified if $\underline\lambda>0$. In this case, one can write instead of (20) that
\[
\mathbb E_{(x,y),(\tilde x,\tilde y)}\bigl(|Y_t-\tilde Y_t|^p\,\mathbf 1_{\{T>\alpha t\}}\bigr) \leq C\,\mathbb P(T>\alpha t)
\]
thanks to the explicit expression (2) of $Y$. Since $p\underline\lambda\leq\eta_p$, this leads to
\[
W_p\bigl(\mathcal L(Y_t),\mathcal L(\tilde Y_t)\bigr)^p \leq C(p)(1-p_c)\exp\Bigl(-\frac{\gamma p\underline\lambda}{\gamma+p\underline\lambda}\,t\Bigr) + p_c\,W_p^p\,e^{-p\underline\lambda t}.
\]

Proof of Theorem 5.1. We have to consider the case $X_0\neq\tilde X_0$. Given $x,\tilde x\in E$ (with $x\neq\tilde x$) and $y,\tilde y\in\mathbb R$, we introduce the three independent processes $(X_t)_{t\geq 0}$, $(\widehat X_t)_{t\geq 0}$ and $(B_t)_{t\geq 0}$, where the first one is a chain starting at $x$, the second one is a chain starting at $\tilde x$ and the last one is a standard Brownian motion. The process $\tilde X$ is defined as follows:
\[
\tilde X_t = \begin{cases}\widehat X_t & \text{if } t\leq T,\\ X_t & \text{if } t>T,\end{cases}
\qquad\text{where } T = \inf\bigl\{t\geq 0,\ X_t=\widehat X_t\bigr\}.
\]
It is well known (since $X$ is a finite irreducible continuous time Markov chain) that there exists $\gamma>0$ such that, for any $x,\tilde x\in E$,
\[
\mathbb P_{x,\tilde x}(T>t) \leq e^{-\gamma t}.
\]
Let us now define, for any $t\geq 0$,
\[
Y_t = y\,e^{-\int_0^t\lambda(X_u)du} + \int_0^t e^{-\int_u^t\lambda(X_v)dv}\sigma(X_u)\,dB_u,
\qquad
\tilde Y_t = \tilde y\,e^{-\int_0^t\lambda(\tilde X_u)du} + \int_0^t e^{-\int_u^t\lambda(\tilde X_v)dv}\sigma(\tilde X_u)\,dB_u.
\]
Let us denote, for any $p<\kappa$ and $y,\tilde y\in\mathbb R$,
\[
C(p,x,y) = \sup_{t\geq 0}\mathbb E_{x,y}\bigl(|Y_t|^p\bigr) \quad\text{and}\quad C(p,x,y,\tilde x,\tilde y) = 2^p\bigl(C(p,x,y)+C(p,\tilde x,\tilde y)\bigr).
\]
Let $\alpha\in(0,1)$ and $s$ be the conjugate exponent of $\theta/p$. Theorem 1.7 ensures that
\[
\mathbb E_{(x,y),(\tilde x,\tilde y)}\bigl(|Y_t-\tilde Y_t|^p\bigr)
= \mathbb E_{(x,y),(\tilde x,\tilde y)}\bigl(|Y_t-\tilde Y_t|^p(\mathbf 1_{\{T>\alpha t\}}+\mathbf 1_{\{T\leq\alpha t\}})\bigr)
\leq C(\theta,x,y,\tilde x,\tilde y)^{p/\theta}e^{-\gamma\alpha t/s} \tag{20}
\]
\[
\qquad + \mathbb E_{(x,y),(\tilde x,\tilde y)}\Bigl(|Y_T-\tilde Y_T|^p\,C(p)\,e^{-\eta_p(t-T)}\mathbf 1_{\{T\leq\alpha t\}}\Bigr)
\leq C(p)\,C(\theta,x,y,\tilde x,\tilde y)^{p/\theta}\bigl(e^{-\gamma\alpha t/s}+e^{-\eta_p(1-\alpha)t}\bigr).
\]
Choosing $\alpha$ in order to have $\gamma\alpha/s=\eta_p(1-\alpha)$, i.e. $\alpha=\frac{s\eta_p}{\gamma+s\eta_p}$, leads to
\[
\mathbb E_{(x,y),(\tilde x,\tilde y)}\bigl(|Y_t-\tilde Y_t|^p\bigr) \leq C(p)\,C(\theta,x,y,\tilde x,\tilde y)^{p/\theta}\exp\Bigl(-\frac{\gamma\eta_p}{\gamma+s\eta_p}\,t\Bigr).
\]

Let us now turn to the case of general initial conditions. Let $\pi$ and $\tilde\pi$ be two probability measures on $E\times\mathbb R$ such that the second marginal has a finite $\theta$th moment. Let us start coupling the marginals $\mu$ and $\tilde\mu$ on $E$. Define the coupling probability $p_c$ by
\[
p_c = \sum_{x\in E}\mu(x)\wedge\tilde\mu(x), \quad\text{and}\quad D = \{x\in E,\ \mu(x)\geq\tilde\mu(x)\}.
\]
We introduce the random variables $U$, $V$, $W$ and $Z$ such that, for any $x\in E$,
\[
\mathbb P(U=x) = \frac{\mu(x)\wedge\tilde\mu(x)}{p_c}, \qquad
\mathbb P(V=x) = \frac{\mu(x)-\tilde\mu(x)}{1-p_c}\,\mathbf 1_D(x), \qquad
\mathbb P(W=x) = \frac{\tilde\mu(x)-\mu(x)}{1-p_c}\,\mathbf 1_{D^c}(x),
\]
and $\mathbb P(Z=1)=1-\mathbb P(Z=0)=p_c$, $Z$ being independent of $(U,V,W)$. We can now define
\[
X_0 = \begin{cases} U & \text{if } Z=1,\\ V & \text{if } Z=0,\end{cases}
\qquad
\tilde X_0 = \begin{cases} U & \text{if } Z=1,\\ W & \text{if } Z=0.\end{cases}
\]
We check by a standard computation that the law of $X_0$ (resp. $\tilde X_0$) is $\mu$ (resp. $\tilde\mu$).

Now, for any $x\in E$, let us introduce two random variables $Y_0^x$ and $\tilde Y_0^x$, independent of $(U,V,W,Z)$, such that
\[
\mathbb E\bigl(|Y_0^x-\tilde Y_0^x|^p\bigr) = W_p\bigl(\mathcal L(Y_0\mid X_0=x),\mathcal L(\tilde Y_0\mid\tilde X_0=x)\bigr)^p.
\]
With this construction $(X_0,Y_0^{X_0})$ has law $\pi$ and $(\tilde X_0,\tilde Y_0^{\tilde X_0})$ has law $\tilde\pi$. We consider the processes $(X,Y)$ and $(\tilde X,\tilde Y)$ with these initial conditions, the sticky Markov chains and the same Brownian motion. Thanks to the previous computations, we have
\[
\mathbb E\bigl(|Y_t-\tilde Y_t|^p\bigr) = \mathbb E\bigl(|Y_t-\tilde Y_t|^p(\mathbf 1_{\{X_0=\tilde X_0\}}+\mathbf 1_{\{X_0\neq\tilde X_0\}})\bigr)
\leq \mathbb E\bigl(\mathbf 1_{\{X_0=\tilde X_0\}}|Y_0^{X_0}-\tilde Y_0^{\tilde X_0}|^p\bigr)\,e^{-\eta_p t}
+ C(p)\,\mathbb E\Bigl(\mathbf 1_{\{X_0\neq\tilde X_0\}}C(\theta,X_0,Y_0^{X_0},\tilde X_0,\tilde Y_0^{\tilde X_0})^{p/\theta}\Bigr)\exp\Bigl(-\frac{\gamma\eta_p}{\gamma+s\eta_p}\,t\Bigr).
\]
On the one hand, we have
\[
\mathbb E\bigl(\mathbf 1_{\{X_0=\tilde X_0\}}|Y_0^{X_0}-\tilde Y_0^{\tilde X_0}|^p\bigr)
= \mathbb E\Bigl(\mathbf 1_{\{X_0=\tilde X_0\}}\,\mathbb E\bigl(|Y_0^{X_0}-\tilde Y_0^{X_0}|^p\mid X_0=\tilde X_0\bigr)\Bigr)
\leq p_c\,W_p^p,
\]
where $W_p = \max_{x\in E}W_p\bigl(\mathcal L(Y_0\mid X_0=x),\mathcal L(\tilde Y_0\mid\tilde X_0=x)\bigr)$. On the other hand,
\[
\mathbb E\Bigl(\mathbf 1_{\{X_0\neq\tilde X_0\}}C(\theta,X_0,Y_0^{X_0},\tilde X_0,\tilde Y_0^{\tilde X_0})^{p/\theta}\Bigr)
\leq \mathbb P\bigl(X_0\neq\tilde X_0\bigr)^{1/s}\,\mathbb E\Bigl(C(\theta,X_0,Y_0^{X_0},\tilde X_0,\tilde Y_0^{\tilde X_0})\Bigr)^{p/\theta}.
\]
As a conclusion we get the following bound:
\[
W_p\bigl(\mathcal L(Y_t),\mathcal L(\tilde Y_t)\bigr)^p \leq C(p)\,(1-p_c)^{1/s}\,M(\theta)^{p/\theta}\exp\Bigl(-\frac{\gamma\eta_p}{\gamma+s\eta_p}\,t\Bigr) + p_c\,W_p^p\,e^{-\eta_p t},
\]
where
\[
M(\theta)^{p/\theta} = 2^p\Bigl(\mathbb E\bigl(C(\theta,X_0,Y_0)\bigr) + \mathbb E\bigl(C(\theta,\tilde X_0,\tilde Y_0)\bigr)\Bigr)^{p/\theta}.
\]
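The constant $\gamma$ in Theorem 5.1 controls how fast two independent copies of $X$ meet. On a small made-up example this can be made explicit: $\mathbb P_{x,\tilde x}(T>t)$ is governed by the sub-generator of the product chain on the off-diagonal states (jumps onto the diagonal act as killing), so minus the largest real part of its spectrum gives the asymptotic decay rate of the meeting-time tail; the $\gamma$ of the statement may have to be taken slightly smaller, or the bound completed by a constant, to hold for all $t$.

```python
import numpy as np

A = np.array([[-1.0, 1.0], [2.0, -2.0]])     # illustrative generator (not from the paper)
d = A.shape[0]

# Generator of two independent copies (X, Xhat) on E x E.
I = np.eye(d)
G = np.kron(A, I) + np.kron(I, A)

# Keep only the off-diagonal states {(x, xhat) : x != xhat}; transitions onto the
# diagonal (the meeting time T) act as killing, so those columns are dropped.
off = [i * d + j for i in range(d) for j in range(d) if i != j]
B = G[np.ix_(off, off)]

gamma_rate = -np.linalg.eigvals(B).real.max()
print("asymptotic decay rate of P(T > t):", gamma_rate)
```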
References

[1] G. Alsmeyer, A. Iksanov, and U. Rösler, On distributional properties of perpetuities, J. Theoret. Probab. (2009), no. 3, 666–682. MR MR2530108

[2] G. K. Basak, A. Bisi, and M. K. Ghosh, Stability of a random diffusion with linear drift, J. Math. Anal. Appl. (1996), no. 2, 604–622. MR MR1406250 (97g:60091)

[3] B. de Saporta, Tail of the stationary solution of the stochastic equation $Y_{n+1}=a_nY_n+b_n$ with Markovian coefficients, Stochastic Process. Appl. (2005), no. 12, 1954–1978. MR MR2178503 (2006g:60129)

[4] B. de Saporta and J.-F. Yao, Tail of a linear diffusion with Markov switching, Ann. Appl. Probab. (2005), no. 1B, 992–1018. MR MR2114998 (2005k:60257)

[5] N. Dunford and J. T. Schwartz, Linear operators. Part I, Wiley Classics Library, John Wiley & Sons Inc., New York, 1988. General theory, with the assistance of William G. Bade and Robert G. Bartle, reprint of the 1958 original, a Wiley-Interscience Publication. MR MR1009162 (90g:47001a)

[6] C. M. Goldie and R. Grübel, Perpetuities with thin tails, Adv. in Appl. Probab. (1996), no. 2, 463–480. MR MR1387886 (97f:60124)

[7] X. Guyon, S. Iovleff, and J.-F. Yao, Linear diffusion with stationary switching regime, ESAIM Probab. Stat. (2004), 25–35 (electronic). MR MR2085603 (2005h:60244)

[8] P. Hitczenko and J. Wesołowski, Perpetuities with thin tails revisited, Ann. Appl. Probab. (2009), no. 6, 2080–2101.

[9] T. Kato, Perturbation theory for linear operators, Classics in Mathematics, Springer-Verlag, Berlin, 1995. Reprint of the 1980 edition. MR MR1335452 (96a:47025)

[10] J. R. Norris, Markov chains, Cambridge Series in Statistical and Probabilistic Mathematics, 1997.

[11] C. Villani, Topics in optimal transportation, Graduate Studies in Mathematics, vol. 58, American Mathematical Society, Providence, RI, 2003. MR MR1964483 (2004e:90003)

Compiled October 15, 2018.

Jean-Baptiste Bardet, e-mail: jean-baptiste.bardet(AT)univ-rouen.fr
UMR 6085 CNRS, Laboratoire de Mathématiques Raphaël Salem (LMRS), Université de Rouen, Avenue de l'Université, BP 12, F-76801 Saint-Étienne-du-Rouvray, France.

Hélène Guérin, e-mail: helene.guerin(AT)univ-rennes1.fr
UMR 6625 CNRS, Institut de Recherche Mathématique de Rennes (IRMAR), Université de Rennes I, Campus de Beaulieu, F-35042 Rennes Cedex, France.

Florent Malrieu, corresponding author, e-mail: florent.malrieu(AT)univ-rennes1.fr