Revisiting the Ruelle thermodynamic formalism for Markov trajectories with application to the glassy phase of random trap models
Cécile Monthus
Institut de Physique Théorique, Université Paris Saclay, CNRS, CEA, 91191 Gif-sur-Yvette, France
The Ruelle thermodynamic formalism for dynamical trajectories over the large time T corresponds to the large deviation theory for the information per unit time of the trajectory probabilities. The microcanonical analysis consists in evaluating the exponential growth in T of the number of trajectories with a given information per unit time, while the canonical analysis amounts to analyzing the appropriate non-conserved β-deformed dynamics in order to obtain the scaled cumulant generating function of the information, the first cumulant being the famous Kolmogorov-Sinai entropy. This framework is described in detail for discrete-time Markov chains and for continuous-time Markov jump processes converging towards some steady state, where one can also construct the Doob generator of the associated β-conditioned process. The application to the Directed Random Trap model on a ring of L sites allows one to illustrate this general framework via explicit results for all the introduced notions. In particular, the glassy phase is characterized by anomalous scaling laws with the size L and by non-self-averaging properties of the Kolmogorov-Sinai entropy and of the higher cumulants of the trajectory information.

I. INTRODUCTION
The Ruelle thermodynamic formalism for dynamical trajectories has allowed one to make the link between the field of chaotic dynamical systems and the statistical physics of equilibrium [1] and non-equilibrium [2–4]. In particular, the application of the Ruelle thermodynamic formalism to Markov processes [3, 5–13] has put forward the Kolmogorov-Sinai entropy as an essential observable to characterize the stochastic trajectories. The unifying language is actually the theory of large deviations (see the reviews [14, 16] and references therein), which has more generally led to major advances in the analysis of non-equilibrium steady states (see the reviews with different scopes [17–25], the PhD theses [6, 26–28] and the HDR thesis [29]). In particular, the analysis of generating functions of time-additive observables via deformed Markov operators has been used extensively in many models [6–12, 17, 22–25, 29–54], with the formulation of the corresponding 'conditioned' process via the generalization of Doob's h-transform. Within this perspective, the Ruelle thermodynamic formalism can be rephrased as the large deviation theory of the information per unit time of the trajectory probabilities. For Markov processes, this information is a time-additive observable, whose average is the Kolmogorov-Sinai entropy, while it is also important to analyze its higher cumulants, in particular its variance. It is thus interesting to revisit the Ruelle thermodynamic formalism both for discrete-time Markov chains and for continuous-time Markov jump processes in order to study the generating function of the information via the appropriate β-deformed Markov generator and the Doob generator of the corresponding β-conditioned process. As examples of applications where this analysis leads to very explicit results, we will consider the Directed Random Trap model [55–58], whose large deviation properties have been previously studied for the current [59, 60] and from the point of view of inference [61].
Note that besides this Directed Random Trap model, many other trap models have also been analyzed in the context of anomalously slow glassy behaviors [62–71]. Our main conclusion will be that the glassy phase of the Directed Random Trap model on a ring of size L can be characterized via the anomalous scaling with the size L and the non-self-averaging properties of the Kolmogorov-Sinai entropy and of the higher cumulants of the information.

The paper is organized as follows. In section II, the Ruelle thermodynamic formalism for dynamical trajectories is described as the large deviation theory for the information per unit time of the trajectory probabilities, with the corresponding microcanonical and canonical perspectives. The application to discrete-time Markov chains with steady state is described in section III, while the example of the discrete-time random trap model on the ring is studied in section IV. The application to continuous-time Markov jump processes with steady state is presented in section V, while the example of the continuous-time random trap model on the ring is analyzed in section VI. Our conclusions are summarized in section VII. Appendix A contains a reminder on the perturbation theory for an isolated eigenvalue of a non-symmetric matrix, while Appendix B contains a reminder on Lévy stable laws.

II. REMINDER ON THE RUELLE THERMODYNAMIC FORMALISM FOR TRAJECTORIES
One considers all the possible stochastic trajectories x(0 ≤ t ≤ T) over the large time T with their probabilities or probability densities P[x(0 ≤ t ≤ T)] normalized to unity

1 = Σ_{x(0 ≤ t ≤ T)} P[x(0 ≤ t ≤ T)] ≡ Σ_{x(.)} P[x(.)]   (1)

where the last simplified notation will be used in order to ease the reading of some equations. The average of an observable A[x(0 ≤ t ≤ T)] of the trajectory x(0 ≤ t ≤ T) with respect to the probability measure of Eq. 1 will be denoted by

⟨A[x(.)]⟩ ≡ Σ_{x(.)} P[x(.)] A[x(.)]   (2)

A. Microcanonical analysis: counting the trajectories with a given intensive information I

In the microcanonical analysis, the trajectories x(0 ≤ t ≤ T) are classified with respect to their intensive information I, i.e. their information per unit time

I[x(0 ≤ t ≤ T)] ≡ − ln P[x(0 ≤ t ≤ T)] / T   (3)

which is a positive observable I[x(.)] ≥ 0. The number of trajectories with a given intensive information I

Ω_T(I) ≡ Σ_{x(0 ≤ t ≤ T)} δ(I − I[x(0 ≤ t ≤ T)])   (4)

grows exponentially with the time T

Ω_T(I) ≃_{T→+∞} e^{T s(I)}   (5)

where s(I) represents the intensive microcanonical Boltzmann entropy of the trajectories with intensive information I. The region of positive entropy s(I) ≥ 0 defines the interval [I_min, I_max] of possible informations I

s(I) ≥ 0 for I_min ≤ I ≤ I_max   (6)

The physical meaning is that I_min corresponds to the trajectory with the highest individual trajectory probability

max_{x(0 ≤ t ≤ T)} P[x(0 ≤ t ≤ T)] ≃_{T→+∞} e^{−T I_min}   (7)

while I_max corresponds to the trajectory with the smallest individual trajectory probability

min_{x(0 ≤ t ≤ T)} P[x(0 ≤ t ≤ T)] ≃_{T→+∞} e^{−T I_max}   (8)
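As a toy illustration of the microcanonical counting of Eqs 4-6, the following sketch (pure Python; the two step probabilities are illustrative assumptions, not taken from the paper) enumerates all trajectories of a trivial process whose T steps are drawn independently, bins them by their intensive information I, and checks that I_min and I_max come from the most and least probable trajectories as in Eqs 7-8.

```python
import math
from collections import Counter

# Brute-force microcanonical count Omega_T(I) for a toy ensemble where each of
# the T steps independently takes value 0 (probability p0) or 1 (probability p1).
# The step probabilities below are illustrative assumptions, not from the paper.
p0, p1 = 0.8, 0.2
T = 12

omega = Counter()  # Omega_T(I): number of trajectories with information I (Eq. 4)
for traj in range(2 ** T):
    k = bin(traj).count("1")                       # number of steps of type 1
    logP = (T - k) * math.log(p0) + k * math.log(p1)
    I = -logP / T                                  # intensive information, Eq. 3
    omega[I] += 1

I_min, I_max = min(omega), max(omega)
s = {I: math.log(omega[I]) / T for I in omega}     # microcanonical entropy, Eq. 5

# Eqs 7-8: I_min / I_max come from the most / least probable trajectories
assert abs(I_min - (-math.log(p0))) < 1e-12
assert abs(I_max - (-math.log(p1))) < 1e-12
# the unique most probable trajectory gives s(I_min) = ln(1)/T = 0
assert s[I_min] == 0.0
```

Plotting s(I) against I for growing T makes the concave Boltzmann entropy of Eq. 5 visible; the values of p0, p1 are arbitrary.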
B. Canonical analysis via the dynamical partition function in terms of the parameter β

The dynamical partition function defined in terms of the parameter β

Z_T(β) ≡ Σ_{x(0 ≤ t ≤ T)} (P[x(0 ≤ t ≤ T)])^β ≡ Σ_{x(.)} (P[x(.)])^β   (9)

displays the exponential dependence with respect to the time T

Z_T(β) ≃_{T→+∞} e^{T ψ(β)}   (10)

where the sign of the coefficient ψ(β) changes at β = 1, where ψ(β = 1) = 0 as a consequence of the normalization of Eq. 1

ψ(β) ≥ 0 for β ≤ 1 ; ψ(β) ≤ 0 for β ≥ 1   (11)

The rewriting of Z_T(β) in terms of the number Ω_T(I) of trajectories with information I, using Eqs 4, 5 and 6, reads

Z_T(β) = ∫_{I_min}^{I_max} dI Ω_T(I) e^{−β T I} ≃_{T→+∞} ∫_{I_min}^{I_max} dI e^{T [s(I) − β I]}   (12)

For large time T → +∞, the saddle-point evaluation of this integral yields that the function ψ(β) introduced in Eq. 10 corresponds to the Legendre transform of the microcanonical Boltzmann entropy s(I)

s′(I) − β = 0 ; s(I) − β I = ψ(β)   (13)

while the reciprocal Legendre transform reads

0 = ψ′(β) + I ; s(I) = ψ(β) + β I   (14)

This Legendre transform holds in the interval I_min ≤ I ≤ I_max of Eq. 6, i.e. for β_{I_max} ≤ β ≤ β_{I_min} where

β_{I_min} = s′(I_min) ; β_{I_max} = s′(I_max)   (15)

Outside the interval β_{I_max} ≤ β ≤ β_{I_min}, the saddle point will remain frozen at the boundary I_min for β > β_{I_min} and at the boundary I_max for β < β_{I_max}, with the corresponding linear behaviors

ψ(β) = −β I_min for β > β_{I_min} ; ψ(β) = −β I_max for β < β_{I_max}   (16)

Let us now discuss some important values of the parameter β.

C. The special value β = 0 and the corresponding information I_{β=0}

For β = 0, the dynamical partition function of Eq. 9

Z_T(β = 0) = Σ_{x(0 ≤ t ≤ T)} 1 ≃_{T→+∞} ∫_{I_min}^{I_max} dI e^{T s(I)} ≃_{T→+∞} e^{T ψ(0)}   (17)

simply counts the total number of possible dynamical trajectories of length T. The value β = 0 is associated via the Legendre transform of Eq.
13 to the value I_{β=0} that maximizes the microcanonical entropy s(I)

s′(I_{β=0}) = 0 ; s(I_{β=0}) = ψ(β = 0)   (18)

i.e. the whole set of trajectories is dominated by the subset of trajectories having exactly this information I_{β=0}.

D. Series expansion of ψ(β) around β = 1 to obtain the cumulants of the intensive information

Via the change of notation β = 1 + ε, the dynamical partition function of Eq. 9 can be rewritten as the generating function of the moments of the information of Eq. 3

Z_T(β = 1 + ε) ≡ Σ_{x(.)} P[x(.)] e^{−ε T I[x(.)]}
= Σ_{x(.)} P[x(.)] [1 − ε T I[x(.)] + (ε² T²/2) I²[x(.)] + Σ_{n=3}^{+∞} ((−ε)^n / n!) T^n I^n[x(.)]]
= 1 − ε T ⟨I[x(.)]⟩ + (ε² T²/2) ⟨I²[x(.)]⟩ + Σ_{n=3}^{+∞} ((−ε)^n / n!) T^n ⟨I^n[x(.)]⟩
≃_{T→+∞} e^{T ψ(1+ε)} = e^{T [ε ψ′(1) + (ε²/2) ψ′′(1) + Σ_{n=3}^{+∞} (ε^n / n!) ψ^{(n)}(1)]}   (19)

So ψ(β = 1 + ε) represents the scaled cumulant generating function of the information: its power expansion in ε around ε = 0 allows one to evaluate the cumulant of order n of the information I[x(0 ≤ t ≤ T)] in terms of the n-th derivative ψ^{(n)}(1) at β = 1. Let us now discuss the physical meaning of the first two cumulants n = 1 and n = 2.
1. The averaged value of the information and the Kolmogorov-Sinai entropy h_KS = −ψ′(β = 1) = I_{β=1}

At order ε, Eq. 19 yields that the average of the intensive information I[x(0 ≤ t ≤ T)] over the trajectories x(0 ≤ t ≤ T) converges for large time T → +∞ towards the opposite of the derivative of ψ(β) at β = 1

⟨I[x(.)]⟩ ≃_{T→+∞} −ψ′(1)   (20)

This averaged value is known as the Kolmogorov-Sinai entropy, with the standard notation h_KS

h_KS ≡ lim_{T→+∞} [−(1/T) Σ_{x(0 ≤ t ≤ T)} P[x(0 ≤ t ≤ T)] ln(P[x(0 ≤ t ≤ T)])] = lim_{T→+∞} Σ_{x(.)} P[x(.)] I[x(.)] = −ψ′(1)   (21)

and describes the linear growth in T of the Shannon entropy S_dyn(T) associated with the probability distribution of the dynamical trajectories

S_dyn(T) ≡ −Σ_{x(0 ≤ t ≤ T)} P[x(0 ≤ t ≤ T)] ln(P[x(0 ≤ t ≤ T)]) ∝_{T→+∞} T h_KS   (22)

The reciprocal Legendre transform of Eq. 14 for β = 1 yields that the information value I_{β=1} associated to β = 1

0 = ψ′(1) + I_{β=1} ; s(I_{β=1}) = I_{β=1}   (23)

coincides with the Kolmogorov-Sinai entropy

I_{β=1} = −ψ′(1) = h_KS   (24)

that satisfies

s(h_KS) = h_KS   (25)

This means that the average over trajectories with their probabilities P[x(0 ≤ t ≤ T)] is actually dominated by the number (see Eq. 5)

Ω_T(h_KS) ≃_{T→+∞} e^{T s(h_KS)} = e^{T h_KS}   (26)

of trajectories associated to the information value I_{β=1} = h_KS, where all these trajectories have the same probability given by e^{−T h_KS} = 1/Ω_T(h_KS).
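For a toy ensemble of i.i.d. binary steps with probabilities p and 1 − p (an illustrative assumption, not a model from the paper), Z_T(β) factorizes, so ψ(β) = ln(p^β + (1−p)^β) is known in closed form. The minimal sketch below checks the normalization ψ(1) = 0 of Eq. 11, the identification h_KS = −ψ′(1) of Eq. 21 with the Shannon entropy of a single step, and the fixed-point property s(h_KS) = h_KS of Eq. 25 via the Legendre transform of Eq. 14.

```python
import math

# Toy i.i.d. binary step ensemble with illustrative probability p:
# Z_T(beta) = (p**beta + (1-p)**beta)**T, hence psi(beta) in closed form.
p = 0.8

def psi(beta):
    return math.log(p ** beta + (1.0 - p) ** beta)

# Eq. 21: h_KS = -psi'(1), estimated by a symmetric finite difference
eps = 1e-6
h_KS = -(psi(1.0 + eps) - psi(1.0 - eps)) / (2.0 * eps)

# For independent steps, h_KS is the Shannon entropy of a single step
h_shannon = -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)

# Eq. 14 at beta = 1, i.e. Eq. 25: s(h_KS) = psi(1) + 1 * h_KS = h_KS
s_at_hKS = psi(1.0) + h_KS
```

The same finite-difference device applied at higher orders gives the higher cumulants of Eq. 19.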
2. The second derivative ψ′′(β = 1) as the scaled variance of the intensive information

The expansion at order ε² of Eq. 19 yields that the second derivative ψ′′(β = 1) corresponds to the scaled variance V_KS of the intensive information I[x(0 ≤ t ≤ T)]

V_KS ≡ lim_{T→+∞} [T (⟨I²[x(.)]⟩ − ⟨I[x(.)]⟩²)] = ψ′′(1)   (27)

and thus characterizes the fluctuations around the averaged value I_{β=1} = −ψ′(1) = h_KS discussed above.

III. DISCRETE-TIME MARKOV CHAINS WITH STEADY-STATE
In this section, we focus on the Markov chain dynamics for the probability P_y(t) to be at position y at time t

P_x(t + 1) = Σ_y W_{x,y} P_y(t)   (28)

with the normalization of the Markov matrix

Σ_x W_{x,y} = 1   (29)

in order to apply the formalism described in the previous section.

A. Steady state and finite-time propagator
We will assume that the normalized steady state P*_x of Eq. 28

P*_x = Σ_y W_{x,y} P*_y   (30)

exists. From the point of view of the Perron-Frobenius theorem, Eqs 29 and 30 mean that unity is the highest eigenvalue of the positive Markov matrix W(., .), where the positive left eigenvector l_x is trivial

l_x = 1   (31)

while the right eigenvector r_x is the steady state

r_x = P*_x   (32)

The whole spectral decomposition of the matrix W

W = |r⟩⟨l| + Σ_k e^{−ζ_k} |ζ^R_k⟩⟨ζ^L_k|   (33)

involving the other eigenvalues e^{−ζ_k} < 1 labelled by k, with their right eigenvectors |ζ^R_k⟩ and their left eigenvectors ⟨ζ^L_k| satisfying the closure relation

1 = |r⟩⟨l| + Σ_k |ζ^R_k⟩⟨ζ^L_k|   (34)

is useful to describe the relaxation of the finite-time propagator ⟨x|W^t|x_0⟩ towards the steady state P*_x

⟨x|W^t|x_0⟩ = ⟨x|r⟩⟨l|x_0⟩ + Σ_k e^{−t ζ_k} ⟨x|ζ^R_k⟩⟨ζ^L_k|x_0⟩ = P*_x + Σ_k e^{−t ζ_k} ⟨x|ζ^R_k⟩⟨ζ^L_k|x_0⟩   (35)

B. The intensive information I as a time-additive observable of the trajectory x(0 ≤ t ≤ T)

When the initial position x(0) is drawn with the steady-state distribution P* of Eq. 30, the probability of the trajectory x(0 ≤ t ≤ T) reads

P[x(0 ≤ t ≤ T)] = [∏_{t=1}^{T} W_{x(t),x(t−1)}] P*_{x(0)} = [∏_{t=1}^{T} ⟨x(t)|W|x(t−1)⟩] ⟨x(0)|r⟩   (36)

If one is interested only in the joint distribution of the positions x(t_n) at the N times t_1 < t_2 < ...
< t_N, one just needs to sum over the possible intermediate positions at the other times with the closure relation to obtain

p[x(t_N); ...; x(t_2); x(t_1)] = (∏_{n=2}^{N} ⟨x(t_n)|W^{t_n − t_{n−1}}|x(t_{n−1})⟩) ⟨x(t_1)|r⟩   (37)

in terms of the finite-time propagator of Eq. 35 and of the steady state ⟨x(t_1)|r⟩ = P*_{x(t_1)}. The intensive information of Eq. 3 associated to the trajectory probabilities of Eq. 36

I[x(0 ≤ t ≤ T)] ≡ −ln P[x(0 ≤ t ≤ T)] / T = −(1/T) Σ_{t=1}^{T} ln(W_{x(t),x(t−1)}) − (1/T) ln(P*_{x(0)})   (38)

corresponds to the sum over the time t of the time-local observable ln(W_{x(t),x(t−1)}) of the trajectory x(0 ≤ t ≤ T). As a consequence, the averaged value of the information with the trajectory probabilities of Eq. 36 only requires the knowledge of the partial distribution of Eq. 37 for N = 2 consecutive times

p[x(t); x(t − 1)] = W_{x(t),x(t−1)} P*_{x(t−1)}   (39)

in order to compute

⟨ln(W_{x(t),x(t−1)})⟩ = Σ_{x(t)} Σ_{x(t−1)} p[x(t); x(t − 1)] ln(W_{x(t),x(t−1)}) = Σ_x Σ_y W_{x,y} P*_y ln(W_{x,y})   (40)

which is independent of the time t as a consequence of the stationarity of the dynamics. So the average over trajectories of the information of Eq. 38 reduces to

⟨I[x(.)]⟩ = −(1/T) Σ_{t=1}^{T} ⟨ln(W_{x(t),x(t−1)})⟩ − (1/T) ⟨ln(P*_{x(0)})⟩ = −Σ_x Σ_y W_{x,y} P*_y ln(W_{x,y}) − (1/T) Σ_{x(0)} P*_{x(0)} ln(P*_{x(0)})   (41)

For T → +∞, the last contribution of order 1/T involving the initial condition disappears and one recovers the standard expression for the Kolmogorov-Sinai entropy of Eq.
21 for discrete-time Markov chains with steady state

h_KS = Σ_y P*_y [−Σ_x W_{x,y} ln(W_{x,y})]   (42)

So the Kolmogorov-Sinai entropy h_KS can be explicitly computed in any model where the steady state P* is known.

C. The scaled variance V_KS of the information I in terms of the temporal correlations of the flows

Similarly for T → +∞, the contribution of the initial condition to the information of Eq. 38 disappears in the scaled variance V_KS of Eq. 27 and one obtains

V_KS = lim_{T→+∞} [T (⟨I²[x(.)]⟩ − ⟨I[x(.)]⟩²)]
= lim_{T→+∞} [(1/T) Σ_{t=1}^{T} (⟨ln²(W_{x(t),x(t−1)})⟩ − ⟨ln(W_{x(t),x(t−1)})⟩²)]
+ lim_{T→+∞} [(2/T) Σ_{t=1}^{T−1} Σ_{τ=0}^{T−t−1} (⟨ln(W_{x(t+τ+1),x(t+τ)}) ln(W_{x(t),x(t−1)})⟩ − ⟨ln(W_{x(t+τ+1),x(t+τ)})⟩ ⟨ln(W_{x(t),x(t−1)})⟩)]   (43)

The first line and the last term on the second line only involve the partial distribution of Eq. 39 for N = 2 consecutive times, while the first term on the second line involves the partial distribution of Eq. 37 for N = 4 times, where the two first times and the two last times are consecutive, while the intermediate time τ between the second time and the third time is arbitrary

p[x(t+τ+1); x(t+τ); x(t); x(t−1)] = W_{x(t+τ+1),x(t+τ)} ⟨x(t+τ)|W^τ|x(t)⟩ W_{x(t),x(t−1)} P*_{x(t−1)}

so that the final result reads

V_KS = Σ_{x,y} W_{x,y} P*_y ln²(W_{x,y}) − [Σ_{x,y} W_{x,y} P*_y ln(W_{x,y})]² + 2 Σ_{x,y,x′,y′} W_{x′,x} ln(W_{x′,x}) G_{x,y} W_{y,y′} ln(W_{y,y′}) P*_{y′}   (44)

where the notation G_{x,y} represents the sum over the time τ of the difference between the finite-time propagator ⟨x|W^τ|y⟩ of Eq.
35 and its infinite-time limit P*_x

G_{x,y} ≡ Σ_{τ=0}^{+∞} [⟨x|W^τ|y⟩ − P*_x] = Σ_{τ=0}^{+∞} Σ_k e^{−τ ζ_k} ⟨x|ζ^R_k⟩⟨ζ^L_k|y⟩ = Σ_k ⟨x|ζ^R_k⟩⟨ζ^L_k|y⟩ / (1 − e^{−ζ_k})   (45)

At the operator level, this Green function

G = Σ_k |ζ^R_k⟩⟨ζ^L_k| / (1 − e^{−ζ_k}) satisfying G (1 − W) = (1 − W) G = 1 − |r⟩⟨l|   (46)

represents the inverse of the operator (1 − W) within the subspace orthogonal to |r⟩⟨l|. So the explicit computation of the scaled variance V_KS requires not only the knowledge of the steady state P*, but also the knowledge of the Green function G. This Green function is well known in the perturbation theory of isolated eigenvalues (see Appendix A) and appears more directly in the canonical analysis as explained below. However, the more pedestrian derivation described above allows one to see the link with the temporal correlations, as explained previously for additive observables of Fokker-Planck dynamics in the context of Anderson localization [72].

D. Canonical analysis via the β-deformed matrix ˜W^{[β]}_{x,y}

Plugging the trajectory probabilities of Eq. 36 into the dynamical partition function of Eq. 9

Z_T(β) = Σ_{x(0 ≤ t ≤ T)} (P[x(0 ≤ t ≤ T)])^β = Σ_{x(0 ≤ t ≤ T)} [P*_{x(0)}]^β ∏_{t=1}^{T} [W_{x(t),x(t−1)}]^β ≃_{T→+∞} e^{T ψ(β)}   (47)

yields that one needs to consider the β-deformed matrix

˜W^{[β]}_{x,y} ≡ [W_{x,y}]^β   (48)

Then e^{ψ(β)} corresponds to its highest eigenvalue that will dominate the deformed propagator for large T

⟨x_T|(˜W^{[β]})^T|x_0⟩ ≃_{T→+∞} e^{T ψ(β)} ˜r^{[β]}_{x_T} ˜l^{[β]}_{x_0}   (49)

where ˜r^{[β]} and ˜l^{[β]}
are the corresponding positive right and left eigenvectors of the Perron-Frobenius theorem

e^{ψ(β)} ˜r^{[β]}_x = Σ_y ˜W^{[β]}_{x,y} ˜r^{[β]}_y = Σ_y [W_{x,y}]^β ˜r^{[β]}_y
e^{ψ(β)} ˜l^{[β]}_y = Σ_x ˜l^{[β]}_x ˜W^{[β]}_{x,y} = Σ_x ˜l^{[β]}_x [W_{x,y}]^β   (50)

with the normalization

Σ_x ˜l^{[β]}_x ˜r^{[β]}_x = 1   (51)

E. Perturbation theory for the highest eigenvalue e^{ψ(β=1+ε)} at second order in ε

The perturbation theory in β = 1 + ε of the deformed matrix of Eq. 48

˜W^{[β=1+ε]}_{x,y} = [W_{x,y}]^{1+ε} = W_{x,y} + ε W^{(1)}_{x,y} + ε² W^{(2)}_{x,y} + O(ε³)   (52)

involves the first-order and second-order perturbations

W^{(1)}_{x,y} = W_{x,y} ln(W_{x,y}) ; W^{(2)}_{x,y} = W_{x,y} ln²(W_{x,y}) / 2   (53)

The perturbation theory for its highest eigenvalue

e^{ψ(β=1+ε)} = e^{ψ(1) + ε ψ′(1) + (ε²/2) ψ′′(1) + O(ε³)} = 1 + ε ψ′(1) + (ε²/2) [ψ′′(1) + (ψ′(1))²] + O(ε³)   (54)

is recalled in Appendix A and yields the following results at first order and second order respectively.
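This canonical analysis is easy to check numerically. The sketch below (pure Python; the 3 × 3 column-stochastic matrix is an arbitrary illustrative choice, not from the paper) diagonalizes the β-deformed matrix of Eq. 48 by power iteration, reads off ψ(β) as the logarithm of its Perron eigenvalue, and verifies both ψ(1) = 0 and the identity h_KS = −ψ′(1) against the explicit steady-state formula of Eq. 42.

```python
import math

# An arbitrary illustrative 3x3 column-stochastic matrix
# (columns sum to one, as in Eq. 29, with the convention W[x][y] = W_{x,y}).
W = [[0.6, 0.3, 0.1],
     [0.3, 0.5, 0.2],
     [0.1, 0.2, 0.7]]
n = len(W)

def perron(M, iters=5000):
    """Largest eigenvalue and sum-normalized right eigenvector of a
    positive matrix, via power iteration."""
    v = [1.0 / len(M)] * len(M)
    lam = 1.0
    for _ in range(iters):
        w = [sum(M[x][y] * v[y] for y in range(len(M))) for x in range(len(M))]
        lam = sum(w)        # valid normalization: all entries stay positive
        v = [wi / lam for wi in w]
    return lam, v

def psi(beta):
    """psi(beta) = log of the Perron eigenvalue of the deformed matrix (Eq. 48)."""
    Wb = [[W[x][y] ** beta for y in range(n)] for x in range(n)]
    lam, _ = perron(Wb)
    return math.log(lam)

# Steady state P* = right Perron eigenvector of W itself (Eqs 30-32)
_, Pstar = perron(W)

# Eq. 42: explicit Kolmogorov-Sinai entropy
h_KS = sum(Pstar[y] * sum(-W[x][y] * math.log(W[x][y]) for x in range(n))
           for y in range(n))

# Eq. 21 via a symmetric finite difference: h_KS = -psi'(1)
eps = 1e-5
h_from_psi = -(psi(1.0 + eps) - psi(1.0 - eps)) / (2.0 * eps)
```

The same routine evaluated at other β values gives the full scaled cumulant generating function of Eq. 47 for this toy matrix.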
1. First-order perturbation to recover the Kolmogorov-Sinai entropy h_KS = −ψ′(1)

Using the unperturbed left and right eigenvectors of Eqs 31 and 32, one obtains that the first-order correction of Eq. A10 for the eigenvalue of Eq. 54 reads

ψ′(1) = ⟨l|W^{(1)}|r⟩ = Σ_{x,y} l_x W^{(1)}_{x,y} r_y = Σ_{x,y} W_{x,y} ln(W_{x,y}) P*_y   (55)

in agreement with the expression of Eq. 42 for the Kolmogorov-Sinai entropy h_KS = −ψ′(1).
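The scaled variance formula of Eq. 44 can also be checked numerically in a pure-Python sketch (again on an arbitrary illustrative 3 × 3 column-stochastic matrix): the Green function of Eq. 45 is evaluated as a truncated, geometrically convergent sum over τ, and the resulting V_KS is compared with a finite-difference estimate of ψ′′(1) obtained from the β-deformed matrix of Eq. 48.

```python
import math

# An arbitrary illustrative 3x3 column-stochastic matrix (W[x][y] = W_{x,y})
W = [[0.6, 0.3, 0.1],
     [0.3, 0.5, 0.2],
     [0.1, 0.2, 0.7]]
n = len(W)

def perron(M, iters=5000):
    v = [1.0 / n] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(M[x][y] * v[y] for y in range(n)) for x in range(n)]
        lam = sum(w)
        v = [wi / lam for wi in w]
    return lam, v

_, Pstar = perron(W)

# Green function of Eq. 45 as a truncated sum over tau of (W^tau - |r><l|);
# the truncation error decays geometrically with the spectral gap.
G = [[0.0] * n for _ in range(n)]
Wt = [[float(x == y) for y in range(n)] for x in range(n)]   # W^0 = identity
for tau in range(400):
    for x in range(n):
        for y in range(n):
            G[x][y] += Wt[x][y] - Pstar[x]
    Wt = [[sum(W[x][k] * Wt[k][y] for k in range(n)) for y in range(n)]
          for x in range(n)]                                  # now W^{tau+1}

lnW = [[math.log(W[x][y]) for y in range(n)] for x in range(n)]

# Eq. 44 for the scaled variance V_KS
m1 = sum(W[x][y] * Pstar[y] * lnW[x][y] for x in range(n) for y in range(n))
m2 = sum(W[x][y] * Pstar[y] * lnW[x][y] ** 2 for x in range(n) for y in range(n))
cross = sum(W[xp][x] * lnW[xp][x] * G[x][y] * W[y][yp] * lnW[y][yp] * Pstar[yp]
            for xp in range(n) for x in range(n)
            for y in range(n) for yp in range(n))
V_KS = m2 - m1 ** 2 + 2.0 * cross

# Cross-check against a finite-difference psi''(1) from the deformed matrix
def psi(beta):
    Wb = [[W[x][y] ** beta for y in range(n)] for x in range(n)]
    lam, _ = perron(Wb)
    return math.log(lam)

eps = 1e-3
psi2 = (psi(1.0 + eps) - 2.0 * psi(1.0) + psi(1.0 - eps)) / eps ** 2
```

The agreement between V_KS and ψ′′(1) illustrates the equivalence between the correlation-function derivation of Eq. 44 and the second-order perturbation theory below.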
2. Second-order perturbation to recover the scaled variance V_KS = ψ′′(1)

The second-order correction of Eq. A21 for the eigenvalue of Eq. 54 reads in terms of the unperturbed left and right eigenvectors of Eqs 31 and 32

(1/2) [ψ′′(1) + (ψ′(1))²] = ⟨l|W^{(1)} G W^{(1)}|r⟩ + ⟨l|W^{(2)}|r⟩ = Σ_{x,y,x′,y′} W_{x′,x} ln(W_{x′,x}) G_{x,y} W_{y,y′} ln(W_{y,y′}) P*_{y′} + (1/2) Σ_{x,y} W_{x,y} ln²(W_{x,y}) P*_y   (56)

where the Green function satisfies the matrix Eqs A15 and A16 and thus coincides with Eq. 46. The equations A15 and A16 for the Green function read more explicitly in coordinates

G_{x,y} − Σ_{x′} W_{x,x′} G_{x′,y} = δ_{x,y} − P*_x ; G_{x,y} − Σ_{y′} G_{x,y′} W_{y′,y} = δ_{x,y} − P*_x
Σ_x G_{x,y} = 0 ; Σ_y G_{x,y} P*_y = 0   (57)

The final result for ψ′′(1)

ψ′′(1) = 2 Σ_{x,y,x′,y′} W_{x′,x} ln(W_{x′,x}) G_{x,y} W_{y,y′} ln(W_{y,y′}) P*_{y′} + Σ_{x,y} W_{x,y} ln²(W_{x,y}) P*_y − [Σ_{x,y} W_{x,y} ln(W_{x,y}) P*_y]²   (58)

coincides with Eq. 44 for the scaled variance V_KS = ψ′′(1) as it should (Eq. 27).

F. Conditioned process constructed via the generalization of Doob's h-transform
The normalized probability to be at position x at some interior time 0 ≪ t ≪ T for the dynamics generated by the β-deformed matrix of Eq. 48 reads, using the spectral asymptotic form of Eq. 49 for both time intervals [0, t] and [t, T],

˜P_x(t) = ⟨x_T|(˜W^{[β]})^{T−t}|x⟩ ⟨x|(˜W^{[β]})^{t}|x_0⟩ / Σ_{x′} ⟨x_T|(˜W^{[β]})^{T−t}|x′⟩ ⟨x′|(˜W^{[β]})^{t}|x_0⟩
≃_{0≪t≪T} e^{(T−t) ψ(β)} ˜r^{[β]}_{x_T} ˜l^{[β]}_x e^{t ψ(β)} ˜r^{[β]}_x ˜l^{[β]}_{x_0} / Σ_{x′} e^{(T−t) ψ(β)} ˜r^{[β]}_{x_T} ˜l^{[β]}_{x′} e^{t ψ(β)} ˜r^{[β]}_{x′} ˜l^{[β]}_{x_0} ≃_{0≪t≪T} ˜l^{[β]}_x ˜r^{[β]}_x   (59)

Since it is independent of the interior time t as long as 0 ≪ t ≪ T, it is useful to introduce the notation

˜˜ρ^{[β]}_x ≡ ˜l^{[β]}_x ˜r^{[β]}_x   (60)

for the stationary density of the β-deformed dynamics in the interior time region 0 ≪ t ≪ T, and to construct the corresponding probability-preserving Markov matrix via the generalization of Doob's h-transform

˜˜W^{[β]}_{x,y} = e^{−ψ(β)} ˜l^{[β]}_x ˜W^{[β]}_{x,y} / ˜l^{[β]}_y   (61)

whose highest eigenvalue unity is associated to the trivial left eigenvector

˜˜l^{[β]}_x = 1   (62)

and to the right eigenvector ˜˜r^{[β]}_x = ˜˜ρ^{[β]}_x of Eq. 60 that represents the normalized density conditioned to the information value I_β = −ψ′(β) of the Legendre transform of Eq. 14. So the explicit evaluation of the Doob generator of Eq. 61 requires the knowledge of the eigenvalue e^{ψ(β)} and of the corresponding left eigenvector ˜l^{[β]} of Eq. 50.

IV. APPLICATION TO THE DISCRETE-TIME DIRECTED RANDOM TRAP MODEL ON THE RING
In this section, the general analysis for discrete-time Markov chains described in the previous section is applied tothe directed trap model on the ring.
A. Model parametrization in terms of L trapping times τ_y

The model is defined on a ring of L sites with periodic boundary conditions x + L ≡ x, and corresponds to the dynamics of Eq. 28 where the Markov matrix

W_{x,y} = δ_{x,y+1} (1/τ_y) + δ_{x,y} (1 − 1/τ_y)   (63)

involves the L parameters τ_y > 1. So when the particle is on site y at time t, it can either jump to the right neighbor (y + 1) with probability 1/τ_y ∈ ]0, 1[ or remain on site y with the complementary probability (1 − 1/τ_y) ∈ ]0, 1[. The escape time t from the site y follows the geometric distribution for t = 1, 2, ...

p^{escape}_y(t) = (1/τ_y) (1 − 1/τ_y)^{t−1}   (64)

whose averaged value is directly τ_y

Σ_{t=1}^{+∞} t p^{escape}_y(t) = τ_y   (65)

So the L parameters τ_y represent the characteristic times needed to escape from the L sites y = 1, ..., L of the ring.

B. Minimal information I_min and maximal information I_max from extreme trajectories

Let us now consider some extreme trajectories. The L possible trajectories starting at x(0) ∈ {1, 2, ..., L} and jumping forward at every time step t = 1, ..., T, thereby making the large number T/L of laps around the ring, have the probabilities

P[x(t) = x(0) + t] ≃_{T→+∞} (∏_{y=1}^{L} (1/τ_y))^{T/L}   (66)

and correspond to the same intensive information that will be denoted by I_jump

I[x(t) = x(0) + t] = −(1/L) Σ_{y=1}^{L} ln(1/τ_y) ≡ I_jump   (67)

On the contrary, the L possible trajectories that remain on the same site y of the ring for 0 ≤ t ≤ T have the probabilities

P[x(t) = y] = P*_y (1 − 1/τ_y)^T   (68)

and correspond to the different intensive informations

I[x(t) = y] = −ln(1 − 1/τ_y) ≡ I^{loc}_y   (69)

It is thus useful to introduce the site y_max of the ring with the maximal trapping time and the site y_min of the ring with the minimal trapping time

τ_{y_max} = max_{1 ≤ y ≤ L} τ_y ; τ_{y_min} = min_{1 ≤ y ≤ L} τ_y   (70)

To determine the minimal and the maximal informations, one should then distinguish three cases:

(i) If the probability (1 − 1/τ_y) to remain on the site y is always higher than the probability 1/τ_y to jump to the right neighbor (y + 1)

1/τ_y < 1/2 < 1 − 1/τ_y for y = 1, 2, ..., L   (71)

then the maximal information will be given by Eq. 67, while the minimal information will be given by Eq. 69 for y = y_max

I_max = I_jump = (1/L) Σ_{y=1}^{L} ln(τ_y) ; I_min = I^{loc}_{y_max} = −ln(1 − 1/τ_{y_max})   (72)

(ii) If the probability (1 − 1/τ_y) to remain on the site y is always smaller than the probability 1/τ_y to jump to the right neighbor (y + 1)

1 − 1/τ_y < 1/2 < 1/τ_y for y = 1, 2, ..., L   (73)

then the maximal information will be given by Eq. 69 for y = y_min, while the minimal information will be given by Eq. 67

I_max = I^{loc}_{y_min} = −ln(1 − 1/τ_{y_min}) ; I_min = I_jump = (1/L) Σ_{y=1}^{L} ln(τ_y)   (74)

(iii) In the remaining cases

1/τ_{y_max} < 1/2 < 1/τ_{y_min}   (75)

the maximal information will be given by Eq. 69 for y = y_min, while the minimal information will be given by Eq. 69 for y = y_max

I_max = I^{loc}_{y_min} = −ln(1 − 1/τ_{y_min}) ; I_min = I^{loc}_{y_max} = −ln(1 − 1/τ_{y_max})   (76)

C. Explicit results for the Kolmogorov-Sinai entropy h_KS

The normalized steady state of Eq. 30

P*_x = Σ_y W_{x,y} P*_y = (1/τ_{x−1}) P*_{x−1} + (1 − 1/τ_x) P*_x   (77)

is simply given by the weight of the trapping time τ_y within the sum of all the trapping times of the ring

P*_y = τ_y / Σ_{x=1}^{L} τ_x   (78)

As a consequence, the Kolmogorov-Sinai entropy of Eq. 42 reads for a given disordered ring parametrized by the L trapping times τ_{y=1,2,...,L}

h_KS[τ_{y=1,2,...,L}] = Σ_{y=1}^{L} P*_y [−W_{y+1,y} ln(W_{y+1,y}) − W_{y,y} ln(W_{y,y})]
= Σ_{y=1}^{L} τ_y [(1/τ_y) ln(τ_y) − (1 − 1/τ_y) ln(1 − 1/τ_y)] / Σ_{x=1}^{L} τ_x
= Σ_{y=1}^{L} [ln(τ_y) − (τ_y − 1) ln(1 − 1/τ_y)] / Σ_{x=1}^{L} τ_x   (79)

Let us now analyze its behavior for large L when the probability distribution q(τ) of the trapping times τ ∈ ]1, +∞[ is the power law of Eq. B1 depending on the parameter µ > 0:

(i) For µ > 1, where the averaged value ⟨τ⟩ of the trapping time is finite (Eq. B3), both the numerator and the denominator of Eq. 79 will follow the law of large numbers and the Kolmogorov-Sinai entropy will converge towards the finite asymptotic value

h^{(L=∞)}_KS = ∫_1^{+∞} dτ q(τ) [ln(τ) − (τ − 1) ln(1 − 1/τ)] / ∫_1^{+∞} dτ τ q(τ) for µ > 1   (80)

(ii) For 0 < µ < 1, where the averaged value ⟨τ⟩ of the trapping time is infinite (Eq. B3), the numerator of Eq. 79 will still follow the law of large numbers, while the denominator is a Lévy sum that remains distributed as recalled in Appendix B. As a consequence, the Kolmogorov-Sinai entropy will not remain finite as in Eq. 80, but will vanish with the scaling L^{1−1/µ}

h^{(L)}_KS ≃_{L→+∞} (L^{1−1/µ} / θ) ∫_1^{+∞} dτ q(τ) [ln(τ) − (τ − 1) ln(1 − 1/τ)] for 0 < µ < 1   (81)

and will remain distributed even for large L, since the rescaled variable θ of Eq. B12 is distributed with the Lévy law L_µ(θ) of index µ ∈ ]0, 1[ of Eq. B13.
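The closed form of Eq. 79 is easy to cross-check numerically. The sketch below (pure Python; the five trapping times are arbitrary illustrative values with τ_y > 1) builds the Markov matrix of Eq. 63 on a small ring, verifies the stationarity of the steady state of Eq. 78, and compares the general formula of Eq. 42 with the closed form of Eq. 79.

```python
import math

# Discrete-time directed trap model of Eq. 63 on a small ring.
# The trapping times below are arbitrary illustrative values with tau_y > 1.
taus = [1.5, 3.0, 7.0, 2.0, 4.5]
L = len(taus)

# Column-stochastic Markov matrix W[x][y] = W_{x,y}:
# jump y -> y+1 with probability 1/tau_y, stay with probability 1 - 1/tau_y
W = [[0.0] * L for _ in range(L)]
for y in range(L):
    W[(y + 1) % L][y] = 1.0 / taus[y]
    W[y][y] = 1.0 - 1.0 / taus[y]

# Eq. 78: steady state proportional to the trapping times
Pstar = [t / sum(taus) for t in taus]
# stationarity check of Eq. 30: P*_x = sum_y W_{x,y} P*_y
residual = max(abs(sum(W[x][y] * Pstar[y] for y in range(L)) - Pstar[x])
               for x in range(L))

# Kolmogorov-Sinai entropy: general formula of Eq. 42 ...
h_general = sum(Pstar[y] * sum(-W[x][y] * math.log(W[x][y])
                               for x in range(L) if W[x][y] > 0.0)
                for y in range(L))
# ... versus the closed form of Eq. 79
h_closed = (sum(math.log(t) - (t - 1.0) * math.log(1.0 - 1.0 / t) for t in taus)
            / sum(taus))
```

Repeating this with trapping times drawn from a power law q(τ) and increasing L would exhibit the self-averaging regime µ > 1 of Eq. 80 versus the non-self-averaging decay of Eq. 81.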
D. Canonical analysis via the β-deformed Markov matrix

For the Markov matrix of Eq. 63, the β-deformed matrix of Eq. 48 reads

˜W^{[β]}_{x,y} ≡ [W_{x,y}]^β = δ_{x,y+1} (1/τ_y)^β + δ_{x,y} (1 − 1/τ_y)^β   (82)

and the corresponding eigenvalue Eqs 50 become

e^{ψ(β)} ˜r^{[β]}_x = Σ_y [δ_{x,y+1} (1/τ_y)^β + δ_{x,y} (1 − 1/τ_y)^β] ˜r^{[β]}_y = (1/τ^β_{x−1}) ˜r^{[β]}_{x−1} + (1 − 1/τ_x)^β ˜r^{[β]}_x
e^{ψ(β)} ˜l^{[β]}_y = Σ_x ˜l^{[β]}_x [δ_{x,y+1} (1/τ_y)^β + δ_{x,y} (1 − 1/τ_y)^β] = (1/τ^β_y) ˜l^{[β]}_{y+1} + (1 − 1/τ_y)^β ˜l^{[β]}_y   (83)

The solutions of these recursions read

˜r^{[β]}_x = ˜r^{[β]}_{x−1} / (τ^β_{x−1} [e^{ψ(β)} − (1 − 1/τ_x)^β]) = ˜r^{[β]}_0 ∏_{y=1}^{x} 1 / (τ^β_{y−1} [e^{ψ(β)} − (1 − 1/τ_y)^β])
˜l^{[β]}_y = τ^β_{y−1} [e^{ψ(β)} − (1 − 1/τ_{y−1})^β] ˜l^{[β]}_{y−1} = ˜l^{[β]}_0 ∏_{x=1}^{y} τ^β_{x−1} [e^{ψ(β)} − (1 − 1/τ_{x−1})^β]   (84)

The periodic boundary conditions ˜r^{[β]}_L = ˜r^{[β]}_0 and ˜l^{[β]}_L = ˜l^{[β]}_0 yield the equation for the eigenvalue e^{ψ(β)}

1 = ∏_{x=1}^{L} (τ^β_x [e^{ψ(β)} − (1 − 1/τ_x)^β]) = ∏_{x=1}^{L} [e^{ψ(β)} τ^β_x − (τ_x − 1)^β]   (85)

while the positivity of the components of the Perron-Frobenius eigenvectors of Eqs 83 implies

e^{ψ(β)} ≥ (1 − 1/τ_x)^β for x = 1, 2, ..., L   (86)

E. Corresponding conditioned process constructed via the generalization of Doob's h-transform
Using the left eigenvector of Eq. 83, one obtains that the probability-preserving Markov matrix obtained via the generalization of Doob's h-transform of Eq. 61 is of the same form as the initial Markov matrix of Eq. 63
$$\tilde{\tilde W}^{[\beta]}_{x,y} = e^{-\psi(\beta)}\,\frac{\tilde l^{[\beta]}_{x}\,\tilde W^{[\beta]}_{x,y}}{\tilde l^{[\beta]}_{y}} = \delta_{x,y+1}\,\frac{1}{\tilde{\tilde\tau}^{[\beta]}_{y}} + \delta_{x,y}\left(1-\frac{1}{\tilde{\tilde\tau}^{[\beta]}_{y}}\right) \qquad (87)$$
where the modified trapping time $\tilde{\tilde\tau}^{[\beta]}_{y}$ at position y depends on the initial trapping time $\tau_y$ at position y, on β and on the eigenvalue ψ(β)
$$\frac{1}{\tilde{\tilde\tau}^{[\beta]}_{y}} = 1 - e^{-\psi(\beta)}\left(1-\frac{1}{\tau_y}\right)^{\beta} \qquad (88)$$
The corresponding conditioned density of Eq. 60 is given by the analog of the steady state of Eq. 78 with the modified trapping times of Eq. 88
$$\tilde{\tilde\rho}^{[\beta]}_{y} = \frac{\tilde{\tilde\tau}^{[\beta]}_{y}}{\sum_{x=1}^{L}\tilde{\tilde\tau}^{[\beta]}_{x}} \qquad (89)$$
Let us now describe special values of β.

F. Special value β = 0

For β = 0, Eq. 85 leads to the simple value independent of the trapping times
$$e^{\psi(\beta=0)} = 2 \qquad (90)$$
as it should to reproduce the total number
$$Z_T(\beta=0) = 2^{T} \qquad (91)$$
of possible trajectories of T steps of Eq. 17 for the present model, where there are two possibilities at each step (Eq. 63). The modified trapping times of Eq. 88 all become equal to 2
$$\frac{1}{\tilde{\tilde\tau}^{[\beta=0]}_{y}} = \frac{1}{2} \qquad (92)$$
as it should to have equal probabilities $\left(\frac{1}{2},\frac{1}{2}\right)$ for the two possibilities to jump or to remain on site. Accordingly, the corresponding conditioned density of Eq. 89 becomes uniform
$$\tilde{\tilde\rho}^{[\beta=0]}_{y} = \frac{1}{L} \qquad (93)$$

G. Limit β → +∞ and the minimal intensive information I_min

In the limit β → +∞, one expects that ψ(β) is negative with the linear behavior of Eq. 16
$$\psi(\beta) \mathop{\simeq}_{\beta\to+\infty} -\beta I_{\min} \qquad (94)$$
Then the condition of Eq. 86 yields
$$I_{\min} \leq -\ln\left(1-\frac{1}{\tau_x}\right) \ \ {\rm for}\ x=1,2,..,L \qquad (95)$$
while Eq. 85 becomes
$$0 \mathop{\simeq}_{\beta\to+\infty} \sum_{x=1}^{L}\ln\left(\tau_x^{\beta}\left[e^{-\beta I_{\min}}-e^{\beta\ln\left(1-\frac{1}{\tau_x}\right)}\right]\right) \mathop{\simeq}_{\beta\to+\infty} \beta\sum_{x=1}^{L}\ln(\tau_x) + \sum_{x=1}^{L}\ln\left[e^{-\beta I_{\min}}-e^{\beta\ln\left(1-\frac{1}{\tau_x}\right)}\right] \qquad (96)$$
so that one needs to distinguish whether the inequality of Eq. 95 is strict or not.
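As a numerical illustration (a sketch with arbitrary illustrative trapping times, not code from the paper), one can diagonalize the β-deformed matrix of Eq. 82 for a small ring and check the eigenvalue condition of Eq. 85, the special value of Eq. 90, and the first cumulant $h_{KS}=-\psi'(1)$ of Eq. 79:

```python
import numpy as np

def deformed_matrix(tau, beta):             # Eq. 82
    L = len(tau)
    W = np.zeros((L, L))
    for y in range(L):
        W[(y + 1) % L, y] = (1.0 / tau[y]) ** beta    # jump y -> y+1
        W[y, y] = (1.0 - 1.0 / tau[y]) ** beta        # remain on y
    return W

def psi(tau, beta):                         # scaled cumulant generating function
    # the Perron-Frobenius eigenvalue e^{psi(beta)} is real and dominant
    return np.log(np.linalg.eigvals(deformed_matrix(tau, beta)).real.max())

tau = np.array([2.0, 3.0, 1.5, 4.0, 2.5])   # arbitrary illustrative trapping times
beta = 0.7
e_psi = np.exp(psi(tau, beta))

# Eq. 85: the product prod_x [ e^{psi} tau_x^beta - (tau_x-1)^beta ] should equal 1
product = np.prod(e_psi * tau ** beta - (tau - 1.0) ** beta)

# Eq. 79: h_KS = sum_x [tau ln(tau) - (tau-1) ln(tau-1)] / sum_y tau_y,
# compared with -psi'(1) obtained by a centered finite difference
h_KS = np.sum(tau * np.log(tau) - (tau - 1.0) * np.log(tau - 1.0)) / tau.sum()
h = 1e-6
psi_prime_1 = (psi(tau, 1.0 + h) - psi(tau, 1.0 - h)) / (2.0 * h)
```

At β = 1 the deformed matrix reduces to the Markov matrix itself, so ψ(1) = 0, while at β = 0 one recovers $e^{\psi(0)}=2$ of Eq. 90 independently of the trapping times.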
1. Case where the inequality of Eq. 95 remains strict
If the inequality of Eq. 95 remains strict
$$I_{\min} < -\ln\left(1-\frac{1}{\tau_{y_{\max}}}\right) \qquad (97)$$
then Eq. 96 leads to the solution
$$I_{\min} = \frac{1}{L}\sum_{x=1}^{L}\ln(\tau_x) \equiv I_{\rm jump} \qquad (98)$$
corresponding to the value $I_{\rm jump}$ of Eq. 67. This solution is valid only if $I_{\rm jump}$ satisfies the strict bound of Eq. 97
$$I_{\rm jump} \equiv \frac{1}{L}\sum_{x=1}^{L}\ln(\tau_x) < -\ln\left(1-\frac{1}{\tau_{y_{\max}}}\right) \qquad (99)$$
In the Doob generator of the conditioned process, the modified trapping times of Eq. 88 all become equal to unity
$$\frac{1}{\tilde{\tilde\tau}^{[\beta]}_{y}} \mathop{\simeq}_{\beta\to+\infty} 1 - e^{\beta\left[I_{\rm jump}+\ln\left(1-\frac{1}{\tau_y}\right)\right]} \mathop{\simeq}_{\beta\to+\infty} 1 \qquad (100)$$
so that the corresponding conditioned density of Eq. 89 becomes uniform
$$\tilde{\tilde\rho}^{[\beta]}_{y} \mathop{\simeq}_{\beta\to+\infty} \frac{1}{L} \qquad (101)$$
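Both branches of the β → +∞ limit can be probed numerically. The sketch below uses two illustrative rings (an assumption, not data from the paper): one with only shallow traps so that the bound of Eq. 99 holds and the jump branch wins, and one with a single deep trap so that the localized branch wins; the slope $-\psi(\beta)/\beta$ is extracted at large but finite β:

```python
import numpy as np

def psi(tau, beta):                          # top eigenvalue of the deformed matrix of Eq. 82
    L = len(tau)
    W = np.zeros((L, L))
    for y in range(L):
        W[(y + 1) % L, y] = (1.0 / tau[y]) ** beta
        W[y, y] = (1.0 - 1.0 / tau[y]) ** beta
    return np.log(np.linalg.eigvals(W).real.max())

def I_min_predicted(tau):                    # selection between Eq. 98 and Eq. 102
    I_jump = np.mean(np.log(tau))            # Eq. 98
    I_loc = -np.log(1.0 - 1.0 / tau.max())   # Eq. 102
    return min(I_jump, I_loc)

beta = 200.0
tau_a = np.array([1.2, 1.3, 1.25, 1.35])     # shallow traps: Eq. 99 holds, I_min = I_jump
tau_b = np.array([1.2, 1.3, 1.25, 50.0])     # one deep trap: Eq. 99 fails, I_min = I_loc
slope_a = -psi(tau_a, beta) / beta           # should approach I_min of ring a
slope_b = -psi(tau_b, beta) / beta           # should approach I_min of ring b
```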
2. Case where the inequality of Eq. 95 cannot remain strict

If $I_{\rm jump}$ does not satisfy the inequality of Eq. 99, then the solution of Eq. 96 is instead
$$I_{\min} = -\ln\left(1-\frac{1}{\tau_{y_{\max}}}\right) \equiv I^{\rm loc}_{y_{\max}} \qquad (102)$$
corresponding to the value $I^{\rm loc}_{y_{\max}}$ discussed in Eqs 69 and 70. In the Doob generator of the conditioned process, the modified trapping time of Eq. 88
$$\tilde{\tilde\tau}^{[\beta]}_{y} = \frac{1}{1-e^{\beta\left[I^{\rm loc}_{y_{\max}}+\ln\left(1-\frac{1}{\tau_y}\right)\right]}} = \frac{1}{1-e^{\beta\left[\ln\left(1-\frac{1}{\tau_y}\right)-\ln\left(1-\frac{1}{\tau_{y_{\max}}}\right)\right]}} \mathop{\simeq}_{\beta\to+\infty} \begin{cases} 1 & {\rm if}\ y\neq y_{\max} \\ +\infty & {\rm if}\ y=y_{\max}\end{cases} \qquad (103)$$
remains finite for $y\neq y_{\max}$ but diverges for $y=y_{\max}$, so that the corresponding conditioned density of Eq. 89 is fully localized on the site $y_{\max}$
$$\tilde{\tilde\rho}^{[\beta]}_{y} \mathop{\simeq}_{\beta\to+\infty} \delta_{y,y_{\max}} \qquad (104)$$

H. Series expansion in β = 1 + ε up to order ε²

For β = 1 + ε, the expansion of the logarithm of Eq. 85 reads up to second order in ε
$$0 = \sum_{x=1}^{L}\ln\left[e^{\psi(1+\epsilon)}\tau_x^{1+\epsilon}-(\tau_x-1)^{1+\epsilon}\right] = \sum_{x=1}^{L}\ln\left[\tau_x\, e^{\epsilon[\psi'(1)+\ln(\tau_x)]+\frac{\epsilon^2}{2}\psi''(1)}-(\tau_x-1)\,e^{\epsilon\ln(\tau_x-1)}\right]$$
$$= \sum_{x=1}^{L}\ln\left[1+\epsilon\Big(\tau_x[\psi'(1)+\ln(\tau_x)]-(\tau_x-1)\ln(\tau_x-1)\Big)+\frac{\epsilon^2}{2}\Big(\tau_x\big[\psi''(1)+[\psi'(1)+\ln(\tau_x)]^2\big]-(\tau_x-1)\ln^2(\tau_x-1)\Big)\right]$$
$$= \epsilon\sum_{x=1}^{L}\Big[\tau_x\psi'(1)+\tau_x\ln(\tau_x)-(\tau_x-1)\ln(\tau_x-1)\Big]+\frac{\epsilon^2}{2}\sum_{x=1}^{L}\Big(\tau_x\psi''(1)-\tau_x(\tau_x-1)\big[\psi'(1)+\ln(\tau_x)-\ln(\tau_x-1)\big]^2\Big)+O(\epsilon^3) \qquad (105)$$
So the vanishing at order ε yields the first derivative
$$\psi'(1) = -\,\frac{\sum_{x=1}^{L}\left[\tau_x\ln(\tau_x)-(\tau_x-1)\ln(\tau_x-1)\right]}{\sum_{y=1}^{L}\tau_y} \qquad (106)$$
in agreement with $h_{KS}=-\psi'(1)$ of Eq. 79. The vanishing at order ε² yields the second derivative ψ''(1)
$$\psi''(1) = \frac{\sum_{x=1}^{L}\tau_x(\tau_x-1)\left[\psi'(1)+\ln(\tau_x)-\ln(\tau_x-1)\right]^2}{\sum_{y=1}^{L}\tau_y} \qquad (107)$$
The scaled variance $V_{KS}$ of Eq. 27 thus reads for a given disordered ring parametrized by the L trapping times $\tau_{y=1,2,..,L}$
$$V_{KS}[\tau_{y=1,2,..,L}] = \frac{\sum_{x=1}^{L}\tau_x(\tau_x-1)\left[-h_{KS}[\tau_.]-\ln\left(1-\frac{1}{\tau_x}\right)\right]^2}{\sum_{y=1}^{L}\tau_y}$$
$$= \frac{h^2_{KS}[\tau_.]\sum_{x=1}^{L}(\tau_x^2-\tau_x)+2h_{KS}[\tau_.]\sum_{x=1}^{L}(\tau_x^2-\tau_x)\ln\left(1-\frac{1}{\tau_x}\right)+\sum_{x=1}^{L}(\tau_x^2-\tau_x)\ln^2\left(1-\frac{1}{\tau_x}\right)}{\sum_{y=1}^{L}\tau_y} \qquad (108)$$
where $h_{KS}[\tau_{y=1,2,..,L}]$ was given in Eq. 79. Let us now analyze its behavior for large L when the probability distribution q(τ) of the trapping times τ ∈ ]1,+∞[ is the power-law of Eq. B1 depending on the parameter μ > 0:

(i) in the region μ > 2 where the second moment $\overline{\tau^2}$ of the trapping time is finite (Eq. B4), both the numerator and the denominator of Eq. 108 will follow the law of large numbers while the Kolmogorov-Sinai entropy converges towards the finite asymptotic value of Eq. 80. As a consequence, the scaled variance of Eq. 108 will then converge towards the finite asymptotic value
$$V^{(\infty)}_{KS} = \frac{\int_1^{+\infty}d\tau\, q(\tau)\,\tau(\tau-1)\left[-h^{(\infty)}_{KS}-\ln\left(1-\frac{1}{\tau}\right)\right]^2}{\int_1^{+\infty}d\tau\,\tau\, q(\tau)} \ \ {\rm for}\ \mu>2 \qquad (109)$$

(ii) in the region 1 < μ < 2 where the second moment $\overline{\tau^2}$ of the trapping time is infinite (Eq. B4) while the first moment $\overline{\tau}$ remains finite (Eq. B3), the only anomalous scaling will come from the sum of the squares of the trapping times discussed around Eq. B18. As a consequence, the scaled variance $V_{KS}$ will not remain finite as in Eq. 109, but will diverge with the scaling $L^{\frac{2}{\mu}-1}$ of exponent $\left(\frac{2}{\mu}-1\right)\in]0,1[$
$$V^{(L)}_{KS} \mathop{\simeq}_{L\to+\infty} L^{\frac{2}{\mu}-1}\,\vartheta\,\frac{\left[h^{(\infty)}_{KS}\right]^2}{\int_1^{+\infty}d\tau\,\tau\, q(\tau)} \ \ {\rm for}\ 1<\mu<2 \qquad (110)$$
and remains a random variable even for large L, since the rescaled variable ϑ of Eq. B21 is distributed with the Lévy law $L_{\mu/2}(\vartheta)$.

(iii) in the region 0 < μ < 1 where both the first moment $\overline{\tau}$ and the second moment $\overline{\tau^2}$ are infinite (Eqs B3 and B4), while the Kolmogorov-Sinai entropy does not converge anymore towards the finite asymptotic value of Eq. 80, one needs to return to the finite-size expression of Eq. 79 for the Kolmogorov-Sinai entropy and to re-analyze the leading behavior of Eq. 108 in terms of the sums $\Sigma_L$ of Eq. B6 and $\Upsilon_L$ of Eq. B18
$$V_{KS}[\tau_{y=1,2,..,L}] \mathop{\simeq}_{L\to+\infty} \frac{\Upsilon_L\, L^2}{\Sigma_L^3}\left(\int_1^{+\infty}d\tau\, q(\tau)\left[\ln(\tau)-(\tau-1)\ln\left(1-\frac{1}{\tau}\right)\right]\right)^2 \mathop{\propto}_{L\to+\infty} L^{2-\frac{1}{\mu}} \ \ {\rm for}\ 0<\mu<1 \qquad (111)$$
where the scaling $L^{2-\frac{1}{\mu}}$ is different from Eq. 110, while the limit distribution would require a more refined analysis of the ratio $\Upsilon_L/\Sigma_L^3$ involving the two correlated sums of Eq. B6 and Eq. B18.

I. Direct analysis of self-averaging observables in the thermodynamic limit of an infinite ring L → +∞

As discussed above, the Kolmogorov-Sinai entropy $h_{KS}=-\psi'(1)$ is self-averaging in the thermodynamic limit L → +∞ only for μ > 1, while the scaled variance $V_{KS}=\psi''(1)$ is self-averaging in the thermodynamic limit L → +∞ only for μ > 2. More generally, if the distribution q(τ) has all its moments finite (in contrast to the power-law form of Eq. B1 discussed up to now), then the scaled cumulant generating function ψ(β) and its derivatives will be self-averaging in the thermodynamic limit L → +∞. If one rewrites Eq. 85 via its logarithm and divides by the size L of the ring
$$0 = \frac{1}{L}\sum_{x=1}^{L}\ln\left[e^{\psi(\beta)}\tau_x^{\beta}-(\tau_x-1)^{\beta}\right] \qquad (112)$$
one obtains that the self-averaging value $\psi_{L=\infty}(\beta)$ in the thermodynamic limit L → +∞ is determined by the equation
$$0 = \int_1^{+\infty}d\tau\, q(\tau)\,\ln\left[e^{\psi_{\infty}(\beta)}\tau^{\beta}-(\tau-1)^{\beta}\right] \qquad (113)$$
However, whenever there are non-self-averaging effects, one should return to the finite-size Eq. 112 to analyze them, as described above for the first two derivatives $\psi'(1)=-h_{KS}$ and $\psi''(1)=V_{KS}$.

V. MARKOV JUMP PROCESSES IN CONTINUOUS TIME WITH STEADY-STATE
In this section, we focus on the continuous-time dynamics in discrete space defined by the Master Equation
$$\frac{\partial P_x(t)}{\partial t} = \sum_y w_{x,y}\, P_y(t) \qquad (114)$$
where the off-diagonal x ≠ y matrix elements are positive $w_{x,y}\geq 0$ and represent the transition rates from y to x, while the diagonal elements are negative $w_{x,x}\leq 0$ and fixed by the conservation of probability
$$w_{x,x} = -\sum_{y\neq x} w_{y,x} \qquad (115)$$

A. Steady-State and finite-time propagator
We will assume that the normalized steady-state $P^*_x$ of Eq. 114
$$0 = \sum_y w_{x,y}\, P^*_y = \sum_{y\neq x}\left[w_{x,y}P^*_y - w_{y,x}P^*_x\right] \qquad (116)$$
exists. Eqs 115 and 116 mean that zero is the highest eigenvalue of the Markov matrix $w_{.,.}$, with the positive left eigenvector
$$l_x = 1 \qquad (117)$$
and the positive right eigenvector given by the steady state
$$r_x = P^*_x \qquad (118)$$
The whole spectral decomposition of the matrix w
$$w = -\sum_{k} \zeta_k\, |\zeta^R_k\rangle\langle\zeta^L_k| \qquad (119)$$
involving the other eigenvalues $(-\zeta_k)<0$ labeled by k, with their right eigenvectors $|\zeta^R_k\rangle$ and their left eigenvectors $\langle\zeta^L_k|$ satisfying the closure relation
$$\mathbb{1} = |r\rangle\langle l| + \sum_k |\zeta^R_k\rangle\langle\zeta^L_k| \qquad (120)$$
is useful to describe the relaxation of the finite-time propagator towards the steady state
$$\langle x|e^{wt}|x_0\rangle = \langle x|r\rangle\langle l|x_0\rangle + \sum_k e^{-t\zeta_k}\langle x|\zeta^R_k\rangle\langle\zeta^L_k|x_0\rangle = P^*_x + \sum_k e^{-t\zeta_k}\langle x|\zeta^R_k\rangle\langle\zeta^L_k|x_0\rangle \qquad (121)$$

B. The intensive information I as a time-additive observable of the trajectory x(0 ≤ t ≤ T)

A dynamical trajectory x(t) on the time interval 0 ≤ t ≤ T corresponds to a certain number M ≥ 0 of jumps labeled by m = 1,..,M, occurring at times $0<t_1<...<t_M<T$, between the successive configurations $(x_0\to x_1\to x_2 ..\to x_M)$ that are visited between these jumps. The probability density of this trajectory
$$x(0\leq t\leq T) = (x_0; t_1; x_1; t_2; ...; x_{M-1}; t_M; x_M) \qquad (122)$$
when the initial condition $x_0$ is drawn with the steady-state distribution $P^*$ of Eq. 116, reads in terms of the transition rates
$$P[x(0\leq t\leq T)] = P^*_{x_0}\ e^{(T-t_M)w_{x_M,x_M}}\prod_{m=1}^{M}\left[w_{x_m,x_{m-1}}\,e^{(t_m-t_{m-1})w_{x_{m-1},x_{m-1}}}\right] \qquad (123)$$
with the convention $t_0=0$. The normalization over all possible trajectories on [0,T] involves the sum over the number M of jumps, the sum over the M configurations $(x_1,x_2,...,x_M)$ where $x_m$ has to be different from $x_{m-1}$, and the integration over the jump times with the measure $dt_1...dt_M$ and the constraint $0<t_1<...<t_M<T$
$$1 = \sum_{M=0}^{+\infty}\int_0^T dt_M\int_0^{t_M}dt_{M-1}...\int_0^{t_2}dt_1 \sum_{x_M\neq x_{M-1}}\sum_{x_{M-1}\neq x_{M-2}}...\sum_{x_1\neq x_0}\sum_{x_0} P[x(0\leq t\leq T)] \qquad (124)$$
The trajectory probability density of Eq. 123 can be rewritten more compactly without the explicit enumeration of all the jumps as
$$P[x(0\leq t\leq T)] = P^*_{x(0)}\ \exp\left[\sum_{t:\,x(t^+)\neq x(t^-)}\ln\left(w_{x(t^+),x(t^-)}\right)+\int_0^T dt\, w_{x(t),x(t)}\right] \qquad (125)$$
The corresponding intensive information of Eq. 3
$$I[x(0\leq t\leq T)] = -\frac{1}{T}\sum_{t:\,x(t^+)\neq x(t^-)}\ln\left(w_{x(t^+),x(t^-)}\right)-\frac{1}{T}\int_0^T dt\, w_{x(t),x(t)}-\frac{1}{T}\ln\left(P^*_{x(0)}\right) \qquad (126)$$
is a time-additive observable. As a consequence, its averaged value over the trajectory probabilities of Eq. 125 reads
$$\langle I[x(.)]\rangle \equiv \sum_{x(.)} P[x(.)]\, I[x(.)] = -\sum_y\sum_{x\neq y} w_{x,y}P^*_y\ln(w_{x,y})-\sum_y P^*_y w_{y,y}-\frac{1}{T}\sum_{x(0)} P^*_{x(0)}\ln\left(P^*_{x(0)}\right) \qquad (127)$$
For T → +∞, the last contribution of order 1/T involving the initial condition disappears and one obtains the Kolmogorov-Sinai entropy of Eq. 21, which can be rewritten in terms of the off-diagonal elements $w_{x,y}$ alone using Eq. 115
$$h_{KS} = \lim_{T\to+\infty}\langle I[x(.)]\rangle = -\sum_y\sum_{x\neq y} w_{x,y}P^*_y\ln(w_{x,y})+\sum_y P^*_y\sum_{x\neq y} w_{x,y} = \sum_y P^*_y\sum_{x\neq y} w_{x,y}\left[1-\ln(w_{x,y})\right] \qquad (128)$$
As in Eq. 42, the Kolmogorov-Sinai entropy can thus be explicitly computed in any model where the steady-state $P^*$ is known.

C. The scaled variance V_KS of the information I in terms of the temporal correlations

Similarly for T → +∞, the contribution of the initial condition in the information of Eq. 126 disappears from the scaled variance of Eq. 27, and one obtains the contributions corresponding to the various connected temporal correlations involving either two off-diagonal matrix elements, one off-diagonal and one diagonal matrix element, or two diagonal matrix elements
$$V_{KS} = \lim_{T\to+\infty}\left[T\left(\langle I^2[x(.)]\rangle-\langle I[x(.)]\rangle^2\right)\right] = \lim_{T\to+\infty}\frac{1}{T}\left\langle\left[\sum_{t:\,x(t^+)\neq x(t^-)}\ln\left(w_{x(t^+),x(t^-)}\right)+\int_0^T dt\, w_{x(t),x(t)}\right]^2\right\rangle_c$$
where $\langle .\rangle_c$ denotes the connected average: its evaluation thus involves the on-site contribution of the jump terms, the connected correlations between two jump contributions at different times, between a jump contribution and a diagonal contribution, and between two diagonal contributions.
1. First-order perturbation theory to recover the Kolmogorov-Sinai entropy h_KS = −ψ'(1)

Using the unperturbed left and right eigenvectors of Eqs 117 and 118, one obtains that the first-order correction of Eq. A10 for the eigenvalue of Eq. 142 reads
$$\psi'(1) = \langle l|w^{(1)}|r\rangle = \sum_{x,y} l_x w^{(1)}_{x,y} r_y = \sum_y\left[\sum_{x\neq y} l_x w^{(1)}_{x,y} r_y + l_y w^{(1)}_{y,y} r_y\right] = \sum_y\left[\sum_{x\neq y} w_{x,y}\ln(w_{x,y})+w_{y,y}\right]P^*_y = \sum_y\left[\sum_{x\neq y} w_{x,y}\ln(w_{x,y})-\sum_{x\neq y} w_{x,y}\right]P^*_y \qquad (143)$$
in agreement with the expression of Eq. 128 for the Kolmogorov-Sinai entropy $h_{KS}=-\psi'(1)$.
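The identity $h_{KS}=-\psi'(1)$ can be checked numerically for an arbitrary Markov jump process. The sketch below uses random illustrative rates and assumes (consistently with Eq. 160 below for the trap model, and with the first-order matrix $w^{(1)}$ of Eq. 140) that the β-deformed generator has off-diagonal elements $(w_{x,y})^\beta$ and diagonal elements $\beta w_{y,y}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
w = rng.uniform(0.5, 2.0, size=(N, N))      # off-diagonal rates w_{x,y} for y -> x
np.fill_diagonal(w, 0.0)
np.fill_diagonal(w, -w.sum(axis=0))         # diagonal elements of Eq. 115

# steady state P* of Eq. 116: right eigenvector associated with the zero eigenvalue
ev, evec = np.linalg.eig(w)
P = np.real(evec[:, np.argmax(ev.real)])
P /= P.sum()

off = w - np.diag(np.diag(w))               # off-diagonal part of the generator

# Eq. 128: h_KS = sum_y P*_y sum_{x != y} w_{x,y} [1 - ln(w_{x,y})]
h_KS = sum(P[y] * off[x, y] * (1.0 - np.log(off[x, y]))
           for x in range(N) for y in range(N) if x != y)

def psi(beta):                              # top eigenvalue of the deformed generator
    wb = off ** beta                        # off-diagonal elements (w_{x,y})^beta
    np.fill_diagonal(wb, beta * np.diag(w)) # diagonal elements beta * w_{y,y}
    return np.linalg.eigvals(wb).real.max()

h = 1e-6
psi_prime_1 = (psi(1.0 + h) - psi(1.0 - h)) / (2.0 * h)   # should equal -h_KS
```

At β = 1 the deformed generator reduces to w itself, so ψ(1) = 0 up to numerical precision.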
2. Second-order perturbation theory to recover the scaled variance V_KS = ψ''(1)

The second-order correction of Eq. A21 for the eigenvalue of Eq. 142 reads in terms of the unperturbed left and right eigenvectors of Eqs 117 and 118
$$\frac{\psi''(1)}{2} = \langle l|w^{(2)}|r\rangle+\langle l|w^{(1)}Gw^{(1)}|r\rangle = \sum_{x,y} l_x w^{(2)}_{x,y} r_y+\sum_{x',x,y,y'} l_{x'} w^{(1)}_{x',x} G_{x,y} w^{(1)}_{y,y'} r_{y'} = \sum_y\sum_{x\neq y}\frac{w_{x,y}\ln^2(w_{x,y})}{2}\,P^*_y+\sum_{x',x,y,y'} w^{(1)}_{x',x} G_{x,y} w^{(1)}_{y,y'} P^*_{y'} \qquad (144)$$
where the Green function satisfies the matrix Eqs A15 and A16 and thus coincides with Eq. 131. The Equations A15 and A16 for the Green function read more explicitly in coordinates
$$-\sum_{x'} w_{x,x'}\, G_{x',y} = \delta_{x,y}-P^*_x \qquad\qquad -\sum_{y'} G_{x,y'}\, w_{y',y} = \delta_{x,y}-P^*_x$$
$$\sum_x G_{x,y} = 0 \qquad\qquad \sum_y G_{x,y}\, P^*_y = 0 \qquad (145)$$
Using the diagonal and off-diagonal elements of the first-order perturbation matrix $w^{(1)}$ of Eq. 140, one obtains that the final result for ψ''(1) of Eq. 144
$$\psi''(1) = \sum_y\sum_{x\neq y} w_{x,y}\ln^2(w_{x,y})\,P^*_y+2\sum_{x,y}\sum_{x'\neq x}\sum_{y'\neq y} w_{x',x}\ln(w_{x',x})\,G_{x,y}\,w_{y,y'}\ln(w_{y,y'})\,P^*_{y'}$$
$$+2\sum_{x,y}\sum_{y'\neq y} w_{x,x}\,G_{x,y}\,w_{y,y'}\ln(w_{y,y'})\,P^*_{y'}+2\sum_{x,y}\sum_{x'\neq x} w_{x',x}\ln(w_{x',x})\,G_{x,y}\,w_{y,y}\,P^*_y+2\sum_{x,y} w_{x,x}\,G_{x,y}\,w_{y,y}\,P^*_y \qquad (146)$$
coincides with Eq. 132 for the scaled variance $V_{KS}=\psi''(1)$, as it should.

F. Corresponding conditioned process constructed via the generalization of Doob's h-transform
The analysis analogous to Eq. 59 yields that
$$\tilde{\tilde\rho}^{[\beta]}_x \equiv \tilde l^{[\beta]}_x\,\tilde r^{[\beta]}_x \qquad (147)$$
represents the stationary density of the β-deformed dynamics in the interior time region 0 ≪ t ≪ T and can be interpreted as the normalized density conditioned to the information value $I=-\psi'(\beta)$ of the Legendre transform of Eq. 14. The corresponding probability-preserving Markov jump process, whose highest eigenvalue is zero with the corresponding trivial left eigenvector $\tilde{\tilde l}^{[\beta]}_x=1$ and the corresponding right eigenvector given by Eq. 147, is generated by the following matrix corresponding to the generalization of Doob's h-transform
$$\tilde{\tilde w}^{[\beta]}_{x,y} = \frac{\tilde l^{[\beta]}_x\,\tilde w^{[\beta]}_{x,y}}{\tilde l^{[\beta]}_y}-\psi(\beta)\,\delta_{x,y} \qquad (148)$$
So the explicit evaluation of this Doob generator requires the knowledge of the eigenvalue ψ(β) and of the corresponding left eigenvector $\tilde l^{[\beta]}_.$ of Eq. 137.

VI. APPLICATION TO THE CONTINUOUS-TIME DIRECTED RANDOM TRAP MODEL
In this section, the general analysis for continuous-time Markov jump processes described in the previous section is applied to the directed trap model on the ring.
A. Model parametrization in terms of L trapping times τ_y

The model is defined on a ring of L sites with periodic boundary conditions x + L ≡ x, and corresponds to the dynamics of Eq. 114 where the Markov matrix
$$w_{x,y} = \delta_{x,y+1}\,\frac{1}{\tau_y}-\delta_{x,y}\,\frac{1}{\tau_y} \qquad (149)$$
involves the L parameters $\tau_y>0$. So when the particle is on site y at time t, it can either jump to the right neighbor (y+1) with rate $1/\tau_y$ per unit time, or it remains on site y. As a consequence, the escape time t ∈ [0,+∞[ from the site y follows the exponential distribution
$$p^{\rm escape}_y(t) = \frac{1}{\tau_y}\, e^{-\frac{t}{\tau_y}} \qquad (150)$$
whose averaged value is directly $\tau_y$
$$\int_0^{+\infty}dt\, t\, p^{\rm escape}_y(t) = \tau_y \qquad (151)$$
So the L parameters $\tau_y$ represent again the characteristic times needed to escape from the L sites y = 1,..,L of the ring.

B. Minimal information I_min and maximal information I_max from extreme trajectories

The L possible trajectories that remain on the same site y of the ring for 0 ≤ t ≤ T have for probabilities
$$P[x(t)=y] = P^*_y\, e^{-\frac{T}{\tau_y}} \qquad (152)$$
and correspond to the different intensive informations
$$I[x(t)=y] = \frac{1}{\tau_y} \equiv I^{\rm loc}_y \qquad (153)$$
In terms of the positions $y_{\max}$ and $y_{\min}$ of the ring with the maximal and the minimal trapping time (Eq. 70), the maximal information and the minimal information are thus given by
$$I_{\max} = I^{\rm loc}_{y_{\min}} = \frac{1}{\tau_{y_{\min}}} \qquad\qquad I_{\min} = I^{\rm loc}_{y_{\max}} = \frac{1}{\tau_{y_{\max}}} \qquad (154)$$

C. Explicit results for the Kolmogorov-Sinai entropy h_KS

The normalized steady state of Eq. 116
$$0 = \frac{1}{\tau_{x-1}}\,P^*_{x-1}-\frac{1}{\tau_x}\,P^*_x \qquad (155)$$
is simply given by
$$P^*_y = \frac{\tau_y}{\sum_{x=1}^{L}\tau_x} \qquad (156)$$
As a consequence, the Kolmogorov-Sinai entropy of Eq. 128 reads for a given disordered ring parametrized by the L trapping times $\tau_{y=1,2,..,L}$
$$h_{KS}[\tau_{y=1,2,..,L}] = \sum_y P^*_y\, w_{y+1,y}\left[1-\ln(w_{y+1,y})\right] = \frac{\sum_{y=1}^{L}\left[1+\ln(\tau_y)\right]}{\sum_{x=1}^{L}\tau_x} = \frac{L+\sum_{y=1}^{L}\ln(\tau_y)}{\sum_{x=1}^{L}\tau_x} \qquad (157)$$
Let us now analyze its behavior for large L when the probability distribution q(τ) of the trapping times τ ∈ ]1,+∞[ is the power-law of Eq. B1 depending on the parameter μ > 0:

(i) in the region μ > 1 where the first moment $\overline{\tau}$ of the trapping time is finite (Eq. B3), both the numerator and the denominator of Eq. 157 will follow the law of large numbers and the Kolmogorov-Sinai entropy will converge towards the finite asymptotic value
$$h^{(L=\infty)}_{KS} = \frac{1+\int_1^{+\infty}d\tau\, q(\tau)\ln(\tau)}{\int_1^{+\infty}d\tau\,\tau\, q(\tau)} \ \ {\rm for}\ \mu>1 \qquad (158)$$

(ii) in the region 0 < μ < 1 where the first moment $\overline{\tau}$ of the trapping time is infinite (Eq. B3), the numerator of Eq. 157 will still follow the law of large numbers, while the denominator is a Lévy sum that remains distributed as recalled in Appendix B. As a consequence, the Kolmogorov-Sinai entropy will not remain finite as in Eq. 158, but will vanish with the scaling $L^{1-\frac{1}{\mu}}$
$$h^{(L)}_{KS} \mathop{\simeq}_{L\to+\infty} \frac{L^{1-\frac{1}{\mu}}}{\theta}\left[1+\int_1^{+\infty}d\tau\, q(\tau)\ln(\tau)\right] \ \ {\rm for}\ 0<\mu<1 \qquad (159)$$
and remains a random variable even for large L, since the rescaled variable θ of Eq. B12 is distributed with the Lévy law $L_\mu(\theta)$ of index μ ∈ ]0,1[ of Eq. B13.
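A minimal numerical sketch (with illustrative trapping times and an elementary inverse-transform sampling of Eq. B1, not code from the paper) can check the explicit formula of Eq. 157 against the generic Eq. 128 with the steady state of Eq. 156, and illustrate the anomalous decay of Eq. 159 for μ = 1/2, where the exponent 1 − 1/μ equals −1:

```python
import numpy as np

def h_KS_explicit(tau):                       # Eq. 157
    return (len(tau) + np.log(tau).sum()) / tau.sum()

def h_KS_generic(tau):                        # Eq. 128 with P* of Eq. 156
    P = tau / tau.sum()
    rates = 1.0 / tau                         # the only off-diagonal rate w_{y+1,y} = 1/tau_y
    return np.sum(P * rates * (1.0 - np.log(rates)))

tau = np.array([2.0, 3.0, 1.5, 4.0])          # arbitrary illustrative trapping times
match = abs(h_KS_explicit(tau) - h_KS_generic(tau))

rng = np.random.default_rng(1)
mu = 0.5

def sample_h(L):
    # inverse-transform sampling of Eq. B1: P(tau > t) = t^{-mu} for t > 1
    taus = rng.uniform(size=L) ** (-1.0 / mu)
    return h_KS_explicit(taus)

# typical values at two sizes: for mu = 1/2 the ratio should be roughly
# (L2/L1)^{1-1/mu} = L1/L2, i.e. h_KS decays with L instead of converging
h_small = np.median([sample_h(100) for _ in range(200)])
h_large = np.median([sample_h(10_000) for _ in range(200)])
```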
D. Canonical analysis via the β-deformed Markov matrix

For the Markov matrix of Eq. 149, the β-deformed matrix of Eqs 134 and 135 reads
$$\tilde w^{[\beta]}_{x,y} = \delta_{x,y+1}\left(\frac{1}{\tau_y}\right)^{\beta}-\delta_{x,y}\,\frac{\beta}{\tau_y} \qquad (160)$$
and the corresponding eigenvalue equations of Eqs 137 become
$$\psi(\beta)\,\tilde r^{[\beta]}_x = \frac{1}{\tau_{x-1}^{\beta}}\,\tilde r^{[\beta]}_{x-1}-\frac{\beta}{\tau_x}\,\tilde r^{[\beta]}_x \qquad\qquad \psi(\beta)\,\tilde l^{[\beta]}_y = \frac{1}{\tau_y^{\beta}}\,\tilde l^{[\beta]}_{y+1}-\frac{\beta}{\tau_y}\,\tilde l^{[\beta]}_y \qquad (161)$$
The solutions of these recursions read
$$\tilde r^{[\beta]}_x = \frac{\tilde r^{[\beta]}_{x-1}}{\tau_{x-1}^{\beta}\left[\psi(\beta)+\frac{\beta}{\tau_x}\right]} = \tilde r^{[\beta]}_0\prod_{y=1}^{x}\frac{1}{\tau_{y-1}^{\beta}\left[\psi(\beta)+\frac{\beta}{\tau_y}\right]}$$
$$\tilde l^{[\beta]}_y = \tau_{y-1}^{\beta}\left[\psi(\beta)+\frac{\beta}{\tau_{y-1}}\right]\tilde l^{[\beta]}_{y-1} = \tilde l^{[\beta]}_0\prod_{x=1}^{y}\tau_{x-1}^{\beta}\left[\psi(\beta)+\frac{\beta}{\tau_{x-1}}\right] \qquad (162)$$
The periodic boundary conditions $\tilde r^{[\beta]}_L=\tilde r^{[\beta]}_0$ and $\tilde l^{[\beta]}_L=\tilde l^{[\beta]}_0$ yield the equation for the eigenvalue ψ(β)
$$1 = \prod_{x=1}^{L}\tau_x^{\beta}\left[\psi(\beta)+\frac{\beta}{\tau_x}\right] = \prod_{x=1}^{L}\left[\psi(\beta)\tau_x^{\beta}+\beta\tau_x^{\beta-1}\right] \qquad (163)$$
while the positivity of the components of the Perron-Frobenius eigenvectors of Eqs 162 implies
$$\psi(\beta) \geq -\frac{\beta}{\tau_x}\ \ {\rm for}\ x=1,2,..,L \qquad (164)$$

E. Corresponding conditioned process constructed via the generalization of Doob's h-transform
Using the left eigenvector of Eq. 162, one obtains that the probability-preserving Markov matrix obtained via the generalization of Doob's h-transform of Eq. 148 is of the same form as the initial Markov matrix of Eq. 149
$$\tilde{\tilde w}^{[\beta]}_{x,y} = \frac{\tilde l^{[\beta]}_x\,\tilde w^{[\beta]}_{x,y}}{\tilde l^{[\beta]}_y}-\psi(\beta)\,\delta_{x,y} = \delta_{x,y+1}\,\frac{1}{\tilde{\tilde\tau}^{[\beta]}_y}-\delta_{x,y}\,\frac{1}{\tilde{\tilde\tau}^{[\beta]}_y} \qquad (165)$$
with the modified trapping times
$$\frac{1}{\tilde{\tilde\tau}^{[\beta]}_y} = \psi(\beta)+\frac{\beta}{\tau_y} \qquad (166)$$
The corresponding conditioned density of Eq. 60 is given by the analog of the steady state of Eq. 156 with the modified trapping times of Eq. 166
$$\tilde{\tilde\rho}^{[\beta]}_y = \frac{\tilde{\tilde\tau}^{[\beta]}_y}{\sum_{x=1}^{L}\tilde{\tilde\tau}^{[\beta]}_x} \qquad (167)$$
Let us now describe special values of β.

F. Special value β = 0

For β = 0, Eq. 163 yields the following value independent of the trapping times
$$\psi(\beta=0) = 1 \qquad (168)$$
This simple value can be understood from the integration of the measure on the first line of Eq. 124, where the sums over the positions disappear as a consequence of the one-dimensional directed character of the present directed trap model
$$\sum_{M=0}^{+\infty}\int_0^T dt_M\int_0^{t_M}dt_{M-1}...\int_0^{t_2}dt_1 = \sum_{M=0}^{+\infty}\frac{1}{M!}\prod_{m=1}^{M}\int_0^T dt_m = \sum_{M=0}^{+\infty}\frac{T^M}{M!} = e^{T} \qquad (169)$$
The modified trapping times of Eq. 166 all become equal to unity
$$\frac{1}{\tilde{\tilde\tau}^{[\beta=0]}_x} = \psi(\beta=0) = 1 \qquad (170)$$
and the corresponding conditioned density of Eq. 167 becomes uniform
$$\tilde{\tilde\rho}^{[\beta=0]}_y = \frac{1}{L} \qquad (171)$$

G. Limit β → +∞ and the minimal intensive information I_min

In the limit β → +∞, one expects that ψ(β) is negative with the linear behavior of Eq. 16
$$\psi(\beta) \mathop{\simeq}_{\beta\to+\infty} -\beta I_{\min} \qquad (172)$$
The constraint of Eq. 164 yields
$$I_{\min} \leq \frac{1}{\tau_x}\ \ {\rm for}\ x=1,2,..,L \qquad (173)$$
while Eq. 163 becomes
$$0 \mathop{\simeq}_{\beta\to+\infty} \sum_{x=1}^{L}\ln\left(\beta\tau_x^{\beta}\left[\frac{1}{\tau_x}-I_{\min}\right]\right) \mathop{\simeq}_{\beta\to+\infty} L\ln(\beta)+\beta\sum_{x=1}^{L}\ln(\tau_x)+\sum_{x=1}^{L}\ln\left[\frac{1}{\tau_x}-I_{\min}\right] \qquad (174)$$
So the minimum information is determined by the maximal trapping time of the ring occurring at some position $y_{\max}$ (Eq. 70)
$$I_{\min} = \frac{1}{\max_{1\leq y\leq L}\tau_y} = \frac{1}{\tau_{y_{\max}}} \qquad (175)$$
in agreement with the direct analysis of Eq. 154, where the corresponding trajectory with the highest individual trajectory probability of Eq. 7 is the trajectory that remains on the site $y_{\max}$ (Eq. 152). In the Doob generator of the conditioned process, the modified trapping times of Eq. 166
$$\tilde{\tilde\tau}^{[\beta]}_y \mathop{\simeq}_{\beta\to+\infty} \frac{1}{\beta\left(\frac{1}{\tau_y}-I_{\min}\right)} \mathop{\simeq}_{\beta\to+\infty} \frac{1}{\beta\left(\frac{1}{\tau_y}-\frac{1}{\tau_{y_{\max}}}\right)} \mathop{\simeq}_{\beta\to+\infty} \begin{cases} 0 & {\rm if}\ y\neq y_{\max} \\ +\infty & {\rm if}\ y=y_{\max}\end{cases} \qquad (176)$$
vanish at all the sites $y\neq y_{\max}$ but diverge for $y=y_{\max}$, so that the corresponding conditioned density of Eq. 167 is fully localized on the site $y_{\max}$
$$\tilde{\tilde\rho}^{[\beta]}_y \mathop{\simeq}_{\beta\to+\infty} \delta_{y,y_{\max}} \qquad (177)$$
in agreement with the physical interpretation of the localized trajectory of Eq. 152.

H. Series expansion in β = 1 + ε up to order ε²

For β = 1 + ε, the expansion of the logarithm of Eq. 163 reads up to second order in ε
$$0 = \sum_{x=1}^{L}\Big(\ln\left[\psi(1+\epsilon)\tau_x+(1+\epsilon)\right]+\epsilon\ln(\tau_x)\Big) = \sum_{x=1}^{L}\ln\left[1+\epsilon\left(1+\psi'(1)\tau_x\right)+\epsilon^2\,\frac{\psi''(1)\tau_x}{2}\right]+\epsilon\sum_{x=1}^{L}\ln(\tau_x)$$
$$= \sum_{x=1}^{L}\left[\epsilon\left(1+\psi'(1)\tau_x+\ln(\tau_x)\right)+\epsilon^2\left(\frac{\psi''(1)\tau_x}{2}-\frac{\left(1+\psi'(1)\tau_x\right)^2}{2}\right)\right]+O(\epsilon^3)$$
$$= \epsilon\left(L+\psi'(1)\sum_{x=1}^{L}\tau_x+\sum_{x=1}^{L}\ln(\tau_x)\right)+\frac{\epsilon^2}{2}\left[\psi''(1)\sum_{x=1}^{L}\tau_x-\sum_{x=1}^{L}\left(1+\psi'(1)\tau_x\right)^2\right]+O(\epsilon^3) \qquad (178)$$
So the order ε allows to recover the Kolmogorov-Sinai entropy of Eq. 157
$$h_{KS}[\tau_{y=1,2,..,L}] = -\psi'(1) = \frac{L+\sum_{y=1}^{L}\ln(\tau_y)}{\sum_{x=1}^{L}\tau_x} \qquad (179)$$
while the order ε² yields the second derivative
$$\psi''(1) = \frac{\sum_{y=1}^{L}\left(1+\psi'(1)\tau_y\right)^2}{\sum_{x=1}^{L}\tau_x} = 2\psi'(1)+\frac{L+[\psi'(1)]^2\sum_{y=1}^{L}\tau_y^2}{\sum_{x=1}^{L}\tau_x} \qquad (180)$$
As a function of the L trapping times $\tau_{y=1,2,..,L}$ of the disordered ring, the scaled variance of Eq. 27 reads
$$V_{KS}[\tau_{y=1,2,..,L}] = \frac{\sum_{y=1}^{L}\left(1-\tau_y h_{KS}[\tau_.]\right)^2}{\sum_{x=1}^{L}\tau_x} = -2h_{KS}[\tau_.]+\frac{L+h^2_{KS}[\tau_.]\sum_{y=1}^{L}\tau_y^2}{\sum_{x=1}^{L}\tau_x} \qquad (181)$$
where $h_{KS}[\tau_{y=1,2,..,L}]$ was given in Eq. 157. Let us now analyze its behavior for large L when the probability distribution q(τ) of the trapping times τ ∈ ]1,+∞[ is the power-law of Eq. B1 depending on the parameter μ > 0:

(i) in the region μ > 2 where the second moment $\overline{\tau^2}$ of the trapping time is finite (Eq. B4), both the numerator and the denominator of Eq. 181 will follow the law of large numbers while the Kolmogorov-Sinai entropy converges towards the finite asymptotic value of Eq. 158. As a consequence, the scaled variance of Eq. 181 will then converge towards the finite asymptotic value
$$V^{(\infty)}_{KS} = \frac{\int_1^{+\infty}d\tau\, q(\tau)\left[1-\tau h^{(\infty)}_{KS}\right]^2}{\int_1^{+\infty}d\tau\,\tau\, q(\tau)} = -2h^{(\infty)}_{KS}+\frac{1+\left[h^{(\infty)}_{KS}\right]^2\int_1^{+\infty}d\tau\, q(\tau)\,\tau^2}{\int_1^{+\infty}d\tau\,\tau\, q(\tau)} \ \ {\rm for}\ \mu>2 \qquad (182)$$

(ii) in the region 1 < μ < 2 where the second moment $\overline{\tau^2}$ of the trapping time is infinite (Eq. B4) while the first moment $\overline{\tau}$ remains finite (Eq. B3), the only anomalous scaling will come from the sum of the squares of the trapping times discussed around Eq. B18. As a consequence, the scaled variance $V_{KS}$ will not remain finite as in Eq. 182, but will diverge with the scaling $L^{\frac{2}{\mu}-1}$ of exponent $\left(\frac{2}{\mu}-1\right)\in]0,1[$
$$V^{(L)}_{KS} \mathop{\simeq}_{L\to+\infty} L^{\frac{2}{\mu}-1}\,\vartheta\,\frac{\left[h^{(\infty)}_{KS}\right]^2}{\int_1^{+\infty}d\tau\,\tau\, q(\tau)} \ \ {\rm for}\ 1<\mu<2 \qquad (183)$$
and remains a random variable even for large L since the rescaled variable ϑ of Eq. B21 is distributed with the Lévy law $L_{\mu/2}(\vartheta)$.

(iii) in the region 0 < μ < 1 where both the first moment $\overline{\tau}$ and the second moment $\overline{\tau^2}$ are infinite (Eqs B3 and B4), while the Kolmogorov-Sinai entropy does not converge anymore towards the finite asymptotic value of Eq. 158, one needs to return to the finite-size expression of Eq. 157 for the Kolmogorov-Sinai entropy and to re-analyze the leading behavior of Eq. 181 in terms of the sums $\Sigma_L$ of Eq. B6 and $\Upsilon_L$ of Eq. B18
$$V_{KS}[\tau_{y=1,2,..,L}] \mathop{\simeq}_{L\to+\infty} \frac{\Upsilon_L\, L^2}{\Sigma_L^3}\left[1+\int_1^{+\infty}d\tau\, q(\tau)\ln(\tau)\right]^2 \mathop{\propto}_{L\to+\infty} L^{2-\frac{1}{\mu}} \ \ {\rm for}\ 0<\mu<1 \qquad (184)$$
where the scaling $L^{2-\frac{1}{\mu}}$ is different from Eq. 183, while the limit distribution would require a more refined analysis of the ratio $\Upsilon_L/\Sigma_L^3$ involving the two correlated sums of Eq. B6 and Eq. B18.

I. Direct analysis of self-averaging observables in the thermodynamic limit of an infinite ring L → +∞

As discussed above, the Kolmogorov-Sinai entropy $h_{KS}=-\psi'(1)$ is self-averaging in the thermodynamic limit L → +∞ only for μ > 1, while the scaled variance $V_{KS}=\psi''(1)$ is self-averaging in the thermodynamic limit L → +∞ only for μ > 2. More generally, if the distribution q(τ) has all its moments finite (in contrast to the power-law form of Eq. B1 discussed up to now), then the scaled cumulant generating function ψ(β) and its derivatives will be self-averaging in the thermodynamic limit L → +∞. If one rewrites Eq. 163 via its logarithm and divides by the size L of the ring
$$0 = \frac{1}{L}\sum_{x=1}^{L}\ln\left[\psi(\beta)\tau_x^{\beta}+\beta\tau_x^{\beta-1}\right] \qquad (185)$$
one obtains that the self-averaging value $\psi_{L=\infty}(\beta)$ in the thermodynamic limit L → +∞ is determined by the equation
$$0 = \int_1^{+\infty}d\tau\, q(\tau)\ln\left[\psi_{\infty}(\beta)\tau^{\beta}+\beta\tau^{\beta-1}\right] \qquad (186)$$
However, whenever there are non-self-averaging effects, one should return to the finite-size Eq. 185 to analyze them, as described above for the first two derivatives $\psi'(1)=-h_{KS}$ and $\psi''(1)=V_{KS}$.

VII. CONCLUSION
In this paper, we have revisited the Ruelle thermodynamic formalism for discrete-time Markov chains and for continuous-time Markov jump processes in the language of the large deviation theory for the intensive information, which represents a particularly interesting additive observable of the dynamical trajectories. We have described how the generating function of the information can be analyzed via the appropriate β-deformed Markov generator and the Doob generator of the associated β-conditioned process. In particular, we have stressed that the Kolmogorov-Sinai entropy $h_{KS}$ only requires the knowledge of the steady-state $P^*$, while the scaled variance $V_{KS}$ of the information requires the knowledge of the Green function G. As examples of applications where all the introduced notions can be explicitly evaluated as a function of the parameter β, we have focused on the Directed Random Trap Model both in discrete time and in continuous time, in order to show explicitly that, despite some important technical differences between the two frameworks, the same conclusions emerge for the physical observables that characterize the glassiness of the dynamics. In particular, we have analyzed in detail how the Kolmogorov-Sinai entropy $h_{KS}$ and the scaled variance $V_{KS}$ display anomalous scaling laws with the size L and non-self-averaging effects in some regions of parameters.

Appendix A: Reminder on the perturbation theory for an isolated eigenvalue of a non-symmetric matrix
In this Appendix, we consider the expansion of the non-symmetric matrix
$$M(\epsilon) = M^{(0)}+\epsilon M^{(1)}+\epsilon^2 M^{(2)}+O(\epsilon^3) \qquad (A1)$$
The goal is to compute the series expansion of the isolated eigenvalue denoted by the index 0
$$E_0(\epsilon) = E^{(0)}_0+\epsilon E^{(1)}_0+\epsilon^2 E^{(2)}_0+O(\epsilon^3) \qquad (A2)$$
with its corresponding right and left eigenvectors
$$|r_0(\epsilon)\rangle = |r^{(0)}_0\rangle+\epsilon|r^{(1)}_0\rangle+\epsilon^2|r^{(2)}_0\rangle+O(\epsilon^3) \qquad\qquad \langle l_0(\epsilon)| = \langle l^{(0)}_0|+\epsilon\langle l^{(1)}_0|+\epsilon^2\langle l^{(2)}_0|+O(\epsilon^3) \qquad (A3)$$
1. Eigenvalue equations and normalization of the eigenvectors
One writes the series expansion of the eigenvalue equation for the right eigenvector
$$0 = (M(\epsilon)-E_0(\epsilon))|r_0(\epsilon)\rangle = \left(M^{(0)}-E^{(0)}_0\right)|r^{(0)}_0\rangle+\epsilon\left[\left(M^{(0)}-E^{(0)}_0\right)|r^{(1)}_0\rangle+\left(M^{(1)}-E^{(1)}_0\right)|r^{(0)}_0\rangle\right]$$
$$+\epsilon^2\left[\left(M^{(0)}-E^{(0)}_0\right)|r^{(2)}_0\rangle+\left(M^{(1)}-E^{(1)}_0\right)|r^{(1)}_0\rangle+\left(M^{(2)}-E^{(2)}_0\right)|r^{(0)}_0\rangle\right]+O(\epsilon^3) \qquad (A4)$$
and for the left eigenvector
$$0 = \langle l_0(\epsilon)|(M(\epsilon)-E_0(\epsilon)) = \langle l^{(0)}_0|\left(M^{(0)}-E^{(0)}_0\right)+\epsilon\left[\langle l^{(1)}_0|\left(M^{(0)}-E^{(0)}_0\right)+\langle l^{(0)}_0|\left(M^{(1)}-E^{(1)}_0\right)\right]$$
$$+\epsilon^2\left[\langle l^{(2)}_0|\left(M^{(0)}-E^{(0)}_0\right)+\langle l^{(1)}_0|\left(M^{(1)}-E^{(1)}_0\right)+\langle l^{(0)}_0|\left(M^{(2)}-E^{(2)}_0\right)\right]+O(\epsilon^3) \qquad (A5)$$
as well as the series expansion of the normalization
$$1 = \langle l_0(\epsilon)|r_0(\epsilon)\rangle = \langle l^{(0)}_0|r^{(0)}_0\rangle+\epsilon\left(\langle l^{(0)}_0|r^{(1)}_0\rangle+\langle l^{(1)}_0|r^{(0)}_0\rangle\right)+\epsilon^2\left(\langle l^{(0)}_0|r^{(2)}_0\rangle+\langle l^{(1)}_0|r^{(1)}_0\rangle+\langle l^{(2)}_0|r^{(0)}_0\rangle\right)+O(\epsilon^3) \qquad (A6)$$
For ε = 0, one assumes that one knows the unperturbed eigenvalue $E^{(0)}_0$ of the unperturbed matrix $M^{(0)}$, together with its right and left eigenvectors properly normalized
$$0 = \left(M^{(0)}-E^{(0)}_0\right)|r^{(0)}_0\rangle \qquad 0 = \langle l^{(0)}_0|\left(M^{(0)}-E^{(0)}_0\right) \qquad 1 = \langle l^{(0)}_0|r^{(0)}_0\rangle \qquad (A7)$$
2. First-order perturbation
At order ε, the standard choice that respects the normalization of Eq. A6 is given by the orthogonality conditions for the first-order corrections of the eigenvectors with respect to the unperturbed eigenvectors
$$0 = \langle l^{(0)}_0|r^{(1)}_0\rangle \qquad\qquad 0 = \langle l^{(1)}_0|r^{(0)}_0\rangle \qquad (A8)$$
Then the eigenvalue Eq. A4 for the right eigenvector at order ε
$$0 = \left(M^{(0)}-E^{(0)}_0\right)|r^{(1)}_0\rangle+\left(M^{(1)}-E^{(1)}_0\right)|r^{(0)}_0\rangle \qquad (A9)$$
can be projected onto the unperturbed left eigenvector $\langle l^{(0)}_0|$ to obtain the first-order correction of the eigenvalue
$$E^{(1)}_0 = \langle l^{(0)}_0|M^{(1)}|r^{(0)}_0\rangle \qquad (A10)$$
Equivalently, the eigenvalue Eq. A5 for the left eigenvector at order ε
$$0 = \langle l^{(1)}_0|\left(M^{(0)}-E^{(0)}_0\right)+\langle l^{(0)}_0|\left(M^{(1)}-E^{(1)}_0\right) \qquad (A11)$$
can be projected onto the unperturbed right eigenvector $|r^{(0)}_0\rangle$ to obtain again Eq. A10. To obtain the first-order corrections of the eigenvectors, one needs to introduce the inverse of the operator $(E^{(0)}_0-M^{(0)})$ within the subspace orthogonal to the subspace $(|r^{(0)}_0\rangle\langle l^{(0)}_0|)$ of the unperturbed eigenvectors
$$G^{(0)} \equiv \left(\mathbb{1}-|r^{(0)}_0\rangle\langle l^{(0)}_0|\right)\frac{1}{E^{(0)}_0-M^{(0)}}\left(\mathbb{1}-|r^{(0)}_0\rangle\langle l^{(0)}_0|\right) \qquad (A12)$$
The application of this Green function $G^{(0)}$ on the left of Eq. A9 and on the right of Eq. A11 yields that the first-order corrections for the eigenvectors read, using Eq. A8,
$$|r^{(1)}_0\rangle = G^{(0)}M^{(1)}|r^{(0)}_0\rangle \qquad\qquad \langle l^{(1)}_0| = \langle l^{(0)}_0|M^{(1)}G^{(0)} \qquad (A13)$$
If the Green function $G^{(0)}$ of Eq. A12 is computed in the basis of the eigenvectors of the unperturbed matrix $M^{(0)}$
$$G^{(0)} = \sum_{k\neq 0}\frac{|r^{(0)}_k\rangle\langle l^{(0)}_k|}{E^{(0)}_0-E^{(0)}_k} \qquad (A14)$$
one recovers the analog of the familiar formulas from quantum mechanics perturbation theory.
However, if one does not know the full spectrum of the unperturbed matrix $M^{(0)}$, or if one does not wish to compute it, the Green function $G^{(0)}$ of Eq. (A12) can be computed directly by solving the matrix equations

$$ \left( E^{(0)}_0 - M^{(0)} \right) G^{(0)} = \mathbb{1} - \vert r^{(0)}_0 \rangle \langle l^{(0)}_0 \vert = G^{(0)} \left( E^{(0)}_0 - M^{(0)} \right) \tag{A15} $$

with the orthogonality conditions

$$ \langle l^{(0)}_0 \vert G^{(0)} = 0 = G^{(0)} \vert r^{(0)}_0 \rangle \tag{A16} $$
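The first-order formulas can be verified numerically. The sketch below assumes two generic random matrices standing in for $M^{(0)}$ and $M^{(1)}$ (our own illustration, not matrices from the text); $G^{(0)}$ is obtained by solving the linear system of Eq. (A15) with the projections of Eq. (A16), rather than from the spectral decomposition of Eq. (A14), and the prediction $E^{(0)}_0 + \epsilon E^{(1)}_0$ is compared with the exact top eigenvalue of $M^{(0)} + \epsilon M^{(1)}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M0 = rng.random((n, n))   # unperturbed matrix (arbitrary, positive entries)
M1 = rng.random((n, n))   # perturbation M^(1)

def top_pair(M):
    """Eigenvalue of largest real part with right/left eigenvectors, <l|r> = 1."""
    ev, R = np.linalg.eig(M)
    r = R[:, np.argmax(ev.real)]
    evl, Lm = np.linalg.eig(M.T)
    l = Lm[:, np.argmax(evl.real)]
    return ev[np.argmax(ev.real)], r, l / (l @ r)

E0, r0, l0 = top_pair(M0)

# First-order eigenvalue correction, Eq. (A10)
E1 = l0 @ M1 @ r0

# Green function G^(0): solve (E0 - M0) G = 1 - |r0><l0| of Eq. (A15);
# the projections enforce the orthogonality conditions of Eq. (A16)
P = np.eye(n) - np.outer(r0, l0)
G0 = np.linalg.lstsq(E0 * np.eye(n) - M0, P, rcond=None)[0]
G0 = P @ G0 @ P

# First-order right-eigenvector correction, Eq. (A13), checked against Eq. (A9)
r1 = G0 @ M1 @ r0
assert np.allclose((M0 - E0 * np.eye(n)) @ r1 + (M1 - E1 * np.eye(n)) @ r0, 0)

# Exact top eigenvalue of M0 + eps*M1 agrees with E0 + eps*E1 up to O(eps^2)
eps = 1e-6
E_exact = top_pair(M0 + eps * M1)[0]
print(abs(E_exact - (E0 + eps * E1)))   # O(eps^2)
```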
3. Second-order perturbation
The eigenvalue Eq. (A4) for the right eigenvector at order $\epsilon^2$

$$ 0 = \left( M^{(0)} - E^{(0)}_0 \right) \vert r^{(2)}_0 \rangle + \left( M^{(1)} - E^{(1)}_0 \right) \vert r^{(1)}_0 \rangle + \left( M^{(2)} - E^{(2)}_0 \right) \vert r^{(0)}_0 \rangle \tag{A17} $$

can be projected onto the unperturbed left eigenvector $\langle l^{(0)}_0 \vert$ to obtain the second-order correction of the eigenvalue

$$ E^{(2)}_0 = \langle l^{(0)}_0 \vert M^{(1)} \vert r^{(1)}_0 \rangle + \langle l^{(0)}_0 \vert M^{(2)} \vert r^{(0)}_0 \rangle \tag{A18} $$

Similarly, the projection of the eigenvalue Eq. (A5) for the left eigenvector at order $\epsilon^2$

$$ 0 = \langle l^{(2)}_0 \vert \left( M^{(0)} - E^{(0)}_0 \right) + \langle l^{(1)}_0 \vert \left( M^{(1)} - E^{(1)}_0 \right) + \langle l^{(0)}_0 \vert \left( M^{(2)} - E^{(2)}_0 \right) \tag{A19} $$

onto the unperturbed right eigenvector $\vert r^{(0)}_0 \rangle$ yields

$$ E^{(2)}_0 = \langle l^{(1)}_0 \vert M^{(1)} \vert r^{(0)}_0 \rangle + \langle l^{(0)}_0 \vert M^{(2)} \vert r^{(0)}_0 \rangle \tag{A20} $$

Using the first-order corrections of the eigenvectors of Eq. (A13), one obtains that Eqs. (A18) and (A20) give the same final result

$$ E^{(2)}_0 = \langle l^{(0)}_0 \vert M^{(1)} G^{(0)} M^{(1)} \vert r^{(0)}_0 \rangle + \langle l^{(0)}_0 \vert M^{(2)} \vert r^{(0)}_0 \rangle \tag{A21} $$

as it should for consistency. If one uses the decomposition of Eq. (A14) for the Green function $G^{(0)}$, one recovers the analog of the familiar formula from quantum-mechanical perturbation theory, but $G^{(0)}$ can also be computed directly from Eqs. (A15) and (A16).

Appendix B: Reminder on Lévy sums
Let us assume that the probability distribution $q(\tau)$ of the trapping time $\tau \in ]1, +\infty[$ is the power law depending on the parameter $\mu > 0$

$$ q(\tau) = \frac{\mu}{\tau^{1+\mu}} \tag{B1} $$

The non-integer moments of order $k$ are finite only for $k < \mu$

$$ \overline{\tau^k} = \int_1^{+\infty} d\tau \, \tau^k q(\tau) = \frac{\mu}{\mu - k} \qquad \text{for } k < \mu \tag{B2} $$

and diverge for $k \geq \mu$. In particular, the first moment $k = 1$ is finite only for $\mu > 1$

$$ \overline{\tau} = \int_1^{+\infty} d\tau \, \tau \, q(\tau) = \frac{\mu}{\mu - 1} \qquad \text{for } \mu > 1 \tag{B3} $$

while the second moment $k = 2$ is finite only for $\mu > 2$

$$ \overline{\tau^2} = \int_1^{+\infty} d\tau \, \tau^2 q(\tau) = \frac{\mu}{\mu - 2} \qquad \text{for } \mu > 2 \tag{B4} $$

so that the variance

$$ \sigma^2 \equiv \overline{\tau^2} - (\overline{\tau})^2 = \frac{\mu}{\mu - 2} - \left( \frac{\mu}{\mu - 1} \right)^2 \qquad \text{for } \mu > 2 \tag{B5} $$

is finite only for $\mu > 2$. As a consequence, the sum of $L$ independent trapping times

$$ \Sigma_L \equiv \sum_{x=1}^{L} \tau_x \tag{B6} $$

will have a finite averaged value only for $\mu > 1$

$$ \overline{\Sigma_L} = L \overline{\tau} = L \frac{\mu}{\mu - 1} \qquad \text{for } \mu > 1 \tag{B7} $$

and a finite variance only for $\mu > 2$

$$ \overline{\Sigma_L^2} - (\overline{\Sigma_L})^2 = L \left( \overline{\tau^2} - (\overline{\tau})^2 \right) = L \sigma^2 \qquad \text{for } \mu > 2 \tag{B8} $$

The Central Limit Theorem is thus valid only for $\mu > 2$, where the appropriate rescaled variable

$$ \theta \equiv \frac{\Sigma_L - L \overline{\tau}}{\sqrt{L} \, \sigma} \tag{B9} $$

will be Gaussian distributed. For $0 < \mu < 2$, the sum $\Sigma_L$ can be analyzed instead in terms of Lévy stable laws. Note that Lévy sums have been analyzed in the context of various disordered systems [73–79].
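The Central-Limit regime of Eq. (B9) is easy to verify by direct sampling. In the minimal Python sketch below, the inverse-CDF formula $\tau = U^{-1/\mu}$ (with $U$ uniform on $]0,1[$) samples the law (B1), and the choice $\mu = 5$, safely inside the region $\mu > 2$, as well as the sample sizes are our own illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 5.0           # mu > 2 : mean and variance of q(tau) are both finite
L = 5_000          # number of trapping times per sum Sigma_L
samples = 1_000    # number of independent realizations of Sigma_L

# Inverse-CDF sampling of q(tau) = mu / tau^(1+mu) on ]1,+inf[ : tau = U^(-1/mu)
tau = rng.random((samples, L)) ** (-1.0 / mu)

mean = mu / (mu - 1.0)              # Eq. (B3)
var = mu / (mu - 2.0) - mean**2     # Eq. (B5)

Sigma_L = tau.sum(axis=1)
theta = (Sigma_L - L * mean) / np.sqrt(L * var)   # rescaled variable of Eq. (B9)
print(theta.mean(), theta.var())   # both close to 0 and 1, as for a standard Gaussian
```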
1. Lévy sum $\Sigma_L$ for $0 < \mu < 1$

For $0 < \mu < 1$, the Laplace transform of the distribution of Eq. (B1) presents the characteristic singularity in $p^\mu$ in the Laplace variable $p$ near the origin $p \to 0$

$$ \overline{e^{-p\tau}} = \int_1^{+\infty} d\tau \, q(\tau) e^{-p\tau}
= 1 - \int_1^{+\infty} d\tau \frac{\mu}{\tau^{1+\mu}} \left( 1 - e^{-p\tau} \right)
= 1 - p^\mu \int_p^{+\infty} dt \frac{\mu}{t^{1+\mu}} \left( 1 - e^{-t} \right)
\mathop{\simeq}_{p \to 0} 1 - p^\mu \int_0^{+\infty} dt \frac{\mu}{t^{1+\mu}} \left( 1 - e^{-t} \right) + ...
\mathop{\simeq}_{p \to 0} e^{-p^\mu \int_0^{+\infty} dt \frac{\mu}{t^{1+\mu}} (1 - e^{-t}) + ...} \tag{B10} $$

So the generating function of the sum of Eq. (B6) will display the same singularity

$$ \overline{e^{-p \Sigma_L}} = \left( \overline{e^{-p\tau}} \right)^L
\mathop{\simeq}_{p \to 0} e^{-L p^\mu \int_0^{+\infty} dt \frac{\mu}{t^{1+\mu}} (1 - e^{-t}) + ...}
= e^{-\left( L^{\frac{1}{\mu}} p \right)^\mu \int_0^{+\infty} dt \frac{\mu}{t^{1+\mu}} (1 - e^{-t}) + ...} \tag{B11} $$

This means that the sum $\Sigma_L$ will grow as $L^{\frac{1}{\mu}}$, i.e. more rapidly than linearly in $L$, and that the appropriate rescaled variable

$$ \theta \equiv \frac{\Sigma_L}{L^{\frac{1}{\mu}}} \tag{B12} $$

will be distributed with the Lévy law ${\cal L}_\mu(\theta)$ of index $\mu \in ]0,1[$ determined by the Laplace transform

$$ \int_0^{+\infty} d\theta \, {\cal L}_\mu(\theta) \, e^{-p\theta} = e^{-p^\mu \int_0^{+\infty} dt \frac{\mu}{t^{1+\mu}} (1 - e^{-t})} \tag{B13} $$
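Eq. (B13) can be checked by direct simulation. In the sketch below we take $\mu = 1/2$; an integration by parts gives the constant $\int_0^{+\infty} dt \, \mu t^{-1-\mu} (1 - e^{-t}) = \Gamma(1-\mu)$, and the sample sizes are our own arbitrary choices.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
mu = 0.5           # 0 < mu < 1 : even the mean of q(tau) diverges
L = 1_000
samples = 4_000

# tau = U^(-1/mu) samples q(tau) = mu / tau^(1+mu) on ]1,+inf[
tau = rng.random((samples, L)) ** (-1.0 / mu)
theta = tau.sum(axis=1) / L ** (1.0 / mu)   # rescaled sum of Eq. (B12)

# Laplace transform of Eq. (B13); by parts the constant equals Gamma(1-mu)
p = 1.0
empirical = np.exp(-p * theta).mean()
predicted = math.exp(-(p ** mu) * math.gamma(1.0 - mu))
print(empirical, predicted)   # agree up to finite-L and sampling errors
```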
2. Lévy sum $\Sigma_L$ for $1 < \mu < 2$

In the region $1 < \mu < 2$, the averaged value $\overline{\tau}$ of Eq. (B3) exists. In the Laplace transform of the distribution of Eq. (B1), the singularity in $p^\mu$ will thus appear after the regular term in $p$

$$ \overline{e^{-p\tau}} = \int_1^{+\infty} d\tau \, q(\tau) e^{-p\tau}
= 1 - p \overline{\tau} - \int_1^{+\infty} d\tau \frac{\mu}{\tau^{1+\mu}} \left( 1 - p\tau - e^{-p\tau} \right)
= 1 - p \overline{\tau} - p^\mu \int_p^{+\infty} dt \frac{\mu}{t^{1+\mu}} \left( 1 - t - e^{-t} \right)
\mathop{\simeq}_{p \to 0} 1 - p \overline{\tau} - p^\mu \int_0^{+\infty} dt \frac{\mu}{t^{1+\mu}} \left( 1 - t - e^{-t} \right) + ...
\mathop{\simeq}_{p \to 0} e^{-p \overline{\tau} - p^\mu \int_0^{+\infty} dt \frac{\mu}{t^{1+\mu}} (1 - t - e^{-t}) + ...} \tag{B14} $$

So the generating function of the sum of Eq. (B6) will display the same singularity

$$ \overline{e^{-p \Sigma_L}} = \left( \overline{e^{-p\tau}} \right)^L
\mathop{\simeq}_{p \to 0} e^{-L p \overline{\tau} - L p^\mu \int_0^{+\infty} dt \frac{\mu}{t^{1+\mu}} (1 - t - e^{-t}) + ...} \tag{B15} $$

This means that the difference $(\Sigma_L - L \overline{\tau})$ will scale as $L^{\frac{1}{\mu}}$, i.e. it will be bigger than the Central-Limit fluctuations of order $\sqrt{L}$ of Eq. (B9). The appropriate rescaled variable

$$ \theta \equiv \frac{\Sigma_L - L \overline{\tau}}{L^{\frac{1}{\mu}}} \tag{B16} $$

will be distributed with the Lévy law ${\cal L}_\mu(\theta)$ of index $\mu \in ]1,2[$ determined by the Laplace transform

$$ \int_{-\infty}^{+\infty} d\theta \, {\cal L}_\mu(\theta) \, e^{-p\theta} = e^{-p^\mu \int_0^{+\infty} dt \frac{\mu}{t^{1+\mu}} (1 - t - e^{-t})} \tag{B17} $$
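For $1 < \mu < 2$ the variance is infinite, so the anomalous scaling $(\Sigma_L - L\overline{\tau}) \sim L^{1/\mu}$ of Eq. (B16) is best seen on a robust measure of the fluctuations such as the interquartile range. The following sketch (the value $\mu = 3/2$ and the sample sizes are our own choices) estimates the growth exponent from two system sizes and finds it close to $1/\mu$ rather than the Central-Limit value $1/2$.

```python
import numpy as np

rng = np.random.default_rng(3)
mu = 1.5                      # 1 < mu < 2 : finite mean, infinite variance
mean = mu / (mu - 1.0)        # tau-bar of Eq. (B3)
samples = 2_000

def iqr_of_centered_sum(L):
    """Interquartile range of Sigma_L - L*tau_bar over many realizations."""
    tau = rng.random((samples, L)) ** (-1.0 / mu)   # tau = U^(-1/mu)
    centered = tau.sum(axis=1) - L * mean
    q25, q75 = np.percentile(centered, [25, 75])
    return q75 - q25

# Growth exponent of the fluctuation scale between two sizes
L1, L2 = 250, 4_000
alpha = np.log(iqr_of_centered_sum(L2) / iqr_of_centered_sum(L1)) / np.log(L2 / L1)
print(alpha)   # close to 1/mu ~ 0.667, well above the Central-Limit value 0.5
```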
3. Translation for the sum $\Upsilon_L$ of the squares of the trapping times

In the text, we will also need to analyze the sum of the squares of the trapping times

$$ \Upsilon_L \equiv \sum_{x=1}^{L} \tau_x^2 \tag{B18} $$

Since the variable $u = \tau^2 \in ]1, +\infty[$ is distributed with the probability

$$ Q(u) = \frac{\mu}{2} \, \frac{1}{u^{1+\frac{\mu}{2}}} \tag{B19} $$

that corresponds to the power-law form of Eq. (B1) but with the modified parameter $\mu' = \frac{\mu}{2}$, the previous discussion can be directly translated as follows:

(i) the Central Limit Theorem will be valid for the sum of Eq. (B18) only in the region $\mu' > 2$, i.e. for $\mu > 4$;

(ii) for $1 < \mu' < 2$, i.e. for $2 < \mu < 4$, the appropriate rescaled variable

$$ \vartheta \equiv \frac{\Upsilon_L - L \overline{\tau^2}}{L^{\frac{1}{\mu'}}} = \frac{\Upsilon_L - L \overline{\tau^2}}{L^{\frac{2}{\mu}}} \tag{B20} $$

will be distributed with the Lévy law ${\cal L}_{\mu'}(\vartheta) = {\cal L}_{\frac{\mu}{2}}(\vartheta)$ of index $\mu' \in ]1,2[$;

(iii) for $0 < \mu' < 1$, i.e. for $0 < \mu < 2$, the appropriate rescaled variable

$$ \vartheta \equiv \frac{\Upsilon_L}{L^{\frac{1}{\mu'}}} = \frac{\Upsilon_L}{L^{\frac{2}{\mu}}} \tag{B21} $$

will be distributed with the Lévy law ${\cal L}_{\mu'}(\vartheta) = {\cal L}_{\frac{\mu}{2}}(\vartheta)$ of index $\mu' \in ]0,1[$.

[1] D. Ruelle, "Thermodynamic Formalism: The Mathematical Structures of Equilibrium Statistical Mechanics", Cambridge University Press (1978).
[2] C. Beck and F. Schloegl, "Thermodynamics of Chaotic Systems: An Introduction", Cambridge University Press (1993).
[3] P. Gaspard and X.J. Wang, Phys. Rep. 235, 291 (1993).
[4] J.R. Dorfman, "An Introduction to Chaos in Nonequilibrium Statistical Mechanics", Cambridge University Press (1999).
[5] P. Gaspard, J. Stat. Phys. 117, 599 (2004).
[6] V. Lecomte, PhD Thesis "Thermodynamique des histoires et fluctuations hors d'équilibre", Université Paris 7 (2007).
[7] V. Lecomte, C. Appert-Rolland and F. van Wijland, Phys. Rev. Lett. 95, 010601 (2005).
[8] V. Lecomte, C. Appert-Rolland and F. van Wijland, J. Stat. Phys. 127, 51 (2007).
[9] V. Lecomte, C. Appert-Rolland and F. van Wijland, C. R. Physique 8, 609 (2007).
[10] J.P. Garrahan, R.L. Jack, V. Lecomte, E. Pitard, K. van Duijvendijk and F. van Wijland, Phys. Rev. Lett. 98, 195702 (2007).
[11] J.P. Garrahan, R.L. Jack, V. Lecomte, E. Pitard, K. van Duijvendijk and F. van Wijland, J. Phys. A 42, 075007 (2009).
[12] K. van Duijvendijk, R.L. Jack and F. van Wijland, Phys. Rev. E 81, 011110 (2010).
[13] C. Monthus, J. Stat. Mech. P03008 (2011).
[14] Y. Oono, Prog. Theor. Phys. Supp. 99, 165 (1989).
[15] R.S. Ellis, Physica D 133, 106 (1999).
[16] H. Touchette, Phys. Rep. 478, 1 (2009); H. Touchette, "Modern Computational Science 11: Lecture Notes from the 3rd International Oldenburg Summer School", BIS-Verlag der Carl von Ossietzky Universität Oldenburg (2011).
[17] B. Derrida, J. Stat. Mech. P07023 (2007).
[18] R.J. Harris and G.M. Schütz, J. Stat. Mech. P07020 (2007).
[19] E.M. Sevick, R. Prabhakar, S.R. Williams and D.J. Searles, Annu. Rev. Phys. Chem. 59, 603 (2008).
[20] H. Touchette and R.J. Harris, chapter "Large deviation approach to nonequilibrium systems" of the book "Nonequilibrium Statistical Physics of Small Systems: Fluctuation Relations and Beyond", Wiley (2013).
[21] L. Bertini, A. De Sole, D. Gabrielli, G. Jona-Lasinio and C. Landim, Rev. Mod. Phys. 87, 593 (2015).
[22] R.L. Jack and P. Sollich, Eur. Phys. J. Special Topics 224, 2351 (2015).
[23] A. Lazarescu, J. Phys. A: Math. Theor. 48, 503001 (2015).
[24] A. Lazarescu, J. Phys. A: Math. Theor. 50, 254004 (2017).
[25] R.L. Jack, Eur. Phys. J. B 93, 74 (2020).
[26] A. de La Fortelle, PhD Thesis "Contributions to the theory of large deviations and applications", INRIA Rocquencourt (2000).
[27] R. Chétrite, PhD Thesis "Grandes déviations et relations de fluctuation dans certains modèles de systèmes hors d'équilibre", ENS Lyon (2008).
[28] B. Wynants, PhD Thesis "Structures of Nonequilibrium Fluctuations", Catholic University of Leuven (2010), arXiv:1011.4210.
[29] R. Chétrite, HDR Thesis "Pérégrinations sur les phénomènes aléatoires dans la nature", Laboratoire J.A. Dieudonné, Université de Nice (2018).
[30] R.L. Jack and P. Sollich, Prog. Theor. Phys. Supp. 184, 304 (2010).
[31] D. Simon, J. Stat. Mech. P07017 (2009).
[32] V. Popkov, G.M. Schütz and D. Simon, J. Stat. Mech. P10007 (2010).
[33] D. Simon, J. Stat. Phys. 142, 931 (2011).
[34] V. Popkov and G.M. Schütz, J. Stat. Phys. 142, 627 (2011).
[35] V. Belitsky and G.M. Schütz, J. Stat. Phys. 152, 93 (2013).
[36] O. Hirschberg, D. Mukamel and G.M. Schütz, J. Stat. Mech. P11023 (2015).
[37] G.M. Schütz, in "From Particle Systems to Partial Differential Equations II", Springer Proceedings in Mathematics and Statistics Vol. 129, pp. 371-393, P. Gonçalves and A.J. Soares (Eds.), Springer, Cham (2015).
[38] R. Chétrite and H. Touchette, Phys. Rev. Lett. 111, 120601 (2013).
[39] R. Chétrite and H. Touchette, Ann. Henri Poincaré 16, 2005 (2015).