Nonequilibrium Markov processes conditioned on large deviations
Raphaël Chetrite
Laboratoire J. A. Dieudonné, UMR CNRS 7351, Université de Nice Sophia Antipolis, Nice 06108, France
Hugo Touchette
National Institute for Theoretical Physics (NITheP), Stellenbosch 7600, South Africa and
Institute of Theoretical Physics, Stellenbosch University, Stellenbosch 7600, South Africa
(Dated: October 10, 2014)

We consider the problem of conditioning a Markov process on a rare event and of representing this conditioned process by a conditioning-free process, called the effective or driven process. The basic assumption is that the rare event used in the conditioning is a large deviation-type event, characterized by a convex rate function. Under this assumption, we construct the driven process via a generalization of Doob's h-transform, used in the context of bridge processes, and show that this process is equivalent to the conditioned process in the long-time limit. The notion of equivalence that we consider is based on the logarithmic equivalence of path measures and implies that the two processes have the same typical states. In constructing the driven process, we also prove equivalence with the so-called exponential tilting of the Markov process, often used with importance sampling to simulate rare events and giving rise, from the point of view of statistical mechanics, to a nonequilibrium version of the canonical ensemble. Other links between our results and the topics of bridge processes, quasi-stationary distributions, stochastic control, and conditional limit theorems are mentioned.

Keywords: Markov processes, large deviations, conditioning, nonequilibrium processes, microcanonical and canonical ensembles
CONTENTS
I. Introduction
II. Notations and definitions
   A. Homogeneous Markov processes
   B. Pure jump processes and diffusions
   C. Conditioning observables
   D. Large deviation principle
   E. Nonequilibrium path ensembles
   F. Process equivalence
III. Non-conservative tilted process
   A. Definition
   B. Spectral elements
   C. Marginal canonical density
IV. Generalized Doob transform
   A. Definition
   B. Historical conditioning of Doob
V. Driven Markov process
   A. Definition
   B. Equivalence with the canonical path ensemble
   C. Equivalence with the microcanonical path ensemble
   D. Invariant density
   E. Reversibility properties
   F. Identities and constraints
VI. Applications
   A. Extensive Brownian bridge
   B. Ornstein–Uhlenbeck process
   C. Quasi-stationary distributions
A. Derivation of the tilted generator
   1. Pure jump processes
   2. Diffusion processes
B. Change of measure for the generalized Doob transform
C. Squared field for diffusion processes
D. Generator of the canonical path measure
E. Markov chains
Acknowledgments
References
I. INTRODUCTION
We treat in this paper the problem of conditioning a Markov process X_t on a rare event A_T defined on the time interval [0, T], and of representing this conditioned Markov process in terms of a conditioning-free Markov process Y_t, called the effective or driven process, having the same typical states as the conditioned process in the stationary limit T → ∞. More abstractly, this means that we are looking for a Markov process Y_t such that

X_t | A_T \cong Y_t,   (1)

where X_t | A_T stands for the conditioned process and ≅ is an asymptotic notion of process equivalence, related to the equivalence of ensembles in statistical physics, which we will come to define in a precise way below. Under some conditions on X_t, and for a certain class of large deviation-type events A_T, we will show that Y_t exists and is unique, and will construct its generator explicitly.

This problem can be considered as a generalization of Doob's work on Markov conditioning [1, 2] and also finds its source, from a more applied perspective, in many fundamental and seemingly unrelated problems of probability theory, stochastic simulations, optimal control theory, and nonequilibrium statistical mechanics. These are briefly discussed next to set the context of our work.

Conditioned Markov processes:
Doob was the first historically to consider conditioning of Markov processes, starting with the Wiener process conditioned on leaving the interval [0, ℓ] at the boundary {ℓ} [1, 2]. In solving this problem, he introduced a transformation of the Wiener process, now referred to as Doob's h-transform, which was later adapted under the same name to deal with other conditionings of stochastic processes, including the Brownian bridge [3], Gaussian bridges [4–6], and the Schrödinger bridge [7–11], obtained by conditioning a process on reaching a certain target distribution in time as opposed to a target point. Doob's transform also appears prominently in the theory of quasi-stationary distributions [12–16], which describes in the simplest case the conditioning of a process never to reach an absorbing state.

We discuss some of these historical examples in Sec. IV to explain how Doob's original transform relates to the large deviation conditioning considered here. Following this section, we will see that the construction of the driven process Y_t also gives rise to a process transformation, which is however different from Doob's transform because of the time-integrated character of the conditioning A_T considered.

Gibbs conditioning and conditional limit theorems:
Let X_1, . . . , X_n be a sequence of independent and identically distributed random variables with common distribution P(x) and let S_n denote their sample mean:

S_n = \frac{1}{n} \sum_{i=1}^{n} X_i.   (2)

A conditional limit theorem for this sequence refers to the distribution of X_1 obtained in the limit n → ∞ when the whole sequence X_1, . . . , X_n is conditioned on S_n being in a certain interval or on S_n assuming a certain value. In the latter case, it is known that, under some conditions on P(x),

\lim_{n \to \infty} P\{X_1 = x \mid S_n = s\} = \frac{P(x)\, e^{kx}}{W(k)} \equiv P_k(x),   (3)

where k is a real parameter related to the conditioning value s and W(k) is the generating function of P(x) normalizing the so-called exponentially tilted distribution P_k(x); see [17–20] for details. This asymptotic conditioning of a sequence of random variables is sometimes referred to as Gibbs conditioning [21] because of its similarity with the construction of the microcanonical ensemble of statistical mechanics, further discussed below. Other limit theorems can be obtained by considering sub-sequences of X_1, . . . , X_n instead of X_1, as above (see [22–24]), or by assuming that the X_i's form a Markov chain instead of being independent [25, 26].

This paper came partly as an attempt to generalize these results to general Markov processes and, in particular, to continuous-time processes. The essential step needed to arrive at these results is the derivation of the driven process; the conditional limit theorems that follow from this process will be discussed in a future publication.

Rare event simulations:
Many numerical methods used for determining rare event probabilities are based on the idea of importance sampling, whereby the underlying distribution P of a random variable or process is modified to a target distribution Q putting more weight on the rare events to be sampled [27]. A particularly useful and general distribution commonly used in this context is the exponentially tilted distribution P_k mentioned earlier, which is also known as the exponential family or Esscher transform of P [28]. Such a distribution can be generalized to sequences of random variables, as well as paths of stochastic processes (as a path measure), and corresponds, from the point of view of statistical mechanics, to the probability distribution defining the canonical ensemble, which describes thermodynamic systems coupled to a heat bath with inverse temperature β = −k.

This link with statistical mechanical ensembles is discussed in more detail below. For the conditioning problem treated here, we make contact with P_k by using this distribution as an intermediate step to construct the driven process Y_t, as explained in Secs. III and V. An interesting by-product of this construction is that we can interpret Y_t as a modified Markov process that asymptotically realizes, in a sense to be made precise below, the exponential tilting of X_t.

A further link with rare event sampling is established in that the semi-group or propagator of Y_t is deeply related to Feynman–Kac functionals, which underlie cloning [29–31] and genealogical [32] methods also used for sampling rare events. In fact, we will see in Sec. III that the driven process Y_t is essentially a normalized version of a non-conservative process, whose generator is the so-called tilted generator of large deviations and whose dominant eigenvalue (when it exists) is the so-called scaled cumulant generating function – the main quantity obtained by cloning methods [29–31].

Stochastic control and large deviations:
The generalization of Doob's transform that we will discuss in Sec. IV has been considered by Fleming and Sheu in their work on control representations of Feynman–Kac-type partial differential equations (PDEs) [33–36]. The problem here is to consider a linear operator of the form L + V(x), where L is the generator of a Markov process, and to provide a stochastic representation of the solution φ(x, t) of the backward PDE

\frac{\partial \varphi}{\partial t} + (L + V)\varphi = 0, \qquad t \le T,   (4)

with final condition φ(x, T) = Φ(x). The Feynman–Kac formula [37–39] provides, as is well known, a stochastic representation of φ(x, t) in terms of the expectation

\varphi(x, t) = E\big[\Phi(X_T)\, e^{\int_t^T V(X_s)\, ds} \,\big|\, X_t = x\big].   (5)

The idea of Fleming and Sheu is to consider, instead of φ, the logarithm or Hopf–Cole transform I = −ln φ, which solves the Hamilton–Jacobi-like PDE,

\frac{\partial I}{\partial t} + (HI) - V(x) = 0,   (6)

where (HI) = −e^{I}(L e^{-I}), and to find a controlled process X_t^u with generator L^u, so as to rewrite (6) as a dynamic programming equation:

\frac{\partial I}{\partial t} + \min_u \big\{ (L^u I)(x) + k_V(x, u) \big\} = 0,   (7)

where k_V(x, u) is some cost function that depends on V, the system's state, and the controller's state. In this form, they show that I represents the value function of the control problem, involving a Lagrangian dual to the Hamiltonian H; see [40] for a more detailed description.

These results have been applied by Fleming and his collaborators to give control representations of various distributions related to exit problems [33–36], dominant eigenvalues of linear operators [41–43], and optimal solutions of risk-sensitive problems [44–46], which aim at minimizing functionals having the exponential form of (5). What is interesting in all these problems is that the generator L^u of the optimally-controlled process is given by a Doob transform similar to the one we use to construct the conditioned process. In their work, Fleming et al.
do not interpret this transformation as a conditioning, but as an optimal change of measure between the controlled and reference processes. Such a change of measure has also been studied in physics more recently by Nemoto and Sasa [47–49]. We will discuss these links in more detail in a future publication.

Fluctuation paths and fluctuation dynamics:
It is well known that rare transitions in dynamical systems perturbed by a small noise are effected by special trajectories known as reaction paths, fluctuation paths, most probable paths or instantons; see [50] for a review. These paths are described mathematically by the Freidlin–Wentzell theory of large deviations [51], and are fundamental for characterizing many noise-activated (escape-type) processes arising in chemical reactions, biological processes, magnetic systems, and glassy systems [52–54].

The concept of fluctuation path is specific to the low-noise limit: for processes with arbitrary random perturbations, there is generally not a single fluctuation path giving rise to a rare event, but many different fluctuation paths leading to the same event, giving rise to what we call a fluctuation dynamics. The driven process that we construct in this paper is a specific example of such a fluctuation dynamics: it describes the effective dynamics of X_t as this process is seen to fluctuate away from its typical behavior to 'reach' the event A_T. Consequently, it can be used to simulate or sample this fluctuation in an efficient way, bringing yet another connection with rare event simulations. This will be made clearer as we come to define this process in Sec. V.

Statistical ensembles for nonequilibrium systems:
The problem of defining or extending statistical ensembles, such as the microcanonical and canonical ensembles, to nonequilibrium systems has a long history in physics. It was revived recently by Evans [55–57], who proposed deriving the transition rates of a system driven by external forces in a stationary nonequilibrium state by conditioning the transition rates of the same system when it is not driven, that is, when it is in an equilibrium state with transition rates satisfying detailed balance. Underlying this proposal is the interesting idea that nonequilibrium systems driven in steady states could be seen as equilibrium systems in which the driving is effected by a conditioning. This means, for example, that a driven nonequilibrium system having a given stationary particle current could be thought of, physically, as being equivalent to a non-driven equilibrium system in which this current appears as a fluctuation.

The validity of this idea needs to be tested using examples of driven physical systems for which nonequilibrium stationary solutions can be obtained explicitly and be compared with conditionings of their equilibrium solutions. Our goal here is not to provide such a test, but to formalize the problem in a clear, mathematical way as a Markov conditioning problem based on large deviations. This leads us to define in a natural way a nonequilibrium generalization of the microcanonical ensemble for trajectories or paths of Markov processes, as well as a nonequilibrium version of the canonical ensemble, which is a path version of the exponentially tilted measure P_k.

The latter ensemble has been used recently with transition path sampling [58–61] to simulate rare trajectories of nonequilibrium systems associated with glassy phases and dynamical phase transitions; see [62] for a recent review.
In this context, the exponentially tilted distribution P_k is referred to as the biased, tilted or s-ensemble, the last name stemming from the fact that the symbol s is used instead of k [62–66]. These simulations follow exactly the idea of importance sampling mentioned earlier: they re-weight the statistics of the trajectories or paths of a system in an exponential way so as to reveal, in a typical way, trajectories responsible for certain states or phases that are atypical in the original system. In Sec. V, we will give conditions ensuring that this exponential re-weighting is equivalent to a large deviation conditioning – in other words, conditions ensuring that the path canonical ensemble is equivalent to the path microcanonical ensemble.

The connection with the driven process is established from this equivalence by showing that the canonical ensemble can be realized by a Markov process in the long-time limit. Some results on this canonical–Markov connection were obtained by Jack and Sollich [64] for a class of jump processes and by Garrahan and Lesanovsky [67] for dissipative quantum systems (see also [68–71]). Here, we extend these results to general Markov processes, including diffusions, and relate them explicitly to the conditioning problem.

These connections and applications will not be discussed further in the paper, but should hopefully become clearer as we define the driven process and study its properties in the next sections. The main steps leading to this process are summarized in [72]; here, we provide the full derivation of this process and discuss, as mentioned, its link with Doob's results. We also discuss new results related to constraints satisfied by the driven process, as well as special cases of these results for Markov chains, jump processes, and pure diffusions.

The plan of the paper is as follows. In Sec.
II, we define the class of general Markov processes and conditioning events (or observables) that we consider, and introduce various mathematical concepts (Markov semi-groups, Markov generators, path measures) used throughout the paper. We also define in that section the path versions of the microcanonical and canonical ensembles, corresponding respectively to the conditioning and exponential tilting of X_t, and introduce all the elements of large deviation theory needed to define and study our class of rare event conditioning. We then proceed to construct the driven process Y_t and prove its equivalence with the conditioned process X_t | A_T in three steps. Firstly, we construct in Sec. III a non-conservative process from which various spectral elements, related to the large deviation conditioning, are obtained. Secondly, we study in Sec. IV the generalization of Doob's transform needed to construct Y_t, and show how it relates to the original transform considered by Doob. Thirdly, we use the generalized transform to define in Sec. V the driven process proper, and show that it is equivalent to the conditioned process by appealing to general results about ensemble equivalence.

Our main results are contained in Sec. V. Their novelty, compared to previous works, resides in the fact that we treat the equivalence of the driven and conditioned processes explicitly via path versions of the canonical and microcanonical ensembles, derive precise conditions for this equivalence to hold, and express all of our results in the general language of Markov generators, which can be used to describe jump processes, diffusions, or mixed processes, depending on the physical application considered. New properties of the driven process, including constraint rules satisfied by its transition rates or generator, are also discussed in that section.
Section VI finally presents some applications of our results for diffusions, to show how the driven process is obtained in practice, and for absorbing Markov chains, to make a connection with quasi-stationary distributions. The specialization of our results to Markov chains is summarized in the Appendices, which also collect various technical steps needed for proving our results.

II. NOTATIONS AND DEFINITIONS
We define in this section the class of Markov processes and observables of these processes that we use to define the rare event conditioning problem. Markov processes are widely used as models of stochastic systems, for example, in the context of financial time series [73], biological processes [74], and chemical reactions [52–54]. In physics, they are also used as a general framework for modeling systems driven in nonequilibrium steady states by noise and external forces [52–54], such as interacting particle systems coupled to different particle and energy reservoirs, which have been studied actively in the mathematics and physics literature recently [75–79]. For general introductions to Markov processes and their applications in physics, see [52–54, 80–82]; for references on the mathematics of these processes, see [3, 38, 39, 83, 84].
A. Homogeneous Markov processes
We consider a homogeneous continuous-time Markov process X_t, with t ∈ ℝ_+, taking values in some space E, which, for concreteness, is assumed to be ℝ^d or a counting space. (In probability theory, E is most often taken to be a so-called Polish, that is, metric, separable and complete, topological space.) The dynamics of X_t is described by a transition kernel P_t(x, dy) giving the conditional probability that X_{t+t'} ∈ dy given that X_{t'} = x, with t ≥ 0. This kernel satisfies the Chapman–Kolmogorov equation

\int_E P_{t'}(x, dy)\, P_t(y, dz) = P_{t'+t}(x, dz)   (8)

for all (x, z) ∈ E, and is homogeneous in the sense that it depends only on the time difference t between X_{t+t'} and X_{t'}. Here and in the following, dy stands for the Lebesgue measure or the counting measure, depending on E.

To ensure that X_t is well behaved, we assume that it admits càdlàg paths (from the French 'continue à droite, limite à gauche': right continuous with left limits) as a function of time for every initial condition X_0 = x ∈ E. We further assume that

\int_E P_t(x, dy) = 1,   (9)

so that probability is conserved at all times. This property is also expressed in the literature by saying that X_t is conservative, honest, stochastically complete or strictly Markovian, and only means physically that there is no killing or creation of probability. Although X_t is assumed to be conservative, we will introduce later a non-conservative process as an intermediate mathematical step to construct the driven process. In what follows, it will be clear when we are dealing with a conservative or a non-conservative process. Moreover, it should be clear that the word 'conservative' is not intended here to mean that energy is conserved.

Mathematically, the transition kernel can be thought of as a positive linear operator acting on the space of bounded measurable functions f on E according to

(P_t f)(x) \equiv \int_E P_t(x, dy)\, f(y) \equiv E_x[f(X_t)]   (10)

for all x ∈ E, where E_x[·] denotes the expectation with initial condition X_0 = x. In many cases, it is more convenient to give a local specification of the action of P_t via its generator L according to

\partial_t E_x[f(X_t)] = E_x[(Lf)(X_t)],   (11)
where (Lf) denotes the application of L to f. (As an operator, P_t is positive in the Perron–Frobenius sense, that is, (P_t f) ≥ 0 whenever f ≥ 0.) Formally, this is equivalent to the representation

P_t = e^{tL},   (12)

and to the forward and backward Kolmogorov equations, given by

\partial_t P_t = P_t L = L P_t, \qquad P_0 = I,   (13)

where I is the identity operator. For P_t(x, dy) to be conservative, the generator must obey the relation (L1) = 0, where 1 is the constant function equal to 1 on E.

In the following, we will appeal to a different characterization of X_t based on the path probability measure dP_{L,\mu_0,T}(ω) representing, roughly speaking, the probability of a trajectory or sample path {X_t(ω)}_{t=0}^T over the time interval [0, T], with X_0(ω) chosen according to the initial measure µ_0. Technically, the space of such paths is defined as the so-called Skorohod space D([0, T], E) of càdlàg functions on E, while dP_{L,\mu_0,T}(ω) is defined in terms of expectations having the form

E_{\mu_0}[C] = \int C(\omega)\, dP_{L,\mu_0,T}(\omega),   (14)

where C is any bounded measurable functional of the path {X_t(ω)}_{t=0}^T, and E_{\mu_0} now denotes the expectation with initial measure µ_0. As usual, this expectation can be simplified to completely characterize P_{L,\mu_0,T} by considering so-called cylinder functions,

C(\omega) = C\big(X_0(\omega), X_{t_1}(\omega), \ldots, X_{t_{n-1}}(\omega), X_T(\omega)\big),   (15)

involving X_t over a finite sequence of times 0 ≤ t_1 ≤ t_2 ≤ · · · ≤ t_{n−1} ≤ T instead of the whole interval [0, T]. At this level, the path probability measure becomes a joint probability distribution over these times, given in terms of L by

P_{L,\mu_0,T}(dx_0, \ldots, dx_n) = \mu_0(dx_0)\, e^{t_1 L}(x_0, dx_1)\, e^{(t_2 - t_1)L}(x_1, dx_2) \cdots e^{(T - t_{n-1})L}(x_{n-1}, dx_n),   (16)

where the exponentials refer to the operator of (12).

One important probability measure obtained from the path measure is the marginal µ_t of X_t, associated with the single-time cylinder expectation,

E_{\mu_0}[C(X_t)] = \int_E C(y)\, \mu_t(dy).   (17)

This measure is also obtained by 'propagating' the initial measure µ_0 according to (16):

\mu_t(dy) = \int_E \mu_0(dx_0)\, e^{tL}(x_0, dy).   (18)

It then follows from the Kolmogorov equation (13) that

\partial_t \mu_t(x) = (L^\dagger \mu_t)(x),   (19)

where L^† is the formal adjoint of L with respect to the Lebesgue or counting measure.
In physics, this equation is referred to as the Master equation in the context of jump processes or the Fokker–Planck equation in the context of diffusions.

The time-independent probability measure µ_inv satisfying

(L^\dagger \mu_{\mathrm{inv}}) = 0   (20)

is called the invariant measure when it exists. Furthermore, one says that the process X_t is an equilibrium process (with respect to µ_inv) if its transition kernel satisfies the detailed balance condition,

\mu_{\mathrm{inv}}(dx)\, P_t(x, dy) = \mu_{\mathrm{inv}}(dy)\, P_t(y, dx)   (21)

for all (x, y) ∈ E. In the case where µ_inv has the density ρ_inv(x) ≡ µ_inv(dx)/dx with respect to the Lebesgue or counting measure, this condition can be expressed as the following operator identity for the generator:

\rho_{\mathrm{inv}}\, L\, \rho_{\mathrm{inv}}^{-1} = L^\dagger,   (22)

which is equivalent to saying that L is self-adjoint with respect to µ_inv. If the process X_t does not satisfy this condition, then it is referred to in physics as a nonequilibrium Markov process. Here, we follow this terminology and consider both equilibrium and nonequilibrium processes.
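The relations above can be checked concretely on a finite state space, where L becomes a matrix with off-diagonal entries W(x, y) and rows summing to zero. The following minimal sketch (not from the paper; the three-state rates are arbitrary, chosen so that detailed balance holds with respect to ρ_inv ∝ (1, 2, 4)) verifies the semigroup property (8), conservativity, the invariance condition (20) and detailed balance:

```python
import numpy as np

def expm(M, terms=60):
    """Matrix exponential via a plain Taylor series (adequate for small matrices)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

# Arbitrary 3-state rates W(x,y), chosen to satisfy detailed balance
# with respect to rho ∝ (1, 2, 4): rho(x) W(x,y) = rho(y) W(y,x).
rho = np.array([1.0, 2.0, 4.0])
rho /= rho.sum()
W = np.array([[0.0, 2.0, 4.0],
              [1.0, 0.0, 2.0],
              [1.0, 1.0, 0.0]])

# Generator: L(x,y) = W(x,y) for x != y, L(x,x) = -lambda(x) (escape rate).
L = W - np.diag(W.sum(axis=1))

# Conservativity: (L1) = 0.
assert np.allclose(L @ np.ones(3), 0.0)

# Chapman-Kolmogorov / semigroup property: P_{t'} P_t = P_{t'+t}.
Pt, Ps = expm(0.7 * L), expm(0.5 * L)
assert np.allclose(Pt @ Ps, expm(1.2 * L))

# Each row of P_t is a probability distribution (the kernel is conservative).
assert np.allclose(Pt.sum(axis=1), 1.0) and (Pt >= 0.0).all()

# rho is invariant, L^T rho = 0 (fixed point of the master equation) ...
assert np.allclose(L.T @ rho, 0.0)
# ... and detailed balance holds: rho(x) W(x,y) = rho(y) W(y,x).
assert np.allclose(rho[:, None] * W, (rho[:, None] * W).T)
```

For larger or stiffer generators, the naive Taylor sum would be replaced by a dedicated routine such as `scipy.linalg.expm`; the sketch only illustrates the structure of the definitions.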
B. Pure jump processes and diffusions
Two important types of Markov processes will be used in this paper to illustrate our results, namely, pure jump processes and diffusions. In continuous time and continuous space, all Markov processes consist of a superposition of these two processes, combined possibly with deterministic motion [3, 85, 86]. The case of discrete-time Markov chains is discussed in Appendix E.

A homogeneous Markov process X_t is a pure jump process if the probability that X_t undergoes one jump during the time interval [t, t + dt] is proportional to dt. To describe these jumps, it is usual to introduce the bounded intensity or escape rate function λ(x), such that λ(x) dt + o(dt) is the probability that X_t undergoes a jump during [t, t + dt] starting from the state X_t = x. When a jump occurs, X_{t+dt} is then distributed with the kernel T(x, dy), so that the overall transition rate is

W(x, dy) \equiv \lambda(x)\, T(x, dy)   (23)

for (x, y) ∈ E. Over a time interval [0, T], the path of such a process can thus be represented by the sequence of visited states in E, together with the sequence of waiting times in those states, so that the space of paths is [E × (0, ∞)]^ℕ.

Under some regularity conditions (see [39, 83]), one can show that this process possesses a generator, given by

(Lf)(x) = \int_E W(x, dy)\, [f(y) - f(x)]   (24)

for all bounded, measurable functions f defined on E and all x ∈ E. In terms of transition rates, the condition of detailed balance with respect to some invariant measure µ_inv is expressed as

\mu_{\mathrm{inv}}(dx)\, W(x, dy) = \mu_{\mathrm{inv}}(dy)\, W(y, dx)   (25)

for all (x, y) ∈ E.

Pure diffusions driven by Gaussian white noise have, contrary to jump processes, continuous sample paths and are best described not in terms of transition rates, but in terms of stochastic differential equations (SDEs).
(In a countable space, one can show that all Markov processes with right-continuous paths are of the pure jump type, a property which is not true in a general space [85, 86].) For E = ℝ^d, these SDEs have the general form

dX_t = F(X_t)\, dt + \sum_\alpha \sigma_\alpha(X_t) \circ dW_\alpha(t),   (26)

where F and the σ_α are smooth vector fields on ℝ^d, called respectively the drift and the diffusion coefficients, and the W_α(t) are independent Wiener processes (in arbitrary number, so that the range of α is left unspecified). The symbol ∘ denotes the Stratonovich (midpoint) convention used for interpreting the SDE; the Itô convention can also be used with the appropriate changes.

In the Stratonovich convention, the explicit form of the generator is

L = F \cdot \nabla + \frac{1}{2} \sum_\alpha (\sigma_\alpha \cdot \nabla)^2 = \hat F \cdot \nabla + \frac{1}{2} \nabla D \nabla,   (27)

where

\hat F(x) = F(x) - \frac{1}{2} \sum_\alpha (\nabla \cdot \sigma_\alpha)(x)\, \sigma_\alpha(x)   (28)

is the so-called modified drift and

D_{ij}(x) = \sum_\alpha \sigma_\alpha^i(x)\, \sigma_\alpha^j(x)   (29)

is the covariance matrix involving the components of σ_α. The notation ∇D∇ in (27) is a shorthand for the operator

\nabla D \nabla = \sum_{i,j} \frac{\partial}{\partial x_i} D_{ij}(x) \frac{\partial}{\partial x_j},   (30)

which is also sometimes expressed as ∇·(D∇) or, in terms of a matrix trace, as tr(D∇∇). With these notations, the condition of detailed balance for an invariant measure µ_inv(dx), with density ρ_inv(x) with respect to the Lebesgue measure, is equivalent to

\hat F = \frac{1}{2}\, D\, \nabla \ln \rho_{\mathrm{inv}}.   (31)

Similar results can be obtained for the Itô interpretation. Obviously, the need to distinguish the two interpretations arises only if the diffusion fields σ_α depend on x ∈ E. If these fields are constant, then the Stratonovich and Itô interpretations yield the same results, with \hat F = F and \nabla D \nabla = \sum_{i,j} D_{ij}\, \partial_{x_i} \partial_{x_j}.

C. Conditioning observables
Having defined the class of stochastic processes of interest, we now define the class of events A_T used to condition these processes. The idea is to consider a random variable or observable A_T, taken to be a real function of the paths of X_t over the time interval [0, T], and to condition X_t on a general measurable event of the form A_T = {A_T ∈ B} with B ⊂ ℝ. This means, more precisely, that we condition X_t on the subset

\mathcal{A}_T = \{\omega \in D([0, T], E) : A_T(\omega) \in B\}   (32)

of sample paths satisfying the constraint that A_T ∈ B. In the following, we will consider the smallest event possible, {A_T = a}, representing the set of paths for which A_T is contained in the infinitesimal interval [a, a + da] or, more formally, the set of paths such that A_T(ω) = a. (The underlying density of A_T exists, for example, when the conditions of Hörmander's theorem are satisfied [87, 88].) General conditionings of the form {A_T ∈ B} can be treated by integration over a. We then write X_t | A_T = a when X_t is conditioned on the basic event {A_T = a}. Formally, we can also study this conditioning by considering path probability densities instead of path measures, as done in [72].

Mathematically, the observable A_T is assumed to be non-anticipating, in the sense that it is adapted to the natural (σ-algebra) filtration F_T = σ{X_t(ω) : 0 ≤ t ≤ T} of the process up to time T. Physically, we also demand that A_T depend only on X_t and its transitions or displacements. For a pure jump process, this means that we consider a general observable of the form

A_T = \frac{1}{T} \int_0^T f(X_t)\, dt + \frac{1}{T} \sum_{0 \le t \le T : \Delta X_t \ne 0} g(X_{t^-}, X_{t^+}),   (33)

where f : E → ℝ, g : E × E → ℝ, and X_{t^-} and X_{t^+} denote, respectively, the state of X_t before and after a jump at time t.
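To make the structure of (33) concrete, here is a minimal simulation sketch (not from the paper; a two-state process with arbitrary rates a, b, sampled with the standard Gillespie scheme of exponential waiting times). It accumulates A_T for the occupation time of state 0 (f an indicator, g = 0) and for the activity (f = 0, g = 1), and compares them loosely with their stationary values b/(a+b) and 2ab/(a+b):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state jump process with rates W(0,1) = a and W(1,0) = b (illustrative values).
a, b = 1.0, 2.0
W = {0: (1, a), 1: (0, b)}        # state -> (jump target, escape rate lambda(x))

def sample_path(T):
    """Gillespie sampling: exponential waiting time in each state, then jump."""
    t, x, jumps = 0.0, 0, []      # jumps = list of (time, old state, new state)
    while True:
        target, lam = W[x]
        tau = rng.exponential(1.0 / lam)
        if t + tau > T:
            return jumps
        t += tau
        jumps.append((t, x, target))
        x = target

def observable(jumps, T, f, g):
    """A_T = (1/T) [ integral of f(X_t) dt + sum of g(x-, x+) over the jumps ]."""
    A, t_prev, x = 0.0, 0.0, 0    # the path starts in state 0
    for t, x_old, x_new in jumps:
        A += f(x_old) * (t - t_prev)   # f-part: piecewise-constant time integral
        A += g(x_old, x_new)           # g-part: sum over transitions
        t_prev, x = t, x_new
    A += f(x) * (T - t_prev)           # last sojourn up to time T
    return A / T

T = 2000.0
jumps = sample_path(T)
occupation_0 = observable(jumps, T, f=lambda x: 1.0 if x == 0 else 0.0,
                          g=lambda x, y: 0.0)   # occupation time: f = indicator, g = 0
activity = observable(jumps, T, f=lambda x: 0.0,
                      g=lambda x, y: 1.0)       # activity: f = 0, g = 1

# Stationary values: rho = (b, a)/(a+b) = (2/3, 1/3), mean jump rate 2ab/(a+b) = 4/3.
assert abs(occupation_0 - 2.0 / 3.0) < 0.1
assert abs(activity - 4.0 / 3.0) < 0.15
```

The loose tolerances reflect the finite-T fluctuations of A_T; it is precisely the exponential decay of these fluctuations that the large deviation conditioning of the following sections quantifies.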
The discrete sum over the jumps of the process is well defined, since we suppose that X_t has a finite number of jumps in [0, T] with probability one.

The class of observables A_T defined by f and g includes many random variables of mathematical interest, such as the number of jumps over [0, T], obtained with f = 0 and g = 1, or the occupation time in some set ∆, obtained with f(x) = 1_∆(x) and g = 0, with 1_∆ the characteristic function of the set ∆. From a physical point of view, it also includes many interesting quantities, including the fluctuating entropy production [89], particle and energy currents [78], and the so-called activity [65, 66, 90, 91], which is essentially the number of jumps, in addition to work- and heat-related quantities defined for systems in contact with heat reservoirs and driven by external forces [92, 93].

For a pure diffusion process X_t ∈ ℝ^d, the appropriate generalization of the observable above is

A_T = \frac{1}{T} \int_0^T f(X_t)\, dt + \frac{1}{T} \int_0^T \sum_{i=1}^d g_i(X_t) \circ dX_t^i,   (34)

where f : E → ℝ, g : E → ℝ^d, ∘ denotes as before the Stratonovich product, and g_i and X_t^i are the components of g and X_t, respectively. This class of 'diffusive' observables defined by the function f and the vector field g also includes many random variables of mathematical and physical interest, including occupation times, empirical distributions, empirical currents or flows, the fluctuating entropy production [89], as well as work and heat quantities [94].
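In a simulation, the Stratonovich integral in (34) is discretized with the midpoint rule, which is what the ∘ product prescribes. A minimal sketch (not from the paper; an Ornstein–Uhlenbeck process dX_t = −X_t dt + dW_t with illustrative step sizes) computes both terms of (34) along an Euler path, and checks the midpoint discretization against the Stratonovich chain rule ∫ X ∘ dX = (X_T² − X_0²)/2:

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama path of an Ornstein-Uhlenbeck process dX = -X dt + dW
# (additive noise, so there is no Ito/Stratonovich ambiguity in the SDE itself).
dt, N = 1e-3, 200_000
T = N * dt
X = np.empty(N + 1)
X[0] = 1.0
dW = rng.normal(0.0, np.sqrt(dt), N)
for i in range(N):
    X[i + 1] = X[i] - X[i] * dt + dW[i]

# Observable (34) with f(x) = x^2, g = 0: the time average of X_t^2.
A_f = np.sum(X[:-1] ** 2 * dt) / T

# Observable (34) with f = 0, g(x) = x: a Stratonovich integral, discretized
# with the midpoint rule  g((X_i + X_{i+1})/2) * (X_{i+1} - X_i).
mid = 0.5 * (X[:-1] + X[1:])
A_g = np.sum(mid * np.diff(X)) / T

# The Stratonovich chain rule gives  ∫ X ∘ dX = (X_T^2 - X_0^2)/2, and the
# midpoint discretization telescopes to exactly the same expression.
assert np.isclose(A_g, (X[-1] ** 2 - X[0] ** 2) / (2 * T))

# The stationary variance of this OU process is 1/2, so A_f ≈ 1/2 for large T.
assert abs(A_f - 0.5) < 0.2
```

With the Itô convention, the left-point rule would be used instead, and the two discretizations differ by a quadratic-variation term that does not vanish as dt → 0.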
For example, the empirical density of $X_t$, which represents the fraction of time spent at $x$, is obtained formally by choosing $f(y) = \delta(y - x)$ and $g = 0$, while the empirical current, recently considered in the physics literature [95, 96], is defined, also formally, with $f = 0$ and $g(y) = \delta(y - x)$. The consideration of diffusions and current-type observables of the form (34) involving a stochastic integral is one of the main contributions of this paper, generalizing previous results obtained by Jack and Sollich [64] for jump processes, by Garrahan and Lesanovsky [67] for dissipative quantum systems, and by Borkar et al. [26, 97] for Markov chains.

D. Large deviation principle
As mentioned in the introduction, the conditioning event $\mathcal{A}_T$ must have the property of being atypical with respect to the measure of $X_t$; otherwise the conditioning has no effect on this process in the asymptotic limit $T \to \infty$. Here, we assume that $\{A_T = a\}$ is exponentially rare in $T$ with respect to the measure $P_{L,\mu_0,T}$ of $X_t$, which means that we define this rare event as a large deviation event. This exponential decay of probabilities applies to many systems and observables of physical and mathematical interest, and is defined in a precise way as follows. The random variable $A_T$ is said to satisfy a large deviation principle (LDP) with respect to $P_{L,\mu_0,T}$ if there exists a lower semi-continuous function $I$ such that

$$\liminf_{T\to\infty} -\frac{1}{T} \ln P_{L,\mu_0,T}\{A_T \in C\} \ge \inf_{a \in C} I(a) \qquad (35)$$

for any closed set $C$ and

$$\limsup_{T\to\infty} -\frac{1}{T} \ln P_{L,\mu_0,T}\{A_T \in O\} \le \inf_{a \in O} I(a) \qquad (36)$$

for any open set $O$ [21, 98, 99]. The function $I$ is called the rate function. The basic assumption of our work is that the function $I$ exists and is different from $0$ or $\infty$. If the process $X_t$ is ergodic, then an LDP for the class of observables $A_T$ defined above holds, at least formally, as these observables can be obtained by contraction from the so-called level 2.5 of large deviations, concerned with the empirical density and empirical current. This level has been studied formally in [90, 95, 96], and rigorously for jump processes with finite space in [100] and countable space in [101]. The observable $A_T$ can also satisfy an LDP if the process $X_t$ is not ergodic; in this case, however, the existence of the LDP must be proved on a process-by-process basis and may depend on the initial condition of the process considered.

Formally, the existence of the LDP is equivalent to assuming that

$$\lim_{T\to\infty} -\frac{1}{T} \ln P_{L,\mu_0,T}\{A_T \in [a, a+da]\} = I(a), \qquad (37)$$

so that the measure $P_{L,\mu_0,T}\{A_T \in [a, a+da]\}$ decays exponentially with $T$, as mentioned.
The fact that this decay is in general not exactly, but only approximately, exponential is often expressed by writing

$$P_{L,\mu_0,T}\{A_T \in [a, a+da]\} \asymp e^{-T I(a)}\, da, \qquad (38)$$

where the approximation $\asymp$ is defined according to the large deviation limit (37) [50, 99]. We will see in the next subsection that this exponential approximation, referred to in information theory as logarithmic equivalence [19], sets a natural scale for defining two processes as being equivalent in the stationary limit $T \to \infty$.

E. Nonequilibrium path ensembles
We now have all the notations needed to define our problem of large deviation conditioning. At the level of path measures, the conditioned process $X_t\,|\,A_T = a$ is defined by the path measure

$$dP^{\text{micro}}_{a,\mu_0,T}(\omega) \equiv dP_{L,\mu_0,T}\{\omega\,|\,A_T = a\}, \qquad (39)$$

which is a pathwise conditioning of the reference measure $P_{L,\mu_0,T}$ of $X_t$ on the value $A_T = a$ after the time $T$. By Bayes's Theorem, this is equal to

$$dP^{\text{micro}}_{a,\mu_0,T}(\omega) = \frac{dP_{L,\mu_0,T}(\omega)}{P_{L,\mu_0,T}\{A_T \in [a,a+da]\}}\, \mathbb{1}_{[a,a+da]}(A_T(\omega)), \qquad (40)$$

where $\mathbb{1}_\Delta(x)$ is, as before, the indicator (or characteristic) function of the set $\Delta$. We refer to this measure as the path microcanonical ensemble (superscript micro) [55–57] because it is effectively a path generalization of the microcanonical ensemble of equilibrium statistical mechanics, in which the microscopic configurations of a system are conditioned or constrained to have a certain energy value. This energy is here replaced by the general observable $A_T$.

Our goal for the rest of the paper is to show that the microcanonical measure can be expressed or realized in the limit $T \to \infty$ by a conservative Markov process, called the driven process. This process will be constructed, as mentioned in the introduction, indirectly via another path measure, known as the exponential tilting of $dP_{L,\mu_0,T}(\omega)$:

$$dP^{\text{cano}}_{k,\mu_0,T}(\omega) \equiv \frac{e^{kT A_T(\omega)}\, dP_{L,\mu_0,T}(\omega)}{E_{\mu_0}[e^{kT A_T}]}, \qquad (41)$$

where $k \in \mathbb{R}$. In mathematics, this measure is also referred to as a penalization or a Feynman–Kac transform of $P_{L,\mu_0,T}$ [102], in addition to the names 'exponential family' and 'Esscher transform' mentioned in the introduction. In physics, it is also referred to, as mentioned, as the biased, twisted, or $s$-ensemble, the last name arising again because the letter $s$ is often used in place of $k$ [62–66]. We use the name 'canonical ensemble' (superscript cano) because this measure is a path generalization of the well-known canonical ensemble of equilibrium statistical mechanics.
From this analogy, we can interpret $k$ as the analog of a (negative) inverse temperature and the normalization factor $E_{\mu_0}[e^{kT A_T}]$ as the analog of the partition function. The plan for deriving the driven process is to define a process $Y_t$ via a generalization of Doob's transform and to show that its path measure is equivalent in the asymptotic limit to the canonical path ensemble. Following this result, we will then use established results on ensemble equivalence to show that the canonical path ensemble is equivalent to the microcanonical path ensemble, so as to finally obtain the result announced in (1). The notion of measure or process equivalence underlying these results, denoted by $\cong$ in (1), is defined next.

F. Process equivalence
Let $P_T$ and $Q_T$ be two path measures associated with a Markov process over the time interval $[0,T]$. Assume that $P_T$ is absolutely continuous with respect to $Q_T$, so that the Radon–Nikodym derivative $dP_T/dQ_T$ exists. We say that $P_T$ and $Q_T$ are asymptotically equivalent if

$$\lim_{T\to\infty} \frac{1}{T} \ln \frac{dP_T}{dQ_T}(\omega) = 0 \qquad (42)$$

almost everywhere with respect to both $P_T$ and $Q_T$. In this case, we also say that the Markov process $X_t$ defined by $P_T$ and the different Markov process $Y_t$ defined by $Q_T$ are asymptotically equivalent, and denote this property by $X_t \cong Y_t$ as in (1).

This notion of process equivalence can be interpreted in two ways. Mathematically, it implies that $P_T$ and $Q_T$ are logarithmically equivalent for most paths, that is,

$$dP_T(\omega) \asymp dQ_T(\omega) \qquad (43)$$

for almost all $\omega$ with respect to $P_T$ or $Q_T$. This is a generalization of the so-called asymptotic equipartition property of information theory, which states that the probability of sequences generated by an ergodic discrete source is approximately (i.e., logarithmically) constant for almost all sequences [19]. Here, we have that, although $P_T$ and $Q_T$ may be different measures, they are approximately equal in the limit $T \to \infty$ for almost all paths with respect to these measures.

In a more concrete way, the asymptotic equivalence of $P_T$ and $Q_T$ also implies that an observable satisfying LDPs with respect to these measures concentrates on the same values for both measures in the limit $T \to \infty$. In other words, the two measures lead to the same typical or ergodic states of (dynamic) observables in the long-time limit. A more precise statement of this result based on the LDP will be given when we come to proving explicitly the equivalence of the driven and conditioned processes. For now, the only important point to keep in mind is that the typical properties of two processes $X_t$ and $Y_t$ such that $X_t \cong Y_t$ are essentially the same.
This is a useful notion of equivalence when considering nonequilibrium systems, and a direct generalization of the notion of equivalence used for equilibrium systems [103–105]. For the latter systems, typical values of (static) observables are simply called equilibrium states.

III. NON-CONSERVATIVE TILTED PROCESS
We discuss in this section the properties of a non-conservative process associated with the canonical path measure (41). This process is important because it gives access to a number of quantities related to the large deviations of $A_T$, in addition to giving some clues as to how the driven process will be constructed.

A. Definition
We consider as before a Markov process $X_t$ with path measure $P_{L,\mu_0,T}$ and an observable $A_T$ defined as in (33) or (34) according to the type (jump process or diffusion, respectively) of $X_t$. From the path measure of $X_t$, we define a new path measure by

$$dP_{L_k,\mu_0,T}(\omega) \equiv dP_{L,\mu_0,T}(\omega)\, e^{kT A_T(\omega)}, \qquad (44)$$

which corresponds to the numerator of the canonical path ensemble $dP^{\text{cano}}_{k,\mu_0,T}$ defined in (41). As suggested by the notation, the new measure $dP_{L_k,\mu_0,T}$ defines a Markov process with generator $L_k$, which we call the non-conservative tilted process. This process is Markovian in the sense that

$$E_{\mu_0}[e^{kT A_T} C] = \int_{E^{n+1}} C(x_0,\ldots,x_n)\, \mu_0(dx_0)\, e^{t_1 L_k}(x_0, dx_1) \cdots e^{(T-t_{n-1})L_k}(x_{n-1}, dx_n) \qquad (45)$$

for any cylinder functional $C$ (15), and is non-conservative because $(L_k 1) \neq 0$ in general.

The class of observables defined by (33) and (34) can be characterized in the context of this result as the largest class of random variables for which the Markov property above holds. The proof of this property cannot be given for arbitrary Markov processes, but is relatively straightforward when considering jump processes or diffusions. In each case, the proof of (45) and the form of the so-called tilted generator $L_k$ follow by applying Girsanov's Theorem and the Feynman–Kac formula, as shown in Appendix A 1 for jump processes and Appendix A 2 for diffusions. The result in the first case is

$$(L_k h)(x) = \int_E W(x,dy)\left[e^{kg(x,y)}\, h(y) - h(x)\right] + k f(x)\, h(x) \qquad (46)$$

for all functions $h$ on $E$ and all $x \in E$, where $f$ and $g$ are defined as in (33). This can be written more compactly as

$$L_k = W e^{kg} - (W1) + kf, \qquad (47)$$

where the first term is understood as the Hadamard (component-wise) product $W(x,dy)\, e^{kg(x,y)}$ and $kf$ is the diagonal operator $k f(x)\, \delta(x-y)$. In the case of diffusions, we obtain instead

$$L_k = \hat{F} \cdot (\nabla + kg) + \frac{1}{2} (\nabla + kg) \cdot D\, (\nabla + kg) + kf, \qquad (48)$$

where $f$ and $g$ are the functions appearing in (34), while $\hat{F}$ and $D$ are defined as in (28) and (29), respectively. The double product involving $D$ is defined as in (30).

B. Spectral elements
The operator $L_k$ defined in (46) or (48) is a Perron–Frobenius operator or, more precisely, a Metzler operator with negative 'diagonal' part [106]. The extension of the Perron–Frobenius Theorem to infinite-dimensional, compact operators is ruled by the Krein–Rutman Theorem [107]. For differential elliptic operators having the form (48), this theorem can be applied on compact and smooth domains with Dirichlet boundary conditions.

We denote by $\Lambda_k$ the real dominant (or principal) eigenvalue of $L_k$ and by $r_k$ its associated 'right' eigenfunction, defined by

$$L_k r_k = \Lambda_k r_k. \qquad (49)$$

We also denote by $l_k$ its 'left' eigenfunction, defined by

$$L_k^\dagger l_k = \Lambda_k l_k, \qquad (50)$$

where $L_k^\dagger$ is the dual of $L_k$ with respect to the Lebesgue or counting measure. These eigenfunctions are defined, as usual, up to multiplicative constants, set here by imposing the following normalization conditions:

$$\int_E l_k(x)\, dx = 1 \quad \text{and} \quad \int_E l_k(x)\, r_k(x)\, dx = 1. \qquad (51)$$

For the remainder, we also assume that the initial measure $\mu_0$ of $X_t$ is such that

$$\int_E \mu_0(dx)\, r_k(x) < \infty, \qquad (52)$$

and that there is a gap $\Delta_k$ between the two largest eigenvalues resulting from the Perron–Frobenius Theorem. Under these assumptions, the semi-group generated by $L_k$ admits the asymptotic expansion

$$e^{t L_k}(x,y) = e^{t \Lambda_k}\left[ r_k(x)\, l_k(y) + O(e^{-t \Delta_k}) \right] \qquad (53)$$

as $t \to \infty$. Applying this result to the Feynman–Kac formula

$$E_{\mu_0}[e^{kT A_T}\, \delta(X_T - y)] = \int_E \mu_0(dx)\, e^{T L_k}(x, y), \qquad (54)$$

obtained by integrating (45) with $C = \delta(X_T - y)$, yields

$$E_{\mu_0}[e^{kT A_T}\, \delta(X_T - y)] = e^{T \Lambda_k} \int_E \mu_0(dx) \left[ r_k(x)\, l_k(y) + O(e^{-T \Delta_k}) \right]. \qquad (55)$$

From this relation, we then deduce the following representations of the spectral elements $\Lambda_k$, $r_k$, and $l_k$; a further representation for the product $r_k l_k$ will be discussed in the next subsection.
• Dominant eigenvalue $\Lambda_k$:

$$\Lambda_k = \lim_{T\to\infty} \frac{1}{T} \ln E_{\mu_0}[e^{kT A_T}] \qquad (56)$$

for all $\mu_0$ such that (52) is satisfied.

• Right eigenfunction $r_k$:

$$r_k(x) = \lim_{T\to\infty} e^{-T \Lambda_k}\, E_x[e^{kT A_T}] \qquad (57)$$

for any initial condition $x$.

• Left eigenfunction $l_k$:

$$l_k(y) = \lim_{T\to\infty} \frac{E_{\mu_0}[e^{kT A_T}\, \delta(X_T - y)]}{E_{\mu_0}[e^{kT A_T}]} \qquad (58)$$

for all $\mu_0$ such that (52) is satisfied.

With these results, we can already build from $dP_{L_k,\mu_0,T}$ a path measure which is asymptotically equivalent to the canonical path measure. Indeed, it is clear from (56) that

$$\lim_{T\to\infty} \frac{1}{T} \ln \left( e^{-T \Lambda_k}\, \frac{dP_{L_k,\mu_0,T}}{dP^{\text{cano}}_{k,\mu_0,T}} \right) = 0 \qquad (59)$$

almost everywhere, so that

$$dP^{\text{cano}}_{k,\mu_0,T} \asymp e^{-T \Lambda_k}\, dP_{L_k,\mu_0,T}. \qquad (60)$$

We will see in the next section how to integrate the constant term $e^{-T \Lambda_k}$ into a Markovian measure, so as to obtain a Markov process which is conservative and equivalent to the canonical ensemble. For now, we close this subsection with two remarks:

• The right-hand side of (56) is known in large deviation theory as the scaled cumulant generating function (SCGF) of $A_T$. The rate function $I$ can be obtained from this function using the Gärtner–Ellis Theorem [21, 98, 99], which states (in its simplest form) that, if $\Lambda_k$ is differentiable, then $A_T$ satisfies the LDP with rate function $I$ given by the Legendre–Fenchel transform of $\Lambda_k$:

$$I(a) = \sup_k \{ k a - \Lambda_k \}. \qquad (61)$$

For pure jump processes on a finite space, the differentiability of $\Lambda_k$ follows from the implicit function theorem and the fact that $\Lambda_k$ is a simple zero of the characteristic polynomial. For cases where $\Lambda_k$ is nondifferentiable, see Sec. 4.4 of [50].

• The cloning simulation methods [29–31] mentioned in the introduction can be interpreted as algorithms that simulate the non-conservative process generated by $L_k$ and obtain the SCGF $\Lambda_k$ by estimating the rate of growth or decay of its (non-normalized) measure, identified as $E_{\mu_0}[e^{kT A_T}]$.
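For a jump process on a finite state space, the representations above can be checked numerically: the tilted generator (47) is a finite matrix, $\Lambda_k$ is its dominant eigenvalue, and the Legendre–Fenchel transform (61) can be evaluated on a grid of $k$ values. A minimal sketch (our own example, chosen for illustration): a symmetric two-state process with the jump-count observable ($f = 0$, $g = 1$), for which the jump count is Poissonian and $\Lambda_k = w(e^k - 1)$ is known exactly.

```python
import numpy as np

# Sketch: SCGF and rate function of the jump-count observable for a symmetric
# two-state jump process, via the tilted matrix L_k = W e^{kg} - (W1) + kf (47).

w = 2.0                                   # jump rate in both directions
W = np.array([[0.0, w], [w, 0.0]])        # off-diagonal rate matrix W(x,y)

def tilted_generator(k, f=np.zeros(2), g=np.ones((2, 2))):
    Lk = W * np.exp(k * g)                # Hadamard product W(x,y) e^{k g(x,y)}
    Lk -= np.diag(W.sum(axis=1))          # escape rates -(W1)(x) on the diagonal
    return Lk + np.diag(k * f)            # diagonal part k f(x)

def scgf(k):                              # Lambda_k = dominant eigenvalue, eq. (56)
    return np.max(np.linalg.eigvals(tilted_generator(k)).real)

ks = np.linspace(-2.0, 2.0, 401)
lams = np.array([scgf(k) for k in ks])

def rate(a):                              # Legendre-Fenchel transform, eq. (61)
    return np.max(a * ks - lams)

assert np.isclose(scgf(1.0), w * (np.e - 1))   # exact Poisson SCGF w(e^k - 1)
assert abs(rate(w)) < 1e-3                     # I vanishes at the typical value a* = w
```

The zero of $I$ at $a^* = \Lambda'(0) = w$ illustrates the LDP: the typical jump rate is $w$, and fluctuations away from it are exponentially suppressed.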
An alternative method for simulating large deviations is transition path sampling, which attempts to sample paths directly according to $P^{\text{cano}}_{k,\mu_0,T}$ [58–61].

C. Marginal canonical density
Equation (58) can be reformulated in terms of the canonical path measure as

$$l_k(y) = \lim_{T\to\infty} \int dP^{\text{cano}}_{k,\mu_0,T}(\omega)\, \delta(X_T(\omega) - y). \qquad (62)$$

This gives a physical interpretation of the left eigenfunction as the limit, when $T$ is large, of the marginal probability density of the canonical ensemble at the final time $t = T$. If we calculate this marginal for $t \in [0,T[$ and let $t \to \infty$ after taking $T \to \infty$, we obtain instead

$$l_k(y)\, r_k(y) = \lim_{t\to\infty} \lim_{T\to\infty} \int dP^{\text{cano}}_{k,\mu_0,T}(\omega)\, \delta(X_t(\omega) - y). \qquad (63)$$

The product $r_k l_k$ is thus the large-time marginal probability density of the canonical process taken over the infinite time interval. We will see in Sec. V that the same product corresponds to the invariant density of the driven process.

To prove (63), take $C = \delta(X_t - y)$ with $t < T$ in (45) and integrate to obtain

$$E_{\mu_0}[e^{kT A_T}\, \delta(X_t - y)] = \int_E \mu_0(dx)\, e^{t L_k}(x, y)\, (e^{(T-t) L_k} 1)(y). \qquad (64)$$

Now, take the limit $T \to \infty$ to obtain

$$\lim_{T\to\infty} e^{-T \Lambda_k}\, E_{\mu_0}[e^{kT A_T}\, \delta(X_t - y)] = \int_E \mu_0(dx)\, e^{t L_k}(x, y)\, e^{-t \Lambda_k}\, r_k(y), \qquad (65)$$

which can be rewritten with (55) as

$$\lim_{T\to\infty} \frac{E_{\mu_0}[e^{kT A_T}\, \delta(X_t - y)]}{E_{\mu_0}[e^{kT A_T}]} = \frac{\int_E \mu_0(dx)\, e^{t L_k}(x, y)\, e^{-t \Lambda_k}\, r_k(y)}{\int_E \mu_0(dx)\, r_k(x)} \qquad (66)$$

assuming (52). Finally, take the limit $t \to \infty$ to obtain

$$\lim_{t\to\infty} \lim_{T\to\infty} \frac{E_{\mu_0}[e^{kT A_T}\, \delta(X_t - y)]}{E_{\mu_0}[e^{kT A_T}]} = l_k(y)\, r_k(y), \qquad (67)$$

which can be rewritten with the canonical measure as (63). A similar proof applies to (62); see Appendix B of [108] for a related discussion of these results.

The result of (63) can actually be generalized in the following way: instead of taking $t \in [0,T[$ and letting $t \to \infty$ after $T \to \infty$, we can scale $t$ with $T$ by choosing $t = c(T)$ such that

$$\lim_{T\to\infty} c(T) = \infty \quad \text{and} \quad \lim_{T\to\infty} T - c(T) = \infty. \qquad (68)$$

In this case, it is easy to see from (65)–(67) that we obtain the same result, namely,

$$l_k(y)\, r_k(y) = \lim_{T\to\infty} \int dP^{\text{cano}}_{k,\mu_0,T}(\omega)\, \delta(X_{c(T)}(\omega) - y). \qquad (69)$$

In particular, we can take $c(T) = (1-\epsilon)T$ with $0 < \epsilon < 1$, so that $c(T)$ is as close as possible to $T$ without reaching $T$. This will be used later when considering the equivalence of the driven process with the canonical path measure.

Note that there is no contradiction between (62) and (63), since for $t \le T$,

$$\int dP^{\text{cano}}_{k,\mu_0,T}(\omega)\, \delta(X_t(\omega) - y) = \frac{\int_E \mu_0(dx)\, e^{t L_k}(x, y)\, (e^{(T-t) L_k} 1)(y)}{\int_E \mu_0(dx)\, (e^{T L_k} 1)(x)} \neq \frac{\int_E \mu_0(dx)\, e^{t L_k}(x, y)}{\int_E \mu_0(dx)\, (e^{t L_k} 1)(x)} = \int dP^{\text{cano}}_{k,\mu_0,t}(\omega)\, \delta(X_t(\omega) - y). \qquad (70)$$

The fact that the left-most and right-most terms are not equal arises because the canonical measure is defined globally (via $A_T$) for the whole time interval $[0,T]$, so that the marginal of the canonical measure at time $t$ depends on times after $t$, as well as on the end-time $T$. We will study the source of this property in more detail in Sec. V when proving that the canonical path measure is a non-homogeneous Markov process that explicitly depends on $t$ and $T$.

IV. GENERALIZED DOOB TRANSFORM
We define in this section the generalized Doob transform that will be used in the next section to define the driven process. We also review the conditioning problem considered by Doob to understand whether the case of large deviation conditioning can be analyzed within Doob's approach. Two examples will be considered: first, the original problem of Doob involving the conditioning on leaving a domain via its boundary and, second, a 'punctual' conditioning at a deterministic time. In each case, we will see that the generator of the process realizing the conditioning is a particular case of Doob's transform, but that the random variable underlying the conditioning is, in general, different from the random variables $A_T$ defined before.

A. Definition
Let $h$ be a strictly positive function on $E$ and $f$ an arbitrary function on the same space. We call the generalized Doob transform of the process $X_t$ with generator $L$ the new process with generator

$$L^{h,f} \equiv h^{-1} L h - f. \qquad (71)$$

In this expression, $h^{-1} L h$ must be understood as the composition of three operators: the multiplication operator by $h^{-1}$, the operator $L$ itself, and the multiplication operator by $h$. Moreover, the term $f$ represents the multiplication operator by $f$, so that the application of $L^{h,f}$ to some function $r$ yields

$$(L^{h,f} r)(x) = h^{-1}(x)\, (L h r)(x) - f(x)\, r(x). \qquad (72)$$

We prove in Appendix B that the generalized Doob transform of $L$ is indeed the generator of a Markov process, whose path measure $P_{L^{h,f},\mu_0,T}$ is absolutely continuous with respect to the path measure $P_{L,\mu_0,T}$ of $X_t$ and whose Radon–Nikodym derivative is explicitly given by

$$\frac{dP_{L^{h,f},\mu_0,T}}{dP_{L,\mu_0,T}}(\omega) = h^{-1}(X_0)\, \exp\left( -\int_0^T f(X_t)\, dt \right) h(X_T). \qquad (73)$$

In the following, we will also use time-dependent functions $h_t$ and $f_t$ to transform $L$ [109]. In this case, the generalized Doob transform is a non-homogeneous process with path measure given by

$$\frac{dP_{L^{h,f},\mu_0,T}}{dP_{L,\mu_0,T}}(\omega) = h_0^{-1}(X_0)\, \exp\left( -\int_0^T (f_t + h_t^{-1}\, \partial_t h_t)(X_t)\, dt \right) h_T(X_T). \qquad (74)$$

It is important to note that the transformed process with generator $L^{h,f}$ is Markovian, but not necessarily conservative, which means that its dominant eigenvalue is not necessarily zero. If we require conservation (zero dominant eigenvalue), it is sufficient to choose $f = h^{-1}(Lh)$, in which case (71) becomes

$$L^h = h^{-1} L h - h^{-1}(Lh), \qquad (75)$$

while (73) reduces to

$$\frac{dP_{L^h,\mu_0,T}}{dP_{L,\mu_0,T}}(\omega) = h^{-1}(X_0)\, \exp\left( -\int_0^T h^{-1}(X_t)\, (Lh)(X_t)\, dt \right) h(X_T). \qquad (76)$$

Moreover, in the time-dependent case, (74) becomes

$$\frac{dP_{L^h,\mu_0,T}}{dP_{L,\mu_0,T}}(\omega) = h_0^{-1}(X_0)\, \exp\left( -\int_0^T dt\, \left( h_t^{-1}(L h_t) + h_t^{-1}\, \partial_t h_t \right)(X_t) \right) h_T(X_T). \qquad (77)$$

Specializing to specific processes, it is easy to see that the generalized Doob transform of a pure jump process with transition rates $W(x,dy)$ is also a pure jump process with modified transition rates

$$W^h(x,dy) = h^{-1}(x)\, W(x,dy)\, h(y) \qquad (78)$$

for all $(x,y) \in E^2$. Similarly, it can be shown that the generator of the generalized Doob transform of a diffusion with generator $L$ is

$$L^h = L + (\nabla \ln h) \cdot D \nabla, \qquad (79)$$

where the product involving $D$ is interpreted, as before, according to (30). The generalized Doob-transformed process is thus a diffusion with the same noise as the original diffusion, but with a modified drift

$$F^h = F + D \nabla \ln h. \qquad (80)$$

The proof of this result is given in Appendix C and follows by re-expressing the generalized Doob transform (75) as

$$L^h = L + h^{-1}\, \Gamma(h, \cdot), \qquad (81)$$

where $\Gamma$ is the so-called 'squared field' operator, a symmetric bilinear operator defined for all $f$ and $g$ on $E$ as

$$\Gamma(f, g) \equiv L(fg) - f(Lg) - (Lf)g. \qquad (82)$$

Mathematical properties and applications of the generalized Doob transform have been studied by Kunita [110], Itô and Watanabe [111], and Fleming and collaborators (see [40] and references cited therein), and have been revisited recently by Palmowski and Rolski [112] and Diaconis and Miclo [113]. From the point of view of probability theory, the Radon–Nikodym derivative associated with this transform is an example of an exponential martingale. The generalized Doob transform also has interesting applications in physics: it appears in the stochastic mechanics of Nelson [114] and underlies, as shown in [109], the classical fluctuation–dissipation relations of near-equilibrium systems [54, 115–117], as well as recent generalizations of these relations obtained for nonequilibrium systems [91, 118–124]. The work of [109] shows moreover that the exponential martingale (76) verifies a non-perturbative general version of these relations, which also includes the fluctuation relations of Jarzynski [125] and Gallavotti–Cohen [126–128].
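For a finite-state jump process, the conservative transform (75) and the modified rates (78) amount to simple matrix operations, which makes the conservation property easy to verify numerically. A minimal sketch (our own check, with a randomly chosen generator and function $h$, purely for illustration):

```python
import numpy as np

# Sketch: the conservative Doob transform L^h = h^{-1} L h - h^{-1}(Lh) (75)
# applied to a finite-state generator. Off-diagonal entries become
# W^h(x,y) = h(x)^{-1} W(x,y) h(y), as in (78), and L^h annihilates constants.

rng = np.random.default_rng(1)
n = 4
W = rng.uniform(0.1, 1.0, (n, n)); np.fill_diagonal(W, 0.0)
L = W - np.diag(W.sum(axis=1))            # generator of a pure jump process
h = rng.uniform(0.5, 2.0, n)              # any strictly positive function on E

Lh = np.diag(1 / h) @ L @ np.diag(h) - np.diag((L @ h) / h)   # eq. (75)

assert np.allclose(Lh @ np.ones(n), 0.0)  # conservative: (L^h 1) = 0
off = ~np.eye(n, dtype=bool)
assert np.allclose(Lh[off], (np.outer(1 / h, h) * W)[off])    # eq. (78)
```

The first assertion confirms that subtracting the multiplication operator $h^{-1}(Lh)$ restores conservativeness; the second confirms that only the diagonal is shifted, the off-diagonal rates being exactly those of (78).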
From the French 'opérateur carré du champs'.

B. Historical conditioning of Doob
The transform considered by Doob is a particular case of the generalized transform (71), obtained for the constant function $f(x) \equiv \lambda$ and for a so-called $\lambda$-excessive function $h$ verifying $Lh \le \lambda h$. For these functions, the Doob-transformed process is a non-conservative process with generator

$$L^{h,\lambda} = h^{-1} L h - \lambda \qquad (83)$$

and path measure

$$\frac{dP_{L^{h,\lambda},\mu_0,T}}{dP_{L,\mu_0,T}}(\omega) = h^{-1}(X_0)\, e^{-T\lambda}\, h(X_T). \qquad (84)$$

When $Lh = \lambda h$, $h$ is said to be $\lambda$-invariant. If we also have $\lambda = 0$, then $h$ is called a harmonic function [1–3], and the process described by $L^h = h^{-1} L h$ is conservative with path measure

$$\frac{dP_{L^h,\mu_0,T}}{dP_{L,\mu_0,T}}(\omega) = h^{-1}(X_0)\, h(X_T). \qquad (85)$$

In the time-dependent case, the harmonic condition $Lh = 0$ is replaced by

$$(\partial_t + L_t)\, h_t = 0, \qquad (86)$$

which yields, following (74) and (77),

$$\frac{dP_{L^h,\mu_0,T}}{dP_{L,\mu_0,T}}(\omega) = h_0^{-1}(X_0)\, h_T(X_T). \qquad (87)$$

In this case, $h_t$ is said to be space–time harmonic [1–3]. Applications of these transforms have appeared since Doob's work in the context of various conditionings of Brownian motion, including the Gaussian and Schrödinger bridges mentioned in the introduction, in addition to non-colliding random walks related to Dyson's Brownian motion and random matrices [129–131].

The original problem considered by Doob, leading to $L^h$, is to condition a Markov process $X_t$ started at $X_0 = x_0$ to exit a certain domain $D$ via a subset of its boundary $\partial D$. To be more precise, assume that the boundary of $D$ can be decomposed as $\partial D = B \cup C$ with $B \cap C = \emptyset$, and condition the process to exit $D$ via $B$.
In this case, the path measure of the conditioned process can be written as

$$dP_{x_0,T}\{\omega\,|\,\mathcal{B}\} = dP_{L,x_0,T}(\omega)\, \frac{P_{L,x_0}\{\mathcal{B}\,|\,\mathcal{F}_T\}}{P_{L,x_0}\{\mathcal{B}\}}, \qquad (88)$$

where $P_{L,x_0,T}$ is the path measure of the process started at $x_0$, $\mathcal{B} = \{\tau_B \le \tau_C\}$ is the conditioning event expressed in terms of the exit times,

$$\tau_B \equiv \inf\{t : X_t \in B\}, \qquad \tau_C \equiv \inf\{t : X_t \in C\}, \qquad (89)$$

and $\mathcal{F}_T = \sigma\{X_t(\omega) : 0 \le t \le T\}$ is the natural filtration of the process up to the time $T$. The conditional path measure (88) is similar to the microcanonical path measure (39) and can be expressed in the form

$$dP_{x_0,T}\{\omega\,|\,\mathcal{B}\} = dP_{L,x_0,T}(\omega)\, M_{[0,T]}, \qquad (90)$$

where

$$M_{[0,T]} = \frac{P_{L,x_0}\{\mathcal{B}\,|\,\mathcal{F}_T\}}{P_{L,x_0}\{\mathcal{B}\}}, \qquad (91)$$

to emphasize that it is a 'reweighing' or 'penalization' [102] of the original measure of the process with the weighting function $M_{[0,T]}$. To show that this reweighing gives rise to a Doob transform for the exit problem, let

$$h(x) = P_x\{\tau_B \le \tau_C\}. \qquad (92)$$

This function is harmonic, since $(Lh) = 0$ by Dynkin's formula [3]. Moreover, using the strong Markov property, we can use this function to express the weighting function (91) as

$$M_{[0,T]} = \frac{h(X_{\min(T, \tau_B, \tau_C)})}{h(X_0)}. \qquad (93)$$

For $T \le \min(\tau_B, \tau_C)$, we therefore obtain

$$dP_{x_0,T}\{\omega\,|\,\mathcal{B}\} = h^{-1}(x_0)\, dP_{L,x_0,T}(\omega)\, h(X_T), \qquad (94)$$

which has the form of (85). The next example provides a simple application of this result.

Example 1.
Consider the Brownian or Wiener motion $W_t$ conditioned on exiting the interval $[0,\ell]$ via $B = \{\ell\}$. The solution of $(Lh) = 0$ with the boundary conditions $h(0) = 0$ and $h(\ell) = 1$ gives the harmonic function $h(x) = x/\ell$, which implies from (80) that the drift of the conditioned process is $F^h(x) = 1/x$. The conditioned process is thus the Bessel process:

$$dX_t = \frac{1}{X_t}\, dt + dW_t. \qquad (95)$$

Note that the drift of the conditioned process is independent of $\ell$, which means, by taking $\ell \to \infty$, that the Bessel process is also the Wiener process conditioned never to return to the origin. This is expected physically, as $F^h$ is a repulsive force at the origin which prevents the process from approaching this point.

As a variation of Doob's problem, consider the conditioning event

$$\mathcal{B}_T = \{X_T \in B_T\}, \qquad (96)$$

where $B_T$ is a subset of $E$ that can depend on $T$. This event is a particular case of $\mathcal{A}_T$ obtained with $f = 0$ and $g = 1$, so that $A_T = X_T/T$ assuming $X_0 = 0$. Its associated weighting function takes the form

$$M_{[0,T']} = \frac{P_{L,x_0}\{\mathcal{B}_T\,|\,\mathcal{F}_{T'}\}}{P_{L,x_0}\{\mathcal{B}_T\}} = \frac{P_{L,x_0}\{\mathcal{B}_T\,|\,X_{T'}\}}{P_{L,x_0}\{\mathcal{B}_T\}} \qquad (97)$$

for $T' \le T$. Defining the function

$$h_{T'}(X_{T'}) \equiv P_{L,x_0}\{\mathcal{B}_T\,|\,X_{T'}\} = \int_{B_T} P_{T-T'}(X_{T'}, dy), \qquad (98)$$

we then have

$$M_{[0,T']} = \frac{h_{T'}(X_{T'})}{h_0(X_0)}. \qquad (99)$$

Moreover, from the backward Kolmogorov equation, we find that $h$ is space–time harmonic, as in (86). Therefore, the path measure of $X_t$ conditioned on $\mathcal{B}_T$ also takes the form of a Doob transform,

$$dP_{x_0,T'}\{\omega\,|\,\mathcal{B}_T\} = dP_{L,x_0,T'}(\omega)\, h_0^{-1}(x_0)\, h_{T'}(X_{T'}), \qquad (100)$$

but now involves a time-dependent space–time harmonic function. The next two examples apply this type of punctual conditioning to define bridge versions of the Wiener motion and the Ornstein–Uhlenbeck process.
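Returning briefly to Example 1, the harmonic function $h(x) = x/\ell$ in (92) can be checked by direct Monte Carlo simulation. A minimal sketch (our own check, with parameters chosen for illustration): we use the discrete analogue of the exit problem, a symmetric random walk on a grid of $N$ sites, which exits at the right end with the gambler's-ruin probability $k_0/N$, the lattice version of $h(x_0) = x_0/\ell$.

```python
import numpy as np

# Sketch: the exit probability h(x) = x/l of Example 1, estimated by Monte
# Carlo for the discrete analogue of Brownian motion on [0, l] (a symmetric
# random walk on sites 0, ..., N absorbed at both ends).

rng = np.random.default_rng(3)
N, k0, M = 100, 30, 1000                  # grid size, start site, number of walks
K = np.full(M, k0)
active = np.ones(M, dtype=bool)
while active.any():                       # evolve each walk until it exits {1,...,N-1}
    K[active] += rng.choice([-1, 1], size=int(active.sum()))
    active &= (K > 0) & (K < N)

freq = np.mean(K >= N)                    # fraction exiting via the right end
assert abs(freq - k0 / N) < 0.06          # close to h = k0/N = 0.3
```

The agreement with $k_0/N$ reflects the fact that the walk's position is a martingale, exactly as $h(W_t)$ is a martingale for the harmonic $h$ of the continuous problem.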
Example 2 (Brownian bridge).
Let $W_t\,|\,W_T = 0$ be the Wiener motion $W_t$ conditioned on reaching $0$ at time $T$. The Kolmogorov equation, which in this case is simply the classical diffusion equation with $L = \Delta/2$, yields the Gaussian transition density of $W_t$ as the space–time harmonic function:

$$h_t(x) = e^{(T-t)L}(x, 0) = \frac{1}{\sqrt{2\pi(T-t)}}\, \exp\left( -\frac{x^2}{2(T-t)} \right), \qquad 0 \le t < T. \qquad (101)$$

From (80) and (98), we then obtain, as expected, that $W_t\,|\,W_T = 0$ is the Brownian bridge evolving according to

$$dX_t = -\frac{X_t}{T-t}\, dt + dW_t \qquad (102)$$

for $0 \le t < T$. The limit $T \to \infty$ recovers the Wiener process itself as the conditioned process.

Example 3 (Ornstein–Uhlenbeck bridge).
Consider now the Ornstein–Uhlenbeck process

$$dX_t = -\gamma X_t\, dt + \sigma\, dW_t \qquad (103)$$

with $\gamma > 0$ and $\sigma > 0$, conditioned on the event $X_T = Ta$. Using the propagator of this process,

$$P_t(x,y) = \sqrt{\frac{\gamma}{\pi\sigma^2(1 - e^{-2\gamma t})}}\, \exp\left( -\frac{\gamma}{\sigma^2}\, \frac{(y - e^{-\gamma t} x)^2}{1 - e^{-2\gamma t}} \right), \qquad (104)$$

we obtain from (98)

$$h_t(x) = e^{(T-t)L}(x, Ta) = \sqrt{\frac{\gamma}{\pi\sigma^2(1 - e^{-2\gamma(T-t)})}}\, \exp\left( -\frac{\gamma}{\sigma^2}\, \frac{(x - Ta\, e^{\gamma(T-t)})^2}{e^{2\gamma(T-t)} - 1} \right). \qquad (105)$$

With (80), we then conclude that $X_t\,|\,X_T = aT$ is the non-homogeneous diffusion

$$dX_t = -\gamma X_t\, dt + F_T(X_t, t)\, dt + \sigma\, dW_t, \qquad 0 \le t < T, \qquad (106)$$

with the added time-dependent drift

$$F_T(x,t) = -2\gamma\, \frac{x - Ta\, e^{\gamma(T-t)}}{e^{2\gamma(T-t)} - 1}. \qquad (107)$$

The relation between this drift and the conditioning is interesting. Since

$$\lim_{t\to T} F_T(x,t) = \begin{cases} \infty & x < aT \\ \gamma a T & x = aT \\ -\infty & x > aT, \end{cases} \qquad (108)$$

points away from the target $x = aT$ are infinitely attracted toward this point as $t \to T$, which leads $X_t$ to reach $X_T = aT$. This attraction, however, is all concentrated near the final time $T$, as shown in Fig. 3, so that the conditioning $X_T = aT$ affects the Ornstein–Uhlenbeck process mostly at the boundary of the time interval $[0,T]$ and marginally in the interior of this interval.

[Footnote: Note that the probability of $\mathcal{B}_T$ can vanish as $T' \to \infty$, for example, if $X_t$ is transient. In this case, (98) vanishes as $T' \to \infty$, so that (100) becomes singular in this limit.]

FIG. 1. Sample paths $\{x_t\}_{t=0}^T$ of the Ornstein–Uhlenbeck process conditioned on the final point $X_T = aT$, for several values of $T$. Parameters: $\gamma = 1$, $\sigma = 1$, $a = 1$. Black curves highlight one of five sample paths generated for each $T$. The conditioning mostly affects, as clearly seen, the dynamics only near the final time $T$, over a constant time-scale inferred from (107) to be roughly given by $1/\gamma$.

Taking the limit $T \to \infty$ pushes the whole effect of the conditioning to infinity, so that care must be taken when interpreting this limit. It is clear here that we cannot conclude that, because $F_\infty(x,t) = 0$ for $t < \infty$, the conditioned process is the Ornstein–Uhlenbeck process itself. This boundary behavior of the conditioning will be discussed later. Interestingly, this behavior does not arise for the Wiener motion, obtained with $\gamma = 0$ and $\sigma = 1$.
In this case, the conditioned process is

$$dX_t = -\frac{X_t - Ta}{T - t}\, dt + dW_t \qquad (109)$$

and converges to

$$dX_t = a\, dt + dW_t \qquad (110)$$

in the limit $T \to \infty$. Thus, the conditioning $X_T = aT$ is effected by an added drift $a$, which affects the dynamics of the process over the complete interval $[0,T]$.

We return at this point to our original problem of representing in terms of a conservative Markov process the microcanonical path measure $P^{\text{micro}}_{a,\mu_0,T}$ associated with the large deviation conditioning $X_t\,|\,A_T = a$. Following the preceding examples, the obvious question arises as to whether this measure can be obtained from a 'normal' Doob transform involving a suitably chosen function $h$. The answer is no, for essentially two reasons:

• Since $A_T$ depends on the whole time interval $[0,T]$ and not, as in the examples above, on a 'punctual' random time $\tau \le T$ or a deterministic time $T$, the weighting function associated with the large deviation conditioning (40) cannot be expressed as in (93) or (99). What must be considered for this type of conditioning is an approximate and asymptotic form of equivalence, which essentially neglects the boundary terms $h(X_0)$ and $h(X_T)$, as well as sub-exponential terms in $T$.

• There does not seem to be a way to prove the equivalence of the microcanonical path measure with a Markov measure starting directly from the definitions of the former measure, the associated weighting function, and the conditioning observable $A_T$. Here, we prove this equivalence indirectly via the use of the canonical path measure.

These points are discussed in more detail in the next section.

V. DRIVEN MARKOV PROCESS
We now come to the main point of this paper, which is to define a Markov process via the generalized Doob transform and prove its asymptotic equivalence with the conditioned process $X_t\,|\,A_T = a$. This equivalence is obtained, as just mentioned, by first proving the asymptotic equivalence of the path measure of the driven process with the canonical path measure, and by then proving the equivalence of the latter measure with the microcanonical path measure using known results about ensemble equivalence. Following these results, we discuss interesting properties of the driven process related to its reversibility, and constraints satisfied by its transition rates (in the case of jump processes) or drift (in the case of diffusions). Some of these properties were announced in [72]; here, we provide their full proofs, in addition to deriving new results concerning the reversibility of the driven process. Our main contribution is to treat the equivalence of the canonical and microcanonical path ensembles explicitly and to derive conditions for this equivalence to hold. In previous works, the conditioned process is assumed to be equivalent to the driven process and, in some cases, wrongly interpreted as the canonical path ensemble.

A. Definition
We define the driven process Y_t by applying the generalized Doob transform to the generator L_k of the non-conservative process considered in Sec. III, using for h the right eigenfunction r_k, which is strictly positive on E by Perron–Frobenius. We denote the resulting generator of Y_t by ℒ_k, so that in the notation of the generalized Doob transform (75), we have

ℒ_k ≡ L_k^{r_k} = r_k^{-1} L_k r_k − r_k^{-1}(L_k r_k). (111)

Although the tilted generator L_k is not conservative, ℒ_k is, since (ℒ_k 1) = 0. Moreover, we infer from (73) that the path measure of this new process is related to the path measure of the non-conservative process by

dP_{ℒ_k,μ,T}/dP_{L_k,μ,T} = r_k^{-1}(X_0) e^{−TΛ_k} r_k(X_T), (112)

which means, using (44), that it is related to the path measure of the original (conservative) process by

dP_{ℒ_k,μ,T}/dP_{L,μ,T} = [dP_{ℒ_k,μ,T}/dP_{L_k,μ,T}] [dP_{L_k,μ,T}/dP_{L,μ,T}] = r_k^{-1}(X_0) e^{−TΛ_k} e^{kT A_T} r_k(X_T). (113)

The existence and form of ℒ_k is the main result of this paper. Following the expressions of the tilted generator (46) and (48), ℒ_k can also be re-expressed as

ℒ_k = r_k^{-1} L_k|_{f=0} r_k + kf − Λ_k (114)

to make the dependence on f more explicit. We deduce from (46) and this result that the driven process associated with a pure jump process remains a pure jump process described by the modified rates

W_k(x, dy) = r_k^{-1}(x) W(x, dy) e^{kg(x,y)} r_k(y) (115)

for all x, y ∈ E. For a pure diffusion X_t described by the SDE (26), the driven process Y_t is a diffusion with the same noise as X_t, but with the following modified drift:

F_k = F + D(kg + ∇ ln r_k). (116)

The proof of this result follows by explicitly calculating

h^{-1} L_k h = F̂ · (∇ + kg + ∇ ln h) + ½ (∇ + kg + ∇ ln h) · D(∇ + kg + ∇ ln h) + kf (117)

for functions h > 0 on E, so as to obtain

L_k^h = F̂ · ∇ + ½ ∇ · D∇ + (kg + ∇ ln h) · D∇ = L + (kg + ∇ ln h) · D∇. (118)

Applying this formula to h = r_k > 0, we obtain from (27) that ℒ_k is the generator of a diffusion with the same diffusion fields σ_α as X_t, but with the modified drift given in (116). Note that this result carries an implicit dependence (via r_k) on the two functions f and g defining the observable A_T, in addition to the explicit dependence on g.

B. Equivalence with the canonical path ensemble
The relations (41), (44) and (113) lead together to

dP_{ℒ_k,μ,T}/dP^cano_{k,μ,T} = [dP_{ℒ_k,μ,T}/dP_{L,μ,T}] [dP_{L,μ,T}/dP^cano_{k,μ,T}] = r_k^{-1}(X_0) r_k(X_T) e^{−TΛ_k} E_μ[e^{kT A_T}]. (119)

From the limit (56) associating the SCGF with Λ_k, we therefore obtain

lim_{T→∞} (1/T) ln [dP_{ℒ_k,μ,T}/dP^cano_{k,μ,T}](ω) = 0 (120)

for all paths, which shows that the path measure of the driven process is asymptotically equivalent to the canonical path measure. This means, as explained before, that the two path measures are logarithmically equivalent,

dP_{ℒ_k,μ,T} ≍ dP^cano_{k,μ,T}, (121)

so that, although they are not equal, their differences are sub-exponential in T for almost all paths. From this result, it is possible to show, with additional conditions, that the typical values of observables satisfying LDPs with respect to these measures are the same. However, because of the specific form of the canonical path ensemble, we can actually prove a stronger form of equivalence between this path measure and that of the driven process, which implies not only that observables have the same typical values, but also the same large deviations.

This strong form of equivalence follows by noting that the canonical path ensemble represents a time-dependent Markov process. This is an important result, which does not seem to have been noticed before. The meaning of this is that, despite the global normalization factor E_μ[e^{kT A_T}], the canonical measure defined in (41) is the path measure of a non-homogeneous Markov process characterized by a time-dependent generator, denoted by L^cano_{k,t,T}. The derivation of this generator is presented in Appendix D; the result is

L^cano_{k,t,T} ≡ L_k^{h_{t,T}} = h_{t,T}^{-1} L_k h_{t,T} − h_{t,T}^{-1}(L_k h_{t,T}) (122)

H. Touchette, in preparation, 2014.

Time-dependent generators arise when considering probability kernels P_{ts} that depend on the times s and t between two transitions, and not just the time difference t − s, as considered in (8).
for t ∈ [0, T], where

h_{t,T}(x) = (e^{(T−t)L_k} 1)(x) (123)

is space–time harmonic with respect to L_k (see Appendix D). Thus we see that the canonical measure is the generalized Doob transform of L_k obtained, interestingly, with a time-dependent function h_{t,T} involving L_k itself. At the level of path measures, we then have

dP^cano_{k,μ,T} = dP_{L^cano_{k,·,T},μ,T}, (124)

a result which should be understood in the sense of (16), with L replaced by the time-dependent generator L^cano_{k,t,T} and the normal exponential replaced by a time-ordered exponential [109].

To relate this result to the driven process, note that (e^{(T−t)L_k} 1) becomes proportional to r_k as T → ∞, so that

lim_{T→∞} L^cano_{k,t,T} = (L_k)^{r_k} ≡ ℒ_k. (125)

Thus, although the process described by dP^cano_{k,μ,T} over the time interval [0, T] is non-homogeneous for T < ∞, it becomes homogeneous inside this time interval as the final time T diverges. Moreover, it converges in this limit to the driven process itself, which is by definition a homogeneous process. This holds for all t ∈ [0, T[ in the limit T → ∞; for the final time t = T, we obtain instead

lim_{T→∞} L^cano_{k,T,T} = L_k − (L_k 1). (126)

Consequently, the convergence of the canonical process toward the driven process applies only in [0, T[; at the boundary of this time interval, the canonical process converges to a different homogeneous process with generator (126). This explains from the point of view of generators why we obtain two different limits for the marginal canonical density at t < T and t = T, as seen in Sec. III.

This difference between the 'interior' (or 'bulk') and 'boundary' regimes of a process is an important feature of our theory. In a sense, this theory can only characterize the 'interior' of a process (exponentially tilted or conditioned), since we push the boundary to infinity, so to speak, and consider large deviation events that arise entirely from the 'interior' regime. Given that the canonical and driven processes are the same in this 'interior' regime, the large deviations of A_T or any other observable satisfying an LDP must therefore also be the same for both processes.

To be more precise, consider an observable B_T and assume that this observable satisfies an LDP with respect to the canonical path measure with rate function

I_k(b) ≡ lim_{T→∞} −(1/T) ln P^cano_{k,μ,T}{B_T ∈ [b, b + db]}. (127)

Let us write this LDP as

I_k(b) = lim_{T→∞} lim_{ε→0^+} −(1/T) ln P^cano_{k,μ,T}{B_{(1−ε)T} ∈ [b, b + db]}. (128)

If we assume that the fluctuations of B_T arise from the combined effect of canonical fluctuations of X_t over the whole interval [0, T] and not just the end interval [(1−ε)T, T], we can invert the limits on T and ε to obtain

I_k(b) = lim_{ε→0^+} lim_{T→∞} −(1/T) ln P^cano_{k,μ,T}{B_{(1−ε)T} ∈ db}
       = lim_{ε→0^+} lim_{T→∞} −(1/T) ln P^cano_{k,μ,T}|_{[0,(1−ε)T]}{B_{(1−ε)T} ∈ db}, (129)

where P^cano_{k,μ,T}|_{[0,(1−ε)T]} represents the projection of P^cano_{k,μ,T} on [0, (1−ε)T], which is different in general from P^cano_{k,μ,(1−ε)T}. We know from our discussion above that this projection converges in the limit T → ∞ to the path measure of the driven process. If we further assume that this convergence carries over to B_{(1−ε)T}, we can then write

I_k(b) = lim_{ε→0^+} lim_{T→∞} −(1/T) ln P_{ℒ_k,μ}{B_{(1−ε)T} ∈ db}
       = lim_{ε→0^+} (1−ε) lim_{T→∞} −(1/((1−ε)T)) ln P_{ℒ_k,μ}{B_{(1−ε)T} ∈ db}
       = lim_{T→∞} −(1/T) ln P_{ℒ_k,μ}{B_T ∈ db}. (130)

Consequently, the LDP for B_T in the canonical path ensemble implies an LDP for this random variable with respect to the driven process with the same rate function.

This reasoning is valid, as stressed above, if the large deviations of B_T and B_{(1−ε)T} are the same in the canonical path ensemble, that is, if these large deviations arise from the 'interior' part of the measure and not from the boundary interval [(1−ε)T, T]. In most cases of interest, this is verified, although there are pathological cases for which the large deviations actually arise at the boundary. The asymptotic limit of the Ornstein–Uhlenbeck process with X_T = aT, discussed in Sec. IV, is such a case, which we will come back to in Sec. VI.

C. Equivalence with the microcanonical path ensemble
We now come back to the problem of characterizing X t | A T = a as a Markov process by showingthat the canonical and microcanonical path measures are asymptotically equivalent. This secondlevel of equivalence is weaker than the previous one, for the simple reason that the microcanonicaland canonical path measures have different supports. Moreover, the fact that A T does not fluctuatein the microcanonical path ensemble (by definition of the conditioning) but does, generally, in thecanonical path ensemble shows that the large deviation properties of observables cannot be thesame in general in both ensembles. However – and this is the crucial observation for the problem ofconditioning – they can have the same typical values of observables, under conditions related tothe convexity of the rate function I ( a ) [105, 132]. Moreover, the same conditions imply that themicrocanonical and canonical path measures are asymptotically equivalent in the logarithmic sense.We discuss these levels of equivalence next, beginning with the one based on typical values.As before, we assume that the conditioning observable A T satisfies the LDP with respect tothe path measure P L,µ ,T of the reference process X t with rate function I ( a ). We then consider anobservable B T and assume that it satisfies an LDP with respect to the microcanonical path measure P micro a,µ ,T with rate function J a , as well as an LDP with respect to the canonical path measure P cano k,µ ,T with rate function J k . We denote the set of global minima of J a by B a and the global minima of J k by B k . Since rate functions vanish at their global minimizers [50, 99], we can also write B a = { b : J a ( b ) = 0 } , B k = { b : J k ( b ) = 0 } . (131)These zeros are called concentration points in large deviation theory [50], since they correspond tothe values of B T at which the microcanonical or canonical measure does not decay exponentiallywith T . 
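These spectral constructions are easy to exhibit numerically. The following sketch (Python/NumPy; the three-state rates and the function f are hypothetical choices, not taken from the text) builds the tilted generator for an occupation-type observable (g = 0), extracts Λ_k, r_k and l_k, forms the driven rates (115), and checks that the driven generator is conservative, that ρ_k = l_k r_k is its invariant density, and that the concentration value of A_T under the driven process equals Λ′_k, as expected from Legendre duality:

```python
import numpy as np

# Toy 3-state jump process (illustrative rates, not from the text).
W = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])          # jump rates W(x, y), x != y
f = np.array([1.0, -1.0, 0.5])           # A_T = (1/T) int_0^T f(X_t) dt (g = 0)
k = 0.7

lam = W.sum(axis=1)                       # escape rates lambda(x)
L = W - np.diag(lam)                      # generator of the reference process
Lk = L + k * np.diag(f)                   # tilted generator (g = 0)

# Perron-Frobenius elements: dominant eigenvalue, positive eigenvectors
w, R = np.linalg.eig(Lk)
i = np.argmax(w.real)
Lambda_k, r_k = w[i].real, np.abs(R[:, i].real)
wT, LT = np.linalg.eig(Lk.T)
l_k = np.abs(LT[:, np.argmax(wT.real)].real)
l_k /= l_k @ r_k                          # normalization: sum_x l_k(x) r_k(x) = 1

# Driven rates (115) and driven generator; rho_k = l_k r_k, Eq. (139)
Wk = W * r_k[None, :] / r_k[:, None]
Ldrive = Wk - np.diag(Wk.sum(axis=1))     # conservative by construction
rho_k = l_k * r_k

# Concentration point of A_T under the driven process: a = Lambda'_k,
# checked here by a central finite difference of the dominant eigenvalue.
dk = 1e-5
a_spec = (np.linalg.eigvals(L + (k + dk) * np.diag(f)).real.max()
          - np.linalg.eigvals(L + (k - dk) * np.diag(f)).real.max()) / (2 * dk)
assert abs(rho_k @ f - a_spec) < 1e-6     # E_{rho_k}[f] = Lambda'_k
assert np.allclose(rho_k @ Ldrive, 0.0)   # rho_k is invariant for the driven process
```

The last two assertions verify, in this toy setting, the identification of the conditioning value a with Λ′_k and the invariant density (139) derived below.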
If these sets are singleton sets, then their unique elements correspond to the typical values of B_T in the sense of the ergodic theorem [50, 99]. For example, if B_a = {b*} for a given value a of A_T, then B_T → b* as T → ∞ with probability 1 with respect to the microcanonical path measure P^micro_{a,μ,T}. A similar result can obviously be stated for the canonical ensemble.

The equivalence problem in this context is to determine pairs (a, k) for which B_a = B_k. Such pairs turn out to be determined by the convexity properties of I(a). Denote by ∂I(a) the subdifferential of I at a. Except possibly at boundary points, I is convex at a if ∂I(a) ≠ ∅, and is conversely nonconvex at a if ∂I(a) = ∅ [133]. With these notations, we have [105, 132]:

• If I is convex at a, then B_a = B_k for all k ∈ ∂I(a).

• If I is nonconvex at a, then B_a ∩ B_k = ∅ for all k ∈ R. Thus, in this case, there is no k ∈ R such that B_a = B_k.

The proof of these results, found in [105], relies on the following general relationship between the rate functions J_k and J_a, which derives from the definitions of the microcanonical and canonical ensembles:

J_k(b) = inf_a {J_a(b) + I(a) + Λ_k − ka}. (132)

The idea of the proof is to relate the zeros of the two sides of (132), which define B_k and B_a, by noting that I(a) ≥ ka − Λ_k with equality if and only if I(a) is convex; see [105, 132] for details.

A remarkable property of the microcanonical and canonical measures is that the convexity of I(a) not only determines the equality of B_a and B_k for general observables, but also the logarithmic equivalence of these measures. This brings us to the second level of equivalence, expressed by the following results:

• If I is convex at a, then for all k ∈ ∂I(a),

lim_{T→∞} (1/T) ln [dP^micro_{a,μ,T}/dP^cano_{k,μ,T}](ω) = 0, (133)

almost everywhere with respect to P^micro_{a,μ,T} and P^cano_{k,μ,T}.
• If I is nonconvex at a, then there is no k ∈ R for which the limit above vanishes.

The proof of these results also follows from the definitions of the microcanonical and canonical measures; see [105].

Our problem of large deviation conditioning can now be solved by linking all the results obtained. To recapitulate:

1. Driven–canonical measure equivalence:
Assuming the existence of Λ_k, l_k, and r_k, that the conditions (51) and (52) are satisfied, and that the spectrum of L_k has a gap, we have that the driven process obtained from the generalized Doob transform (111) is such that

dP_{ℒ_k,μ,T} ≍ dP^cano_{k,μ,T}. (134)

2. Driven–canonical observable equivalence:
Any observable B_T satisfying an LDP with respect to the canonical path measure also satisfies an LDP with respect to the law of the driven process with the same rate function, provided that these LDPs are not related to boundary effects. In this case, the large deviations – and by consequence the concentration points – of B_T are the same for both the canonical and driven processes.

3. Canonical–microcanonical measure equivalence: If I(a) is convex, then

dP^cano_{k,μ,T}(ω) ≍ dP^micro_{a,μ,T}(ω) (135)

for all k ∈ ∂I(a), almost everywhere with respect to both measures.

4. Canonical–microcanonical observable equivalence: B_T has in general different rate functions in the canonical and microcanonical path ensembles; however, its concentration points are the same in both ensembles when I(a) is convex.

We reach two conclusions from these results. The first, obtained by combining (134) and (135), is that if I is convex at the conditioning value a, then

dP^micro_{a,μ,T}(ω) ≍ dP_{ℒ_k,μ,T}(ω) (136)

almost everywhere with respect to both measures for all k ∈ ∂I(a). At the level of processes, we therefore write

X_t | A_T = a ≅ Y_t, (137)

where Y_t is the driven process with generator ℒ_k such that k ∈ ∂I(a). The second conclusion, obtained from points 2 and 4 above, is that X_t | A_T = a and Y_t have the same typical values of observables, provided that these observables concentrate in a large deviation sense in the long-time limit and that k ∈ ∂I(a). It is in this sense that we say that the conditioned process X_t | A_T = a is realized or represented by the driven process Y_t: the two processes may (and will in general) have different fluctuation properties, but they have the same typical or concentration properties in the stationary limit when I(a) is convex.
In a more physical but looser sense, we can picture them as describing the same long-time stochastic dynamics.

The next subsections discuss further properties of the driven process playing an important role for describing nonequilibrium systems. We list next several remarks that relate more specifically to its equivalence with the conditioned process:

• If A_T has a unique concentration point a*, then it should be expected that

X_t | A_T = a* ≅ X_t, (138)

since A_T → a* in the limit T → ∞, so that this value is 'naturally' realized by X_t. This follows from our results by noting that 0 ∈ ∂I(a*), Λ_0 = 0 and r_0 = 1 up to a constant, so that ℒ_{k=0} = L in general, and F_{k=0} = F for diffusions. Hence, conditioning on a typical value of the process does not modify it in the asymptotic limit.

• The conditioning A_T = a is realized by the driven process as a typical value of A_T in the stationary limit. That is, A_T → a as T → ∞ with probability 1 with respect to the law of Y_t. This follows simply by taking B_T = A_T.

• The equivalence of B_a and B_k also implies, in the case where these sets are singleton sets, that bounded functions C(B_T) have the same expectation in the driven and conditioned processes as T → ∞. In other words, equivalence of concentration points also implies, in the case of unique concentration points, equality of expectations.

• If I(a) is convex and differentiable, then ∂I(a) = {I′(a)}, so that the value k achieving equivalence is given by k = I′(a). In the case where I(a) is strictly convex, we also have by Legendre duality that k is such that Λ′_k = a [50]. These results are large deviation analogs of the thermodynamic relations connecting, respectively, the temperature with the derivative of the entropy and the energy with the derivative of the free energy [50].
Equilibrium systems also have, in general, different fluctuations in the microcanonical and canonical ensembles, but have the same equilibrium states when they are equivalent.

• Since equivalence is for all k ∈ ∂I(a), there is possibly more than one driven process realizing the typical states of a conditioned process. This interesting result should arise whenever I(a) has exposed (convex) corners at which ∂I(a) is not a singleton; see [134] for an example.

• Conversely to the above remark, there can be conditionings X_t | A_T = a that admit no driven process if I is nonconvex at a. We conjecture that such a case of nonequivalent processes arises whenever X_t is not ergodic and switches between 'phases' that cannot be represented by a single, homogeneous Markov process.

• The driven process is a priori not unique: since boundary terms in path measures are negligible at the level of the logarithmic equivalence, one could apply an extra generalized Doob transform to ℒ_k by choosing a function h > 0 such that (ℒ_k h) = 0. From the definition (111) of ℒ_k, this is equivalent to (L_k r_k h) = Λ_k r_k h. Since Λ_k is non-degenerate, h must therefore be a multiplicative constant having no effect on the driven process.

• In the case of diffusions with constant noise power σ(x) = σ, the low-noise limit σ → 0 of the driven process describes the optimal paths realizing the fluctuation A_T = a. This can be used to recover known results from the Freidlin–Wentzell theory of fluctuation paths and instantons for noise-perturbed SDEs [51].

• The conditions leading to the equivalence of X_t | A_T = a and Y_t prevent many processes from being treated within our theory. Examples include Lévy processes for which there are in general no LDPs (I(a) = 0 everywhere or Λ_k = ∞), processes for which the LDP for A_T may have a scaling or 'speed' in T different from T, as illustrated in the next section, in addition to processes with 'condensation' transitions for which either Λ_k → ∞, L_k is gapless, or the condition (52) is not satisfied; see [64, 135–137] for examples.

D.
Invariant density
The driven process has an invariant density on E corresponding to

ρ_k(x) = l_k(x) r_k(x), (139)

which is normalized following (51). This is proved directly from the definition (111) of the generator of the driven process, whose dual is

ℒ_k^† = r_k L_k^† r_k^{-1} − r_k^{-1}(L_k r_k), (140)

so that

(ℒ_k^† l_k r_k) = r_k(L_k^† l_k) − l_k(L_k r_k) = Λ_k l_k r_k − Λ_k l_k r_k = 0. (141)

If the driven process is ergodic, this invariant density is also the (unique) stationary density, in the sense that

ρ_k(y) = lim_{t→∞} lim_{T→∞} ∫ dP_{ℒ_k,μ,T}(ω) δ(Y_t(ω) − y) (142)

and

ρ_k(y) = lim_{T→∞} ∫ dP_{ℒ_k,μ,T}(ω) δ(Y_T(ω) − y). (143)

In this case, the stationary density is therefore the same independently of the time interval [0, T] considered, contrary to the canonical ensemble measure, which gives two different results for the two limits above; see again (62) and (63).

Note that if the original process X_t is ergodic with invariant density ρ_inv, then ρ_{k=0} = ρ_inv because l_0 = ρ_inv and r_0 = 1. Moreover, if L_k is self-dual (hermitian), then l_k = r_k ≡ ψ_k, so that ρ_k = ψ_k², in a clear analogy with quantum mechanics.

E. Reversibility properties
It is interesting physically to describe the class of conditioning observables A_T for which the driven process is either reversible (equilibrium) or non-reversible (nonequilibrium). We study this problem here by deriving a functional equation involving f and g, whose solution provides a necessary and sufficient condition for ρ_k to be a reversible stationary density. This equation is hard to solve in general; a simpler form is obtained by assuming that the reference Markov process X_t is reversible, which leads us to study the following question: Under what conditioning is the driven process Y_t reversible given that X_t is reversible?

To answer this question, we first consider pure jump processes. For all x, y ∈ E, the relation (115) for the driven transition rates implies

W_k(x, y)/W_k(y, x) = (r_k(y)/r_k(x))² [W(x, y)/W(y, x)] e^{k[g(x,y) − g(y,x)]}. (144)

Therefore, the driven process is reversible with respect to its invariant density ρ_k if and only if the ratio above can be written as ρ_k(y)/ρ_k(x), which yields, with the expression of ρ_k shown in (139), a non-trivial functional equation for f and g.

We can simplify this equation by assuming that the reference process is reversible, as in (25). The ratio (144) then becomes

W_k(x, y)/W_k(y, x) = (r_k(y)/r_k(x))² [ρ_inv(y)/ρ_inv(x)] e^{k[g(x,y) − g(y,x)]}. (145)

For the driven process to remain reversible, it is thus sufficient that there exists a 'potential' function h on E such that

g(x, y) − g(y, x) = h(y) − h(x) (146)

for all x, y ∈ E. In this case, we also have, if the invariant density ρ_k of the driven process is unique, that ρ_k is proportional to r_k² ρ_inv e^{kh}. This condition on g is verified, in particular, if g is symmetric, g(x, y) = g(y, x).
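This sufficient condition is easy to test numerically for jump processes. In the sketch below (hypothetical three-state rates; the reference process is built to be reversible with respect to a chosen ρ_inv, and g is taken symmetric), the driven rates (115) are checked to satisfy detailed balance with respect to ρ_k = l_k r_k:

```python
import numpy as np

# Reference process reversible wrt rho_inv: W(x,y) = S(x,y) sqrt(rho(y)/rho(x))
# with S symmetric (illustrative numbers, not from the text).
rho = np.array([0.5, 0.3, 0.2])
S = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 0.5],
              [2.0, 0.5, 0.0]])
W = S * np.sqrt(rho[None, :] / rho[:, None])
assert np.allclose(rho[:, None] * W, (rho[:, None] * W).T)  # detailed balance

g = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])          # symmetric: g(x,y) = g(y,x)
f = np.array([0.2, -0.1, 0.4])
k = 0.9

# Tilted generator: off-diagonal W(x,y) e^{kg(x,y)}, diagonal -lambda(x) + kf(x)
Lk = W * np.exp(k * g)
np.fill_diagonal(Lk, -W.sum(axis=1) + k * f)
w, R = np.linalg.eig(Lk)
r_k = np.abs(R[:, np.argmax(w.real)].real)
wT, LT = np.linalg.eig(Lk.T)
l_k = np.abs(LT[:, np.argmax(wT.real)].real)

# Driven rates (115)
Wk = (W * np.exp(k * g)) * r_k[None, :] / r_k[:, None]
np.fill_diagonal(Wk, 0.0)
rho_k = l_k * r_k

# Driven process reversible: rho_k(x) W_k(x,y) = rho_k(y) W_k(y,x)
flux = rho_k[:, None] * Wk
assert np.allclose(flux, flux.T)
```

The symmetric flux matrix confirms that a symmetric g preserves the reversibility of the reference process, as stated above.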
Accordingly, conditioning observables A_T, such as the activity, that depend on the jumps of X_t, but not on the 'direction' of these jumps, do not modify the reversibility of X_t. The same is true if A_T does not depend on the jumps of the process, that is, if g = 0 and the conditioning only involves an integral of f(X_t) in time.

These results translate for diffusions as follows. The driven process is reversible with respect to the invariant density ρ_k if and only if its modified drift

F̂_k = F̂ + D(kg + ∇ ln r_k), (147)

obtained from (80), satisfies

F̂_k = (D/2) ∇ ln ρ_k, (148)

which is equivalent to

F̂ + D(kg + ½ ∇ ln(r_k/l_k)) = 0. (149)

This is a functional equation involving f and g, via r_k and l_k, which is also difficult to solve in general. We can simplify it, as before, by assuming that the reference process X_t is reversible with respect to ρ_inv, as in (31), in which case

F̂_k = (D/2) ∇ ln(ρ_inv r_k²) + Dkg. (150)

A particular solution of this equation is obtained if g is gradient, g = ∇h/2. Then the driven diffusion is a reversible diffusion with respect to the invariant density ρ_k, which is moreover proportional to r_k² ρ_inv e^{kh}. In particular, if g = 0 and D is constant, then the driven process is a reversible diffusion with drift given by

F_k = F + D ∇ ln r_k. (151)

We thus see for diffusions that conditioning observables A_T that do not depend on the transitions of X_t (g = 0), or depend on these transitions but via a gradient perturbation g, do not modify the reversibility of X_t.

This slightly corrects the claim made in [64] that the driven process is reversible 'only if the bias is also time-reversal symmetric: g(x, y) = g(y, x)'.

F. Identities and constraints
It was found in [138–140] that the driven process admits in many cases certain invariant quantities that constrain its transition rates. These constraints arise very generally and very simply from our results. From (115), we can write

W_k(x, dy) W(y, dx) = r_k^{-2}(x) W_k(y, dx) W(x, dy) r_k²(y) e^{k[g(x,y) − g(y,x)]},
W_k(x, dy) W_k(y, dx) = W(x, dy) W(y, dx) e^{k[g(x,y) + g(y,x)]}, (152)

which are the most general identities that can be obtained for the transition rates of the driven process. If g is a symmetric function, they reduce to

W_k(x, dy) W(y, dx) = r_k^{-2}(x) W_k(y, dx) W(x, dy) r_k²(y), (153)

whereas if g is antisymmetric or g = 0, we find

W_k(x, dy) W_k(y, dx) = W(x, dy) W(y, dx) (154)

for all x, y ∈ E. The latter result is referred to in [138, 140] as the 'product constraint'. An example of a symmetric observable is the so-called activity, which is proportional to the number of jumps occurring in a jump process, whereas an example of an antisymmetric observable is the current, which assigns opposite signs to a jump and its reversal.

These constraints on the transition rates also imply constraints on the escape rate λ = (W 1) of a jump process. Integrating (115) with respect to y, keeping in mind that r_k is the right eigenfunction of L_k associated with Λ_k, we obtain the following relation between the escape rates of the reference process and those of the driven process:

λ_k(x) = λ(x) − kf(x) + Λ_k. (155)

In the case f = 0, this yields

λ_k(x) = λ(x) + Λ_k (156)

for all x ∈ E, which implies

λ_k(x) − λ_k(y) = λ(x) − λ(y) (157)

for all x, y ∈ E, a result referred to as the 'exit rate constraint' in [138, 140].

Diffusive analogs of these constraints can be derived from our results. In the case where the covariance matrix D is invertible, (116) implies, by taking its exterior derivative, that

d(D^{-1}F_k − D^{-1}F − kg) = 0, (158)

where all vectors are interpreted as 1-forms. We conclude from this result that the 1-form associated with D^{-1}F_k − D^{-1}F − kg is closed. In two and three dimensions, this implies

∇ × (D^{-1}F_k) = ∇ × (D^{-1}F) + k ∇ × g (159)

and thus

∇ × (D^{-1}F_k) = ∇ × (D^{-1}F) (160)

if g is gradient. For diffusions on the circle, (158) only implies that D^{-1}F_k − D^{-1}F − kg is the derivative of a periodic function.

These results can be interpreted physically as circulation constraints, showing that the non-reversibility of the driven process, measured by the circulation of its drift, is directly related to the non-reversibility of the reference process and the non-gradient character of g. This connection with non-reversible dynamics can be emphasized by rewriting (159) in terms of the stationary probability current, defined by

J_{ρ_inv} = F̂ ρ_inv − (D/2) ∇ρ_inv, (161)

or the so-called probability velocity

V_{ρ_inv} ≡ J_{ρ_inv}/ρ_inv = F̂ − (D/2) ∇ ln ρ_inv. (162)

Both vector fields are zero for reversible (equilibrium) diffusions. In terms of V, we then obtain

∇ × (D^{-1}V_{ρ_k}) = ∇ × (D^{-1}V_{ρ_inv}) + k ∇ × g, (163)

where V_{ρ_k} is the probability velocity of the driven process with invariant density ρ_k.
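The jump-process constraints above are straightforward to verify numerically. The sketch below (toy three-state chain with hypothetical rates, and a current-like antisymmetric g) checks the product constraint (154) and the exit-rate constraint (155)–(156):

```python
import numpy as np

# Toy 3-state jump process with an antisymmetric, current-like observable
# (illustrative numbers, not from the text).
W = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])
g = np.array([[0.0,  1.0, -1.0],
              [-1.0, 0.0,  1.0],
              [1.0, -1.0,  0.0]])        # antisymmetric: g(x,y) = -g(y,x)
k = 0.6
f = np.zeros(3)                           # f = 0 here

lam = W.sum(axis=1)                       # escape rates lambda(x)
Lk = W * np.exp(k * g)                    # tilted generator
np.fill_diagonal(Lk, -lam + k * f)
w, R = np.linalg.eig(Lk)
i = np.argmax(w.real)
Lambda_k, r_k = w[i].real, np.abs(R[:, i].real)

# Driven rates (115)
Wk = (W * np.exp(k * g)) * r_k[None, :] / r_k[:, None]
np.fill_diagonal(Wk, 0.0)

# Product constraint (154): W_k(x,y) W_k(y,x) = W(x,y) W(y,x) for antisymmetric g
assert np.allclose(Wk * Wk.T, W * W.T)
# Exit-rate constraint (155)-(156): lambda_k(x) = lambda(x) + Lambda_k when f = 0
assert np.allclose(Wk.sum(axis=1), lam + Lambda_k)
```

The r_k factors cancel in the product of forward and backward driven rates, which is why (154) holds exactly; the exit-rate relation follows from the eigenvalue equation for r_k.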
A similar result applies in higher dimensions by replacing the rotational with the exterior derivative.

Periodic functions on the circle are not necessarily the derivative of periodic functions: consider, for example, the constant function.

VI. APPLICATIONS
We study in this section three applications of our results for Brownian motion, the Ornstein–Uhlenbeck process, and the problem of quasi-stationary distributions. The applications are simple:they are there only to illustrate the different steps needed to obtain the driven process and theeffect of boundary dynamics. In the case of quasi-stationary distributions, we also want to showhow our results recover known results obtained by a different approach.For more complex applications involving many-particle dynamics, such as the totally asymmetricexclusion process and the zero-range process, see [64, 136, 141–143]. Applications for diffusionscan be found in [139, 144], whereas applications for quantum systems can be found in [67–71].One interesting aspect of many-particle dynamics is that current-type conditionings have thegeneric effect of producing long-range interactions between particles at the level of the stationarydistribution of the driven process [55, 64, 141–143].In future publications, we will discuss in more detail some of the connections mentioned in theintroduction, in particular those relating to conditional limit theorems and optimal control theory,in addition to tackling other applications of our results, including the case of diffusions conditionedon occupation measures, which is relevant for studying metastable states and quasi-stationarydistributions. We will also study the low-noise (Freidlin–Wentzell) large deviation limit [51], anddevelop numerical techniques for obtaining the spectral elements used to construct the drivenprocess. A. Extensive Brownian bridge
We revisit Example 3 about the Wiener process conditioned on reaching the point X_T = aT. This observable is a particular case of A_T obtained with f = 0, g = 1, and X_0 = 0.

From the Gaussian propagator of the Wiener process, we find

P_x{X_T/T ∈ [a, a + da]} ≍ e^{−T I(a)} da, (164)

with I(a) = a²/2, as well as

lim_{T→∞} (1/T) ln E_x[e^{kX_T}] = k²/2. (165)

The latter result is equal to the dominant eigenvalue Λ_k, as can be verified from the expression of the tilted generator

L_k = ½ (d/dx + k)² = ½ d²/dx² + k d/dx + k²/2, (166)

obtained from (48). In fact, in this case, we have r_k(x) = l_k(x) = 1. From (116), we thus find that the drift of the driven process, equivalent to the conditioned process, is F_k = k. To re-express this drift as a function of the conditioning X_T/T = a, we use I′(a) = a = k to obtain F_{k(a)} = a. This shows that the process equivalent to the Brownian motion conditioned with X_T = aT is the drifted Brownian motion W_t + at, as found previously in (110). Equivalence holds for all a ∈ R, since I(a) is convex. Moreover, since the typical value of X_T/T is 0, we have X_t | X_T = 0 ≅ X_t; that is, the Brownian bridge bridged at T → ∞ is asymptotically equivalent to the Wiener process.

There is a subtlety involved in this calculation, in that r_k = l_k = 1 are not normalizable. To circumvent this problem, it seems possible to consider the problem on a compact domain of E = R, such as [−ℓ, ℓ], to obtain a gapped spectrum with normalizable eigenfunctions, and then take the limit ℓ → ∞. This is a common procedure used in physics, for example, in quantum mechanics to deal with the free particle.

F. Angeletti, H. Touchette, in preparation, 2014.

B. Ornstein–Uhlenbeck process
Consider the Ornstein–Uhlenbeck process, defined in (103), with the conditioning observable

A_T = (1/T) ∫_0^T X_t dt, (167)

which corresponds to the choice f(x) = x and g = 0. The spectral elements of this observable are easily found to be r_k(x) = e^{kx/γ} and Λ_k = σ²k²/(2γ²). From the expression of r_k and (116), we then find that the effective drift of the driven process is

F_k(x) = −γx + σ²k/γ. (168)

With the rate function

I(a) = sup_k {ka − Λ_k} = γ²a²/(2σ²), (169)

we then find

F_{k(a)}(x) = −γx + aγ. (170)

Hence, the conditioning only adds a constant drift to the process, which ensures that A_T → a as T → ∞. Naturally, since the typical value of A_T is 0 in the original Ornstein–Uhlenbeck process, conditioning on A_T = 0 yields the same process with F_{k=0}(x) = −γx.

If instead of choosing the linear observable (167), we choose

A_T = (1/T) ∫_0^T X_t² dt, (171)

the same steps can be followed to obtain

F_{k(a)}(x) = −(σ²/(2a)) x. (172)

In this case, the conditioning keeps the linear force of the Ornstein–Uhlenbeck process, but changes its friction coefficient to match the variance of the process with the value of A_T.

To close this example, let us revisit the conditioning A_T = X_T/T = a, studied in Example 3, which corresponds to the choice f = 0 and g = 1. We know from our previous discussion of this example that the driven process cannot describe this conditioning because the latter does not affect the 'interior' dynamics of the process in the asymptotic limit T → ∞. Let us see how this arises in our theory. From the exact form of the propagator (104), we find that

P{X_T/T ∈ [a, a + da]} ≍ e^{−T² γa²/σ²}, (173)

so that I(a) = ∞ if we take the large deviation limit with the scale T, as in (37). In this case, we can formally take Λ_k = 0 and r_k(x) = e^{−kx} for the spectral elements, which is consistent with I(a) = ∞, to obtain F_{k(a)}(x) = −γx.
The drift $F_{k(a)}(x)=-\gamma x$ is, as we know from Example 3, the correct interior dynamics produced by the conditioning, but it is not the complete dynamics that actually realizes the conditioning. The problem here is that the large deviation is a boundary effect in time (it can be seen, physically, as a temporal analog of a ‘condensation’), which prevents us from exchanging the two limits in (129). Consequently, though we can formally define the driven process, it is not equivalent to the conditioned process.

C. Quasi-stationary distributions
A classical problem in the theory of absorbing processes and quasi-stationary distributions is to condition a Markov chain never to escape from some subset of its state space. We briefly show in this subsection that the solution of this problem, obtained classically by defining a new Markov chain restricted to the subset of interest [15], can be recovered from our results (summarized for Markov chains in Appendix E) by taking the limit $k\to\infty$.

To define the problem, let $\{X_i\}_{i=0}^\infty$ be a Markov chain with homogeneous transition matrix $M$. For a subset $E_0$ of $E$, we consider the conditioning event

\[ B=\{\tau>N\}, \tag{174} \]

where $\tau$ is the exit time from $E_0$ defined by

\[ \tau=\inf\{n: X_n\notin E_0\}, \tag{175} \]

assuming $X_0\in E_0$. This means that we are conditioning the Markov chain on leaving $E_0$ (or on being ‘killed’ outside $E_0$) only after the time $N$.

Within our theory, this conditioning is effected by considering the observable

\[ A_N=\frac{1}{N}\sum_{i=0}^{N-1}\mathbb{1}_{E_0}(X_i), \tag{176} \]

or its symmetrized version

\[ A'_N=\frac{1}{N}\sum_{i=0}^{N-1}\frac{\mathbb{1}_{E_0}(X_i)+\mathbb{1}_{E_0}(X_{i+1})}{2}, \tag{177} \]

for which we have $\mathbb{1}_B=\mathbb{1}_{A_{N+1}=1}=\mathbb{1}_{A'_N=1}$. The second observable leads to the tilted matrix

\[ M_k(x,y)\equiv M(x,y)\exp\left[k\,\frac{\mathbb{1}_{E_0}(x)+\mathbb{1}_{E_0}(y)}{2}\right]. \tag{178} \]

Given the conditioning $A'_N=1$, we must then choose $k\in\partial I(1)$, where $I$ is the rate function of $A'_N$. The form of $\partial I(1)$ depends in general on the Markov chain considered; however, since $I$ is defined on $[0,1]$, we have $\infty\in\partial I(1)$, so the conditioning follows with the limit $k\to\infty$.

To see that this limit recovers the correct result, define the matrix

\[ M'(x,y)\equiv\lim_{k\to\infty}e^{-k}M_k(x,y). \tag{179} \]

Then,

\[ M'(x,y)=M(x,y)\,\mathbb{1}_{E_0}(x)\,\mathbb{1}_{E_0}(y)=\begin{cases}M(x,y) & x,y\in E_0\\ 0 & \text{otherwise.}\end{cases} \tag{180} \]
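The limit (179)–(180) can be illustrated numerically on a small chain (a sketch with an arbitrary 3-state transition matrix; here the conditioning subset, written $E_0$, consists of the first two states):

```python
import numpy as np

# Illustration of e^{-k} M_k(x,y) -> M(x,y) 1_{E0}(x) 1_{E0}(y) as k -> inf,
# where M_k is the tilted matrix (178) for the symmetrized observable.
M = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2]])          # arbitrary stochastic matrix
inE0 = np.array([1.0, 1.0, 0.0])         # indicator of the subset E0

def tilted(k):
    # M_k(x,y) = M(x,y) exp(k [1_{E0}(x) + 1_{E0}(y)] / 2)
    return M * np.exp(k * (inE0[:, None] + inE0[None, :]) / 2)

k = 25.0
Mk = np.exp(-k) * tilted(k)              # e^{-k} M_k for large k
Mprime = M * inE0[:, None] * inE0[None, :]
print(np.max(np.abs(Mk - Mprime)))       # of order 1e-6, vanishing as k grows
```

Entries with both states in $E_0$ are left unchanged, while entries with at least one state outside $E_0$ are suppressed by a factor $e^{-k/2}$ or smaller, as in (180).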
Denoting by $\lambda'$ the dominant eigenvalue of $M'$ and by $r'$ its associated right eigenvector, we infer

\[ \lambda'=\lim_{k\to\infty}e^{-k}\Lambda_k,\qquad r'=\lim_{k\to\infty}r_k, \tag{181} \]

where $\Lambda_k$ and $r_k$ are the corresponding spectral elements of $M_k$. According to our theory, the effective Markov chain resulting from the asymptotic conditioning $A'_N=a$ is given by the generalized Doob transform

\[ M_k^{r_k}(x,y)=\frac{1}{\Lambda_k\,r_k(x)}\,M_k(x,y)\,r_k(y). \tag{182} \]

Taking the limit $k\to\infty$, we then obtain

\[ \lim_{k\to\infty}M_k^{r_k}(x,y)=\frac{1}{\lambda'\,r'(x)}\,M'(x,y)\,r'(y), \tag{183} \]

which is the known result characterizing a Markov chain conditioned on eternally staying in $E_0$ [15]. In this context, it can be proved that

\[ \ln\lambda'=\lim_{N\to\infty}\frac{1}{N}\ln P_{x,M,N}\{\tau>N\}, \tag{184} \]

where $X_0=x\in E_0$, so that $\lambda'$ represents the survival rate at which the chain stays in $E_0$, while the left eigenvector

\[ l'(y)=\lim_{N\to\infty}P_{x,M,N}\{X_N=y\,|\,\tau>N\} \tag{185} \]

represents the quasi-stationary density of the chain as it stays in $E_0$. This last result corresponds to our result (62), and is known as the Yaglom limit of the process [15]. Taking the distribution at a time $n<N$ before the conditioning time, we obtain instead

\[ l'(y)\,r'(y)=\lim_{n\to\infty}\lim_{N\to\infty}P_{x,M,N}\{X_n=y\,|\,\tau>N\}, \tag{186} \]

in agreement with (63).

For recent surveys on quasi-stationary distributions, see [14–16]; for applications in the context of large deviations, see [145, 146]; finally, see Bauer and Cornu [147] for a study of the effect of quasi-stationary conditioning on cycle affinities of finite-state jump processes.

Appendix A: Derivation of the tilted generator

1. Pure jump processes
To derive the form of the tilted generator $L_k$ in the case of jump processes, we consider the conservative Markov generator $G_k$ defined by

\[ (G_k h)(x)=\int_E W(x,dy)\,e^{kg(x,y)}\,[h(y)-h(x)] \tag{A1} \]

for all $x\in E$. This generator is only a normalization factor away from $L_k$, since

\[ G_k=L_k-(L_k 1), \tag{A2} \]

which means that $G_k$ and $L_k$ differ only in their diagonal elements.

The process described by $G_k$ is a jump process with transition rates $W(x,dy)\,e^{kg(x,y)}$. The fact that $e^{kg(x,y)}$ is strictly positive implies that the measure $W(x,dy)\,e^{kg(x,y)}$ and the original measure $W(x,dy)$ are mutually absolutely continuous, which only means in this context that the $G_k$ and $L$ processes have the same set of allowed jumps $x\to y$. In this case, we can use Girsanov’s Theorem, as applied to jump processes [76], to obtain the Radon–Nikodym derivative of the path measure of the $G_k$ process with respect to the path measure of the $L$ process:

\[ \frac{dP_{G_k,\mu_0,T}}{dP_{L,\mu_0,T}}(\omega)=\exp\left(k\sum_{0\le t\le T:\,\Delta X_t\neq 0} g(X_{t^-},X_{t^+})-\int_0^T dt\,\big[(We^{kg}-W)1\big](X_t)\right). \tag{A3} \]

Combining this result with the Feynman–Kac formula, we then arrive at

\[ \frac{dP_{L_k,\mu_0,T}}{dP_{L,\mu_0,T}}=\frac{dP_{L_k,\mu_0,T}}{dP_{G_k,\mu_0,T}}\,\frac{dP_{G_k,\mu_0,T}}{dP_{L,\mu_0,T}}=\exp\left(k\sum_{\Delta X_t\neq 0} g(X_{t^-},X_{t^+})-\int_0^T dt\,\big[(We^{kg}-W)1\big](X_t)+\int_0^T (L_k 1)(X_t)\,dt\right), \]

which yields (44) given the expression (46) of $L_k$.
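For a finite state space the relation (A2) can be checked by direct matrix algebra: $L_k$ carries the tilted off-diagonal rates but keeps the original escape rates on its diagonal, while $G_k$ is the conservative generator built from the tilted rates. A sketch with arbitrary 3-state rates and increments (none taken from the text):

```python
import numpy as np

# Check G_k = L_k - (L_k 1) for a finite-state jump process with f = 0.
W = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 1.5],
              [1.0, 3.0, 0.0]])           # arbitrary jump rates W(x,y), x != y
g = np.array([[0.0, 1.0, -1.0],
              [1.0, 0.0, 2.0],
              [-1.0, 2.0, 0.0]])          # arbitrary increments g(x,y)
k = 0.3
Wk = W * np.exp(k * g)                    # tilted rates W(x,y) e^{k g(x,y)}

Lk = Wk - np.diag(W.sum(axis=1))          # tilted generator: original escape rates
Gk = Wk - np.diag(Wk.sum(axis=1))         # conservative generator of tilted rates
assert np.allclose(Gk, Lk - np.diag(Lk @ np.ones(3)))

# The dominant eigenvalue of L_k gives the SCGF Lambda_k of the observable.
Lam = np.max(np.linalg.eigvals(Lk).real)
print(Lam)
```

Since $(L_k 1)(x)=\int W(x,dy)\,e^{kg(x,y)}-\int W(x,dy)$, subtracting it from the diagonal of $L_k$ exactly replaces the original escape rates by the tilted ones, which is $G_k$.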
2. Diffusion processes
Given the generator $L$ of (27), we introduce a new Markov generator

\[ \tilde L=L+a\cdot\nabla+b, \tag{A4} \]

involving an arbitrary vector field $a$ and scalar field $b$ on $E$. Combining the Cameron–Martin–Girsanov Theorem and the Feynman–Kac formula [37–39], it can be shown that $\tilde L$ induces, with the initial measure $\mu_0$, a path measure $P_{\tilde L,\mu_0,T}$ which is absolutely continuous with respect to the path measure $P_{L,\mu_0,T}$, and whose Radon–Nikodym derivative with respect to the latter measure is

\[ \frac{dP_{\tilde L,\mu_0,T}}{dP_{L,\mu_0,T}}(\omega)=e^{R_T(\omega)}, \tag{A5} \]

where

\[ R_T=\int_0^T (D^{-1}a)(X_t)\circ dX_t+\int_0^T\left(b(X_t)-(D^{-1}a)(X_t)\cdot\Big(\hat F+\frac{a}{2}\Big)(X_t)-\frac{1}{2}(\nabla\cdot a)(X_t)\right)dt. \tag{A6} \]

This is a generalization of the Cameron–Martin–Girsanov Theorem for non-conservative processes with $b\neq 0$. In the particular case where $L$ is the generator of the Wiener process $W_t$, and there is no $b$ perturbation, we recover the classical result

\[ R_T=\int_0^T\left(a(W_t)\cdot dW_t-\frac{a^2(W_t)}{2}\,dt\right), \tag{A7} \]

written here in the Itô convention [3]. In our case, we obtain the expression of $L_k$ from this general result by equating (A5) with (44) to obtain $R_T=kTA_T$, which is solved given (A6) for $a=kDg$ and

\[ b=kg\cdot\Big(\hat F+\frac{k}{2}Dg\Big)+\frac{k}{2}\nabla\cdot(Dg)+kf. \tag{A8} \]

The expression of $L_k$ is therefore

\[ L_k=L+kDg\cdot\nabla+kg\cdot\Big(\hat F+\frac{k}{2}Dg\Big)+\frac{k}{2}\nabla\cdot(Dg)+kf=\hat F\cdot(\nabla+kg)+\frac{1}{2}(\nabla+kg)\cdot D(\nabla+kg)+kf. \tag{A9} \]

Appendix B: Change of measure for the generalized Doob transform
To prove (73), it is sufficient to show that

\[ E_{L^{h,f},\mu_0}[C]=E_{L,\mu_0}\Big[C\,h^{-1}(X_0)\,e^{-\int_0^T f(X_t)\,dt}\,h(X_T)\Big] \tag{B1} \]

for any cylinder function $C$. In this expression, the generators indicate with respect to which measure the expectation is taken. To arrive at this result, note first that

\[ e^{tL^{h,f}}(x,dy)=h^{-1}(x)\,e^{t(L-f)}(x,dy)\,h(y) \tag{B2} \]

for all $(x,y)\in E^2$. Next, replace $t$ by $t-s$ and use the Feynman–Kac formula to express the exponential semigroup as an expectation, so that

\[ e^{(t-s)L^{h,f}}(x,dy)=h^{-1}(x)\,E_x\Big[e^{-\int_s^t f(X_u)\,du}\,\delta(X_t-y)\,dy\Big]\,h(y) \tag{B3} \]

for all $(x,y)\in E^2$. Finally, expand (B1) and use (B3) iteratively to obtain

\[ E_{L^{h,f},\mu_0}[C]=\int_{E^{n+1}}C(x_0,\ldots,x_n)\,\mu_0(dx_0)\,h^{-1}(x_0)\,E_{x_0}\Big[e^{-\int_0^{t_1} f(X_t)\,dt}\,\delta(X_{t_1}-x_1)\,dx_1\Big]\cdots E_{x_{n-1}}\Big[e^{-\int_{t_{n-1}}^{T} f(X_t)\,dt}\,\delta(X_T-x_n)\,dx_n\Big]\,h(x_n), \tag{B4} \]

which, by multiple integration, is equal to (B1).

Appendix C: Squared field for diffusion processes
Let $L$ be the generator of the general diffusion defined in (27). The application of this generator to the product of two arbitrary functions $f$ and $g$ on $E$ yields

\[ \begin{aligned} L(fg)&=f\,\hat F\cdot\nabla g+g\,\hat F\cdot\nabla f+\tfrac{1}{2}\nabla\cdot\big(gD\nabla f+fD\nabla g\big)\\ &=f\,\hat F\cdot\nabla g+g\,\hat F\cdot\nabla f+\tfrac{g}{2}\nabla\cdot(D\nabla f)+\nabla f\cdot D\nabla g+\tfrac{f}{2}\nabla\cdot(D\nabla g)\\ &=f(Lg)+(Lf)g+\nabla f\cdot D\nabla g. \end{aligned} \tag{C1} \]

Comparing with the definition (82) of the squared field $\Gamma(f,g)$, we find

\[ \Gamma(f,g)=\nabla f\cdot D\nabla g. \tag{C2} \]

Putting this result into (81) with $f=\ln h$, we then find (79), which represents the generator of a diffusion process with the same noise fields $\sigma_\alpha$ as the diffusion described by $L$, but with the modified drift given in (80).

Appendix D: Generator of the canonical path measure
We derive here the time-dependent generator associated with the canonical path measure. To this end, we consider this measure on the cylinder events $\{X_0=x_0, X_{t_1}=x_1,\ldots,X_{t_n}=x_n\}$ with $0\le t_1\le\cdots\le t_n\le T$ to obtain, following (45),

\[ dP^{\mathrm{cano}}_{k,\mu_0,T}(x_0,\ldots,x_n)=\frac{\mu_0(dx_0)\,e^{t_1 L_k}(x_0,dx_1)\cdots e^{(t_n-t_{n-1})L_k}(x_{n-1},dx_n)\,\big(e^{(T-t_n)L_k}1\big)(x_n)}{E_{\mu_0}\big[e^{TkA_T}\big]}, \tag{D1} \]

with the normalization added according to (41). Therefore,

\[ dP^{\mathrm{cano}}_{k,\mu_0,T}(x_n|x_0,\ldots,x_{n-1})\equiv\frac{dP^{\mathrm{cano}}_{k,\mu_0,T}(x_0,\ldots,x_n)}{dP^{\mathrm{cano}}_{k,\mu_0,T}(x_0,\ldots,x_{n-1})}=\frac{e^{(t_n-t_{n-1})L_k}(x_{n-1},dx_n)\,\big(e^{(T-t_n)L_k}1\big)(x_n)}{\big(e^{(T-t_{n-1})L_k}1\big)(x_{n-1})}, \tag{D2} \]

which shows that the conditional measure $dP^{\mathrm{cano}}_{k,\mu_0,T}(x_n|x_0,\ldots,x_{n-1})$ is Markovian, since it does not depend on all the previous points $x_0,\ldots,x_{n-2}$ but only on $x_{n-1}$. However, it is non-homogeneous, since it explicitly depends on $t_{n-1}$, $t_n$, and $T$.

To derive the generator $L^{\mathrm{cano}}_{k,t,T}$ of this Markovian measure, we introduce the positive function

\[ h_{t,T}(x)=\big(e^{(T-t)L_k}1\big)(x) \tag{D3} \]

to write the transition probability associated with (D2) as

\[ P^{\mathrm{cano}}_{k,s,t,T}(x,dy)=\frac{1}{h_{s,T}(x)}\,e^{(t-s)L_k}(x,dy)\,h_{t,T}(y). \tag{D4} \]

Noting that $h_{t,T}$ solves the backward differential equation

\[ (\partial_t+L_k)\,h_{t,T}=0,\qquad h_{T,T}=1, \tag{D5} \]

we then have that $h_{t,T}$ is space–time harmonic with respect to $L_k$, which implies from (86)–(87) that the canonical measure is the Doob transform of $L_k$ with the function $h_{t,T}$ involving $L_k$ itself. This means explicitly that

\[ L^{\mathrm{cano}}_{k,t,T}=(L_k)^{\exp((T-t)L_k)1}. \tag{D6} \]

This result is valid for $t<T$, but also for $t=T$, which yields

\[ L^{\mathrm{cano}}_{k,T,T}=L_k-(L_k 1). \tag{D7} \]

In the limit $T\to\infty$, $L^{\mathrm{cano}}_{k,t,T}$ becomes homogeneous; however, the limit is different for $t<T$ and $t=T$, as shown in (125) and (126).

Appendix E: Markov chains
We briefly re-express in this last section our main results for the simpler case of homogeneous Markov chains. In this context, the generalized Doob transform seems to have appeared for the first time in the work of Miller [148].

The sequence $X_0,X_1,\ldots,X_N$ of random variables is a homogeneous Markov chain if its joint measure is given by

\[ dP_{M,\mu_0,N}(x_0,\ldots,x_N)=\mu_0(dx_0)\,M(x_0,dx_1)\cdots M(x_{N-1},dx_N), \tag{E1} \]

where $M(x,dy)$ is the transition matrix and $\mu_0$ is the initial measure for $X_0$. The generalized Doob transform of the Markov chain is defined as

\[ M^h(x,dy)=\frac{1}{(Mh)(x)}\,M(x,dy)\,h(y), \tag{E2} \]

where $h$ is a strictly positive function on $E$. This transformed matrix remains a stochastic matrix, as shown in [148], and can be used to define a discrete-time path measure $dP_{M^h,\mu_0,N}$, whose Radon–Nikodym derivative with respect to the original Markov chain is

\[ \frac{dP_{M^h,\mu_0,N}}{dP_{M,\mu_0,N}}(x_0,\ldots,x_N)=\frac{1}{h(x_0)}\exp\left(\sum_{i=0}^{N-1}\ln\frac{h(x_i)}{(Mh)(x_i)}\right)h(x_N). \tag{E3} \]

This follows by re-expressing in discrete time the proof presented in Appendix B.

Consider now the observable

\[ A_N=\frac{1}{N}\sum_{i=0}^{N-1}g(X_i,X_{i+1}), \tag{E4} \]

where $g:E\times E\to\mathbb{R}$. The tilted generator $L_k$ is replaced for this observable by the tilted matrix

\[ M_k(x,dy)=M(x,dy)\,e^{kg(x,y)}. \tag{E5} \]

The particular observable

\[ A_N=\frac{1}{N}\sum_{i=0}^{N-1}f(X_i) \tag{E6} \]

is covered by this result simply by taking $g(x,y)=f(x)$, so that we do not have to consider additive and two-point observables separately.

The dominant (Perron–Frobenius) eigenvalue of $M_k$ is denoted by $\zeta_k$ and leads to the following result for the SCGF:

\[ \lim_{N\to\infty}\frac{1}{N}\ln E_{\mu_0}\big[e^{kNA_N}\big]=\ln\zeta_k\equiv\Lambda_k. \tag{E7} \]

Denoting by $r_k$ the right Perron–Frobenius eigenvector of $M_k$, we can show, similarly to our previous results, that the Markov chain $\{X_i\}$ conditioned on $A_N=a$ is asymptotically equivalent to a Markov chain described by the following transition matrix:

\[ \tilde M_k(x,dy)=M_k^{r_k}(x,dy)=\frac{1}{\zeta_k\,r_k(x)}\,M_k(x,dy)\,r_k(y). \tag{E8} \]

The stationary density of this driven process is the same as in the continuous-time case, namely, $\rho_k(x)=l_k(x)\,r_k(x)$, where $l_k(x)$ is the left eigenvector of $M_k$ associated with $\zeta_k$. Moreover, all our results about the reversibility of this density apply with minor changes.

ACKNOWLEDGMENTS
We would like to thank Patrick Cattiaux, Mike Evans, Rosemary J. Harris, and Vivien Lecomte for useful discussions on this paper. We are also grateful for the hospitality and support of the Laboratoire J. A. Dieudonné, the Kavli Institute for Theoretical Physics, and the Galileo Galilei Institute for Theoretical Physics, where parts of this work were developed and written. Further financial support was received from the ANR STOSYMAP (ANR-2011-BS01-015), the National Research Foundation of South Africa (CSUR 13090934303) and Stellenbosch University (project funding for new appointee).

[1] J. L. Doob, “Conditional Brownian motion and the boundary limits of harmonic functions,” Bull. Soc. Math. Fr., 431–458 (1957).
[2] J. L. Doob, Classical Potential Theory and Its Probabilistic Counterpart (Springer, New York, 1984).
[3] L. C. G. Rogers and D. Williams,
Diffusions, Markov Processes and Martingales (Cambridge University Press, Cambridge, 2000).
[4] F. Baudoin, “Conditioned stochastic differential equations: Theory, examples and application to finance,” Stoch. Proc. Appl., 109–145 (2002).
[5] D. Gasbarra, T. Sottinen, and E. Valkeila, “Gaussian bridges,” in Stochastic Analysis and Applications, Abel Symposia, Vol. 2, edited by F. Espen Benth, G. Di Nunno, T. Lindstrøm, B. Øksendal, and T. Zhang (Springer, Berlin, 2007) pp. 361–382.
[6] T. Sottinen and A. Yazigi, “Generalized Gaussian bridges,” Stoch. Proc. Appl., 3084–3105 (2014).
[7] E. Schrödinger, “Über die Umkehrung der Naturgesetze,” Sitzungsber. Preuss. Akad. Wiss. Phys.-Math. Kl., 144–153 (1931).
[8] E. Schrödinger, “Sur la théorie relativiste de l’électron et l’interprétation de la mécanique quantique,” Ann. Inst. Henri Poincaré, 269–319 (1932).
[9] B. Jamison, “The Markov processes of Schrödinger,” Z. Wahrsch. Verw. Gebiete, 323–331 (1975).
[10] J. C. Zambrini, “Stochastic mechanics according to Schrödinger,” Phys. Rev. A, 1532–1548 (1986).
[11] R. Aebi, Schrödinger Diffusion Processes (Birkhäuser, Basel, 1996).
[12] J. N. Darroch and E. Seneta, “On quasi-stationary distributions in absorbing discrete-time finite Markov chains,” J. Appl. Prob., 88–100 (1965).
[13] J. N. Darroch and E. Seneta, “On quasi-stationary distributions in absorbing continuous-time finite Markov chains,” J. Appl. Prob., 192–196 (1967).
[14] D. Villemonais, “Quasi-stationary distributions and population processes,” Prob. Surveys, 340–410 (2012).
[15] P. Collet, S. Martínez, and J. San Martín, Quasi-Stationary Distributions (Springer, New York, 2013).
[16] E. A. van Doorn and P. K. Pollett, “Quasi-stationary distributions for discrete-state models,” Eur. J. Oper. Res., 1–14 (2013).
[17] O. A. Vasicek, “A conditional law of large numbers,” Ann. Prob., 142–147 (1980).
[18] J. M. van Campenhout and T. M. Cover, “Maximum entropy and conditional probability,” IEEE Trans. Info. Th.
, 483–489 (1981).
[19] T. M. Cover and J. A. Thomas, Elements of Information Theory (John Wiley, New York, 1991).
[20] I. Csiszár, “Sanov property, generalized I-projection and a conditional limit theorem,” Ann. Prob., 768–793 (1984).
[21] A. Dembo and O. Zeitouni, Large Deviations Techniques and Applications, 2nd ed. (Springer, New York, 1998).
[22] P. Diaconis and D. Freedman, “A dozen de Finetti-style results in search of a theory,” Ann. Inst. Henri Poincaré B: Prob. Stat., 397–423 (1987).
[23] P. Diaconis and D. A. Freedman, “Conditional limit theorems for exponential families and finite versions of de Finetti’s theorem,” J. Theoret. Prob., 381–410 (1988).
[24] A. Dembo and O. Zeitouni, “Refinements of the Gibbs conditioning principle,” Prob. Th. Rel. Fields, 1–14 (1996).
[25] I. Csiszár, T. M. Cover, and B.-S. Choi, “Conditional limit theorems under Markov conditioning,” IEEE Trans. Info. Th., 788–801 (1987).
[26] V. S. Borkar, S. Juneja, and A. A. Kherani, “Performance analysis conditioned on rare events: An adaptive simulation scheme,” Commun. Info. Syst., 259–278 (2004).
[27] J. A. Bucklew, Introduction to Rare Event Simulation (Springer, New York, 2004).
[28] W. Feller,
An Introduction to Probability Theory and its Applications , Vol. II (Wiley, New York, 1970).[29] C. Giardina, J. Kurchan, and L. Peliti, “Direct evaluation of large-deviation functions,” Phys. Rev.Lett. , 120603 (2006).[30] V. Lecomte and J. Tailleur, “A numerical approach to large deviations in continuous time,” J. Stat.Mech. , P03004 (2007).[31] J. Tailleur and V. Lecomte, “Simulation of large deviation functions using population dynamics,” in Modeling and Simulation of New Materials: Proceedings of Modeling and Simulation of New Materials ,Vol. 1091, edited by J. Marro, P. L. Garrido, and P. I. Hurtado (AIP, Melville, NY, 2009) pp. 212–219.[32] P. Del Moral,
Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications (Springer, New York, 2004).[33] W. H. Fleming, “Exit probabilities and optimal stochastic control,” Appl. Math. Optim. , 329–346(1978).[34] W. H. Fleming, “Logarithmic transformations and stochastic control,” in Advances in Filtering andOptimal Stochastic Control , Lecture Notes in Control and Information Sciences, Vol. 42, edited by W. H. Fleming and L. G. Gorostiza (Springer, New York, 1982) pp. 131–141.[35] W. H. Fleming, “A stochastic control approach to some large deviations problems,” in
Recent Mathematical Methods in Dynamic Programming, Vol. 1119, edited by I. C. Dolcetto, W. H. Fleming, and T. Zolezzi (Springer, 1985) pp. 52–66.
[36] W. H. Fleming and S.-J. Sheu, “Stochastic variational formula for fundamental solutions of parabolic PDE,” Appl. Math. Optim., 193–204 (1985).
[37] M. Kac, “On distributions of certain Wiener functionals,” Trans. Am. Math. Soc., 1–13 (1949).
[38] D. W. Stroock and S. R. S. Varadhan, Multidimensional Diffusion Processes (Springer, New York, 1979).
[39] D. Revuz and M. Yor,
Continuous Martingales and Brownian Motion , 3rd ed. (Springer, Berlin, 1999).[40] W. H. Fleming and H. M. Soner,
Controlled Markov Processes and Viscosity Solutions, Stochastic Modelling and Applied Probability, Vol. 25 (Springer, New York, 2006).
[41] S.-J. Sheu, “Stochastic control and principal eigenvalue,” Stochastics, 191–211 (1984).
[42] W. H. Fleming, S.-J. Sheu, and H. M. Soner, “A remark on the large deviations of an ergodic Markov process,” Stochastics, 187–199 (1987).
[43] W. H. Fleming and S.-J. Sheu, “Asymptotics for the principal eigenvalue and eigenfunction of a nearly first-order operator with large potential,” Ann. Prob., 1953–1994 (1997).
[44] W. H. Fleming and W. M. McEneaney, “Risk sensitive optimal control and differential games,” in Stochastic Theory and Adaptive Control, Lecture Notes in Control and Information Sciences, Vol. 184, edited by T. E. Duncan and B. Pasik-Duncan (Springer, New York, 1992) pp. 185–197.
[45] W. Fleming and W. McEneaney, “Risk-sensitive control on an infinite time horizon,” SIAM J. Cont. Opt., 1881–1915 (1995).
[46] P. Dupuis and W. McEneaney, “Risk-sensitive and robust escape criteria,” SIAM J. Cont. Opt., 2021–2049 (1997).
[47] T. Nemoto and S.-I. Sasa, “Thermodynamic formula for the cumulant generating function of time-averaged current,” Phys. Rev. E, 061113 (2011).
[48] T. Nemoto and S.-I. Sasa, “Variational formula for experimental determination of high-order correlations of current fluctuations in driven systems,” Phys. Rev. E, 030105 (2011).
[49] T. Nemoto and S.-I. Sasa, “Computation of large deviation statistics via iterative measurement-and-feedback procedure,” Phys. Rev. Lett., 090602 (2014).
[50] H. Touchette, “The large deviation approach to statistical mechanics,” Phys. Rep., 1–69 (2009).
[51] M. I. Freidlin and A. D. Wentzell, Random Perturbations of Dynamical Systems, Grundlehren der Mathematischen Wissenschaften, Vol. 260 (Springer, New York, 1984).
[52] C. W. Gardiner,
Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences ,2nd ed., Springer Series in Synergetics, Vol. 13 (Springer, New York, 1985).[53] N. G. van Kampen,
Stochastic Processes in Physics and Chemistry (North-Holland, Amsterdam, 1992).[54] H. Risken,
The Fokker-Planck equation: Methods of solution and applications , 3rd ed. (Springer, Berlin,1996).[55] R. M. L. Evans, “Rules for transition rates in nonequilibrium steady states,” Phys. Rev. Lett. ,150601 (2004).[56] R. M. L. Evans, “Detailed balance has a counterpart in non-equilibrium steady states,” J. Phys. A:Math. Gen. , 293–313 (2005).[57] R. M. L. Evans, “Statistical physics of shear flow: A non-equilibrium problem,” Contemp. Phys. ,413–427 (2010).[58] C. Dellago, P. G. Bolhuis, and P. L. Geissler, “Transition path sampling,” Adv. Chem. Phys. ,1–78 (2003).[59] C. Dellago, P. G. Bolhuis, and P. L. Geissler, “Transition path sampling methods,” in ComputerSimulations in Condensed Matter Systems: From Materials to Chemical Biology Volume 1 , LectureNotes in Physics, Vol. 703, edited by M. Ferrario, G. Ciccotti, and K. Binder (Springer, New York,2006).[60] C. Dellago and P. Bolhuis, “Transition path sampling and other advanced simulation techniques forrare events,” in
Advanced Computer Simulation Approaches for Soft Matter Sciences III , Advances inPolymer Science, Vol. 221, edited by C. Holm and K. Kremer (Springer, Berlin, 2009) pp. 167–233.[61] E. Vanden-Eijnden, “Transition path theory,” in
Computer Simulations in Condensed Matter Systems:From Materials to Chemical Biology Volume 1 , Lecture Notes in Physics, Vol. 703, edited by M. Ferrario, G. Ciccotti, and K. Binder (Springer, 2006) pp. 453–493.[62] D. Chandler and J. P. Garrahan, “Dynamics on the way to forming glass: Bubbles in space-time,”Ann. Rev. Chem. Phys. , 191–217 (2010).[63] L. O. Hedges, R. L. Jack, J. P. Garrahan, and D. Chandler, “Dynamic order-disorder in atomisticmodels of structural glass formers,” Science , 1309–1313 (2009).[64] R. L. Jack and P. Sollich, “Large deviations and ensembles of trajectories in stochastic models,” Prog.Theoret. Phys. Suppl. , 304–317 (2010).[65] V. Lecomte, C. Appert-Rolland, and F. van Wijland, “Chaotic properties of systems with Markovdynamics,” Phys. Rev. Lett. , 010601 (2005).[66] V. Lecomte, C. Appert-Rolland, and F. van Wijland, “Thermodynamic formalism for systems withMarkov dynamics,” J. Stat. Phys. , 51–106 (2007).[67] J. P. Garrahan and I. Lesanovsky, “Thermodynamics of quantum jump trajectories,” Phys. Rev. Lett. , 160601 (2010).[68] J. P. Garrahan, A. D. Armour, and I. Lesanovsky, “Quantum trajectory phase transitions in themicromaser,” Phys. Rev. E , 021115 (2011).[69] C. Ates, B. Olmos, J. P. Garrahan, and I. Lesanovsky, “Dynamical phases and intermittency of thedissipative quantum ising model,” Phys. Rev. A , 043620 (2012).[70] S. Genway, J. P. Garrahan, I. Lesanovsky, and A. D. Armour, “Phase transitions in trajectories of asuperconducting single-electron transistor coupled to a resonator,” Phys. Rev. E , 051122 (2012).[71] J. M. Hickey, S. Genway, I. Lesanovsky, and J. P. Garrahan, “Thermodynamics of quadraturetrajectories in open quantum systems,” Phys. Rev. A , 063824 (2012).[72] R. Chetrite and H. Touchette, “Nonequilibrium microcanonical and canonical ensembles and theirequivalence,” Phys. Rev. Lett. , 120601 (2013).[73] I. Karatzas and S. Shreve, Methods of Mathematical Finance , Stochastic Modelling and AppliedProbability, Vol. 
39 (Springer, 1998).[74] H. C. Berg,
Random Walks in Biology (Princeton University Press, Princeton, 1993).[75] H. Spohn,
Large Scale Dynamics of Interacting Particles (Springer Verlag, Berlin, 1991).[76] C. Kipnis and C. Landim,
Scaling Limits of Interacting Particle Systems, Grundlehren der mathematischen Wissenschaften, Vol. 320 (Springer-Verlag, Berlin, 1999).
[77] T. M. Liggett,
Interacting Particle Systems (Springer, New York, 2004).[78] B. Derrida, “Non-equilibrium steady states: Fluctuations and large deviations of the density and ofthe current,” J. Stat. Mech. , P07023 (2007).[79] L. Bertini, A. De Sole, D. Gabrielli, G. Jona-Lasinio, and C. Landim, “Stochastic interacting particlesystems out of equilibrium,” J. Stat. Mech. , P07014 (2007).[80] K. Jacobs,
Stochastic Processes for Physicists: Understanding Noisy Systems (Cambridge UniversityPress, Cambridge, 2010).[81] P. L. Krapivsky, S. Redner, and E. Ben-Naim,
A Kinetic View of Statistical Physics (CambridgeUniversity Press, Cambridge, 2010).[82] E. Nelson,
Dynamical Theories of Brownian Motion (Princeton University Press, Princeton, 1967).[83] W. J. Anderson,
Continuous-Time Markov Chains: An Applications-Oriented Approach , SpringerSeries in Statistics (Springer-Verlag, 1991).[84] K. L. Chung and J. B. Walsh,
Markov Processes, Brownian Motion, and Time Symmetry , 2nd ed.(Springer, Berlin, 2005).[85] D. Applebaum,
Lévy Processes and Stochastic Calculus, 2nd ed. (Cambridge University Press, Cambridge, 2009).
[86] K. Sato,
Lévy Processes and Infinite Divisibility, Cambridge Studies in Advanced Mathematics (Cambridge University Press, Cambridge, 1999).
[87] L. Hörmander, “Hypoelliptic second order differential equations,” Acta Math., 147–171 (1967).
[88] P. Malliavin, “Stochastic calculus of variations and hypoelliptic operators,” in
Proc. Inter. Symp. Stoch. Diff. Equations, Kyoto 1976 (Wiley, New York, 1978) pp. 195–263.
[89] J. L. Lebowitz and H. Spohn, “A Gallavotti–Cohen-type symmetry in the large deviation functional for stochastic dynamics,” J. Stat. Phys., 333–365 (1999).
[90] C. Maes and K. Netočný, “Canonical structure of dynamical fluctuations in mesoscopic nonequilibrium steady states,” Europhys. Lett. (2008).
[91] M. Baiesi, C. Maes, and B. Wynants, “Fluctuations and response of nonequilibrium states,” Phys. Rev.
Entropy, Large Deviations, and Statistical Mechanics (Springer, New York, 1985).
[100] A. De La Fortelle, “Large deviation principle for Markov chains in continuous time,” Prob. Info. Trans., 120–139 (2001).
[101] L. Bertini, A. Faggionato, and D. Gabrielli, “Large deviations of the empirical flow for continuous time Markov chains,” (2012), arXiv:1210.2004.
[102] B. Roynette and M. Yor, Penalising Brownian Paths (Springer, New York, 2009).
[103] J. T. Lewis, C.-E. Pfister, and G. W. Sullivan, “The equivalence of ensembles for lattice systems: Some examples and a counterexample,” J. Stat. Phys., 397–419 (1994).
[104] J. T. Lewis, C.-E. Pfister, and W. G. Sullivan, “Entropy, concentration of probability and conditional limit theorem,” Markov Proc. Relat. Fields, 319–386 (1995).
[105] H. Touchette, “Equivalence and nonequivalence of ensembles: Thermodynamic, macrostate, and measure levels,” (2014), arXiv:1403.6608.
[106] C. R. MacCluer, “The many proofs and applications of Perron’s theorem,” SIAM Rev., 487–498 (2000).
[107] M. G. Krein and M. A. Rutman, “Linear operators leaving a cone in a Banach space,” Am. Math. Soc. Transl., 128 (1950).
[108] V. Lecomte, Thermodynamique des histoires et fluctuations hors d’équilibre, Ph.D. thesis, Université Paris VII (2007).
[109] R. Chetrite and S. Gupta, “Two refreshing views of fluctuation theorems through kinematics elements and exponential martingale,” J. Stat. Phys., 543–584 (2011), 10.1007/s10955-011-0184-0.
[110] H. Kunita, “Absolute continuity of Markov processes and generators,” Nagoya Math. J., 1–26 (1969).
[111] K. Itô and S. Watanabe, “Transformation of Markov processes by multiplicative functionals,” Ann. Inst. Fourier, 13–30 (1965).
[112] Z. Palmowski and T. Rolski, “A technique for exponential change of measure for Markov processes,” Bernoulli, 767–785 (2002).
[113] P. Diaconis and L. Miclo, “On characterizations of Metropolis type algorithms in continuous time,” ALEA Lat. Am. J. Probab. Math. Stat., 199–238 (2009).
[114] P.
A. Meyer and W. A. Zheng, “Construction de processus de Nelson réversibles,” in Séminaire de Probabilités XIX 1983/84, Lecture Notes in Mathematics, Vol. 1123, edited by J. Azéma and M. Yor (Springer Berlin Heidelberg, 1985) pp. 12–26.
[115] W. Bernard and H. B. Callen, “Irreversible thermodynamics of nonlinear processes and noise in driven systems,” Rev. Mod. Phys., 1017–1044 (1959).
[116] H. B. Callen and T. A. Welton, “Irreversibility and generalized noise,” Phys. Rev., 34–40 (1951).
[117] R. Kubo, “The fluctuation-dissipation theorem,” Rep. Prog. Phys., 255 (1966).
[118] E. Lippiello, F. Corberi, A. Sarracino, and M. Zannetti, “Nonlinear response and fluctuation-dissipation relations,” Phys. Rev. E, 041120 (2008).
[119] T. Speck and U. Seifert, “Restoring a fluctuation-dissipation theorem in a nonequilibrium steady state,” Europhys. Lett., 391 (2006).
[120] M. Baiesi, C. Maes, and B. Wynants, “Nonequilibrium linear response for Markov dynamics I: Jump processes and overdamped diffusions,” J. Stat. Phys., 1094–1116 (2009).
Lett., 2694–2697 (1995).
[128] G. Gallavotti and E. G. D. Cohen, “Dynamical ensembles in stationary states,” J. Stat. Phys., 931–970 (1995).
[129] F. J. Dyson, “A Brownian motion model for the eigenvalues of a random matrix,” J. Math. Phys., 1191–1198 (1962).
[130] D. J. Grabiner, “Brownian motion in a Weyl chamber, non-colliding particles, and random matrices,” Ann. Inst. Henri Poincaré B: Prob. Stat., 177–204 (1999).
[131] N. O’Connell, “Random matrices, non-colliding processes and queues,” in Séminaire de Probabilités XXXVI, Lecture Notes in Mathematics, Vol. 1801, edited by J. Azéma, M. Émery, M. Ledoux, and M. Yor (Springer, Berlin, 2003) pp. 165–182.
[132] H. Touchette, “Ensemble equivalence for general many-body systems,” Europhys. Lett., 50010 (2011).
[133] R. T. Rockafellar, Convex Analysis (Princeton University Press, Princeton, 1970).
[134] T. R. Gingrich, S. Vaikuntanathan, and P. L. Geissler, “Heterogeneity-induced large deviations in activity and (in some cases) entropy production,” (2014), arXiv:1406.3311.
[135] N. Merhav and Y. Kafri, “Bose–Einstein condensation in large deviations with applications to information systems,” J. Stat. Mech., P02011 (2010).
[136] R. J. Harris, V. Popkov, and G. M. Schütz, “Dynamics of instantaneous condensation in the ZRP conditioned on an atypical current,” Entropy, 5065–5083 (2013).
[137] J. Szavits-Nossan, M. R. Evans, and S. N. Majumdar, “Constraint-driven condensation in large fluctuations of linear statistics,” Phys. Rev. Lett., 020602 (2014).
[138] A. Baule and R. M. L. Evans, “Invariant quantities in shear flow,” Phys. Rev. Lett., 240601 (2008).
[139] A. Simha, R. M. L. Evans, and A. Baule, “Properties of a nonequilibrium heat bath,” Phys. Rev. E, 031117 (2008).
[140] A. Baule and R. M. L. Evans, “Nonequilibrium statistical mechanics of shear flow: invariant quantities and current relations,” J. Stat. Mech., P03030 (2010).
[141] V. Popkov, G. M. Schütz, and D. Simon, “ASEP on a ring conditioned on enhanced flux,” J.
Stat. Mech., P10007 (2010).
[142] V. Popkov and G. Schütz, “Transition probabilities and dynamic structure function in the ASEP conditioned on strong flux,” J. Stat. Phys., 627–639 (2011).
[143] R. L. Jack and P. Sollich, “Large deviations of the dynamical activity in the East model: Analysing structure in biased trajectories,” J. Phys. A: Math. Theor., 015003 (2014).
[144] M. Knežević and R. M. L. Evans, “Numerical comparison of a constrained path ensemble and a driven quasisteady state,” Phys. Rev. E, 012132 (2014).
[145] J. Chen, H. Li, and S. Jian, “Some limit theorems for absorbing Markov processes,” J. Phys. A: Math. Theor., 345003 (2012).
[146] J. Chen and X. Deng, “Large deviations and related problems for absorbing Markov chains,” Stoch. Proc. Appl., 2398–2418 (2013).
[147] M. Bauer and F. Cornu, “Affinity and fluctuations in a mesoscopic noria,” (2014), arXiv:1402.2422.
[148] H. D. Miller, “A convexity property in the theory of random variables defined on a finite Markov chain,” Ann. Math. Stat. 32 (1961).