Markov processes with product-form stationary distribution
Krzysztof Burdzy ∗ and David White
This research has been inspired by several papers on processes with inert drift [5, 4, 3, 1]. The model involves a "particle" X and an "inert drift" L, neither of which is a Markov process by itself, but the vector process (X, L) is Markov. It turns out that for some processes (X, L), the stationary measure has the product form; see [1]. The first goal of this note is to give an explicit characterization of all processes (X, L) with a finite state space for X and a product form stationary distribution; see Theorem 2.1.

The second, more philosophical, goal of this paper is to develop a simple tool that could help generate conjectures about stationary distributions for processes with continuous state space and inert drift. So far, the only paper containing a rigorous result about the stationary distribution for a process with continuous state space and inert drift, [1], was inspired by computer simulations. Examples presented in Section 3 lead to a variety of conjectures that would be hard to arrive at using pure intuition or computer simulations.

Let S = {1, 2, ..., N} for some integer N > 1 and fix d ≥ 1. We will define a Markov process (X(t), L(t)) on S × R^d as follows. We associate with each state j ∈ S a vector v_j ∈ R^d, 1 ≤ j ≤ N. Define L_j(t) = μ({s ∈ [0, t] : X(s) = j}), where μ is Lebesgue measure, and let L(t) = L(0) + Σ_{j∈S} v_j L_j(t). To make the "reinforcement" non-trivial, we assume that at least one of the v_j's is not 0. Since L will always move in the hyperplane spanned by the v_j's, we also assume that d = dim(span{v_1, ..., v_N}).

We also select non-negative functions a_ij(l), which define the Poisson rates of jumps from state i to j; the rates depend on l = L(t). We assume that the a_ij's are right-continuous with left limits. Formally speaking, the process (X, L) is defined by its generator A as follows,

  A f(j, l) = v_j · ∇_l f(j, l) + Σ_{i≠j} a_{ji}(l) [f(i, l) − f(j, l)],   j = 1, ..., N, l ∈ R^d,

for f : {1, ..., N} × R^d → R.

∗ Partially supported by NSF Grant DMS-0600206.
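The dynamics defined by the generator can be sketched concretely: between jumps, L grows linearly at rate v_{X(t)}, while X jumps from i to j with intensity a_ij(L(t)). The following first-order (Euler) simulation sketch illustrates this; the two-state rates a_12(l) = l⁺, a_21(l) = (−l)⁺ are an illustrative assumption, not taken from the text, and the scheme is an approximation, not an exact simulation of the jump process.

```python
import random

def simulate(v, rates, x0, l0, t_max, dt=1e-3, seed=0):
    """First-order (Euler) sketch of (X, L): between jumps L grows
    linearly, dL/dt = v[X(t)]; X jumps from i to j with probability
    approximately rates(i, j, l) * dt in each time step."""
    rng = random.Random(seed)
    x, l, t = x0, l0, 0.0
    path = [(t, x, l)]
    while t < t_max:
        u, acc = rng.random(), 0.0
        for j in range(len(v)):
            if j != x:
                acc += rates(x, j, l) * dt
                if u < acc:
                    x = j
                    break
        l += v[x] * dt          # inert drift accumulated in state x
        t += dt
        path.append((t, x, l))
    return path

# Toy two-state chain (illustrative rates, not from the text):
# v_1 = 1, v_2 = -1; a_12(l) = max(l, 0), a_21(l) = max(-l, 0).
toy_v = [1.0, -1.0]
def toy_rates(i, j, l):
    return max(l, 0.0) if (i, j) == (0, 1) else max(-l, 0.0)
```

Since |v_j| ≤ 1 in the toy example, every sampled path obeys the Lipschitz bound |L(t) − L(u)| ≤ |t − u|, which is the bound used repeatedly in the proof of Theorem 2.1 below.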
We assume that (X, L) is irreducible in the sense of Harris, i.e., for some open set U ⊂ R^d and some j ∈ S, for all (x, l) ∈ S × R^d there is some t > 0 with

  P((X(t), L(t)) ∈ {j} × U | X(0) = x, L(0) = l) > 0.

We are interested only in processes satisfying (14) below. Using that condition, it is easy to check Harris irreducibility for each of our models by a direct argument. A standard coupling argument shows that Harris irreducibility implies uniqueness of the stationary probability distribution (assuming existence of such).

The (formal) adjoint of A is given by

  A* g(j, l) = −v_j · ∇_l g(j, l) + Σ_{i≠j} [a_ij(l) g(i, l) − a_ji(l) g(j, l)],   j = 1, ..., N, l ∈ R^d.  (1)

We are interested in invariant measures of product form, so suppose that g(j, l) = p_j g(l), where Σ_{j∈S} p_j = 1 and ∫_{R^d} g(l) dl = 1. We may assume that p_j > 0 for all j; otherwise some points in S are never visited. Under these assumptions, (1) becomes

  A* g(j, l) = −p_j v_j · ∇ g(l) + Σ_{i≠j} [p_i a_ij(l) g(l) − p_j a_ji(l) g(l)],   j = 1, ..., N, l ∈ R^d.

Theorem 2.1.
Assume that for every i and j, the function l → a_ij(l) is continuous. A probability measure p_j g(l) dj dl is invariant for the process (X, L) if and only if

  −p_j v_j · ∇ g(l) + Σ_{i≠j} [p_i a_ij(l) g(l) − p_j a_ji(l) g(l)] = 0,   j = 1, ..., N, l ∈ R^d.  (2)

Proof.
Recall that the state space S for X is finite. Hence v* := sup_{j∈S} |v_j| < ∞. Fix arbitrary r, t* ∈ (0, ∞). It follows that

  sup_{i,j∈S, l∈B(0, r+2t*v*)} a_ij(l) = a* < ∞.

Note that we always have |L(t) − L(u)| ≤ v* |t − u|. Hence, if |L(0)| ≤ r + t*v* and s, t > 0, s + t ≤ t*, then |L(s + t)| ≤ r + 2t*v* and, therefore,

  sup_{j∈S, u≤s+t} a_{X(u),j}(L(u)) ≤ a* < ∞.

This implies that the probability of two or more jumps on the interval [s, s + t] is o(t). Assume that |l| ≤ r + t*v* and t ≤ t*. Then we have the following three estimates. First, for i ≠ j,

  P(X(t) = j | X(0) = i, L(0) = l) = a_ij(l) t + R^1_{i,j,l}(t),  (3)

where the remainder R^1_{i,j,l}(t) satisfies sup_{i,j∈S, l∈B(0, r+t*v*)} |R^1_{i,j,l}(t)| ≤ R(t) for some R(t) such that lim_{t→0} R(t)/t = 0. Let a_ii(l) = −Σ_{j≠i} a_ij(l). We have

  P(X(t) = i, L(t) = l + t v_i | X(0) = i, L(0) = l) = 1 + a_ii(l) t + R^2_{i,l}(t),  (4)

where the remainder R^2_{i,l}(t) satisfies sup_{i∈S, l∈B(0, r+t*v*)} |R^2_{i,l}(t)| ≤ R(t) for some R(t) such that lim_{t→0} R(t)/t = 0. Finally,

  P(X(t) = i, L(t) ≠ l + t v_i | X(0) = i, L(0) = l) = R^3_{i,l}(t),  (5)

where the remainder R^3_{i,l}(t) satisfies an analogous bound.

Now consider any C^1 function f(j, l) with support in S × B(0, r). Recall that |L(t) − L(u)| ≤ v* |t − u|. Hence, E_{i,l} f(X_t, L_t) = 0 for t ≤ t* and |l| ≥ r + v*t*.

Suppose that |l| ≤ r + v*t*, t ∈ (0, t*) and s ∈ (0, t* − t). Then

  E_{i,l} f(X_{t+s}, L_{t+s}) − E_{i,l} f(X_t, L_t)
  = Σ_{j∈S} ∫_{R^d} Σ_{k∈S} ∫_{R^d} f(k, l'') P(X(t+s) = k, L(t+s) ∈ dl'' | X(t) = j, L(t) = l') P(X(t) = j, L(t) ∈ dl' | X(0) = i, L(0) = l)
    − Σ_{j∈S} ∫_{R^d} f(j, l') P(X(t) = j, L(t) ∈ dl' | X(0) = i, L(0) = l).
We combine this formula with (3)-(5) to see that, writing P_t(j, dl') for P(X(t) = j, L(t) ∈ dl' | X(0) = i, L(0) = l),

  E_{i,l} f(X_{t+s}, L_{t+s}) − E_{i,l} f(X_t, L_t)
  = Σ_{j∈S} ∫_{R^d} Σ_{k∈S, k≠j} (f(k, l') + O(s))(a_jk(l') s + R^1_{j,k,l'}(s)) P_t(j, dl')
  + Σ_{j∈S} ∫_{R^d} f(j, l' + s v_j)(1 + a_jj(l') s + R^2_{j,l'}(s)) P_t(j, dl')
  + Σ_{j∈S} ∫_{R^d} (f(j, l') + O(s)) R^3_{j,l'}(s) P_t(j, dl')
  − Σ_{j∈S} ∫_{R^d} f(j, l') P_t(j, dl')

  = Σ_{j∈S} ∫_{R^d} (f(j, l' + s v_j) − f(j, l')) P_t(j, dl')
  + Σ_{j∈S} ∫_{R^d} ( f(j, l' + s v_j) a_jj(l') + Σ_{k∈S, k≠j} f(k, l') a_jk(l') ) s P_t(j, dl')
  + Σ_{j∈S} ∫_{R^d} ( Σ_{k∈S, k≠j} ( f(k, l') R^1_{j,k,l'}(s) + O(s)(a_jk(l') s + R^1_{j,k,l'}(s)) ) + f(j, l' + s v_j) R^2_{j,l'}(s) + (f(j, l') + O(s)) R^3_{j,l'}(s) ) P_t(j, dl').

We will analyze the limit

  lim_{s↓0} (1/s) ( E_{i,l} f(X_{t+s}, L_{t+s}) − E_{i,l} f(X_t, L_t) ).

Note that the last of the three terms above is o(s), uniformly in l', so

  lim_{s↓0} (1/s) Σ_{j∈S} ∫_{R^d} ( Σ_{k∈S, k≠j} ( f(k, l') R^1_{j,k,l'}(s) + O(s)(a_jk(l') s + R^1_{j,k,l'}(s)) ) + f(j, l' + s v_j) R^2_{j,l'}(s) + (f(j, l') + O(s)) R^3_{j,l'}(s) ) P_t(j, dl') = 0.

We also have

  lim_{s↓0} (1/s) Σ_{j∈S} ∫_{R^d} (f(j, l' + s v_j) − f(j, l')) P_t(j, dl') = Σ_{j∈S} ∫_{R^d} ∇_l f(j, l') · v_j P_t(j, dl'),

and

  lim_{s↓0} (1/s) Σ_{j∈S} ∫_{R^d} ( f(j, l' + s v_j) a_jj(l') + Σ_{k∈S, k≠j} f(k, l') a_jk(l') ) s P_t(j, dl')
  = Σ_{j∈S} ∫_{R^d} Σ_{k∈S} f(k, l') a_jk(l') P_t(j, dl').
It follows that

  d/dt E_{i,l} f(X_t, L_t) |_{t=t_0} = lim_{s↓0} (1/s)( E_{i,l} f(X_{t_0+s}, L_{t_0+s}) − E_{i,l} f(X_{t_0}, L_{t_0}) )
  = Σ_{j∈S} ∫_{R^d} ( ∇_l f(j, l') · v_j + Σ_{k∈S} f(k, l') a_jk(l') ) P(X(t_0) = j, L(t_0) ∈ dl' | X(0) = i, L(0) = l)  (6)
  = Σ_{j∈S} ∫_{R^d} A f(j, l') P(X(t_0) = j, L(t_0) ∈ dl' | X(0) = i, L(0) = l)
  = E_{i,l} A f(X_{t_0}, L_{t_0}),  (7)

for all i and |l| ≤ r + v*t*.

We will argue that

  P((X(t), L(t)) ∈ · | X(0) = i, L(0) = l) → P((X(t), L(t)) ∈ · | X(0) = i, L(0) = l_0),  (8)

weakly when l → l_0. Let T_k be the time of the k-th jump of X. We have

  P(T_1 > t | X(0) = i, L(0) = l) = exp( ∫_0^t a_ii(l + s v_i) ds ).

Since l → a_ii(l) is continuous, we conclude that

  P(T_1 > t | X(0) = i, L(0) = l) → P(T_1 > t | X(0) = i, L(0) = l_0),

as l → l_0. This and continuity of l → a_ij(l) for every j imply that

  P((X(T_1), L(T_1)) ∈ · | X(0) = i, L(0) = l) → P((X(T_1), L(T_1)) ∈ · | X(0) = i, L(0) = l_0),

weakly when l → l_0. By the strong Markov property applied at the T_k's, we obtain inductively that

  P((X(T_k), L(T_k)) ∈ · | X(0) = i, L(0) = l) → P((X(T_k), L(T_k)) ∈ · | X(0) = i, L(0) = l_0),

when l → l_0, for every k ≥
1. This easily implies (8), because the number of jumps is stochastically bounded on any finite interval.

Since (j, l) → ∇_l f(j, l) · v_j + Σ_{k∈S} f(k, l) a_jk(l) is a continuous function, it follows from (6) and (8) that

  l → d/dt E_{i,l} f(X_t, L_t) |_{t=t_0}

is continuous on the set |l| ≤ r + v*t*. Recall that E_{i,l} f(X_t, L_t) = 0 for t ≤ t* and |l| ≥ r + v*t*. Hence, l → d/dt E_{i,l} f(X_t, L_t) |_{t=t_0} is continuous for all t_0 ≤ t* and all values of i.

Fix some t ≤ t* and let u_t(j, l) = E_{j,l} f(X_t, L_t). We have just shown that for a fixed t ≤ t* and any j, the function l → u_t(j, l) is C^1. Hence we can apply (7) with f(j, l) = u_t(j, l) to obtain,

  d/dt E_{j,l} f(X_t, L_t) = lim_{s↓0} (1/s)( E_{j,l} f(X_{t+s}, L_{t+s}) − E_{j,l} f(X_t, L_t) )  (9)
  = lim_{s↓0} (1/s)( E_{j,l} u_t(X_s, L_s) − u_t(j, l) )
  = (A u_t)(j, l).

Since sup_{j,l} | ∇_l f(j, l) · v_j + Σ_{k∈S} f(k, l) a_jk(l) | < ∞, formula (6) shows that

  sup_{j,l,s≤t*} | d/ds E_{j,l} f(X_s, L_s) | < ∞.  (10)

Now assume that (2) is true and let π(dj, dl) = p_j g(l) dj dl. In view of (10), we can change the order of integration in the following calculation. For 0 ≤ t_1 < t_2 ≤ t*, using (9),

  E_π f(X(t_2), L(t_2)) − E_π f(X(t_1), L(t_1))  (11)
  = Σ_{j∈S} ∫_{R^d} E_{j,l} f(X_{t_2}, L_{t_2}) p_j g(l) dl − Σ_{j∈S} ∫_{R^d} E_{j,l} f(X_{t_1}, L_{t_1}) p_j g(l) dl
  = Σ_{j∈S} ∫_{R^d} ∫_{t_1}^{t_2} d/ds E_{j,l} f(X_s, L_s) ds p_j g(l) dl
  = ∫_{t_1}^{t_2} Σ_{j∈S} ∫_{R^d} d/ds E_{j,l} f(X_s, L_s) p_j g(l) dl ds
  = ∫_{t_1}^{t_2} Σ_{j∈S} ∫_{R^d} (A u_s)(j, l) p_j g(l) dl ds.

Let h(j, l) = p_j g(l). For a fixed j and s ≤ t*, the function u_s(j, l) vanishes outside a compact set, so we can use integration by parts to show that

  Σ_{j∈S} ∫_{R^d} (A u_s)(j, l) p_j g(l) dl = Σ_{j∈S} ∫_{R^d} u_s(j, l)(A* h)(j, l) dl.
(12)

We combine this with the previous formula and the assumption that A* h ≡ 0 to see that

  E_π f(X(t_2), L(t_2)) − E_π f(X(t_1), L(t_1)) = ∫_{t_1}^{t_2} Σ_{j∈S} ∫_{R^d} u_s(j, l)(A* h)(j, l) dl ds = 0.

It follows that t → E_π f(X(t), L(t)) is constant for every C^1 function f(j, l) with compact support. This proves that the distributions of (X(t_1), L(t_1)) and (X(t_2), L(t_2)) are identical under π, for all 0 ≤ t_1 < t_2 ≤ t*.

Conversely, assume that π(dj, dl) = p_j g(l) dj dl is invariant. Then the left hand side of (11) is zero for all 0 ≤ t_1 < t_2 ≤ t*. This implies that

  Σ_{j∈S} ∫_{R^d} (A u_s)(j, l) p_j g(l) dl = 0

for a set of s that is dense in [0, ∞). By (12),

  Σ_{j∈S} ∫_{R^d} u_s(j, l)(A* h)(j, l) dl = 0  (13)

for a set of s that is dense in [0, ∞). Note that lim_{s↓0} u_s(j, l) = f(j, l). Hence, the collection of C^1 functions u_s(j, l), obtained by taking arbitrary C^1 functions f(j, l) with compact support and positive reals s dense in [0, ∞), is dense in the family of C^1 functions with compact support. This and (13) imply that A* h ≡ 0, that is, (2) holds.

Corollary 2.2.
If a probability measure p_j g(l) dj dl is invariant for the process (X, L) then

  Σ_{j∈S} p_j v_j = 0.  (14)

Proof. Summing (2) over j, we obtain

  Σ_{j∈S} −p_j v_j · ∇ g(l) = 0,  (15)

for all l. Since g is integrable over R^d, it is standard to show that there exist l_1, l_2, ..., l_d such that the vectors ∇g(l_1), ..., ∇g(l_d) span R^d. Applying (15) to l_1, l_2, ..., l_d, we obtain (14).

It will be convenient to use the following notation,

  b_ij(l) = p_i a_ij(l) − p_j a_ji(l).  (16)

Note that b_ij = −b_ji.

Corollary 2.3.
A probability measure p_j g(l) dj dl is invariant for the process (X, L) and g(l) is the Gaussian density

  g(l) = (2π)^{−d/2} exp(−|l|²/2),  (17)

if and only if the following equivalent conditions hold,

  p_j v_j · l + Σ_{i≠j} [p_i a_ij(l) − p_j a_ji(l)] = 0,   j = 1, ..., N, l ∈ R^d,  (18)

  p_j v_j · l + Σ_{i≠j} b_ij(l) = 0,   j = 1, ..., N, l ∈ R^d.  (19)

Proof. If g(l) is the Gaussian density (17) then ∇g(l) = −l g(l) and (2) is equivalent to (18). Conversely, if (2) and (18) are satisfied then ∇g(l) = −l g(l), so g(l) must have the form (17).

In the rest of the paper we will consider only processes satisfying (18)-(19).

Example 2.4.
We now present some choices for the a_ij's. Recall the notation x⁺ = max(x, 0) and x⁻ = −min(x, 0), so that x⁺ − x⁻ = x. Given v_j's, p_j's and b_ij's which satisfy (19) and the condition b_ij = −b_ji, we may take

  a_ij(l) = (b_ij(l))⁺ / p_i.  (20)

Then a_ji(l) = (b_ji(l))⁺/p_j = (−b_ij(l))⁺/p_j = (b_ij(l))⁻/p_j, so

  p_i a_ij(l) − p_j a_ji(l) = (b_ij(l))⁺ − (b_ij(l))⁻ = b_ij(l),

as desired.

The above is a special case, in a sense, of the following. Suppose that p_j = p_i for all i and j. Assume that the v_j's and b_ij's satisfy (19) and the condition b_ij = −b_ji. Fix some c > 0 and let

  a_ij(l) = b_ij(l) exp(c b_ij(l)) / ( exp(c b_ij(l)) − exp(−c b_ij(l)) ).  (21)

It is elementary to check that with this definition, (16) is satisfied for all i and j, because b_ij(l) = −b_ji(l). The formula (21) arose naturally in [5]. Note that (20) (with all p_i's equal) is the limit of (21) as c → ∞.

This section contains examples of processes (
X, L) with finite state space for X, and conjectures concerned with processes with continuous state space. There are no proofs in this section.

First we will consider processes that resemble diffusions with reflection. In these models, the "inert drift" is accumulated only at the "boundary" of the domain.

We will now assume that the elements of S are points in a Euclidean space R^n with n ≤ N. We denote them S = {x_1, x_2, ..., x_N}; in other words, by abuse of notation, we switch from j to x_j. We also take v_j ∈ R^n, i.e., d = n. Moreover, we limit ourselves to functions b_ij(l) of the form b_ij · l for some vector b_ij ∈ R^n. Writing every b_ij with i > j, using b_ij = −b_ji, (19) becomes

  b_21 + b_31 + ... + b_N1 = −p_1 v_1,
  −b_21 + b_32 + ... + b_N2 = −p_2 v_2,  (22)
  ...
  −b_N1 − b_N2 − ... − b_{N(N−1)} = −p_N v_N.

Consider any orthogonal transformation Λ : R^n → R^n. If {b_ij, v_j, p_j} satisfy (22) then so do {Λ b_ij, Λ v_j, p_j}. Suppose that the a_ij(l) have the form a_ij · l for some a_ij ∈ R^n. If {a_ij, v_j, p_j} satisfy (18) then so do {Λ a_ij, Λ v_j, p_j}. Moreover, the process with parameters {a_ij, v_j} has the same transition probabilities as the one with parameters {Λ a_ij, Λ v_j}.

Example 3.1.
Our first example is a reflected random walk on the interval [0, 1]. Let x_j = (j − 1)/(N − 1) for j = 1, ..., N. We will construct a process with all p_j's equal to each other, i.e., p_j = 1/N. We will take l ∈ R, v_1 = α and v_N = −α, for some α = α(N) > 0, and all other v_j = 0, so that the "inert drift" L changes only at the endpoints of the interval. We also allow jumps only between adjacent points, so b_ij = 0 for |i − j| > 1. Then (22) yields

  b_21 = −α/N,
  −b_21 + b_32 = 0,
  ...
  −b_{N(N−1)} = α/N.

Solving this, we obtain b_{i(i+1)} = α/N for all i.

We would like to find a family of semi-discrete models indexed by N that converges to a continuous process with product-form stationary distribution as N → ∞. For 1 ≤ i < N, we set a_{i(i+1)}(l) = A_R(l, N) and a_{(i+1)i}(l) = A_L(l, N). We would like the random walk to have variance of order 1 at time 1, for large N, so we need

  A_R + A_L = N².  (23)

Since b_{i(i+1)} = α/N for all i, A_R and A_L have to satisfy

  A_R − A_L = α l.  (24)

When l is of order 1, we would like to have drift of order 1 at time 1, so we take α = N. Then (24) becomes

  A_R − A_L = N l.  (25)

Solving (23) and (25) gives

  A_R = (N² + N l)/2,   A_L = (N² − N l)/2.

Unfortunately, A_R and A_L given by the above formulas can take negative values; this is not allowed because the a_ij's have to be non-negative. However, for every N, the stationary distribution of L is standard normal, so l typically takes values of order 1. We are interested in large N, so, intuitively speaking, A_R and A_L are not likely to take negative values. To make this heuristic rigorous, we modify the formulas for A_R and A_L as follows,

  A_R = ((N² + N l)/2) ∨ 0 ∨ (N l),   A_L = ((N² − N l)/2) ∨ 0 ∨ (−N l).  (26)

Note that (26) agrees with the previous formulas for |l| ≤ N, and that it preserves the relation A_R − A_L = N l for all l, so (19) continues to hold.

Let P_N denote the distribution of (X, L) with the above parameters. We conjecture that as N → ∞, the P_N converge to the distribution of reflected Brownian motion on [0, 1] with inert drift, as defined in [5, 1]. The stationary distribution for this continuous time process is the product of the uniform measure on [0, 1] and the standard normal distribution; see [1].
Example 3.2.
This example is a semi-discrete approximation to reflected Brownian motion in a bounded Euclidean subdomain of R^n, with inert drift. In this example we proceed in the reversed order, starting with the b_ij's and a_ij's.

Consider an open bounded connected set D ⊂ R^n. Let K be a (large) integer and let D_K = (Z^n/K) ∩ D, i.e., D_K is the subset of the square lattice with mesh 1/K that is inside D. We assume that D_K is connected, i.e., any two vertices in D_K are connected by a path in D_K consisting of edges of length 1/K. We take S = D_K and l ∈ R^n.

We will consider nearest neighbor random walk, i.e., we will take a_ij(l) = 0 for |x_i − x_j| > 1/K. In analogy to (26), we define, for |x_i − x_j| = 1/K,

  a_ij(l) = ((K² + K²(x_j − x_i) · l)/2) ∨ 0 ∨ (K²(x_j − x_i) · l).  (27)

Then b_ij(l) = (K²/|D_K|)(x_j − x_i) · l. Let us call a point in S = D_K an interior point if it has 2n neighbors in D_K. We now define the v_j's using (22) with p_j = 1/|D_K|. For all interior points x_j, the vector v_j is 0, by symmetry. For all boundary (that is, non-interior) points x_j, the vector v_j is not 0.

Fix D ⊂ R^n and consider large K. Let P_K denote the distribution of (X, L) constructed in this example. We conjecture that as K → ∞, the P_K converge to the distribution of normally reflected Brownian motion in D with inert drift, as defined in [5, 1]. If D is C², then it is known that the stationary distribution for this continuous time process is the product of the uniform measure in D and the standard Gaussian distribution; see [1].

The next two examples are discrete counterparts of processes with continuous state space and smooth inert drift. The setting is similar to that in Example 3.2. We consider an open bounded connected set D ⊂ R^n. Let K be a (large) integer and let D_K = (Z^n/K) ∩ D, i.e., D_K is the subset of the square lattice with mesh 1/K that is inside D. We assume that D_K is connected, i.e., any two vertices in D_K are connected by a path in D_K consisting of edges of length 1/K.
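Returning to Example 3.2, the drift vectors can be computed explicitly on a test domain. The sketch below (the function `disc_drifts` is ours) takes D to be the open unit disc and uses v_j = K² Σ_{i~j} (x_i − x_j), one consistent reading of (22) with the b_ij's above: interior points get v_j = 0 by symmetry, boundary points get an inward-pointing drift, and Σ_j p_j v_j = 0, as Corollary 2.2 requires.

```python
def disc_drifts(K):
    """Example 3.2 with D = open unit disc: D_K = (Z^2/K) ∩ D.
    Returns a dict mapping lattice points (a, b) (coordinates in units
    of 1/K) to v_j = K^2 * sum of (x_i - x_j) over neighbours in D_K.
    Each component equals K times a sum of integer coordinate
    differences, so the values are exact integers."""
    pts = {(a, b) for a in range(-K, K + 1) for b in range(-K, K + 1)
           if a * a + b * b < K * K}
    v = {}
    for (a, b) in pts:
        nbrs = [(a + da, b + db)
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (a + da, b + db) in pts]
        v[(a, b)] = (K * sum(n[0] - a for n in nbrs),
                     K * sum(n[1] - b for n in nbrs))
    return v
```

For K = 6 the origin is interior, so its drift vanishes, while a point such as (5, 3)/K near the boundary is missing its two outward neighbours and receives the inward drift (−6, −6).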
We take S = D_K and l ∈ R^n.

Example 3.3.
This example is concerned with a situation when the stationary distribution has the form p_j g(l) where the p_j's are not necessarily equal. We start with a C² "potential" V : D → R. We will write V_j instead of V(x_j). Let p_j = c exp(−V_j), where c is the normalizing constant. We need an auxiliary function

  d_ij = 2(p_i − p_j) / ( p_i(V_j − V_i) − p_j(V_i − V_j) ).

Note that d_ij = d_ji, and for a fixed i we have d_ij → 1 as K → ∞ and |x_i − x_j| = 1/K. Let a_ij(l) = 0 for |x_i − x_j| > 1/K, and for |x_i − x_j| = 1/K, let

  ã_ij(l) = K² ( 1 + (1/2)( d_ij (V_i − V_j) + (x_j − x_i) · l ) ).

We set

  a_ij(l) = ã_ij(l) if ã_ij(l) ∧ ã_ji(l) ≥ 0, and a_ij(l) = (1/p_i)( (K²/2)(p_i + p_j)(x_j − x_i) · l )⁺ otherwise.  (28)

If ã_ij(l) ∧ ã_ji(l) ≥ 0 then

  b_ij(l) = p_i a_ij(l) − p_j a_ji(l)
  = K² ( (p_i − p_j) + (1/2) d_ij ( p_i(V_i − V_j) − p_j(V_j − V_i) ) + (1/2)( p_i(x_j − x_i) − p_j(x_i − x_j) ) · l )
  = K² ( (p_i − p_j) − (p_i − p_j) + (1/2)(p_i + p_j)(x_j − x_i) · l )
  = (K²/2)(p_i + p_j)(x_j − x_i) · l.

It follows from (28) and the computation in Example 2.4 that the above formula holds also if ã_ij(l) ∧ ã_ji(l) < 0. Consider an interior point x_j. For (19) to be satisfied, we have to take

  v_j = −(1/p_j) Σ_{|x_i − x_j| = 1/K} (K²/2)(p_i + p_j)(x_j − x_i).

For large K, series expansion shows that v_j ≈ −∇V(x_j).

Fix D ⊂ R^n and consider large K. Let P_K denote the distribution of (X, L) constructed in this example. We recall the following SDE from [1],

  dY_t = −∇V(Y_t) dt + S_t dt + dB_t,
  dS_t = −∇V(Y_t) dt,

where B is standard n-dimensional Brownian motion and V is as above. Let P* denote the distribution of (Y, S). We conjecture that as K → ∞, the P_K converge to P*. Under mild assumptions on V, it is known that the stationary distribution for (Y, S) is the product of the measure exp(−V(x)) dx and the standard Gaussian distribution; see [1].

Example 3.4.
We again consider the situation when all p_j's are equal, i.e., p_j = 1/N, where N = |D_K| is the number of lattice points. Consider a C² function V : D → R. We let a_ij(l) = 0 for |x_i − x_j| > 1/K. If |x_i − x_j| = 1/K, we let

  ã_ij(l) = K² ( 1 + (1/4)(V_i + V_j)(x_i − x_j) · l ).

We set

  a_ij(l) = ã_ij(l) if ã_ij(l) ∧ ã_ji(l) ≥ 0, and a_ij(l) = ( (K²/2)(V_i + V_j)(x_i − x_j) · l )⁺ otherwise.  (29)

Then b_ij(l) = (K²/(2N))(V_i + V_j)(x_i − x_j) · l and

  v_j = (K²/2) Σ_{|x_i − x_j| = 1/K} (V_i + V_j)(x_j − x_i).

For large K, we have v_j ≈ −∇V(x_j).

Fix D ⊂ R^n and consider large K. Let P_K denote the distribution of (X, L) constructed in this example. Consider the following SDE,

  dY_t = −V(Y_t) S_t dt + dB_t,
  dS_t = −∇V(Y_t) dt,

where B is standard n-dimensional Brownian motion and V is as above. Let P* denote the distribution of (Y, S). We conjecture that as K → ∞, the P_K converge to P*, and that the stationary distribution for (Y, S) is the product of the uniform measure on D and the standard Gaussian distribution.

The next example and conjecture are devoted to situations where the inert drift is related to the curvature of the state space, in a suitable sense.

Example 3.5.
In this example, we will identify R² and C. The imaginary unit will be denoted by i, as usual. Let S consist of N points on a circle with radius r > 0,

  x_j = r exp(2jπi/N),   j = 1, ..., N.

We assume that the p_j's are all equal to each other, so p_j = 1/N. For any pair of adjacent points x_j and x_k, we let

  ã_jk(l) = N² ( 1 + (1/2)(x_k − x_j) · l ),
  a_jk(l) = ã_jk(l) if ã_jk(l) ∧ ã_kj(l) ≥ 0, and a_jk(l) = ( N²(x_k − x_j) · l )⁺ otherwise,

with a_jk(l) = 0 for all non-adjacent pairs. Then b_{j(j+1)}(l) = N(x_{j+1} − x_j) · l, and by (19) we have

  v_j = N²(x_{j−1} − x_j) + N²(x_{j+1} − x_j) = 2N²(cos(2π/N) − 1) x_j.

Note that v_j → −4π² x_j when N → ∞.

Let P_N be the distribution of (X, L) constructed above. Let C be the circle with radius r > 0 centered at the origin and let T_y be the projection of R² onto the tangent line to C at y ∈ C. Consider the following SDE,

  dY_t = T_{Y_t}(S_t) dt + dB_t,
  dS_t = −4π² Y_t dt,

where Y takes values in C and B is Brownian motion on this circle. Let P* be the distribution of (Y, S). We conjecture that as N → ∞, the P_N converge to P*, and that the stationary distribution for (Y, S) is the product of the uniform measure on the circle and the standard Gaussian distribution.
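The constant −4π² in the conjectured equation for S comes from the scalar coefficient in v_j, and the convergence is easy to check numerically (the helper name `circle_coeff` is ours):

```python
import math

def circle_coeff(N):
    """Example 3.5: v_j = 2 N^2 (cos(2*pi/N) - 1) x_j.  The Taylor
    expansion cos(h) - 1 = -h^2/2 + O(h^4), with h = 2*pi/N, gives a
    limiting coefficient of -4*pi^2 as N -> infinity."""
    return 2 * N**2 * (math.cos(2 * math.pi / N) - 1)
```

The error behaves like O(1/N²), consistent with the next-order term of the cosine expansion.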
Conjecture 3.6.
We propose a generalization of the conjecture stated in the previous example. We could start with an explicit discrete approximation, just like in the other examples discussed so far. The notation would be complicated and the whole procedure would not be illuminating, so we skip the approximation and discuss only the continuous model.

Let S ⊂ R^n be a smooth (n−1)-dimensional surface. Let T_y be the projection of R^n onto the tangent space to S at y ∈ S, let n(y) be the inward normal to S at y ∈ S, and let ρ(y) be the mean curvature at y ∈ S. Consider the following SDE,

  dY_t = T_{Y_t}(S_t) dt + dB_t,
  dS_t = c_n ρ(Y_t) n(Y_t) dt,

where Y takes values in S and B is Brownian motion on this surface. We conjecture that for some c_n depending only on the dimension n, the stationary distribution for (Y, S) exists, is unique, and is the product of the uniform measure on S and the standard Gaussian distribution.

We end with examples of processes that are discrete approximations of continuous-space processes with jumps. It is not hard to construct examples of discrete-space processes that converge in distribution to continuous-space processes with jumps. Stable processes are a popular family of processes with jumps. These and similar examples of processes with jumps allow for jumps of arbitrary size, and this does not mesh well with our model because we assume a finite state space for X. Jump processes confined to a bounded domain have been defined (see, e.g., [2]) but their structure is not very simple. For these technical reasons, we will present approximations to processes similar to the stable process wrapped around a circle.

In both examples, we will identify R² and C. Let S consist of N points on the unit circle D,

  x_j = exp(2jπi/N),   j = 1, ..., N.

We assume that the p_j's are all equal to each other, hence p_j = 1/N. In these examples, L takes values in R, not R².

Example 3.7. Consider a C² function V : D → R. We write V_j = V(x_j). We define A(j, k) = 1 if x_j and x_k are adjacent on the unit circle, and A(j, k) = 0 otherwise.
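The jump rates of this example involve the wrapped series Σ_{n∈Z} |(k − j) + nN|^{−1−α}. A truncated numerical sketch (the helper name and the truncation level are arbitrary choices of ours, not from the text) confirms the properties used implicitly below: the weight depends only on (k − j) mod N, is symmetric in ±(k − j), and decreases with the distance along the circle.

```python
def wrapped_weight(m, N, alpha, terms=2000):
    """Truncated wrapped-stable weight  sum over n in [-terms, terms]
    of |m + n*N|^(-1-alpha).  Requires m != 0 mod N, so that no term
    of the series is singular; 'terms' is an arbitrary truncation."""
    return sum(abs(m + n * N) ** (-1.0 - alpha)
               for n in range(-terms, terms + 1))
```

Because the summand depends on m only through the residues m + nN, the truncated sums for m and m + N differ only by boundary terms of size about (terms * N)^(-1-alpha), which is negligible for moderate truncation levels.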
For any pair of points x_j and x_k, j ≠ k, not necessarily adjacent, we let

  ã_jk(l) = (N²/2)(V_k − V_j) A(j, k) l + N^α Σ_{n∈Z} |(k − j) + nN|^{−1−α},

where α ∈ (0, 2), and

  a_jk(l) = ã_jk(l) if ã_jk(l) ∧ ã_kj(l) ≥ 0, and a_jk(l) = ( N²(V_k − V_j) A(j, k) l )⁺ otherwise.

Then b_jk(l) = N(V_k − V_j) A(j, k) l and by (19) we have

  v_k = −N² Σ_{j : A(k,j)=1} (V_k − V_j).

Note that v_k → ΔV(x) = V″(x) when N → ∞ and x_k → x.

Let P_N be the distribution of (X, L) constructed above. Let W(x) = V(e^{ix}) and let (Z, S) be a Markov process with state space R × R and the following transition mechanism. The component Z is a jump process with drift ∇W(Z)S = W′(Z)S; the jump density for the process Z is Σ_{n∈Z} |(x − y) + 2nπ|^{−1−α} for a jump from y to x. We let S_t = ∫_0^t ΔW(Z_s) ds. Let Y_t = exp(iZ_t) and let P* be the distribution of (Y, S). We conjecture that P_N → P* as N → ∞ and that the process (Y, S) has a stationary distribution which is the product of the uniform measure on D and the standard normal distribution. The process (Y, S) is a "stable process with index α, with inert drift, wrapped on the unit circle."

Example 3.8.
Consider a continuous function V : D → R with ∫_D V(x) dx = 0. Recall the notation V_j = V(x_j). For any pair of points x_j and x_k, j ≠ k, not necessarily adjacent, we let

  ã_jk(l) = (1/(2N))(V_k − V_j) l + N^α Σ_{n∈Z} |(k − j) + nN|^{−1−α},

where α ∈ (0, 2), and

  a_jk(l) = ã_jk(l) if ã_jk(l) ∧ ã_kj(l) ≥ 0, and a_jk(l) = ( (1/N)(V_k − V_j) l )⁺ otherwise.

Then b_jk(l) = (1/N²)(V_k − V_j) l and by (19) we have

  v_k = (1/N) Σ_{1≤j≤N, j≠k} (V_j − V_k).

If x_k → y when N → ∞ then v_k → ∫_D V(x) dx − V(e^{iy}) = −V(e^{iy}).

Let P_N be the distribution of (X, L) constructed above. Let W(x) = V(e^{ix}) and let (Z, S) be a Markov process with state space R × R and the following transition mechanism. Given {Z_t = y, S_t = s}, the component Z jumps with density

  f(x) = (1/2)(W(x) − W(y)) s + Σ_{n∈Z} |(x − y) + 2nπ|^{−1−α}.

We let S_t = −∫_0^t W(Z_s) ds. Let Y_t = exp(iZ_t) and let P* be the distribution of (Y, S). We conjecture that P_N → P* as N → ∞ and that the process (Y, S) has a stationary distribution which is the product of the uniform measure on D and the standard normal distribution.

We are grateful to Zhenqing Chen and Tadeusz Kulczycki for very helpful suggestions.
References

[1] R. Bass, K. Burdzy, Z.-Q. Chen and M. Hairer, Stationary distributions for diffusions with inert drift. Probab. Theory Rel. Fields (to appear).

[2] K. Bogdan, K. Burdzy and Z.-Q. Chen, Censored stable processes. Probab. Theory Rel. Fields (2003) 89–152.

[3] K. Burdzy, R. Hołyst and L. Pruski, Brownian motion with inert drift, but without flux: a model. Physica A (2007) 278–284.

[4] K. Burdzy and D. White, A Gaussian oscillator. Electron. Comm. Probab. (2004) paper 10, pp. 92–95.

[5] D. White, Processes with inert drift. Electron. J. Probab. (2007) paper 55, pp. 1509–1546.

Krzysztof Burdzy:
Department of Mathematics, University of Washington, Seattle, WA 98195, USA.
Email: [email protected]
David White:
Department of Mathematics, Belmont University, Nashville, TN 37212, USA.
Email: [email protected]