A stochastic differential equation with a sticky point

Richard F. Bass

March 12, 2014
Abstract

We consider a degenerate stochastic differential equation that has a sticky point in the Markov process sense. We prove that weak existence and weak uniqueness hold, but that pathwise uniqueness does not hold nor does a strong solution exist.
Subject Classification: Primary 60H10; Secondary 60J60, 60J65
1 Introduction

The one-dimensional stochastic differential equation
\[
dX_t = \sigma(X_t)\,dW_t \qquad (1.1)
\]
has been the subject of intensive study for well over half a century. What can one say about pathwise uniqueness when $\sigma$ is allowed to be zero at certain points? Of course, a large amount is known, but there are many unanswered questions remaining.

Consider the case where $\sigma(x) = |x|^\alpha$ for $\alpha \in (0,1)$. If $\alpha \ge 1/2$, it is known that pathwise uniqueness holds by the Yamada-Watanabe criterion (see, e.g., [6, Theorem 24.4]), while if $\alpha < 1/2$, it is known that there are at least two solutions: the zero solution and one that can be constructed by a non-trivial time change of Brownian motion. However, that is not the end of the story. In [7] it was shown that there is in fact pathwise uniqueness when $\alpha < 1/2$ within the class of processes whose speed measure $m$ is given by
\[
m(dy) = 1_{(y \ne 0)}\,|y|^{-2\alpha}\,dy + \gamma\,\delta_0(dy),
\]
where $\gamma \in [0,\infty]$ and $\delta_0$ is point mass at 0. When $\gamma = \infty$, we get the 0 solution, or more precisely, the solution that stays at 0 once it hits 0. If we set $\gamma = 0$, we get the situation considered in [7], where the amount of time spent at 0 has Lebesgue measure zero, and pathwise uniqueness holds among such processes.

In this paper we study an even simpler equation:
\[
dX_t = 1_{(X_t \ne 0)}\,dW_t, \qquad X_0 = 0, \qquad (1.2)
\]
where $W$ is a one-dimensional Brownian motion. One solution is $X_t = W_t$, since Brownian motion spends zero time at 0. Another is the identically 0 solution. We take $\gamma \in (0,\infty)$ and consider the class of solutions to (1.2) which spend a positive amount of time at 0, with the amount of time parameterized by $\gamma$. We give a precise description of what we mean by this in Section 3.

Representing diffusions on the line as the solutions to stochastic differential equations has a long history, going back to Itô in the 1940s, and this paper is a small step in that program. For this reason we characterize our solutions in terms of occupation times determined by a speed measure. Other formulations that are purely in terms of stochastic calculus are possible; see the system (1.5)-(1.6) below.

We start by proving weak existence of solutions to (1.2) for each $\gamma \in (0,\infty)$. We in fact consider a much more general situation. We let $m$ be any measure that gives finite positive mass to each open interval and define the notion of continuous local martingales with speed measure $m$. We prove weak uniqueness, or equivalently, uniqueness in law, among continuous local martingales with speed measure $m$.
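To make the role of $\gamma$ concrete, here is a toy simulation (ours, not from the paper): a random walk on the grid $\delta\mathbb{Z}$ in which a step started at 0 takes the extra time $\gamma\delta$ suggested by the speed measure $m(dy) = dy + \gamma\,\delta_0(dy)$, while every other step takes time $\delta^2$. The function names and parameter choices are our own.

```python
import random

def sticky_walk(gamma, delta, horizon, seed=0):
    """Random walk on the grid delta*Z mimicking a sticky point at 0.

    A step started away from 0 takes time delta**2 (the Brownian exit
    time of an interval of radius delta); a step started at 0 takes
    delta**2 + gamma*delta, the extra gamma*delta being the slowdown
    encoded by the atom gamma*delta_0 of the speed measure.
    Returns (total time elapsed, time attributable to sitting at 0).
    """
    rng = random.Random(seed)
    k = 0            # position in units of delta (integer, so exact)
    t = 0.0
    t_at_zero = 0.0
    while t < horizon:
        if k == 0:
            t_at_zero += gamma * delta
            t += delta**2 + gamma * delta
        else:
            t += delta**2
        k += 1 if rng.random() < 0.5 else -1
    return t, t_at_zero

if __name__ == "__main__":
    t, t0 = sticky_walk(gamma=1.0, delta=0.01, horizon=10.0)
    # unlike Brownian motion, a positive fraction of time is spent at 0
    print(t0 / t)
```

As $\delta \downarrow 0$ one expects the walk to converge weakly to the sticky solution, with the fraction of time at 0 stabilizing at a positive value depending on $\gamma$.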
The fact that we have uniqueness in law not only within the class of strong Markov processes but also within the class of continuous local martingales with a given speed measure may be of independent interest.

We then restrict our attention to (1.2) and look at the class of continuous martingales that solve (1.2) and at the same time have speed measure $m$, where now
\[
m(dy) = 1_{(y \ne 0)}\,dy + \gamma\,\delta_0(dy) \qquad (1.3)
\]
with $\gamma \in (0,\infty)$.

Even when we fix $\gamma$ and restrict attention to solutions of (1.2) that have speed measure $m$ given by (1.3), pathwise uniqueness does not hold. The proof of this fact is the main result of this paper. The reader familiar with excursions will recognize some ideas from that theory in the proof.

Finally, we prove that for each $\gamma \in (0,\infty)$ no strong solution to (1.2) among the class of continuous martingales with speed measure $m$ given by (1.3) exists. Thus, given $W$, one cannot find a continuous martingale $X$ with speed measure $m$ satisfying (1.2) such that $X$ is adapted to the filtration of $W$. A consequence of this is that certain natural approximations to the solution of (1.2) do not converge in probability, although they do converge weakly.

Besides increasing the versatility of (1.1), one can easily imagine a practical application of sticky points. Suppose a corporation has a takeover offer at \$10. The stock price is then likely to spend a great deal of time precisely at \$10 but is not constrained to stay at \$10. Thus \$10 would be a sticky point for the solution of the stochastic differential equation that describes the stock price.

Regular continuous strong Markov processes on the line which are on natural scale and have speed measure given by (1.3) are known as sticky Brownian motions.
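To see quantitatively how the atom at 0 lengthens exit times, one can compute the expected exit time of a small interval against $m$ (our computation; we use the Green function normalization $g_{[a,b]}(x,y) = 2(x\wedge y - a)(b - x\vee y)/(b-a)$, under which integrating against Lebesgue measure reproduces the Brownian exit time $(x-a)(b-x)$):

```latex
% Expected exit time of sticky Brownian motion from [-\delta,\delta],
% started at the sticky point 0, against m(dy) = dy + \gamma\,\delta_0(dy):
\[
E^0 \tau_{[-\delta,\delta]}
  = \int_{-\delta}^{\delta} g_{[-\delta,\delta]}(0,y)\,dy
    + \gamma\, g_{[-\delta,\delta]}(0,0)
  = \delta^2 + \gamma\,\frac{2\,\delta\cdot\delta}{2\delta}
  = \delta^2 + \gamma\delta .
\]
% For small \delta the term \gamma\delta dominates \delta^2: escaping a small
% neighborhood of 0 takes much longer than it does for Brownian motion,
% while setting \gamma = 0 recovers the Brownian value \delta^2.
```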
Sticky Brownian motions were first studied by Feller in the 1950s and Itô and McKean in the 1960s.

A posthumously published paper by Chitashvili ([9]) in 1997, based on a technical report produced in 1988, considered processes on the non-negative real line that satisfied the stochastic differential equation
\[
dX_t = 1_{(X_t \ne 0)}\,dW_t + \theta\,1_{(X_t = 0)}\,dt, \qquad X_t \ge 0, \quad X_0 = x_0, \qquad (1.4)
\]
with $\theta \in (0,\infty)$. Chitashvili proved weak uniqueness for the pair $(X, W)$ and showed that no strong solution exists.

Warren (see [23] and also [24]) further investigated solutions to (1.4). The process $X$ is not adapted to the filtration generated by $W$ and has some "extra randomness," which Warren characterized.

While this paper was under review, we learned of a preprint by Engelbert and Peskir [11] on the subject of sticky Brownian motions. They considered the system of equations
\[
dX_t = 1_{(X_t \ne 0)}\,dW_t, \qquad (1.5)
\]
\[
1_{(X_t = 0)}\,dt = \tfrac{1}{\mu}\,d\ell_t(X), \qquad (1.6)
\]
where $\mu \in (0,\infty)$ and $\ell_t$ is the local time in the semimartingale sense at 0 of $X$. (Local times in the Markov process sense can be different in general.) Engelbert and Peskir proved weak uniqueness of the joint law of $(X, W)$ and proved that no strong solution exists. They also considered a one-sided version of this equation, where $X \ge 0$, and showed that it is equivalent to (1.4). Their results thus provide a new proof of those of Chitashvili.

It is interesting to compare the system (1.5)-(1.6) investigated by [11] with the SDE considered in this paper. Both include the equation (1.5). In this paper, however, in place of (1.6) we use a side condition whose origins come from Markov process theory, namely:
\[
X \text{ is a continuous martingale with speed measure } m(dx) = dx + \gamma\,\delta_0(dx), \qquad (1.7)
\]
where $\delta_0$ is point mass at 0 and "continuous martingale with speed measure $m$" is defined in (3.1). One can show that a solution to the system studied by [11] is a solution to the formulation considered in this paper and vice versa, and we sketch the argument in Remark 5.3. However, we did not see a way of proving this without first proving the uniqueness results of this paper and using the uniqueness results of [11].

Other papers showing that no strong solution exists for closely related stochastic differential equations include [1], [2], and [15].

After a short section of preliminaries, Section 2, we define speed measures for local martingales in Section 3 and consider the existence of such local martingales. Section 4 proves weak uniqueness, while in Section 5 we prove that continuous martingales with speed measure $m$ given by (1.3) satisfy (1.2). Sections 6, 7, and 8 prove that pathwise uniqueness and strong existence fail. The first of these sections considers some approximations to a solution of (1.2), the second proves some needed estimates, and the proof is completed in the third.

Acknowledgment.
We would like to thank Prof. H. Farnsworth for suggesting a mathematical finance interpretation of a sticky point.
2 Preliminaries

For information on martingales and stochastic calculus, see [6], [14], or [22]. For background on continuous Markov processes on the line, see the above references and also [5], [13], or [16].

We start with an easy lemma concerning continuous local martingales.
Lemma 2.1.
Suppose $X$ is a continuous local martingale which exits a finite non-empty interval $I$ a.s. If the endpoints of the interval are $a$ and $b$, $a < b$, then
\[
P(X_{\tau_I} = b \mid \mathcal F_0) = \frac{X_0 - a}{b - a}, \quad \text{a.s.}
\]

Given a Brownian motion $U$ with jointly continuous local times $\{L^x_t\}$ and a measure $m$ that is finite and positive on every finite open interval, set
\[
\alpha_t = \int_{\mathbb R} L^x_t\,m(dx) \qquad (2.3)
\]
and let $\beta_t$ be the right continuous inverse of $\alpha_t$. The time-changed process
\[
X^M_t = x_0 + U_{\beta_t} \qquad (2.4)
\]
is the continuous regular strong Markov process on natural scale with speed measure $m$; we write $P^x_M$ for its law when started at $x$.

Given a probability measure $P$ and a sub-$\sigma$-field $\mathcal G$ of $\mathcal F$, a regular conditional probability for $P(\cdot \mid \mathcal G)$ is a map $Q : \Omega \times \mathcal F \to [0,1]$ such that
(1) for each $A \in \mathcal F$, $Q(\cdot, A)$ is measurable with respect to $\mathcal G$;
(2) for each $\omega \in \Omega$, $Q(\omega, \cdot)$ is a probability measure on $\mathcal F$;
(3) for each $A \in \mathcal F$, $P(A \mid \mathcal G)(\omega) = Q(\omega, A)$ for almost every $\omega$.
Regular conditional probabilities do not always exist, but will if $\Omega$ has sufficient structure; see [6, Appendix C].

The filtration $\{\mathcal F_t\}$ generated by a process $Z$ is the smallest filtration to which $Z$ is adapted and which satisfies the usual conditions. We use the letter $c$ with or without subscripts to denote finite positive constants whose value may change from place to place.

3 Speed measures for continuous local martingales

Let $a : \mathbb R \to \mathbb R$ and $b : \mathbb R \to \mathbb R$ be Borel measurable functions with $a(x) \le x \le b(x)$ for all $x$. If $S$ is a finite stopping time, let
\[
\tau_{S,[a,b]} = \inf\{t > S : X_t \notin [a(X_S), b(X_S)]\}.
\]
Write $G_{[a,b]}(y)$ for the expected time for $X^M$ started at $y$ to exit $[a,b]$; that is, $G_{[a,b]}(y) = \int_{[a,b]} g_{[a,b]}(y,z)\,m(dz)$, where $g_{[a,b]}$ is the Green function of Brownian motion killed on exiting $[a,b]$. We say a continuous local martingale $X$ started at $x_0$ has speed measure $m$ if $X_0 = x_0$ and
\[
E[\tau_{S,[a,b]} - S \mid \mathcal F_S] = G_{[a(X_S), b(X_S)]}(X_S), \quad \text{a.s.} \qquad (3.1)
\]
whenever $S$ is a finite stopping time and $a$ and $b$ are as above.

Remark 3.1.
We remark that if $X$ were a strong Markov process, then the left hand side of (3.1) would be equal to $E^{X_S} \tau_{[a,b]}$, where $\tau_{[a,b]} = \inf\{t \ge 0 : X_t \notin [a,b]\}$. Thus the above definition of speed measure for a martingale is a generalization of the one for one-dimensional diffusions on natural scale.

Theorem 3.2. Let $m$ be a measure that is finite and positive on every finite open interval. There exists a continuous local martingale $X$ with $m$ as its speed measure.

Proof. Set $X$ equal to $X^M$ as defined in (2.4). We only need show that (3.1) holds. Since $X$ is a Markov process and has associated with it probabilities $P^x$ and shift operators $\theta_t$, then $\tau_{S,[a,b]} - S = \sigma_{[a(X_0), b(X_0)]} \circ \theta_S$, where $\sigma_{[a(X_0), b(X_0)]} = \inf\{t \ge 0 : X_t \notin [a(X_0), b(X_0)]\}$. By the strong Markov property,
\[
E[\tau_{S,[a,b]} - S \mid \mathcal F_S] = E^{X_S} \sigma_{[a(X_0), b(X_0)]} \quad \text{a.s.} \qquad (3.2)
\]
For each $y$, $\sigma_{[a(X_0), b(X_0)]} = \tau_{[a(y), b(y)]}$ under $P^y$, and therefore
\[
E^y \sigma_{[a(X_0), b(X_0)]} = G_{[a(y), b(y)]}(y).
\]
Replacing $y$ by $X_S(\omega)$ and substituting in (3.2) yields (3.1).

Theorem 3.3.
Let $X$ be any continuous local martingale that has speed measure $m$ and let $f$ be a non-negative Borel measurable function. Suppose $X_0 = x_0$ a.s. Let $I = [a,b]$ be a finite interval with $a < b$ such that $m$ does not give positive mass to either endpoint. Then
\[
E\int_0^{\tau_I} f(X_s)\,ds = \int_I g_I(x_0, y)\,f(y)\,m(dy). \qquad (3.3)
\]

Proof. It suffices to suppose that $f$ is continuous and equal to 0 at the boundaries of $I$ and then to approximate an arbitrary non-negative Borel measurable function by continuous functions that are 0 on the boundaries of $I$. The main step is to prove
\[
E\int_0^{\tau_I(X)} f(X_s)\,ds = E\int_0^{\tau_I(X^M)} f(X^M_s)\,ds. \qquad (3.4)
\]
Let $\varepsilon > 0$. Choose $\delta$ such that $|f(x) - f(y)| < \varepsilon$ if $|x - y| < \delta$ with $x, y \in I$. Set $S_0 = 0$ and $S_{i+1} = \inf\{t > S_i : |X_t - X_{S_i}| \ge \delta\}$. Then
\[
E\int_0^{\tau_I} f(X_s)\,ds = E\sum_{i=0}^\infty \int_{S_i \wedge \tau_I}^{S_{i+1} \wedge \tau_I} f(X_s)\,ds
\]
differs by at most $\varepsilon\,E\tau_I$ from
\[
E\sum_{i=0}^\infty f(X_{S_i \wedge \tau_I})\,(S_{i+1} \wedge \tau_I - S_i \wedge \tau_I)
= E\Big[\sum_{i=0}^\infty f(X_{S_i \wedge \tau_I})\,E[S_{i+1} \wedge \tau_I - S_i \wedge \tau_I \mid \mathcal F_{S_i \wedge \tau_I}]\Big]. \qquad (3.5)
\]
Let $a(x) = a \vee (x - \delta)$ and $b(x) = b \wedge (x + \delta)$. Since $X$ is a continuous local martingale with speed measure $m$, the last line in (3.5) is equal to
\[
E\sum_{i=0}^\infty f(X_{S_i \wedge \tau_I})\,G_{[a(X_{S_i \wedge \tau_I}),\,b(X_{S_i \wedge \tau_I})]}(X_{S_i \wedge \tau_I}). \qquad (3.6)
\]
Because $E\tau_{[-N,N]} < \infty$ for all $N$, $X$ is a time change of a Brownian motion. It follows that the distribution of $\{X_{S_i \wedge \tau_I(X)}, i \ge 0\}$ is that of a simple random walk on the lattice $\{x_0 + k\delta\}$ stopped the first time it exits $I$, and thus is the same as the distribution of $\{X^M_{S_i \wedge \tau_I(X^M)}, i \ge 0\}$. Therefore the expression in (3.6) is equal to the corresponding expression with $X$ replaced by $X^M$. This in turn differs by at most $\varepsilon\,E\tau_I(X^M)$ from
\[
E\int_0^{\tau_I(X^M)} f(X^M_s)\,ds.
\]
Since $\varepsilon$ is arbitrary, we have (3.4). Finally, the right hand side of (3.4) is equal to the right hand side of (3.3) by [5, Corollary IV.2.4].

4 Weak uniqueness

In this section we show that if $X$ is a continuous local martingale under $P$ with speed measure $m$, then $X$ has the same law as $X^M$. Note that we do not suppose a priori that $X$ is a strong Markov process. We remark that the results of [12] do not apply, since in that paper a generalization of the system (1.5)-(1.6) is studied rather than the formulation given by (1.5) together with (1.7).

Theorem 4.1. Suppose $P$ is a probability measure and $X$ is a continuous local martingale with respect to $P$. Suppose that $X$ has speed measure $m$ and $X_0 = x_0$ a.s. Then the law of $X$ under $P$ is equal to the law of $X^M$ under $P^{x_0}_M$.

Proof. Let
$R > 0$ be such that $m(\{-R\}) = m(\{R\}) = 0$, and set $I = [-R, R]$. Let $\overline X_t = X_{t \wedge \tau_I(X)}$ and $\overline X^M_t = X^M_{t \wedge \tau_I(X^M)}$ be the processes $X$ and $X^M$ stopped on exiting $I$. For $f$ bounded and measurable let
\[
H_\lambda f = E\int_0^{\tau_I(X)} e^{-\lambda t} f(\overline X_t)\,dt
\quad\text{and}\quad
H^M_\lambda f(x) = E^x\int_0^{\tau_I(X^M)} e^{-\lambda t} f(\overline X^M_t)\,dt
\]
for $\lambda \ge 0$. Since $\overline X$ and $\overline X^M$ are stopped at times $\tau_I(X)$ and $\tau_I(X^M)$, resp., we can replace $\tau_I(X)$ and $\tau_I(X^M)$ by $\infty$ in both of the above integrals without affecting $H_\lambda$ or $H^M_\lambda$ as long as $f$ is 0 on the boundary of $I$. Suppose $f(-R) = f(R) = 0$. Then $H^M_\lambda f(-R)$ and $H^M_\lambda f(R)$ are also 0, since we are working with the stopped process. We want to show
\[
H_\lambda f = H^M_\lambda f(x_0), \qquad \lambda \ge 0. \qquad (4.1)
\]
By Theorem 3.3 we know that (4.1) holds for $\lambda = 0$. Let $K = E\tau_I(X)$. We have $E^{x_0}\tau_I(X^M) = K$ as well, since both $X$ and $X^M$ have speed measure $m$.

Let $\lambda = 0$ and $0 < \mu \le 1/(2K)$. Let $t > 0$ and set $Y_s = \overline X_{s+t}$. Let $Q_t$ be a regular conditional probability for $P(Y \in \cdot \mid \mathcal F_t)$. It is easy to see that for almost every $\omega$, under $Q_t(\omega, \cdot)$ the process $Y$ is a continuous local martingale started at $\overline X_t$ with speed measure $m$; cf. [5, Section I.5] or [7]. Therefore by Theorem 3.3,
\[
E^{Q_t}\int_0^\infty f(Y_s)\,ds = H^M_0 f(\overline X_t).
\]
This can be rewritten as
\[
E\Big[\int_0^\infty f(\overline X_{s+t})\,ds \,\Big|\, \mathcal F_t\Big] = H^M_0 f(\overline X_t), \quad\text{a.s.,} \qquad (4.2)
\]
as long as $f$ is 0 on the endpoints of $I$.

Therefore, recalling that $\lambda = 0$ (we keep $\lambda$ in the notation because the same computation is used again below with other values of $\lambda$),
\begin{align*}
H_\mu(H^M_\lambda f) &= E\int_0^\infty e^{-\mu t}\,H^M_\lambda f(\overline X_t)\,dt\\
&= E\int_0^\infty e^{-\mu t}\,E\Big[\int_0^\infty e^{-\lambda s} f(\overline X_{s+t})\,ds \,\Big|\, \mathcal F_t\Big]\,dt\\
&= E\int_0^\infty e^{-\mu t} e^{\lambda t}\int_t^\infty e^{-\lambda s} f(\overline X_s)\,ds\,dt\\
&= E\int_0^\infty \Big(\int_0^s e^{-(\mu-\lambda)t}\,dt\Big)\,e^{-\lambda s} f(\overline X_s)\,ds\\
&= E\int_0^\infty \frac{1 - e^{-(\mu-\lambda)s}}{\mu-\lambda}\,e^{-\lambda s} f(\overline X_s)\,ds\\
&= \frac{1}{\mu-\lambda}\,E\int_0^\infty e^{-\lambda s} f(\overline X_s)\,ds
 - \frac{1}{\mu-\lambda}\,E\int_0^\infty e^{-\mu s} f(\overline X_s)\,ds\\
&= \frac{1}{\mu-\lambda}\,H^M_\lambda f(x_0) - \frac{1}{\mu-\lambda}\,H_\mu f. \qquad (4.3)
\end{align*}
We used (4.2) in the second equality and (4.1) at the level $\lambda$ in the last one. Rearranging,
\[
H_\mu f = H^M_\lambda f(x_0) + (\lambda - \mu)\,H_\mu(H^M_\lambda f). \qquad (4.4)
\]
Since $\overline X$ and $\overline X^M$ are stopped upon exiting $I$, $H^M_\lambda f = 0$ at the endpoints of $I$. We now take (4.4) with $f$ replaced by $H^M_\lambda f$, use this to evaluate the last term in (4.4), and obtain
\[
H_\mu f = H^M_\lambda f(x_0) + (\lambda-\mu)\,H^M_\lambda(H^M_\lambda f)(x_0) + (\lambda-\mu)^2\,H_\mu(H^M_\lambda(H^M_\lambda f)).
\]
We continue.
Since $|H_\mu g| \le \|g\|\,E\tau_I(X) = \|g\| K$ and $\|H^M_\lambda g\| \le \|g\|\,E\tau_I(X^M) = \|g\| K$ for each bounded $g$, where $\|g\|$ is the supremum norm of $g$, we can iterate and get convergence as long as $\mu \le 1/(2K)$, obtaining
\[
H_\mu f = H^M_\lambda f(x_0) + \sum_{i=1}^\infty \big((\lambda-\mu) H^M_\lambda\big)^i H^M_\lambda f(x_0).
\]
The same argument applies with $X$ replaced by $X^M$, so that
\[
H^M_\mu f(x_0) = H^M_\lambda f(x_0) + \sum_{i=1}^\infty \big((\lambda-\mu) H^M_\lambda\big)^i H^M_\lambda f(x_0).
\]
We conclude $H_\mu f = H^M_\mu f(x_0)$ as long as $\mu \le 1/(2K)$ and $f$ is 0 on the endpoints of $I$.

This holds for every starting point. If $Y_s = \overline X_{s+t}$ and $Q_t$ is a regular conditional probability for the law of $Y$ under $P$ given $\mathcal F_t$, then we asserted above that $Y$ is a continuous local martingale started at $\overline X_t$ with speed measure $m$ under $Q_t(\omega,\cdot)$ for almost every $\omega$. We replace $x_0$ by $\overline X_t(\omega)$ in the preceding paragraph and derive
\[
E\Big[\int_0^\infty e^{-\mu s} f(\overline X_{s+t})\,ds \,\Big|\, \mathcal F_t\Big] = H^M_\mu f(\overline X_t), \quad\text{a.s.,}
\]
if $\mu \le 1/(2K)$ and $f$ is 0 on the endpoints of $I$.

We now take $\lambda = 1/(2K)$ and $\mu \in (1/(2K), 1/K]$. The same argument as above shows that $H_\mu f = H^M_\mu f(x_0)$ as long as $f$ is 0 on the endpoints of $I$. This is true for every starting point. We continue, letting $\lambda = n/(2K)$ and using induction, and obtain
\[
H_\mu f = H^M_\mu f(x_0)
\]
for every $\mu \ge 0$.

Now suppose $f$ is continuous with compact support and $R$ is large enough so that $(-R, R)$ contains the support of $f$. We have that
\[
E\int_0^{\tau_{[-R,R]}(X)} e^{-\mu t} f(X_t)\,dt = E^{x_0}\int_0^{\tau_{[-R,R]}(X^M)} e^{-\mu t} f(X^M_t)\,dt
\]
for all $\mu > 0$. This can be rewritten as
\[
E\int_0^\infty e^{-\mu t} f\big(X_{t\wedge\tau_{[-R,R]}(X)}\big)\,dt = E^{x_0}\int_0^\infty e^{-\mu t} f\big(X^M_{t\wedge\tau_{[-R,R]}(X^M)}\big)\,dt. \qquad (4.5)
\]
If we hold $\mu$ fixed and let $R \to \infty$ in (4.5), we obtain
\[
E\int_0^\infty e^{-\mu t} f(X_t)\,dt = E^{x_0}\int_0^\infty e^{-\mu t} f(X^M_t)\,dt
\]
for all $\mu > 0$. By the uniqueness of the Laplace transform and the continuity of $f$, $X$, and $X^M$,
\[
E f(X_t) = E^{x_0} f(X^M_t)
\]
for all $t$. By a limit argument, this holds whenever $f$ is a bounded Borel measurable function.

The starting point $x_0$ was arbitrary. Using regular conditional probabilities as above, for almost every $\omega$ the conditional law of $(X_{t+s})_{s \ge 0}$ given $\mathcal F_t$ is that of a process with speed measure $m$ started at $X_t(\omega)$, and hence
\[
E[f(X_{t+s}) \mid \mathcal F_t] = E^{X_t} f(X^M_s) = P_s f(X_t),
\]
where $P_s$ is the transition probability kernel for $X^M$.

To prove that the finite dimensional distributions of $X$ and $X^M$ agree, we use induction. We have
\begin{align*}
E\prod_{j=1}^{n+1} f_j(X_{t_j})
&= E\Big[\prod_{j=1}^{n} f_j(X_{t_j})\,E[f_{n+1}(X_{t_{n+1}}) \mid \mathcal F_{t_n}]\Big]\\
&= E\Big[\prod_{j=1}^{n} f_j(X_{t_j})\,P_{t_{n+1}-t_n} f_{n+1}(X_{t_n})\Big].
\end{align*}
We use the induction hypothesis to see that this is equal to
\[
E^{x_0}\Big[\prod_{j=1}^{n} f_j(X^M_{t_j})\,P_{t_{n+1}-t_n} f_{n+1}(X^M_{t_n})\Big].
\]
We then use the Markov property to see that this in turn is equal to
\[
E^{x_0}\prod_{j=1}^{n+1} f_j(X^M_{t_j}).
\]
Since $X$ and $X^M$ have continuous paths and the same finite dimensional distributions, they have the same law.

5 The stochastic differential equation
We now discuss the particular stochastic differential equation we want our martingales to solve. We specialize to the following speed measure. Let $\gamma \in (0,\infty)$ and let
\[
m(dx) = dx + \gamma\,\delta_0(dx), \qquad (5.1)
\]
where $\delta_0$ is point mass at 0. We consider the stochastic differential equation
\[
X_t = x_0 + \int_0^t 1_{(X_s \ne 0)}\,dW_s. \qquad (5.2)
\]
A triple $(X, W, P)$ is a weak solution to (5.2) with $X$ starting at $x_0$ if $P$ is a probability measure, there exists a filtration $\{\mathcal F_t\}$ satisfying the usual conditions, $W$ is a Brownian motion under $P$ with respect to $\{\mathcal F_t\}$, and $X$ is a continuous martingale adapted to $\{\mathcal F_t\}$ with $X_0 = x_0$ and satisfying (5.2).

We now show that any martingale with $X_0 = x_0$ a.s. that has speed measure $m$ is the first element of a triple that is a weak solution to (5.2). Although $X$ has the same law as $X^M$ started at $x_0$, here we only have one probability measure and we cannot assert that $X$ is a strong Markov process. We point out that [12, Theorem 5.18] does not apply here, since they study a generalization of the system (1.5)-(1.6), and we do not know at this stage that this formulation is equivalent to the one used here.

Theorem 5.1.
Let $P$ be a probability measure on a space that supports a Brownian motion and let $X$ be a continuous martingale which has speed measure $m$ with $X_0 = x_0$ a.s. Then there exists a Brownian motion $W$ such that $(X, W, P)$ is a weak solution to (5.2) with $X$ starting at $x_0$. Moreover
\[
X_t = x_0 + \int_0^t 1_{(X_s \ne 0)}\,dX_s. \qquad (5.3)
\]

Proof.
Let
\[
W'_t = \int_0^t 1_{(X_s \ne 0)}\,dX_s.
\]
Hence $d\langle W'\rangle_t = 1_{(X_t \ne 0)}\,d\langle X\rangle_t$. Let $0 < \eta < \delta$. Let $S_0 = \inf\{t : |X_t| \ge \delta\}$, $T_i = \inf\{t > S_i : |X_t| \le \eta\}$, and $S_{i+1} = \inf\{t > T_i : |X_t| \ge \delta\}$ for $i = 0, 1, 2, \ldots$

The speed measure of $X$ is equal to $m$, which in turn is equal to Lebesgue measure on $\mathbb R \setminus \{0\}$, hence $X$ has the same law as $X^M$ by Theorem 4.1. Since $X^M$ behaves like a Brownian motion when it is away from zero, we conclude that $1_{[S_i, T_i]}\,d\langle X\rangle_t = 1_{[S_i, T_i]}\,dt$. Thus for each $N$,
\[
\int_0^t 1_{\cup_{i=0}^N [S_i, T_i]}(s)\,d\langle X\rangle_s = \int_0^t 1_{\cup_{i=0}^N [S_i, T_i]}(s)\,ds.
\]
Letting $N \to \infty$, then $\eta \to 0$, and finally $\delta \to 0$, we obtain
\[
\int_0^t 1_{(X_s \ne 0)}\,d\langle X\rangle_s = \int_0^t 1_{(X_s \ne 0)}\,ds.
\]
Let $V$ be a Brownian motion independent of $X$ and let
\[
W''_t = \int_0^t 1_{(X_s = 0)}\,dV_s.
\]
Let $W_t = W'_t + W''_t$. Clearly $W'$ and $W''$ are orthogonal martingales, so
\[
d\langle W\rangle_t = d\langle W'\rangle_t + d\langle W''\rangle_t = 1_{(X_t \ne 0)}\,dt + 1_{(X_t = 0)}\,dt = dt.
\]
By Lévy's theorem (see [6, Theorem 12.1]), $W$ is a Brownian motion. If
\[
M_t = \int_0^t 1_{(X_s = 0)}\,dX_s,
\]
then by the occupation times formula ([22, Corollary VI.1.6]),
\[
\langle M\rangle_t = \int_0^t 1_{(X_s = 0)}\,d\langle X\rangle_s = \int 1_{\{0\}}(x)\,\ell^x_t(X)\,dx = 0
\]
for all $t$, where $\{\ell^x_t(X)\}$ are the local times of $X$ in the semimartingale sense. This implies that $M$ is identically zero, and hence $X_t = x_0 + W'_t$. Using the definition of $W$, we deduce
\[
1_{(X_t \ne 0)}\,dW_t = 1_{(X_t \ne 0)}\,dX_t = dW'_t = dX_t, \qquad (5.4)
\]
as required.

We now show weak uniqueness; that is, if $(X, W, P)$ and $(\widetilde X, \widetilde W, \widetilde P)$ are two weak solutions to (5.2) with $X$ and $\widetilde X$ starting at $x_0$, and in addition $X$ and $\widetilde X$ have speed measure $m$, then the joint law of $(X, W)$ under $P$ equals the joint law of $(\widetilde X, \widetilde W)$ under $\widetilde P$. This holds even though $W$ will not in general be adapted to the filtration of $X$. We know that the law of $X$ under $P$ equals the law of $\widetilde X$ under $\widetilde P$, and also that the law of $W$ under $P$ equals the law of $\widetilde W$ under $\widetilde P$, but the issue here is the joint law. Cf. [8]. See also [11].

Theorem 5.2.
Suppose $(X, W, P)$ and $(\widetilde X, \widetilde W, \widetilde P)$ are two weak solutions to (5.2) with $X_0 = \widetilde X_0 = x_0$ and that $X$ and $\widetilde X$ are both continuous martingales with speed measure $m$. Then the joint law of $(X, W)$ under $P$ equals the joint law of $(\widetilde X, \widetilde W)$ under $\widetilde P$.

Proof. Recall the construction of $X^M$ from Section 2. With $U$ a Brownian motion with jointly continuous local times $\{L^x_t\}$ and $m$ given by (5.1), we define $\alpha_t$ by (2.3), let $\beta_t$ be the right continuous inverse of $\alpha_t$, and let $X^M_t = x_0 + U_{\beta_t}$. Since $m$ is greater than or equal to Lebesgue measure but is finite on every finite interval, we see that $\alpha_t$ is strictly increasing, continuous, and $\lim_{t\to\infty}\alpha_t = \infty$. It follows that $\beta_t$ is continuous and tends to infinity almost surely as $t \to \infty$.

Given any stochastic process $\{N_t, t \ge 0\}$, let $\mathcal F^N_\infty$ be the $\sigma$-field generated by the collection of random variables $\{N_t, t \ge 0\}$ together with the null sets. We have $\beta_t = \langle X^M\rangle_t$ and $U_t = X^M_{\alpha_t} - x_0$. Since $\beta_t$ is measurable with respect to $\mathcal F^{X^M}_\infty$ for each $t$, then $\alpha_t$ is also, and hence so is $U_t$. In fact, we can give a recipe to construct a Borel measurable map $F : C[0,\infty) \to C[0,\infty)$ such that $U = F(X^M)$. Note also that $X^M_t$ is measurable with respect to $\mathcal F^U_\infty$ for each $t$ and there exists a Borel measurable map $G : C[0,\infty) \to C[0,\infty)$ such that $X^M = G(U)$. In addition observe that $\langle X^M\rangle_\infty = \infty$ a.s.

Since $X$ and $X^M$ have the same law, $\langle X\rangle_\infty = \infty$ a.s. If $Z$ is a Brownian motion with $X_t = x_0 + Z(\zeta_t)$ for a continuous increasing process $\zeta$, then $\zeta_t = \langle X\rangle_t$ is measurable with respect to $\mathcal F^X_\infty$, its inverse $\rho_t$ is also, and therefore $Z_t = X_{\rho_t} - x_0$ is as well. Moreover the recipe for constructing $Z$ from $X$ is exactly the same as the one for constructing $U$ from $X^M$; that is, $Z = F(X)$. Since $X$ and $X^M$ have the same law, the joint law of $(X, Z)$ is equal to the joint law of $(X^M, U)$. We can therefore conclude that $X$ is measurable with respect to $\mathcal F^Z_\infty$ and $X = G(Z)$.

Let
\[
Y_t = \int_0^t 1_{(X_s = 0)}\,dW_s.
\]
Then $Y$ is a martingale with
\[
\langle Y\rangle_t = \int_0^t 1_{(X_s = 0)}\,ds = t - \langle X\rangle_t.
\]
Observe that $\langle X, Y\rangle_t = \int_0^t 1_{(X_s \ne 0)} 1_{(X_s = 0)}\,ds = 0$. By a theorem of Knight (see [17] or [22]), there exists a two-dimensional process $V = (V^1, V^2)$ such that $V$ is a two-dimensional Brownian motion under $P$ and
\[
(X_t, Y_t) = \big(x_0 + V^1(\langle X\rangle_t),\, V^2(\langle Y\rangle_t)\big), \quad\text{a.s.}
\]
(It turns out that $\langle Y\rangle_\infty = \infty$, but that is not needed in Knight's theorem.) By the third paragraph of this proof, $X_t = x_0 + V^1(\langle X\rangle_t)$ implies that $X_t$ is measurable with respect to $\mathcal F^V_\infty$, and in fact $X = G(V^1)$. Since $\langle Y\rangle_t = t - \langle X\rangle_t$, then $(X_t, Y_t)$ is measurable with respect to $\mathcal F^V_\infty$ for each $t$ and there exists a Borel measurable map $H : C([0,\infty), \mathbb R^2) \to C([0,\infty), \mathbb R^2)$, where $C([0,\infty), \mathbb R^2)$ is the space of continuous functions from $[0,\infty)$ to $\mathbb R^2$, such that $(X, Y) = H(V)$. Thus $(X, Y)$ is the image under $H$ of a two-dimensional Brownian motion. If $(\widetilde X, \widetilde W, \widetilde P)$ is another weak solution, then we can define $\widetilde Y$ analogously and find a two-dimensional Brownian motion $\widetilde V$ such that $(\widetilde X, \widetilde Y) = H(\widetilde V)$. The key point is that the same $H$ can be used. We conclude that the law of $(X, Y)$ is uniquely determined. Since
\[
W_t = \int_0^t 1_{(X_s \ne 0)}\,dW_s + \int_0^t 1_{(X_s = 0)}\,dW_s = (X_t - x_0) + Y_t,
\]
we have
\[
(X, W) = (X, X + Y - x_0),
\]
and this proves that the joint law of $(X, W)$ is uniquely determined.
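The "recipe" used twice in this proof, recovering the driving Brownian motion by inverting the quadratic variation, can be sketched numerically. The following is our illustration, with a toy martingale $X_t = W_{t/2}$ and our own discretization; it is not part of the paper's argument.

```python
import math
import random

def simulate_time_changed_bm(n=20000, dt=5e-5, seed=3):
    """A toy martingale X_t = W_{t/2}: Brownian increments run at half speed,
    so the quadratic variation is <X>_t = t/2."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n):
        x += rng.gauss(0.0, math.sqrt(dt / 2.0))  # increment variance dt/2
        path.append(x)
    return path

def recover_driving_bm(path):
    """Dambis-Dubins-Schwarz recipe: Z_s = X_{rho_s}, where rho inverts the
    quadratic variation t -> <X>_t.  On a discrete path the quadratic
    variation is the running sum of squared increments, and the recovered
    process Z, sampled at the times <X>_{t_k}, is the path itself reindexed
    by its quadratic variation."""
    qv = [0.0]
    for k in range(1, len(path)):
        qv.append(qv[-1] + (path[k] - path[k - 1]) ** 2)
    # pairs (s, Z_s) with s = <X>_{t_k}: the driving motion run at unit speed
    return list(zip(qv, path))

if __name__ == "__main__":
    X = simulate_time_changed_bm()
    Z = recover_driving_bm(X)
    print("total quadratic variation approx", Z[-1][0])  # approx 0.5 at t = 1
```

The point of the proof is that this reindexing is a fixed measurable map applied to the path, so the same map works for any solution with the same law.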
Remark 5.3.
In Section 2 we constructed the continuous strong Markov process $(X^M, P^x_M)$, and we now know that $X$ started at $x_0$ is equal in law to $X^M$ under $P^{x_0}_M$. We pointed out in Remark 3.1 that in the strong Markov case the notion of speed measure for a martingale reduces to that of speed measure for a one-dimensional diffusion. In [11] it is shown that the solution to the system (1.5)-(1.6) is unique in law, and thus the solution started at $x$ is equal in law to that of a diffusion on $\mathbb R$ started at $x$; let $\widetilde m$ be the speed measure for this strong Markov process. Thus to show the equivalence of the system (1.5)-(1.6) to the one given by (1.5) and (1.7), it suffices to show that $\widetilde m = m$ if and only if (1.6) holds, where $m$ is given by (5.1) and $\gamma = 1/\mu$. Clearly both $\widetilde m$ and $m$ are equal to Lebesgue measure on $\mathbb R \setminus \{0\}$, so it suffices to compare the atoms of $\widetilde m$ and $m$ at 0.

Suppose (1.6) holds and $\gamma = 1/\mu$. Let $A_t = \int_0^t 1_{\{0\}}(X_s)\,ds$. Thus (1.6) asserts that $A_t = \mu^{-1}\ell_t$. Let $I = [a,b] = [-1,1]$, let $x = 0$, and let $\tau_I$ be the first time that $X$ leaves the interval $I$. Setting $t = \tau_I$ and taking expectations starting from 0, we have
\[
E A_{\tau_I} = \frac{1}{\mu}\,E\ell_{\tau_I}.
\]
Since $\ell_t$ is the increasing part of the submartingale $|X_t|$ and $X_{\tau_I}$ is equal to either 1 or $-1$, the right hand side is equal to
\[
\frac{1}{\mu}\,E|X_{\tau_I}| = \frac{1}{\mu}.
\]
On the other hand, by [5, (IV.2.11)],
\[
E A_{\tau_I} = \int_{-1}^{1} g_I(0,y)\,1_{\{0\}}(y)\,\widetilde m(dy) = \widetilde m(\{0\}).
\]
Thus $\widetilde m = m$ if $\gamma = 1/\mu$.

Now suppose we have a solution to the pair (1.5) and (1.7) and $\gamma = 1/\mu$; we will show that (1.6) holds. Let $R > 0$, let $I = [-R, R]$, and let $\tau_I$ be the first exit time from $I$. Set $B_t = \mu^{-1}\ell_t$. For any $x \in I$, we have by [5, (IV.2.11)] that
\[
E^x A_{\tau_I} = \int g_I(x,y)\,1_{\{0\}}(y)\,m(dy) = \gamma\,g_I(x, 0). \qquad (5.5)
\]
Taking expectations,
\[
E^x B_{\tau_I} = \frac{1}{\mu}\,\big(E^x|X_{\tau_I}| - |x|\big). \qquad (5.6)
\]
Since $X$ is a time change of a Brownian motion that exits $I$ a.s., the distribution of $X_{\tau_I}$ started at $x$ is the same as that of a Brownian motion started at $x$ upon exiting $I$. A simple computation shows that the right hand side of (5.6) agrees with the right hand side of (5.5): Brownian motion started at $x$ exits $[-R,R]$ at $R$ with probability $(x+R)/(2R)$, so $E^x|X_{\tau_I}| = R$, and the right hand side of (5.6) is $(R - |x|)/\mu$, while $\gamma\,g_I(x,0) = \gamma(R - |x|)$. By the strong Markov property,
\[
E[A_{\tau_I} - A_{\tau_I \wedge t} \mid \mathcal F_t] = E^{X_t} A_{\tau_I} = E^{X_t} B_{\tau_I} = E[B_{\tau_I} - B_{\tau_I \wedge t} \mid \mathcal F_t]
\]
almost surely on the set $(t \le \tau_I)$. Observe that if $U_t = E[A_{\tau_I} - A_{\tau_I \wedge t} \mid \mathcal F_t]$, then we can write
\[
U_t = E[A_{\tau_I} \mid \mathcal F_t] - A_{\tau_I \wedge t}
\quad\text{and}\quad
U_t = E[B_{\tau_I} \mid \mathcal F_t] - B_{\tau_I \wedge t}
\]
for $t \le \tau_I$. This expresses the supermartingale $U$ as a martingale minus an increasing process in two different ways. By the uniqueness of the Doob decomposition for supermartingales, we conclude that $A_{\tau_I \wedge t} = B_{\tau_I \wedge t}$ for $t \le \tau_I$. Since $R$ is arbitrary, this establishes (1.6). (The argument that the potential of an increasing process determines the process is well known.)

Remark 5.4.
In the remainder of the paper we prove that there does not exist a strong solution to the pair (1.5) and (1.7), nor does pathwise uniqueness hold. In [11] the authors prove that there is no strong solution to the pair (1.5) and (1.6) and that pathwise uniqueness does not hold. Since we now know there is an equivalence between the pair (1.5) and (1.7) and the pair (1.5) and (1.6), one could at this point use the argument of [11] in place of the argument of this paper. Alternatively, in the paper of [11] one could use our argument in place of theirs to establish the non-existence of a strong solution and the failure of pathwise uniqueness.
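Before turning to the approximations, here is a numerical sketch (ours, not from the paper) of the type of scheme studied in the next section: an Euler-Maruyama discretization of an SDE $dX^\varepsilon = \sigma_\varepsilon(X^\varepsilon)\,d\widetilde W$ whose coefficient is 1 away from $[-\varepsilon,\varepsilon]$ and a small constant inside, as in (6.1) below. The constant $\sqrt{2\varepsilon/\gamma}$, the step size, and all names are our choices.

```python
import math
import random

def sigma_eps(x, eps, gamma):
    """Regularized diffusion coefficient: 1 off [-eps, eps], small inside,
    so that the speed measure piles up mass of order gamma near 0."""
    return 1.0 if abs(x) > eps else math.sqrt(2.0 * eps / gamma)

def euler_path(eps, gamma, x0=0.0, h=1e-4, n_steps=10_000, seed=0):
    """Euler-Maruyama approximation of dX = sigma_eps(X) dW started at x0.
    Returns the discrete path and the occupation time of [-eps, eps]."""
    rng = random.Random(seed)
    x, occupation = x0, 0.0
    path = [x]
    for _ in range(n_steps):
        if abs(x) <= eps:
            occupation += h
        x += sigma_eps(x, eps, gamma) * rng.gauss(0.0, math.sqrt(h))
        path.append(x)
    return path, occupation
```

Because the diffusion coefficient is small on $[-\varepsilon,\varepsilon]$, the path lingers there; the occupation time of $[-\varepsilon,\varepsilon]$ is expected not to vanish as $\varepsilon \downarrow 0$, which is the sticky behavior in the limit.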
6 Approximations

Let $\widetilde W$ be a Brownian motion adapted to a filtration $\{\mathcal F_t, t \ge 0\}$ satisfying the usual conditions, let $0 < \varepsilon \le \gamma/2$, and let $X^\varepsilon$ be the solution to
\[
dX^\varepsilon_t = \sigma_\varepsilon(X^\varepsilon_t)\,d\widetilde W_t, \qquad X^\varepsilon_0 = x_0, \qquad (6.1)
\]
where
\[
\sigma_\varepsilon(x) = \begin{cases} 1, & |x| > \varepsilon,\\ \sqrt{2\varepsilon/\gamma}, & |x| \le \varepsilon.\end{cases}
\]
For each $x_0$ the solution to this stochastic differential equation is pathwise unique by [20] or [21]. We also know that if $P^x_\varepsilon$ is the law of $X^\varepsilon$ starting from $x$, then $(X^\varepsilon, P^x_\varepsilon)$ is a continuous regular strong Markov process on natural scale. The speed measure of $X^\varepsilon$ is
\[
m_\varepsilon(dy) = dy + \frac{\gamma}{2\varepsilon}\,1_{[-\varepsilon,\varepsilon]}(y)\,dy.
\]
Let $Y^\varepsilon$ be the solution to
\[
dY^\varepsilon_t = \sigma_\varepsilon(Y^\varepsilon_t)\,d\widetilde W_t, \qquad Y^\varepsilon_0 = x_0. \qquad (6.2)
\]
Since $\sigma_\varepsilon \le$
1, we have $d\langle X^\varepsilon\rangle_t \le dt$. By the Burkholder-Davis-Gundy inequalities (see, e.g., [6, Section 12.5]),
\[
E|X^\varepsilon_t - X^\varepsilon_s|^{2p} \le c\,|t - s|^{p} \qquad (6.3)
\]
for each $p \ge 1$, where the constant $c$ depends on $p$. It follows (for example, by Theorems 8.1 and 32.1 of [6]) that the laws of the $X^\varepsilon$ are tight in $C[0,t]$ for each $t$. The same is of course true for $Y^\varepsilon$ and $\widetilde W$, and so the triple $(X^\varepsilon, Y^\varepsilon, \widetilde W)$ is tight in $(C[0,t])^3$ for each $t > 0$.

Let $P^\varepsilon_t$ be the transition probabilities for the Markov process $X^\varepsilon$. Let $\mathcal C$ be the set of continuous functions on $\mathbb R$ that vanish at infinity and let
\[
L = \{f \in \mathcal C : |f(x) - f(y)| \le |x - y| \text{ for all } x, y \in \mathbb R\},
\]
the set of Lipschitz functions with Lipschitz constant 1 that vanish at infinity. One of the main results of [3] (see Theorem 4.2 there) is that $P^\varepsilon_t$ maps $L$ into $L$ for each $t$ and each $\varepsilon$.

Theorem 6.1. If $f \in \mathcal C$, then $P^\varepsilon_t f$ converges uniformly as $\varepsilon \to 0$ for each $t \ge 0$. If we denote the limit by $P_t f$, then $\{P_t\}$ is a family of transition probabilities for a continuous regular strong Markov process $(X, P^x)$ on natural scale with speed measure given by (5.1). For each $x$, $P^x_\varepsilon$ converges weakly to $P^x$ with respect to $C[0, N]$ for each $N$.

Proof. Step 1. Let $\{g_j\}$ be a countable collection of $C^2$ functions in $L$ with compact support such that the set of finite linear combinations of elements of $\{g_j\}$ is dense in $\mathcal C$ with respect to the supremum norm. Let $\varepsilon_n$ be a sequence converging to 0. Suppose $g_j$ has support contained in $[-K, K]$ with
$K > 1$. Since $X^\varepsilon$ behaves like a Brownian motion outside $[-\varepsilon, \varepsilon]$, for $|x| > 2K$ we have
\[
|P^\varepsilon_t g_j(x)| = |E^x g_j(X^\varepsilon_t)| \le \|g_j\|\,P^x\big(|X^\varepsilon| \text{ hits } |x|/2 \text{ before time } t\big),
\]
and the right hand side tends to 0 as $|x| \to \infty$, uniformly over $\varepsilon$ and over $t$ in bounded intervals. Here $\|g_j\|$ is the supremum norm of $g_j$. By the equicontinuity of the $P^\varepsilon_t g_j$, using the diagonalization method there exists a subsequence, which we continue to denote by $\varepsilon_n$, such that $P^{\varepsilon_n}_t g_j$ converges uniformly on $\mathbb R$ for every rational $t \ge 0$ and every $j$. We denote the limit by $P_t g_j$.

Since $g_j \in C^2$,
\begin{align*}
P^\varepsilon_t g_j(x) - P^\varepsilon_s g_j(x) &= E^x g_j(X^\varepsilon_t) - E^x g_j(X^\varepsilon_s)\\
&= E^x\int_s^t \sigma_\varepsilon(X^\varepsilon_r)\,g'_j(X^\varepsilon_r)\,d\widetilde W_r + \tfrac12\,E^x\int_s^t \sigma_\varepsilon^2(X^\varepsilon_r)\,g''_j(X^\varepsilon_r)\,dr\\
&= \tfrac12\,E^x\int_s^t \sigma_\varepsilon^2(X^\varepsilon_r)\,g''_j(X^\varepsilon_r)\,dr,
\end{align*}
where we used Itô's formula. Since $\sigma_\varepsilon$ is bounded by 1, we obtain
\[
|P^\varepsilon_t g_j(x) - P^\varepsilon_s g_j(x)| \le c_j\,|t - s|,
\]
where the constant $c_j$ depends on $g_j$. With this fact, we can deduce that $P^{\varepsilon_n}_t g_j$ converges uniformly for every $t \ge 0$. We again call the limit $P_t g_j$. Since linear combinations of the $g_j$'s are dense in $\mathcal C$, we conclude that $P^{\varepsilon_n}_t g$ converges uniformly to a limit, which we call $P_t g$, whenever $g \in \mathcal C$. We note that $P_t$ maps $\mathcal C$ into $\mathcal C$.

Step 2.
Each $X^\varepsilon$ is a Markov process, so $P^\varepsilon_s(P^\varepsilon_t g) = P^\varepsilon_{s+t} g$. By the uniform convergence and equicontinuity and the fact that $P^\varepsilon_s$ is a contraction, we see that $P_s(P_t g) = P_{s+t} g$ whenever $g \in \mathcal C$.

Let $s_1 < s_2 < \cdots < s_j$ and let $f_1, \ldots, f_j$ be elements of $L$. Define inductively $g_j = f_j$, $g_{j-1} = f_{j-1}\,(P_{s_j - s_{j-1}} g_j)$, $g_{j-2} = f_{j-2}\,(P_{s_{j-1} - s_{j-2}} g_{j-1})$, and so on. Define $g^\varepsilon_j$ analogously, where we replace $P_t$ by $P^\varepsilon_t$. By the Markov property applied repeatedly,
\[
E^x[f_1(X^\varepsilon_{s_1}) \cdots f_j(X^\varepsilon_{s_j})] = P^\varepsilon_{s_1} g^\varepsilon_1(x).
\]
Suppose $x$ is fixed for the moment and let $f_1, \ldots, f_j \in L$. Suppose there is a subsequence $\varepsilon_{n'}$ of $\varepsilon_n$ such that $X^{\varepsilon_{n'}}$ converges weakly, say to $X$, and let $P'$ be the limit law with corresponding expectation $E'$. Using the uniform convergence, the equicontinuity, and the fact that $P^\varepsilon_t$ maps $L$ into $L$, we obtain
\[
E'[f_1(X_{s_1}) \cdots f_j(X_{s_j})] = P_{s_1} g_1(x). \qquad (6.4)
\]
We can conclude several things from this. First, since the limit is the same no matter what subsequence $\{\varepsilon_{n'}\}$ we use, the full sequence $P^x_{\varepsilon_n}$ converges weakly. This holds for each starting point $x$.

Secondly, if we denote the weak limit of the $P^x_{\varepsilon_n}$ by $P^x$, then (6.4) holds with $E'$ replaced by $E^x$. From this we deduce that $(X, P^x)$ is a Markov process with transition semigroup given by $P_t$.

Thirdly, since $P^x$ is the weak limit of probabilities on $C[0,\infty)$, we conclude that $X$ under $P^x$ has continuous paths for each $x$.

Step 3.
Since P t maps C into C and P t f ( x ) = E x f ( X t ) → f ( x ) by thecontinuity of paths if f ∈ C , we conclude by [6, Theorem 20.9] that ( X, P x )is in fact a strong Markov process.Suppose f , . . . , f j are in L and s < s < · · · < s j < t < u . Since X εt is amartingale, E xε h X εu j Y i =1 f i ( X εs i ) i = E x h X εt j Y i =1 f i ( X εs i ) i . Moreover, X εt and X εu are uniformly integrable due to (6.3). Passing to thelimit along the sequence ε n , we have the equality with X ε replaced by X and E xε replaced by E x . Since the collection of random variables of the form Q i f i ( X s i ) generate σ ( X r ; r ≤ t ), it follows that X is a martingale under P x for each x . Step 4.
Let $\delta, \eta > 0$. Let $I = [q, r]$ with $q < 0 < r$ and let $I^* = [q - \delta, r + \delta]$. In this step we show that
$$E\, \tau_I(X) = \int_I g_I(0, y)\, m(dy). \tag{6.5}$$
First we obtain a uniform bound on $\tau_{I^*}(X^\varepsilon)$. If $A^\varepsilon_t = t \wedge \tau_{I^*}(X^\varepsilon)$, then
$$E[A^\varepsilon_\infty - A^\varepsilon_t \mid \mathcal{F}_t] \le E^{X^\varepsilon_t} A^\varepsilon_\infty \le \sup_x E^x\, \tau_{I^*}(X^\varepsilon).$$
The last term is equal to
$$\sup_x \int_{I^*} g_{I^*}(x, y)\Big(1 + \frac{\gamma}{2\varepsilon}\, 1_{[-\varepsilon,\varepsilon]}(y)\Big)\, dy.$$
A simple calculation shows that this is bounded by $c(r - q + 2\delta)^2 + c\gamma(r - q + 2\delta)$, where $c$ does not depend on $r$, $q$, $\delta$, or $\varepsilon$. By Theorem I.6.10 of [4], we then deduce that
$$E\, \tau_{I^*}(X^\varepsilon)^2 = E\, (A^\varepsilon_\infty)^2 \le c_2 < \infty,$$
where $c_2$ does not depend on $\varepsilon$. By Chebyshev's inequality, for each $t$, $P(\tau_{I^*}(X^\varepsilon) \ge t) \le c_2/t^2$.

Next we obtain an upper bound on $E\, \tau_I(X)$ in terms of $g_{I^*}$. We have
$$P(\tau_I(X) > t) = P\big(\sup_{s \le t} X_s \le r,\ \inf_{s \le t} X_s \ge q\big) \le \limsup_{\varepsilon_n \to 0} P\big(\sup_{s \le t} X^{\varepsilon_n}_s \le r + \delta,\ \inf_{s \le t} X^{\varepsilon_n}_s \ge q - \delta\big) \le \limsup_{\varepsilon_n \to 0} P(\tau_{I^*}(X^{\varepsilon_n}) > t) \le c_2/t^2.$$
Choose $u$ such that
$$\int_u^\infty P(\tau_I(X) > t)\, dt < \eta, \qquad \int_u^\infty P(\tau_{I^*}(X^{\varepsilon_n}) > t)\, dt < \eta$$
for each $\varepsilon_n$. Let $f$ and $g$ be continuous functions taking values in $[0,1]$ such that $f$ is equal to 1 on $(-\infty, r]$ and 0 on $[r + \delta, \infty)$, and $g$ is equal to 1 on $[q, \infty)$ and 0 on $(-\infty, q - \delta]$. We have
$$P\big(\sup_{s \le t} X_s \le r,\ \inf_{s \le t} X_s \ge q\big) \le E\big[f(\sup_{s \le t} X_s)\, g(\inf_{s \le t} X_s)\big] = \lim_{\varepsilon_n \to 0} E\big[f(\sup_{s \le t} X^{\varepsilon_n}_s)\, g(\inf_{s \le t} X^{\varepsilon_n}_s)\big].$$
Then
$$\int_0^u P(\tau_I(X) > t)\, dt = \int_0^u P\big(\sup_{s \le t} X_s \le r,\ \inf_{s \le t} X_s \ge q\big)\, dt \le \int_0^u E\big[f(\sup_{s \le t} X_s)\, g(\inf_{s \le t} X_s)\big]\, dt = \int_0^u \lim_{\varepsilon_n \to 0} E\big[f(\sup_{s \le t} X^{\varepsilon_n}_s)\, g(\inf_{s \le t} X^{\varepsilon_n}_s)\big]\, dt$$
$$= \lim_{\varepsilon_n \to 0} \int_0^u E\big[f(\sup_{s \le t} X^{\varepsilon_n}_s)\, g(\inf_{s \le t} X^{\varepsilon_n}_s)\big]\, dt \le \limsup_{\varepsilon_n \to 0} \int_0^u P\big(\sup_{s \le t} X^{\varepsilon_n}_s \le r + \delta,\ \inf_{s \le t} X^{\varepsilon_n}_s \ge q - \delta\big)\, dt \le \limsup_{\varepsilon_n \to 0} \int_0^u P(\tau_{I^*}(X^{\varepsilon_n}) \ge t)\, dt \le \limsup_{\varepsilon_n \to 0} E\, \tau_{I^*}(X^{\varepsilon_n}).$$
Hence
$$E\, \tau_I(X) \le \int_0^u P(\tau_I(X) > t)\, dt + \eta \le \limsup_{\varepsilon_n \to 0} E\, \tau_{I^*}(X^{\varepsilon_n}) + \eta.$$
We now use the fact that $\eta$ is arbitrary and let $\eta \to 0$. Then
$$E\, \tau_I(X) \le \limsup_{\varepsilon_n \to 0} E\, \tau_{I^*}(X^{\varepsilon_n}) = \limsup_{\varepsilon_n \to 0} \int_{I^*} g_{I^*}(0, y)\Big(1 + \frac{\gamma}{2\varepsilon_n}\, 1_{[-\varepsilon_n,\varepsilon_n]}(y)\Big)\, dy = \int_{I^*} g_{I^*}(0, y)\, m(dy).$$
We next use the joint continuity of $g_{[-a,a]}(x,y)$ in the variables $a$, $x$, and $y$. Letting $\delta \to 0$, we obtain
$$E\, \tau_I(X) \le \int_I g_I(0, y)\, m(dy).$$
The lower bound for $E\, \tau_I(X)$ is proved similarly, and we obtain (6.5).

Step 5.
Next we show that $X$ is a regular strong Markov process. This means that if $x \ne y$, then $P^x(X_t = y \text{ for some } t) > 0$. To show this, assume without loss of generality that $y < x$, and suppose that $X$ started at $x$ does not hit $y$ with positive probability. Let $z = x + 4|x - y|$. Since $E^x \tau_{[y,z]} < \infty$, with probability one $X$ hits $z$ and does so before hitting $y$. Hence $T_z = \tau_{[y,z]} < \infty$ a.s. Choose $t$ large enough that $P^x(\tau_{[y,z]} > t) < 1/16$. By the optional stopping theorem,
$$E^x X_{T_z \wedge t} \ge z\, P^x(T_z \le t) + y\, P^x(T_z > t) = z - (z - y)\, P^x(T_z > t).$$
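For concreteness, the arithmetic behind the contradiction drawn in the next sentence can be written out; this is only a routine check, using $z = x + 4(x-y)$ (so that $z - y = 5(x-y)$, since $y < x$) and $P^x(T_z > t) < 1/16$:

```latex
z - (z-y)\,P^x(T_z > t)
  \;>\; z - \tfrac{5}{16}(x-y)
  \;=\; x + 4(x-y) - \tfrac{5}{16}(x-y)
  \;=\; x + \tfrac{59}{16}(x-y)
  \;>\; x.
```

Since $T_z \wedge t$ is a bounded stopping time, a martingale $X$ started at $x$ must satisfy $E^x X_{T_z \wedge t} = x$, which is what the strict inequality contradicts.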
By our choice of $z$, this is greater than $x$, which contradicts the fact that $X$ is a martingale. Hence $X$ must hit $y$ with positive probability.

Therefore $X$ is a regular continuous strong Markov process on the real line. Since it is a martingale, it is on natural scale. Since its speed measure is the same as that of $X^M$ by (6.5), we conclude from [5, Theorem IV.2.5] that $X$ and $X^M$ have the same law. In particular, $X$ is a martingale with speed measure $m$.

Step 6.
Since we obtain the same limit law no matter what sequence $\varepsilon_n$ we started with, the full sequence $P^\varepsilon_t$ converges to $P_t$, and $P^x_\varepsilon$ converges weakly to $P^x$ for each $x$. All of the above applies equally well to $Y$ and its transition probabilities and laws.

Recall that the sequence $(X^\varepsilon, Y^\varepsilon, \widetilde W)$ is tight with respect to $(C[0,N])^3$ for each $N$. Take a subsequence $(X^{\varepsilon_n}, Y^{\varepsilon_n}, \widetilde W)$ that converges weakly, say to the triple $(X, Y, W)$, with respect to $(C[0,N])^3$ for each $N$. The last task of this section is to prove that $X$ and $Y$ satisfy (5.2).

Theorem 6.2. $(X, W)$ and $(Y, W)$ each satisfy (5.2).

Proof. We prove this for $X$; the proof for $Y$ is exactly the same. Clearly $W$ is a Brownian motion. Fix $N$. We will first show that
$$\int_0^t 1_{(X_s \ne 0)}\, dX_s = \int_0^t 1_{(X_s \ne 0)}\, dW_s \tag{6.6}$$
if $t \le N$. Let $\delta > 0$ and let $g$ be a continuous function taking values in $[0,1]$ such that $g(x) = 0$ if $|x| \le \delta/2$ and $g(x) = 1$ if $|x| \ge \delta$. Since $g$ is bounded and continuous and $(X^{\varepsilon_n}, \widetilde W)$ converges weakly to $(X, W)$, then $(X^{\varepsilon_n}, \widetilde W, g(X^{\varepsilon_n}))$ converges weakly to $(X, W, g(X))$. Moreover, since $g$ is 0 on $(-\delta/2, \delta/2)$, then
$$\int_0^t g(X^{\varepsilon_n}_s)\, d\widetilde W_s = \int_0^t g(X^{\varepsilon_n}_s)\, dX^{\varepsilon_n}_s \tag{6.7}$$
for $\varepsilon_n$ small enough. By Theorem 2.2 of [19],
$$\Big( \int_0^t g(X^{\varepsilon_n}_s)\, d\widetilde W_s,\ \int_0^t g(X^{\varepsilon_n}_s)\, dX^{\varepsilon_n}_s \Big)$$
converges weakly to
$$\Big( \int_0^t g(X_s)\, dW_s,\ \int_0^t g(X_s)\, dX_s \Big).$$
Then
$$E \arctan\Big( \Big| \int_0^t g(X_s)\, dW_s - \int_0^t g(X_s)\, dX_s \Big| \Big) = \lim_{n \to \infty} E \arctan\Big( \Big| \int_0^t g(X^{\varepsilon_n}_s)\, d\widetilde W_s - \int_0^t g(X^{\varepsilon_n}_s)\, dX^{\varepsilon_n}_s \Big| \Big) = 0,$$
so that
$$\int_0^t g(X_s)\, dW_s = \int_0^t g(X_s)\, dX_s, \quad \text{a.s.}$$
Letting $\delta \to 0$ gives (6.6). We also know that
$$X^M_t = \int_0^t 1_{(X^M_s \ne 0)}\, dX^M_s.$$
Since $X^M$ and $X$ have the same law, the same is true if we replace $X^M$ by $X$. Combining with (6.6) proves (5.2).

Let
$$j_\varepsilon(s) = \begin{cases} 1, & X^\varepsilon_s \in [-\varepsilon, \varepsilon] \text{ or } Y^\varepsilon_s \in [-\varepsilon, \varepsilon] \text{ or both};\\ 0, & \text{otherwise}, \end{cases}$$
and let
$$J^\varepsilon_t = \int_0^t j_\varepsilon(s)\, ds.$$
Set $Z^\varepsilon_t = X^\varepsilon_t - Y^\varepsilon_t$, so that $Z^\varepsilon_0 = 0$, and define $\psi_\varepsilon(x, y) = \sigma_\varepsilon(x) - \sigma_\varepsilon(y)$. Then
$$dZ^\varepsilon_t = \psi_\varepsilon(X^\varepsilon_t, Y^\varepsilon_t)\, d\widetilde W_t.$$
Let
$$S_1 = \inf\{t : |Z^\varepsilon_t| \ge 6\varepsilon\}, \tag{7.1}$$
$$T_i = \inf\{t \ge S_i : |Z^\varepsilon_t| \notin [4\varepsilon, b]\}, \qquad S_{i+1} = \inf\{t \ge T_i : |Z^\varepsilon_t| \ge 6\varepsilon\},$$
and
$$U_b = \inf\{t : |Z^\varepsilon_t| = b\}.$$

Proposition 7.1.
For each $n$,
$$P(S_n < U_b) \le \Big(1 - \frac{2\varepsilon}{b}\Big)^n.$$

Proof. Since $X^\varepsilon$ is a recurrent diffusion, $\int_0^t 1_{[-\varepsilon,\varepsilon]}(X^\varepsilon_s)\, ds$ tends to infinity a.s. as $t \to \infty$. When $x \in [-\varepsilon,\varepsilon]$, then $|\psi_\varepsilon(x,y)| \ge c\varepsilon$, and we conclude that $\langle Z^\varepsilon \rangle_t \to \infty$ as $t \to \infty$.

Let $\{\mathcal{F}_t\}$ be the filtration generated by $\widetilde W$. $Z^\varepsilon_{t + S_n} - Z^\varepsilon_{S_n}$ is a martingale started at 0 with respect to the regular conditional probability for the law of $(X^\varepsilon_{t+S_n}, Y^\varepsilon_{t+S_n})$ given $\mathcal{F}_{S_n}$. The conditional probability that it hits $4\varepsilon$ before $b$ if $Z^\varepsilon_{S_n} = 6\varepsilon$ is the same as the conditional probability that it hits $-4\varepsilon$ before $-b$ if $Z^\varepsilon_{S_n} = -6\varepsilon$, and is equal to
$$\frac{b - 6\varepsilon}{b - 4\varepsilon} \le 1 - \frac{2\varepsilon}{b}.$$
Since this bound is independent of $\omega$, we have
$$P\big( |Z^\varepsilon_{t + S_n}| \text{ hits } 4\varepsilon \text{ before hitting } b \mid \mathcal{F}_{S_n} \big) \le 1 - \frac{2\varepsilon}{b}.$$
Let $V_n = \inf\{t > S_n : |Z^\varepsilon_t| = b\}$. Then
$$P(S_{n+1} < U_b) \le P(S_n < U_b,\ T_n < V_n) = E\big[ P(T_n < V_n \mid \mathcal{F}_{S_n});\ S_n < U_b \big] \le \Big(1 - \frac{2\varepsilon}{b}\Big)\, P(S_n < U_b).$$
Our result follows by induction.

Proposition 7.2.
There exists a constant $c_1$ such that $E J^\varepsilon_{T_n} \le c_1 n \varepsilon$ for each $n$.

Proof. For $t$ between times $S_n$ and $T_n$ we know that $|Z^\varepsilon_t|$ lies between $4\varepsilon$ and $b$. Then at least one of $X^\varepsilon_t \notin [-\varepsilon,\varepsilon]$ and $Y^\varepsilon_t \notin [-\varepsilon,\varepsilon]$ holds. If exactly one holds, then $|\psi_\varepsilon(X^\varepsilon_t, Y^\varepsilon_t)| \ge 1 - c\sqrt{\varepsilon/\gamma} \ge 1/2$ if $\varepsilon$ is small enough. If both hold, we can only say that $d\langle Z^\varepsilon \rangle_t \ge 0$. In any case,
$$d\langle Z^\varepsilon \rangle_t \ge \tfrac14\, dJ^\varepsilon_t \quad \text{for } S_n \le t \le T_n.$$
$Z^\varepsilon_t$ is a martingale, and by Lemma 2.1 and an argument using regular conditional probabilities similar to those we have done earlier,
$$E[J^\varepsilon_{T_n} - J^\varepsilon_{S_n}] \le 4\, E[\langle Z^\varepsilon \rangle_{T_n} - \langle Z^\varepsilon \rangle_{S_n}] \le 4(b - 6\varepsilon)(2\varepsilon) \le c\varepsilon. \tag{7.2}$$
Between times $T_n$ and $S_{n+1}$ it is possible that $\psi_\varepsilon(X^\varepsilon_t, Y^\varepsilon_t)$ is 0 or that it is larger than $c\sqrt{\varepsilon/\gamma}$. However, if either $X^\varepsilon_t \in [-\varepsilon,\varepsilon]$ or $Y^\varepsilon_t \in [-\varepsilon,\varepsilon]$, then $|\psi_\varepsilon(X^\varepsilon_t, Y^\varepsilon_t)| \ge c\sqrt{\varepsilon/\gamma}$. Thus
$$d\langle Z^\varepsilon \rangle_t \ge c\varepsilon\, dJ^\varepsilon_t \quad \text{for } T_n \le t \le S_{n+1}.$$
By Lemma 2.1,
$$E[J^\varepsilon_{S_{n+1}} - J^\varepsilon_{T_n}] \le c\varepsilon^{-1}\, E[\langle Z^\varepsilon \rangle_{S_{n+1}} - \langle Z^\varepsilon \rangle_{T_n}] \le c\varepsilon^{-1}(2\varepsilon)(10\varepsilon) = c\varepsilon. \tag{7.3}$$
Summing each of (7.2) and (7.3) over the first $n$ values of the index and combining yields the proposition.

Proposition 7.3.
Let $K > 0$ and $\eta > 0$. There exists $R$ depending on $K$ and $\eta$ such that
$$P\big( J^\varepsilon_{\tau_{[-R,R]}(X^\varepsilon)} < K \big) \le \eta, \qquad \varepsilon \le 1/2.$$

Proof. Fix $\varepsilon \le 1/2$. We will see that our estimates are independent of $\varepsilon$. Note that
$$J^\varepsilon_t \ge H_t = \int_0^t 1_{[-\varepsilon,\varepsilon]}(X^\varepsilon_s)\, ds.$$
Therefore to prove the proposition it is enough to prove that $P(H_{\tau_{[-R,R]}(X^\varepsilon)} < K) \le \eta$ if $R$ is large enough.

Let $I = [-1, 1]$. Then
$$E\, H_{\tau_I(X^\varepsilon)} \ge \int_{-1}^{1} g_I(0, y)\, \frac{\gamma}{2\varepsilon}\, 1_{[-\varepsilon,\varepsilon]}(y)\, dy \ge c_1.$$
On the other hand, for any $x \in I$,
$$E^x H_{\tau_I(X^\varepsilon)} = \int_I g_I(x, y)\, \frac{\gamma}{2\varepsilon}\, 1_{[-\varepsilon,\varepsilon]}(y)\, dy \le c_2.$$
Combining this with
$$E[H_{\tau_I(X^\varepsilon)} - H_t \mid \mathcal{F}_t] \le E^{X^\varepsilon_t} H_{\tau_I(X^\varepsilon)}$$
and Theorem I.6.10 of [4] (with $B = c_2$ there), we see that $E\, H_{\tau_I(X^\varepsilon)}^2 \le c_3$.

Let $\alpha_1 = 0$, $\beta_i = \inf\{t > \alpha_i : |X^\varepsilon_t| = 1\}$, and $\alpha_{i+1} = \inf\{t > \beta_i : X^\varepsilon_t = 0\}$. Since $X^\varepsilon_t$ is a recurrent diffusion, each $\alpha_i$ is finite a.s. and $\beta_i \to \infty$ as $i \to \infty$. Let $V_i = H_{\beta_i} - H_{\alpha_i}$. By the strong Markov property, the $V_i$ are i.i.d. random variables with mean larger than $c_1$ and variance bounded by $c_3$, where $c_1$ and $c_3$ do not depend on $\varepsilon$ as long as $\varepsilon < 1/2$. Then
$$P\Big( \sum_{i=1}^k V_i \le c_1 k/2 \Big) \le P\Big( \Big| \sum_{i=1}^k (V_i - E V_i) \Big| \ge c_1 k/2 \Big) \le \frac{\operatorname{Var}\big( \sum_{i=1}^k V_i \big)}{(c_1 k/2)^2} \le \frac{4 c_3}{c_1^2\, k}.$$
Taking $k$ large enough that $c_1 k/2 \ge K$ and $4c_3/(c_1^2 k) \le \eta/2$, we see that
$$P\Big( \sum_{i=1}^k V_i \le K \Big) \le \eta/2.$$
Using the fact that $X^\varepsilon_t$ is a martingale, starting at 1 the probability of hitting $R$ before hitting 0 is $1/R$. Using the strong Markov property, the probability of $|X^\varepsilon|$ having no more than $k$ downcrossings of $[0,1]$ before exiting $[-R,R]$ is bounded by
$$1 - \Big(1 - \frac{1}{R}\Big)^k.$$
If we choose $R$ large enough, this last quantity will be less than $\eta/2$. Thus, except for an event of probability at most $\eta$, $X^\varepsilon_t$ will exit $[-1,1]$ and return to 0 at least $k$ times before exiting $[-R,R]$, and the total amount of time spent in $[-\varepsilon,\varepsilon]$ before exiting $[-R,R]$ will be at least $K$.

Proposition 7.4.
Let $\eta > 0$, $R > 0$, and $I = [-R, R]$. There exists $t_0$ depending on $R$ and $\eta$ such that
$$P(\tau_I(X^\varepsilon) > t_0) \le \eta, \qquad \varepsilon \le 1/2.$$

Proof. If $\varepsilon \le 1/2$,
$$E\, \tau_I(X^\varepsilon) = \int_I g_I(0, y)\, m_\varepsilon(dy).$$
A calculation shows that this is bounded by $cR^2 + c\gamma R$, where $c$ does not depend on $\varepsilon$ or $R$. Applying Chebyshev's inequality,
$$P(\tau_I(X^\varepsilon) > t_0) \le \frac{E\, \tau_I(X^\varepsilon)}{t_0},$$
which is bounded by $\eta$ if $t_0 \ge c(R^2 + \gamma R)/\eta$.

8. Pathwise uniqueness fails
We continue the notation of Section 7. The strategy of proving that pathwise uniqueness does not hold owes a great deal to [2].
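Although the paper contains no code, the qualitative behavior of the $\varepsilon$-approximations can be illustrated numerically. The sketch below is not the paper's construction verbatim: the exact form of $\sigma_\varepsilon$ is an assumption made here for illustration (we take $\sigma_\varepsilon$ equal to the small constant $\sqrt{2\varepsilon/(2\varepsilon + \gamma)}$ on $[-\varepsilon,\varepsilon]$ and 1 elsewhere, so the speed measure places mass of order $\gamma$ near 0), and the Euler-Maruyama discretization is only heuristic. It shows the point that the approximating diffusion with $\gamma > 0$ spends a non-negligible fraction of time near 0, in contrast with ordinary Brownian motion ($\gamma = 0$).

```python
import numpy as np

# Toy Euler-Maruyama sketch of the epsilon-approximation.  The diffusion
# coefficient is 1 outside [-eps, eps] and the small assumed constant
# sqrt(2*eps / (2*eps + gamma)) inside; this form of sigma_eps is a
# hypothetical stand-in, not quoted from the paper.
def occupation_fraction(gamma, eps, T=200.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    sig_in = np.sqrt(2 * eps / (2 * eps + gamma)) if gamma > 0 else 1.0
    x, time_in = 0.0, 0.0
    for dw in rng.normal(0.0, np.sqrt(dt), size=int(T / dt)):
        inside = abs(x) <= eps
        if inside:
            time_in += dt
        x += (sig_in if inside else 1.0) * dw
    return time_in / T  # fraction of [0, T] spent in [-eps, eps]

frac_sticky = occupation_fraction(gamma=1.0, eps=0.05)  # "sticky" case
frac_bm = occupation_fraction(gamma=0.0, eps=0.05)      # ordinary Brownian motion
print(frac_sticky, frac_bm)
```

In this toy run the sticky case spends a much larger fraction of time in $[-\varepsilon,\varepsilon]$ than the Brownian case, mirroring the fact that the limit process for $\gamma \in (0,\infty)$ spends positive Lebesgue time at 0 while Brownian motion spends zero time there.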
Theorem 8.1. There exist three processes $X$, $Y$, and $W$ and a probability measure $P$ such that $W$ is a Brownian motion under $P$, $X$ and $Y$ are continuous martingales under $P$ with speed measure $m$ starting at 0, (5.2) holds for $X$, (5.2) holds when $X$ is replaced by $Y$, and
$$P(X_t \ne Y_t \text{ for some } t) > 0.$$

Proof. Let $(X^\varepsilon, Y^\varepsilon, \widetilde W)$ be defined as in (6.1) and (6.2) and choose a sequence $\varepsilon_n$ decreasing to 0 such that the triple converges weakly on $C[0,N] \times C[0,N] \times C[0,N]$ for each $N$. By Theorems 6.1 and 6.2, the weak limit $(X, Y, W)$ is such that $X$ and $Y$ are continuous martingales with speed measure $m$, $W$ is a Brownian motion, and (5.2) holds for $X$ and also when $X$ is replaced by $Y$.

Let $b = 1$ and let $S_n$, $T_n$, and $U_b$ be defined by (7.1). Let $A_1(\varepsilon, n)$ be the event where $S_n < U_b$. By Proposition 7.1,
$$P(A_1(\varepsilon, n)) = P(S_n < U_b) \le \Big(1 - \frac{2\varepsilon}{b}\Big)^n.$$
Choose $n \ge \beta/\varepsilon$, where $\beta$ is large enough so that the right-hand side is less than $1/10$ for $\varepsilon$ sufficiently small. By Proposition 7.2,
$$E J^\varepsilon_{T_n} \le c_1 n \varepsilon = c_2 \beta.$$
By Chebyshev's inequality,
$$P(J^\varepsilon_{T_n} \ge 10 c_2 \beta) \le P(J^\varepsilon_{T_n} \ge 10\, E J^\varepsilon_{T_n}) \le 1/10.$$
Let $A_2(\varepsilon, n)$ be the event where $J^\varepsilon_{T_n} \ge 10 c_2 \beta$.

Take $K = 10 c_2 \beta$. By Proposition 7.3, there exists $R$ such that $P(J^\varepsilon_{\tau_{[-R,R]}(X^\varepsilon)} < K) \le 1/10$. Let $A_3(\varepsilon, R, K)$ be the event where $J^\varepsilon_{\tau_{[-R,R]}(X^\varepsilon)} < K$. Choose $t_0$ using Proposition 7.4 so that, except for an event of probability at most $1/10$, we have $\tau_{[-R,R]}(X^\varepsilon) \le t_0$. Let $A_4(\varepsilon, R, t_0)$ be the event where $\tau_{[-R,R]}(X^\varepsilon) > t_0$.

Let $B(\varepsilon) = \big( A_1(\varepsilon, n) \cup A_2(\varepsilon, n) \cup A_3(\varepsilon, R, K) \cup A_4(\varepsilon, R, t_0) \big)^c$. Note $P(B(\varepsilon)) \ge 3/5$. On $B(\varepsilon)$ we have
$$J^\varepsilon_{T_n} < 10 c_2 \beta = K \le J^\varepsilon_{\tau_{[-R,R]}(X^\varepsilon)}.$$
We conclude that $T_n < \tau_{[-R,R]}(X^\varepsilon)$. Therefore, on the event $B(\varepsilon)$, $T_n$ has occurred before time $t_0$. Since $U_b \le S_n \le T_n$ on $B(\varepsilon)$, we also know that $U_b$ has occurred before time $t_0$. Hence
$$P\big( \sup_{s \le t_0} |Z^\varepsilon_s| \ge b \big) \ge 3/5.$$
Since $Z^\varepsilon = X^\varepsilon - Y^\varepsilon$ converges weakly to $X - Y$, with probability at least $1/5$ we have $\sup_{s \le t_0} |Z_s| \ge b/2$. This implies that $X_t \ne Y_t$ for some $t$, so pathwise uniqueness does not hold.

We can also conclude that strong existence does not hold. The argument we use is similar to ones given in [8], [10], and [18].

Theorem 8.2.
Let $W$ be a Brownian motion. There does not exist a continuous martingale $X$ starting at 0 with speed measure $m$ such that (5.2) holds and such that $X$ is measurable with respect to the filtration of $W$.

Proof. Let $W$ be a Brownian motion and suppose there did exist such a process $X$. Then there is a measurable map $F: C[0,\infty) \to C[0,\infty)$ such that $X = F(W)$. Suppose $Y$ is any other continuous martingale with speed measure $m$ satisfying (5.2). Then by Theorem 4.1, the law of $Y$ equals the law of $X$, and by Theorem 5.2, the joint law of $(Y, W)$ is equal to the joint law of $(X, W)$. Therefore $Y$ also satisfies $Y = F(W)$, and we get pathwise uniqueness since $X = F(W) = Y$. However, we know pathwise uniqueness does not hold. We conclude that no such $X$ can exist, that is, strong existence does not hold.

References

[1] M.T. Barlow, Skew Brownian motion and a one-dimensional stochastic differential equation. Stochastics (1988) 1–2.

[2] M.T. Barlow, One-dimensional stochastic differential equation with no strong solution. J. London Math. Soc. (1982) 335–345.

[3] R.F. Bass, Markov processes with Lipschitz semigroups. Trans. Amer. Math. Soc. (1981) 307–320.

[4] R.F. Bass, Probabilistic Techniques in Analysis. Springer, New York, 1995.

[5] R.F. Bass, Diffusions and Elliptic Operators. Springer, New York, 1997.

[6] R.F. Bass, Stochastic Processes. Cambridge University Press, Cambridge, 2011.

[7] R.F. Bass, K. Burdzy, and Z.-Q. Chen, Pathwise uniqueness for a degenerate stochastic differential equation. Ann. Probab. (2007) 2385–2418.

[8] A.S. Chernyi, On the uniqueness in law and the pathwise uniqueness for stochastic differential equations. Theory Probab. Appl. (2003) 406–419.

[9] R. Chitashvili, On the nonexistence of a strong solution in the boundary problem for a sticky Brownian motion. Proc. A. Razmadze Math. Inst. (1997) 17–31.

[10] H.J. Engelbert, On the theorem of T. Yamada and S. Watanabe. Stochastics (1991) 205–216.

[11] H.J. Engelbert and G. Peskir, Stochastic differential equations for sticky Brownian motions. Probab. Statist. Group Manchester Research Report (5).

[12] H.J. Engelbert and W. Schmidt, Strong Markov continuous local martingales and solutions of one-dimensional stochastic differential equations, III. Math. Nachr. (1991) 149–197.

[13] K. Itô and H.P. McKean, Jr., Diffusion Processes and their Sample Paths. Springer-Verlag, Berlin, 1974.

[14] I. Karatzas and S.E. Shreve, Brownian Motion and Stochastic Calculus, 2nd ed. Springer-Verlag, New York, 1991.

[15] I. Karatzas, A.N. Shiryaev, and M. Shkolnikov, On the one-sided Tanaka equation with drift. Electron. Commun. Probab. (2011) 664–677.

[16] F.B. Knight, Essentials of Brownian Motion and Diffusion. American Mathematical Society, Providence, R.I., 1981.

[17] F.B. Knight, A reduction of continuous square-integrable martingales to Brownian motion. In Martingales, 19–31, Springer, Berlin, 1970.

[18] T.G. Kurtz, The Yamada-Watanabe-Engelbert theorem for general stochastic equations and inequalities. Electron. J. Probab. (2007) 951–965.

[19] T.G. Kurtz and P. Protter, Weak limit theorems for stochastic integrals and stochastic differential equations. Ann. Probab. (1991) 1035–1070.

[20] J.-F. Le Gall, Applications du temps local aux équations différentielles stochastiques unidimensionnelles. In Séminaire de Probabilités XVII, 15–31, Springer, Berlin, 1983.

[21] S. Nakao, On the pathwise uniqueness of solutions of one-dimensional stochastic differential equations. Osaka J. Math. (1972) 513–518.

[22] D. Revuz and M. Yor, Continuous Martingales and Brownian Motion, 3rd ed. Springer-Verlag, Berlin, 1999.

[23] J. Warren, Branching processes, the Ray-Knight theorem, and sticky Brownian motion. In Séminaire de Probabilités XXXI, 1–15, Springer, Berlin, 1997.

[24] J. Warren, On the joining of sticky Brownian motion. In Séminaire de Probabilités XXXIII, 257–266, Springer, Berlin, 1999.
Richard F. Bass
Department of Mathematics
University of Connecticut
Storrs, CT 06269-3009, USA
[email protected]